Angles: An Introduction
An angle is formed when two rays are joined at a common point. The common point is called the node or vertex, and the two rays are called the arms of the angle. An angle is represented by the symbol ‘∠’. The word ‘angle’ comes from the Latin word “angulus”.
An angle is usually measured in degrees, using a protractor. Measures such as 30°, 45°, 60°, 90° and 180° represent different angles, and the types of angles are classified by their measures in degrees.
We can also express angles in radians, i.e., in terms of pi (π): 180 degrees is equal to π radians.
An angle is a geometrical figure constructed by joining two rays at their endpoints. An angle can also be denoted by three letters of the shape that define it, with the middle letter being the point where the angle actually is (i.e., its vertex). Angles are also commonly represented by Greek letters such as θ, α, β, etc.
E.g., ∠ABC, where B is the vertex of the angle.
Angles are measured in degrees (°), radians or gradians.
Types of Angles
There are six main types of angles in geometry. The names of the angles and their properties are:
- Acute Angle: An angle that lies between 0° and 90°.
- Obtuse Angle: An angle that lies between 90° and 180°.
- Right Angle: An angle exactly equal to 90°.
- Straight Angle: An angle exactly equal to 180°.
- Reflex Angle: An angle greater than 180 degrees and less than 360 degrees.
- Full Rotation: A complete rotation, equal to 360 degrees.
Note: Sometimes a full rotation is not considered a type of angle; in such cases, we consider there to be five types of angles.
|Type of angle||Description|
|Acute Angle||< 90°|
|Obtuse Angle||> 90° and < 180°|
|Right Angle||= 90°|
|Straight Angle||= 180°|
|Reflex Angle||> 180° and < 360°|
|Full rotation/complete angle||= 360°|
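As an illustration, the classification in this table can be written as a small Python function (a sketch; the function name classify_angle is ours, not from the article):

```python
def classify_angle(deg: float) -> str:
    """Classify an angle measure in degrees (0 < deg <= 360), per the table above."""
    if deg < 90:
        return "acute"
    if deg == 90:
        return "right"
    if deg < 180:
        return "obtuse"
    if deg == 180:
        return "straight"
    if deg < 360:
        return "reflex"
    return "full rotation"

print(classify_angle(45))   # acute
print(classify_angle(250))  # reflex
```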
Interior and Exterior Angles
In the case of a polygon, such as a triangle, quadrilateral, pentagon or hexagon, we have both interior and exterior angles.
- Interior angles lie inside the polygon, a closed shape having sides and angles.
- Exterior angles are formed outside the shape, between any side and the extension of an adjacent side.
For example, at each vertex of a pentagon there is an interior angle inside the shape and an exterior angle formed by extending an adjacent side.
Positive & Negative Angles
- Positive angle – an angle measured in the anticlockwise direction is a positive angle.
- Negative angle – an angle measured in the clockwise direction is a negative angle.
Parts of Angles
- Vertex – the corner point of an angle, where the two rays meet.
- Arms – the two sides of the angle, joined at the common endpoint.
- Initial side – also known as the reference line; all measurements are made taking this line as the reference.
- Terminal side – the side (or ray) up to which the angle is measured.
Degree of an Angle
It is represented by the symbol ° (read as “degree”). The unit most likely comes from the Babylonians, who used a base-60 (sexagesimal) number system. Their calendar had a total of 360 days, so they adopted a full angle of 360°. They first divided a full angle using the 60° angle of an equilateral triangle; later, following their base-60 number system, they divided that 60° angle into 60 equal parts and defined each part as 1°. It is sometimes also referred to as an arc degree or arc-degree, meaning the degree of an arc.
An angle is said to be equal to 1° if the rotation from the initial to the terminal side is equal to 1/360 of the full rotation.
A degree is further divided into minutes and seconds. 1′ (1 minute) is defined as one-sixtieth of a degree, and 1″ (1 second) is defined as one-sixtieth of a minute. Thus,
1° = 60′ = 3600″
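As a quick sketch of this relation in Python (the helper name deg_to_dms is ours, not from the article):

```python
def deg_to_dms(angle: float) -> tuple[int, int, float]:
    """Split a positive decimal degree measure into degrees, minutes and seconds."""
    degrees = int(angle)
    rem_minutes = (angle - degrees) * 60    # 1 degree = 60 minutes
    minutes = int(rem_minutes)
    seconds = (rem_minutes - minutes) * 60  # 1 minute = 60 seconds
    return degrees, minutes, seconds

print(deg_to_dms(30.2625))  # ~(30, 15, 45.0), i.e. 30° 15′ 45″
```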
Radian of an Angle
The radian is the SI unit of angle and is mostly used in calculus: the standard formulas for derivatives and integrals hold only when angles are measured in radians. It is denoted by ‘rad’.
The length of the arc of a unit circle is numerically equal to the measurement in radian of the angle that it subtends.
In a complete circle, there are 2π radians.
360° = 2π radians
Therefore, 1 radian = 180°/π
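A minimal sketch of these conversions using Python's standard math module:

```python
import math

# 360° = 2π rad, so converting is just scaling by π/180 or 180/π.
print(math.radians(180))          # 3.141592653589793 (π)
print(math.degrees(math.pi / 3))  # 60.0 (up to floating-point rounding)
print(math.degrees(1))            # 57.29577951308232 -> 1 rad = 180°/π ≈ 57.3°
```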
Gradian of an Angle
This unit is rarely used in mathematics. It is also called a gon or a grade.
An angle is equal to 1 gradian if the rotation from the initial to terminal side is 1/400 of the full rotation. Hence, the full angle is equal to 400 gradians.
It is denoted by ‘grad’.
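Since a full rotation is 400 gradians against 360 degrees, converting is a simple scaling. A minimal sketch (the function name deg_to_grad is ours):

```python
def deg_to_grad(deg: float) -> float:
    """Convert degrees to gradians: a full turn is 360° = 400 grad."""
    return deg * 400.0 / 360.0

print(deg_to_grad(90))   # 100.0 -- a right angle is 100 grad
print(deg_to_grad(360))  # 400.0 -- a full rotation
```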
Draw angles using a protractor for the following measurements:
- 45 degrees
- 55 degrees
- 70 degrees
- 90 degrees
- 130 degrees
Frequently Asked Questions – FAQs
What is an angle?
An angle is a geometrical figure formed when two rays are joined at a common endpoint, called the vertex.
What are the six types of angles?
The six types of angles are acute, obtuse, right, straight, reflex and full rotation (complete) angles.
How are angles measured?
Angles are usually measured in degrees with a protractor; they can also be expressed in radians or gradians.
What is the value of an angle equal to 60 degrees, in radians?
Since 180 degrees equals π radians,
60 degrees = 60 × π/180 = π/3 (in radians)
Evolutionary history of life
The evolutionary history of life on Earth traces the processes by which living and fossil organisms have evolved, from the first appearance of life on the planet until the present day. Earth formed about 4.5 Ga (billion years) ago and life appeared on its surface within 1 billion years. The similarities between all present-day organisms indicate the presence of a common ancestor from which all known species have diverged through the process of evolution.
The earliest evidence for life on Earth is graphite found to be biogenic in 3.7 billion-year-old metasedimentary rocks discovered in Western Greenland and microbial mat fossils found in 3.48 billion-year-old sandstone discovered in Western Australia. Microbial mats of coexisting bacteria and archaea were the dominant form of life in the early Archean and many of the major steps in early evolution are thought to have taken place within them. The evolution of oxygenic photosynthesis, around 3.5 Ga, eventually led to the oxygenation of the atmosphere, beginning around 2.4 Ga. The earliest evidence of eukaryotes (complex cells with organelles) dates from 1.85 Ga, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. Later, around 1.7 Ga, multicellular organisms began to appear, with differentiated cells performing specialised functions. Bilateria, animals with a front and a back, appeared by 555 million years ago.
The earliest land plants date back to around 450 Ma (million years ago), although evidence suggests that microbes formed the earliest terrestrial ecosystems at least 2.9 Ga ago. Microbes are thought to have paved the way for the inception of land plants in the Phanerozoic. Land plants were so successful that they are thought to have contributed to the Late Devonian extinction event. Invertebrate animals appeared during the Ediacaran period, while vertebrates originated during the Cambrian explosion. During the Permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the Permian–Triassic extinction event. During the recovery from this catastrophe, archosaurs became the most abundant land vertebrates; one archosaur group, the dinosaurs, dominated the Jurassic and Cretaceous periods. After the Cretaceous–Paleogene extinction event killed off the dinosaurs, mammals increased rapidly in size and diversity. Such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify.
- 1 Earliest history of Earth
- 2 Earliest evidence for life on Earth
- 3 Origins of life on Earth
- 4 Environmental and evolutionary impact of microbial mats
- 5 Diversification of eukaryotes
- 6 Sexual reproduction and multicellular organisms
- 7 Emergence of animals
- 8 Colonization of land
- 9 Dinosaurs, birds and mammals
- 10 Flowering plants
- 11 Social insects
- 12 Humans
- 13 Mass extinctions
- 14 See also
- 15 Footnotes
- 16 References
- 17 Further reading
- 18 External links
Earliest history of Earth
The oldest meteorite fragments found on Earth are about 4.54 billion years old; this, coupled primarily with the dating of ancient lead deposits, has put the estimated age of Earth at around that time. The Moon has the same composition as Earth's crust but does not contain an iron-rich core like the Earth's. Many scientists think that about 40 million years later a body the size of Mars struck the Earth, throwing into orbit crust material that formed the Moon. Another hypothesis is that the Earth and Moon started to coalesce at the same time but the Earth, having much stronger gravity than the early Moon, attracted almost all the iron particles in the area.
Until 2001, the oldest rocks found on Earth were about 3.8 Ga old, leading scientists to believe that the Earth's surface had been molten until then. Accordingly, they named this part of Earth's history the Hadean eon, whose name means "hellish". However, analysis of zircons formed 4.4 billion years ago indicates that Earth's crust solidified about 100 Ma after the planet's formation and that the planet quickly acquired oceans and an atmosphere, which may have been capable of supporting life.
Evidence from the Moon indicates that from 4 billion to 3.8 billion years ago it suffered a Late Heavy Bombardment by debris that was left over from the formation of the Solar System, and the Earth should have experienced an even heavier bombardment due to its stronger gravity. While there is no direct evidence of conditions on Earth 4 billion to 3.8 billion years ago, there is no reason to think that the Earth was not also affected by this late heavy bombardment. This event may well have stripped away any previous atmosphere and oceans; in this case gases and water from comet impacts may have contributed to their replacement, although volcanic outgassing on Earth would have supplied at least half. However, if subsurface microbial life had evolved by this point, it would have survived the bombardment.
Earliest evidence for life on Earth
The earliest identified organisms were minute and relatively featureless, and their fossils look like small rods, which are very difficult to tell apart from structures that arise through abiotic physical processes. The oldest undisputed evidence of life on Earth, interpreted as fossilized bacteria, dates to 3 Ga. Other finds in rocks dated to about 3.5 Ga have been interpreted as bacteria, and geochemical evidence also seemed to show the presence of life 3.8 Ga ago. However, these analyses were closely scrutinized, and non-biological processes were found that could produce all of the reported "signatures of life". While this does not prove that the structures found had a non-biological origin, they cannot be taken as clear evidence for the presence of life. Geochemical signatures from rocks deposited 3.4 Ga ago have also been interpreted as evidence for life, although these claims have not been thoroughly examined by critics.
Origins of life on Earth
Biologists reason that all living organisms on Earth must share a single last universal ancestor, because it would be virtually impossible that two or more separate lineages could have independently developed the many complex biochemical mechanisms common to all living organisms. As previously mentioned, the earliest organisms for which fossil evidence is available are bacteria. The lack of fossil or geochemical evidence for earlier organisms has left plenty of scope for hypotheses, which fall into two main groups: 1) that life arose spontaneously on Earth, or 2) that it was "seeded" from elsewhere in the Universe.
Life "seeded" from elsewhere
The idea that life on Earth was "seeded" from elsewhere in the Universe dates back at least to the Greek philosopher Anaximander in the sixth century BCE. In the twentieth century it was proposed by the physical chemist Svante Arrhenius, by the astronomers Fred Hoyle and Chandra Wickramasinghe, and by molecular biologist Francis Crick and chemist Leslie Orgel. There are three main versions of the "seeded from elsewhere" hypothesis: from elsewhere in our Solar System via fragments knocked into space by a large meteor impact, in which case the most credible sources are Mars and Venus; by alien visitors, possibly as a result of accidental contamination by micro-organisms that they brought with them; and from outside the Solar System but by natural means. Experiments in low Earth orbit, such as EXOSTACK, demonstrated that some micro-organism spores can survive the shock of being catapulted into space and some can survive exposure to outer space radiation for at least 5.7 years. Scientists are divided over the likelihood of life arising independently on Mars, or on other planets in our galaxy.
Independent emergence on Earth
Life on Earth is based on carbon and water. Carbon provides stable frameworks for complex chemicals and can be easily extracted from the environment, especially from carbon dioxide. The only other element with similar chemical properties, silicon, forms much less stable structures and, because most of its compounds are solids, would be more difficult for organisms to extract. Water is an excellent solvent and has two other useful properties: the fact that ice floats enables aquatic organisms to survive beneath it in winter; and its molecules have electrically negative and positive ends, which enables it to form a wider range of compounds than other solvents can. Other good solvents, such as ammonia, are liquid only at such low temperatures that chemical reactions may be too slow to sustain life, and lack water's other advantages. Organisms based on alternative biochemistry may however be possible on other planets.
Research on how life might have emerged from non-living chemicals focuses on three possible starting points: self-replication, an organism's ability to produce offspring that are very similar to itself; metabolism, its ability to feed and repair itself; and external cell membranes, which allow food to enter and waste products to leave, but exclude unwanted substances. Research on abiogenesis still has a long way to go, since theoretical and empirical approaches are only beginning to make contact with each other.
Replication first: RNA world
Even the simplest members of the three modern domains of life use DNA to record their "recipes" and a complex array of RNA and protein molecules to "read" these instructions and use them for growth, maintenance and self-replication. The discovery that some RNA molecules can catalyze both their own replication and the construction of proteins led to the hypothesis of earlier life-forms based entirely on RNA. These ribozymes could have formed an RNA world in which there were individuals but no species, as mutations and horizontal gene transfers would have meant that the offspring in each generation were quite likely to have different genomes from those that their parents started with. RNA would later have been replaced by DNA, which is more stable and therefore can build longer genomes, expanding the range of capabilities a single organism can have. Ribozymes remain as the main components of ribosomes, modern cells' "protein factories".
Although short self-replicating RNA molecules have been artificially produced in laboratories, doubts have been raised about whether natural non-biological synthesis of RNA is possible. The earliest "ribozymes" may have been formed of simpler nucleic acids such as PNA, TNA or GNA, which would have been replaced later by RNA.
In 2003 it was proposed that porous metal sulfide precipitates would assist RNA synthesis at about 100 °C (212 °F) and ocean-bottom pressures near hydrothermal vents. Under this hypothesis, lipid membranes would be the last major cell components to appear and, until then, the protocells would be confined to the pores.
Metabolism first: Iron–sulfur world
A series of experiments starting in 1997 showed that early stages in the formation of proteins from inorganic materials including carbon monoxide and hydrogen sulfide could be achieved by using iron sulfide and nickel sulfide as catalysts. Most of the steps required temperatures of about 100 °C (212 °F) and moderate pressures, although one stage required 250 °C (482 °F) and a pressure equivalent to that found under 7 kilometres (4.3 mi) of rock. Hence it was suggested that self-sustaining synthesis of proteins could have occurred near hydrothermal vents.
Membranes first: Lipid world
It has been suggested that double-walled "bubbles" of lipids like those that form the external membranes of cells may have been an essential first step. Experiments that simulated the conditions of the early Earth have reported the formation of lipids, and these can spontaneously form liposomes, double-walled "bubbles", and then reproduce themselves. Although they are not intrinsically information-carriers as nucleic acids are, they would be subject to natural selection for longevity and reproduction. Nucleic acids such as RNA might then have formed more easily within the liposomes than they would have outside.
The clay theory
RNA is complex and there are doubts about whether it can be produced non-biologically in the wild. Some clays, notably montmorillonite, have properties that make them plausible accelerators for the emergence of an RNA world: they grow by self-replication of their crystalline pattern; they are subject to an analog of natural selection, as the clay "species" that grows fastest in a particular environment rapidly becomes dominant; and they can catalyze the formation of RNA molecules. Although this idea has not become the scientific consensus, it still has active supporters.
Research in 2003 reported that montmorillonite could also accelerate the conversion of fatty acids into "bubbles", and that the "bubbles" could encapsulate RNA attached to the clay. These "bubbles" can then grow by absorbing additional lipids and then divide. The formation of the earliest cells may have been aided by similar processes.
Environmental and evolutionary impact of microbial mats
Microbial mats are multi-layered, multi-species colonies of bacteria and other organisms that are generally only a few millimeters thick, but still contain a wide range of chemical environments, each of which favors a different set of micro-organisms. To some extent each mat forms its own food chain, as the by-products of each group of micro-organisms generally serve as "food" for adjacent groups.
Stromatolites are stubby pillars built as microbes in mats slowly migrate upwards to avoid being smothered by sediment deposited on them by water. There has been vigorous debate about the validity of alleged fossils from before 3 Ga, with critics arguing that so-called stromatolites could have been formed by non-biological processes. In 2006 another find of stromatolites was reported from the same part of Australia as previous ones, in rocks dated to 3.5 Ga.
In modern underwater mats the top layer often consists of photosynthesizing cyanobacteria which create an oxygen-rich environment, while the bottom layer is oxygen-free and often dominated by hydrogen sulfide emitted by the organisms living there. It is estimated that the appearance of oxygenic photosynthesis by bacteria in mats increased biological productivity by a factor of between 100 and 1,000. The reducing agent used by oxygenic photosynthesis is water, which is much more plentiful than the geologically produced reducing agents required by the earlier non-oxygenic photosynthesis. From this point onwards life itself produced significantly more of the resources it needed than did geochemical processes. Oxygen is toxic to organisms that are not adapted to it, but greatly increases the metabolic efficiency of oxygen-adapted organisms. Oxygen became a significant component of Earth's atmosphere about 2.4 Ga. Although eukaryotes may have been present much earlier, the oxygenation of the atmosphere was a prerequisite for the evolution of the most complex eukaryotic cells, from which all multicellular organisms are built. The boundary between oxygen-rich and oxygen-free layers in microbial mats would have moved upwards when photosynthesis shut down overnight, and then downwards as it resumed on the next day. This would have created selection pressure for organisms in this intermediate zone to acquire the ability to tolerate and then to use oxygen, possibly via endosymbiosis, where one organism lives inside another and both of them benefit from their association.
Cyanobacteria have the most complete biochemical "toolkits" of all the mat-forming organisms. Hence they are the most self-sufficient of the mat organisms and were well-adapted to strike out on their own both as floating mats and as the first of the phytoplankton, providing the basis of most marine food chains.
Diversification of eukaryotes
Chromatin, nucleus, endomembrane system, and mitochondria
Eukaryotes may have been present long before the oxygenation of the atmosphere, but most modern eukaryotes require oxygen, which their mitochondria use to fuel the production of ATP, the internal energy supply of all known cells. In the 1970s it was proposed and, after much debate, widely accepted that eukaryotes emerged as a result of a sequence of endosymbioses between "prokaryotes". For example: a predatory micro-organism invaded a large prokaryote, probably an archaean, but the attack was neutralized, and the attacker took up residence and evolved into the first of the mitochondria; one of these chimeras later tried to swallow a photosynthesizing cyanobacterium, but the victim survived inside the attacker and the new combination became the ancestor of plants; and so on. After each endosymbiosis began, the partners would have eliminated unproductive duplication of genetic functions by re-arranging their genomes, a process which sometimes involved transfer of genes between them. Another hypothesis proposes that mitochondria were originally sulfur- or hydrogen-metabolising endosymbionts and became oxygen-consumers later. On the other hand, mitochondria might have been part of eukaryotes' original equipment.
There is debate about when eukaryotes first appeared: the presence of steranes in Australian shales may indicate that eukaryotes were present 2.7 Ga ago; however, an analysis in 2008 concluded that these chemicals infiltrated the rocks less than 2.2 Ga ago and prove nothing about the origins of eukaryotes. Fossils of the alga Grypania have been reported in 1.85 Ga rocks (originally dated to 2.1 Ga but later revised), indicating that eukaryotes with organelles had already evolved. A diverse collection of fossil algae has been found in rocks dated between 1.5 and 1.4 Ga. The earliest known fossils of fungi date from 1.43 Ga.
Plastids are thought to have originated from endosymbiotic cyanobacteria. The symbiosis evolved around 1500 million years ago and enabled eukaryotes to carry out oxygenic photosynthesis. Three evolutionary lineages have since emerged in which the plastids are named differently: chloroplasts in green algae and plants, rhodoplasts in red algae and cyanelles in the glaucophytes.
Sexual reproduction and multicellular organisms
Evolution of sexual reproduction
The defining characteristics of sexual reproduction in eukaryotes are meiosis and fertilization. There is much genetic recombination in this kind of reproduction, in which offspring receive 50% of their genes from each parent, in contrast with asexual reproduction, in which there is no recombination. Bacteria also exchange DNA by bacterial conjugation, the benefits of which include resistance to antibiotics and other toxins, and the ability to utilize new metabolites. However conjugation is not a means of reproduction, and is not limited to members of the same species – there are cases where bacteria transfer DNA to plants and animals.
On the other hand, bacterial transformation is clearly an adaptation for transfer of DNA between bacteria of the same species. Bacterial transformation is a complex process involving the products of numerous bacterial genes and can be regarded as a bacterial form of sex. This process occurs naturally in at least 67 prokaryotic species (in seven different phyla). Sexual reproduction in eukaryotes may have evolved from bacterial transformation. (Also see Evolution of sexual reproduction#Origin of sexual reproduction.)
The disadvantages of sexual reproduction are well known: the genetic reshuffle of recombination may break up favorable combinations of genes; and since males do not directly increase the number of offspring in the next generation, an asexual population can out-breed and displace, in as little as 50 generations, a sexual population that is equal in every other respect. Nevertheless, the great majority of animals, plants, fungi and protists reproduce sexually. There is strong evidence that sexual reproduction arose early in the history of eukaryotes and that the genes controlling it have changed very little since then. How sexual reproduction evolved and survived is an unsolved puzzle.
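To make the arithmetic behind that "50 generations" figure concrete, here is a toy Python model (the starting numbers are illustrative assumptions, not from the source): because every member of an asexual lineage bears offspring, while only half of a sexual population (the females) does, the asexual lineage's numbers roughly double relative to the sexual population each generation.

```python
# Toy model of the "twofold cost of sex". Each sexual pair (2 individuals)
# leaves 2 offspring (replacement), while each asexual individual leaves 2,
# so the asexual lineage doubles relative to the sexual one every generation.
sexual = 1_000_000.0   # stable: births exactly replace the population (assumed size)
asexual = 1.0          # a single asexual founder (assumed)
generations = 0
while asexual < sexual:
    asexual *= 2.0     # every asexual individual reproduces
    generations += 1
print(generations)     # 20 -- a crude lower bound; real dynamics are slower
```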
The Red Queen Hypothesis suggests that sexual reproduction provides protection against parasites, because it is easier for parasites to evolve means of overcoming the defenses of genetically identical clones than those of sexual species that present moving targets, and there is some experimental evidence for this. However there is still doubt about whether it would explain the survival of sexual species if multiple similar clone species were present, as one of the clones may survive the attacks of parasites for long enough to out-breed the sexual species. Furthermore, contrary to the expectations of the Red Queen Hypothesis, Hanley et al. found that the prevalence, abundance and mean intensity of mites was significantly higher in sexual geckos than in asexuals sharing the same habitat. In addition, Parker, after reviewing numerous genetic studies on plant disease resistance, failed to find a single example consistent with the concept that pathogens are the primary selective agent responsible for sexual reproduction in the host.
The Mutation Deterministic Hypothesis assumes that each organism has more than one harmful mutation and that the combined effects of these mutations are more harmful than the sum of the harm done by each individual mutation. If so, sexual recombination of genes will reduce the harm that bad mutations do to offspring and at the same time eliminate some bad mutations from the gene pool by isolating them in individuals that perish quickly because they have an above-average number of bad mutations. However, the evidence suggests that the hypothesis's assumptions are shaky, because many species have on average less than one harmful mutation per individual and no species that has been investigated shows evidence of synergy between harmful mutations. Further criticisms of this hypothesis are discussed in the article Evolution of sexual reproduction#Removal of deleterious genes.
The random nature of recombination causes the relative abundance of alternative traits to vary from one generation to another. This genetic drift is insufficient on its own to make sexual reproduction advantageous, but a combination of genetic drift and natural selection may be sufficient. When chance produces combinations of good traits, natural selection gives a large advantage to lineages in which these traits become genetically linked. On the other hand, the benefits of good traits are neutralized if they appear along with bad traits. Sexual recombination gives good traits the opportunities to become linked with other good traits, and mathematical models suggest this may be more than enough to offset the disadvantages of sexual reproduction. Other combinations of hypotheses that are inadequate on their own are also being examined.
The adaptive function of sex today remains a major unresolved issue in biology. The competing models to explain the adaptive function of sex were reviewed by Birdsell and Wills. The hypotheses discussed above all depend on possible beneficial effects of random genetic variation produced by genetic recombination. An alternative view is that sex arose, and is maintained, as a process for repairing DNA damage, and that the genetic variation produced is an occasionally beneficial byproduct.
Multicellularity
The simplest definitions of "multicellular", for example "having multiple cells", could include colonial cyanobacteria like Nostoc. Even a professional biologist's definition such as "having the same genome but different types of cell" would still include some genera of the green alga Volvox, which have cells that specialize in reproduction. Multicellularity evolved independently in organisms as diverse as sponges and other animals, fungi, plants, brown algae, cyanobacteria, slime moulds and myxobacteria. For the sake of brevity this article focuses on the organisms that show the greatest specialization of cells and variety of cell types, although this approach to the evolution of complexity could be regarded as "rather anthropocentric".
The initial advantages of multicellularity may have included: more efficient sharing of nutrients that are digested outside the cell; increased resistance to predators, many of which attacked by engulfing; the ability to resist currents by attaching to a firm surface; the ability to reach upwards to filter-feed or to obtain sunlight for photosynthesis; the ability to create an internal environment that gives protection against the external one; and even the opportunity for a group of cells to behave "intelligently" by sharing information. These features would also have provided opportunities for other organisms to diversify, by creating more varied environments than flat microbial mats could.
Multicellularity with differentiated cells is beneficial to the organism as a whole but disadvantageous from the point of view of individual cells, most of which lose the opportunity to reproduce themselves. In an asexual multicellular organism, rogue cells which retain the ability to reproduce may take over and reduce the organism to a mass of undifferentiated cells. Sexual reproduction eliminates such rogue cells from the next generation and therefore appears to be a prerequisite for complex multicellularity.
The available evidence indicates that eukaryotes evolved much earlier but remained inconspicuous until a rapid diversification around 1 Ga. The only respect in which eukaryotes clearly surpass bacteria and archaea is their capacity for variety of forms, and sexual reproduction enabled eukaryotes to exploit that advantage by producing organisms with multiple cells that differed in form and function.
The Francevillian Group Fossils, dated to 2.1 Ga, are the earliest known fossil organisms that are clearly multicellular. They may have had differentiated cells. Another early multicellular fossil, Qingshania,[note 1] dated to 1.7 Ga, appears to consist of virtually identical cells. The red alga called Bangiomorpha, dated at 1.2 Ga, is the earliest known organism that certainly has differentiated, specialized cells, and is also the oldest known sexually reproducing organism. The 1.43 billion-year-old fossils interpreted as fungi appear to have been multicellular with differentiated cells. The "string of beads" organism Horodyskia, found in rocks dated from 1.5 Ga to 900 Ma, may have been an early metazoan; however it has also been interpreted as a colonial foraminiferan.
Emergence of animals
Animals are multicellular eukaryotes,[note 2] and are distinguished from plants, algae, and fungi by lacking cell walls. All animals are motile, if only at certain life stages. All animals except sponges have bodies differentiated into separate tissues, including muscles, which move parts of the animal by contracting, and nerve tissue, which transmits and processes signals.
The earliest widely accepted animal fossils are rather modern-looking cnidarians (the group that includes jellyfish, sea anemones and hydras), although these fossils, from the Doushantuo Formation, can only be dated approximately. Their presence implies that the cnidarian and bilaterian lineages had already diverged.
The Ediacara biota, which flourished for the last 40 Ma before the start of the Cambrian, were the first animals more than a very few centimeters long. Many were flat and had a "quilted" appearance, and seemed so strange that there was a proposal to classify them as a separate kingdom, Vendozoa. Others, however, have been interpreted as early molluscs (Kimberella), echinoderms (Arkarua) and arthropods (Spriggina, Parvancorina). There is still debate about the classification of these specimens, mainly because the diagnostic features which allow taxonomists to classify more recent organisms, such as similarities to living organisms, are generally absent in the Ediacarans. However, there seems little doubt that Kimberella was at least a triploblastic bilaterian animal, in other words significantly more complex than cnidarians.
The small shelly fauna are a very mixed collection of fossils found between the Late Ediacaran and Mid Cambrian periods. The earliest, Cloudina, shows signs of successful defense against predation and may indicate the start of an evolutionary arms race. Some tiny Early Cambrian shells almost certainly belonged to molluscs, while the owners of some "armor plates", Halkieria and Microdictyon, were eventually identified when more complete specimens were found in Cambrian lagerstätten that preserved soft-bodied animals.
In the 1970s there was already a debate about whether the emergence of the modern phyla was "explosive" or gradual but hidden by the shortage of Precambrian animal fossils. A re-analysis of fossils from the Burgess Shale lagerstätte increased interest in the issue when it revealed animals, such as Opabinia, which did not fit into any known phylum. At the time these were interpreted as evidence that the modern phyla had evolved very rapidly in the "Cambrian explosion" and that the Burgess Shale's "weird wonders" showed that the Early Cambrian was a uniquely experimental period of animal evolution. Later discoveries of similar animals and the development of new theoretical approaches led to the conclusion that many of the "weird wonders" were evolutionary "aunts" or "cousins" of modern groups – for example that Opabinia was a member of the lobopods, a group which includes the ancestors of the arthropods, and that it may have been closely related to the modern tardigrades. Nevertheless, there is still much debate about whether the Cambrian explosion was really explosive and, if so, how and why it happened and why it appears unique in the history of animals.
Deuterostomes and the first vertebrates
Most of the animals at the heart of the Cambrian explosion debate are protostomes, one of the two main groups of complex animals. The other major group, the deuterostomes, contains invertebrates such as sea stars and urchins (echinoderms), as well as chordates (see below). Many echinoderms have hard calcite "shells", which are fairly common from the Early Cambrian small shelly fauna onwards. Other deuterostome groups are soft-bodied, and most of the significant Cambrian deuterostome fossils come from the Chengjiang fauna, a lagerstätte in China. The chordates are another major deuterostome group: animals with a distinct dorsal nerve cord. Chordates include soft-bodied invertebrates such as tunicates as well as vertebrates – animals with a backbone. While tunicate fossils predate the Cambrian explosion, the Chengjiang fossils Haikouichthys and Myllokunmingia appear to be true vertebrates, and Haikouichthys had distinct vertebrae, which may have been slightly mineralized. Vertebrates with jaws, such as the Acanthodians, first appeared in the Late Ordovician.
Colonization of land
Adaptation to life on land is a major challenge: all land organisms need to avoid drying out, and all those above microscopic size must create special structures to withstand gravity; respiration and gas-exchange systems have to change; and reproductive systems cannot depend on water to carry eggs and sperm towards each other. Although the earliest good evidence of land plants and animals dates back to the Ordovician Period, and a number of microorganism lineages made it onto land much earlier, modern land ecosystems only appeared in the late Devonian.
Evolution of terrestrial antioxidants
Oxygen is a potent oxidant whose accumulation in the terrestrial atmosphere resulted from the development of photosynthesis, over 3 Ga ago, in blue-green algae (cyanobacteria), the most primitive oxygenic photosynthetic organisms. Brown algae (seaweeds) accumulate inorganic mineral antioxidants such as rubidium, vanadium, zinc, iron, copper, molybdenum, selenium and iodine, the last of which is concentrated to more than 30,000 times its concentration in seawater. Protective endogenous antioxidant enzymes and exogenous dietary antioxidants helped to prevent oxidative damage. Most marine mineral antioxidants act in the cells as essential trace elements in redox and antioxidant metallo-enzymes.
When plants and animals began to move from the sea to rivers and land about 500 Ma ago, environmental deficiency of these marine mineral antioxidants, including iodine, was a challenge to the evolution of terrestrial life. Terrestrial plants slowly optimized the production of “new” endogenous antioxidants such as ascorbic acid, polyphenols, flavonoids and tocopherols. A few of these appeared more recently, within the last 200–50 Ma, in the fruits and flowers of angiosperm plants.
In fact, angiosperms (the dominant type of plant today) and most of their antioxidant pigments evolved during the late Jurassic Period. Plants employ antioxidants to defend their structures against reactive oxygen species produced during photosynthesis. Animals are exposed to the same oxidants, and they have evolved endogenous enzymatic antioxidant systems. Iodine is the most primitive and abundant electron-rich essential element in the diet of marine and terrestrial organisms, and as iodide it acts as an electron donor, serving this ancestral antioxidant function in all iodide-concentrating cells, from primitive marine algae to more recent terrestrial vertebrates.
Evolution of soil
Before the colonization of land, soil, a combination of mineral particles and decomposed organic matter, did not exist. Land surfaces would have been either bare rock or unstable sand produced by weathering. Water and any nutrients in it would have drained away very quickly.
Films of cyanobacteria, which are not plants but use the same photosynthesis mechanisms, have been found in modern deserts, but only in areas that are unsuitable for vascular plants. This suggests that microbial mats may have been the first organisms to colonize dry land, possibly in the Precambrian. Mat-forming cyanobacteria could have gradually evolved resistance to desiccation as they spread from the seas to tidal zones and then to land. Lichens, which are symbiotic combinations of a fungus (almost always an ascomycete) and one or more photosynthesizers (green algae or cyanobacteria), are also important colonizers of lifeless environments, and their ability to break down rocks contributes to soil formation in situations where plants cannot survive. The earliest known ascomycete fossils date from the Silurian.
Soil formation would have been very slow until the appearance of burrowing animals, which mix the mineral and organic components of soil and whose feces are a major source of the organic components. Burrows have been found in Ordovician sediments, and are attributed to annelids ("worms") or arthropods.
Plants and the Late Devonian wood crisis
In aquatic algae, almost all cells are capable of photosynthesis and are nearly independent. Life on land required plants to become internally more complex and specialized: photosynthesis was most efficient at the top; roots were required in order to extract water from the ground; the parts in between became supports and transport systems for water and nutrients.
Spores of land plants, possibly rather like liverworts, have been found in Mid Ordovician rocks. In Mid Silurian rocks there are fossils of actual plants, including clubmosses such as Baragwanathia; most were under 10 centimetres (3.9 in) high, and some appear closely related to vascular plants, the group that includes trees.
By the late Devonian, trees such as Archaeopteris were so abundant that they changed river systems from mostly braided to mostly meandering, because their roots bound the soil firmly. In fact they caused a "Late Devonian wood crisis", because:
- They removed more carbon dioxide from the atmosphere, reducing the greenhouse effect and thus causing an ice age in the Carboniferous period. In later ecosystems the carbon dioxide "locked up" in wood is returned to the atmosphere by decomposition of dead wood. However, the earliest fossil evidence of fungi that can decompose wood also comes from the Late Devonian.
- The increasing depth of plants' roots led to more washing of nutrients into rivers and seas by rain. This caused algal blooms whose high consumption of oxygen caused anoxic events in deeper waters, increasing the extinction rate among deep-water animals.
Animals had to change their feeding and excretory systems, and most land animals developed internal fertilization of their eggs. The difference in refractive index between water and air required changes in their eyes. On the other hand, in some ways movement and breathing became easier, and the better transmission of high-frequency sounds in air encouraged the development of hearing.
The oldest known air-breathing animal is Pneumodesmus, an archipolypodan millipede from the Mid Silurian. Its air-breathing, terrestrial nature is evidenced by the presence of spiracles, the openings to tracheal systems. However, some earlier trace fossils from around the Cambrian–Ordovician boundary are interpreted as the tracks of large amphibious arthropods on coastal sand dunes, and may have been made by euthycarcinoids, which are thought to be evolutionary "aunts" of myriapods. Other trace fossils from the Late Ordovician probably represent land invertebrates, and there is clear evidence of numerous arthropods on coasts and alluvial plains shortly before the Silurian–Devonian boundary, including signs that some arthropods ate plants. Arthropods were well pre-adapted to colonise land, because their existing jointed exoskeletons provided protection against desiccation, support against gravity and a means of locomotion that was not dependent on water.
The fossil record of other major invertebrate groups on land is poor: none at all for non-parasitic flatworms, nematodes or nemerteans; some parasitic nematodes have been fossilized in amber; annelid worm fossils are known from the Carboniferous, but they may still have been aquatic animals; the earliest fossils of gastropods on land date from the Late Carboniferous, and this group may have had to wait until leaf litter became abundant enough to provide the moist conditions they need.
The earliest confirmed fossils of flying insects date from the Late Carboniferous, but it is thought that insects developed the ability to fly in the Early Carboniferous or even Late Devonian. This gave them a wider range of ecological niches for feeding and breeding, and a means of escape from predators and from unfavorable changes in the environment. About 99% of modern insect species fly or are descendants of flying species.
Early land vertebrates
Tetrapods, vertebrates with four limbs, evolved from other rhipidistian fish over a relatively short timespan during the Late Devonian. The early groups are grouped together as Labyrinthodontia. They retained aquatic, fry-like tadpoles, a system still seen in modern amphibians. From the 1950s to the early 1980s it was thought that tetrapods evolved from fish that had already acquired the ability to crawl on land, possibly in order to go from a pool that was drying out to one that was deeper. However, in 1987, nearly complete fossils of the Late Devonian Acanthostega showed that this transitional animal had legs and both lungs and gills, but could never have survived on land: its limbs and its wrist and ankle joints were too weak to bear its weight; its ribs were too short to prevent its lungs from being squeezed flat by its weight; and its fish-like tail fin would have been damaged by dragging on the ground. The current hypothesis is that Acanthostega, which was about 1 metre (3.3 ft) long, was a wholly aquatic predator that hunted in shallow water. Its skeleton differed from that of most fish in ways that enabled it to raise its head to breathe air while its body remained submerged: its jaws show modifications that would have enabled it to gulp air; the bones at the back of its skull are locked together, providing strong attachment points for muscles that raised its head; and the head is not joined to the shoulder girdle, so it has a distinct neck.
The Devonian proliferation of land plants may help to explain why air breathing would have been an advantage: leaves falling into streams and rivers would have encouraged the growth of aquatic vegetation; this would have attracted grazing invertebrates and small fish that preyed on them; they would have been attractive prey but the environment was unsuitable for the big marine predatory fish; air-breathing would have been necessary because these waters would have been short of oxygen, since warm water holds less dissolved oxygen than cooler marine water and since the decomposition of vegetation would have used some of the oxygen.
Later discoveries revealed earlier transitional forms between Acanthostega and completely fish-like animals. Unfortunately there is then a gap (Romer's gap) of about 30 Ma between the fossils of ancestral tetrapods and Mid Carboniferous fossils of vertebrates that look well-adapted for life on land. Some of these look like early relatives of modern amphibians, most of which need to keep their skins moist and to lay their eggs in water, while others are accepted as early relatives of the amniotes, whose waterproof skin enables them to live and breed far from water.
Dinosaurs, birds and mammals
Amniotes, whose eggs can survive in dry environments, probably evolved in the Late Carboniferous period. The earliest fossils of the two surviving amniote groups, synapsids and sauropsids, date from around the same time. The synapsid pelycosaurs and their descendants the therapsids are the most common land vertebrates in the best-known Permian fossil beds. However, at the time these were all in temperate zones at middle latitudes, and there is evidence that hotter, drier environments nearer the Equator were dominated by sauropsids and amphibians.
The Permian–Triassic extinction wiped out almost all land vertebrates, as well as the great majority of other life. During the slow recovery from this catastrophe, estimated to have taken 30 million years, a previously obscure sauropsid group became the most abundant and diverse terrestrial vertebrates: a few fossils of archosauriformes ("ruling lizard forms") have been found in Late Permian rocks, but, by the Mid Triassic, archosaurs were the dominant land vertebrates. Dinosaurs distinguished themselves from other archosaurs in the Late Triassic and became the dominant land vertebrates of the Jurassic and Cretaceous periods.
During the Late Jurassic, birds evolved from small, predatory theropod dinosaurs. The first birds inherited teeth and long, bony tails from their dinosaur ancestors, but some had developed horny, toothless beaks by the very Late Jurassic and short pygostyle tails by the Early Cretaceous.
While the archosaurs and dinosaurs were becoming more dominant in the Triassic, the mammaliaform successors of the therapsids evolved into small, mainly nocturnal insectivores. This ecological role may have promoted the evolution of mammals; for example, nocturnal life may have accelerated the development of endothermy ("warm-bloodedness") and hair or fur. By the Early Jurassic there were animals that were very like today's mammals in a number of respects. Unfortunately, there is a gap in the fossil record throughout the Mid Jurassic. However, fossil teeth discovered in Madagascar indicate that the split between the lineage leading to monotremes and the one leading to other living mammals had occurred by that time. After dominating land vertebrate niches for about 150 Ma, the dinosaurs perished in the Cretaceous–Paleogene extinction along with many other groups of organisms. Mammals throughout the time of the dinosaurs had been restricted to a narrow range of taxa, sizes and shapes, but increased rapidly in size and diversity after the extinction, with bats taking to the air within 13 Ma and cetaceans taking to the sea within 15 Ma.
Flowering plants
The first flowering plants appeared around 130 million years ago. The 250,000 to 400,000 species of flowering plants outnumber all other ground plants combined and are the dominant vegetation in most terrestrial ecosystems. There is fossil evidence that flowering plants diversified rapidly in the Early Cretaceous and that their rise was associated with that of pollinating insects. Among modern flowering plants, Magnolias are thought to be close to the common ancestor of the group. However, paleontologists have not succeeded in identifying the earliest stages in the evolution of flowering plants.
Social insects
The social insects are remarkable because the great majority of individuals in each colony are sterile. This appears contrary to basic concepts of evolution such as natural selection and the selfish gene. In fact, there are very few eusocial insect species: only 15 out of approximately 2,600 living families of insects contain eusocial species, and it seems that eusociality has evolved independently only 12 times among arthropods, although some eusocial lineages have diversified into several families. Nevertheless, social insects have been spectacularly successful; for example, although ants and termites account for only about 2% of known insect species, they form over 50% of the total mass of insects. Their ability to control a territory appears to be the foundation of their success.
The sacrifice of breeding opportunities by most individuals has long been explained as a consequence of these species' unusual haplodiploid method of sex determination, which has the paradoxical consequence that two sterile worker daughters of the same queen share more genes with each other than they would with their offspring if they could breed. However Wilson and Hölldobler argue that this explanation is faulty: for example, it is based on kin selection, but there is no evidence of nepotism in colonies that have multiple queens. Instead, they write, eusociality evolves only in species that are under strong pressure from predators and competitors, but in environments where it is possible to build "fortresses"; after colonies have established this security, they gain other advantages through co-operative foraging. In support of this explanation they cite the appearance of eusociality in bathyergid mole rats, which are not haplodiploid.
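A quick check of the relatedness arithmetic behind the haplodiploidy argument, as a Python sketch (it assumes a singly mated queen; the coefficients are standard kin-selection values, not figures from the source):

```python
# Under haplodiploidy, fathers are haploid: every daughter inherits his
# entire genome, plus a random half of her diploid mother's genome.
paternal_share = 0.5 * 1.0   # half of a daughter's genes, identical across sisters
maternal_share = 0.5 * 0.5   # the other half matches a sister's genes 50% of the time
r_full_sisters = paternal_share + maternal_share   # 0.75
r_mother_offspring = 0.5                           # ordinary diploid inheritance
print(r_full_sisters, r_mother_offspring)          # 0.75 0.5
# Sisters are closer kin to each other (r = 3/4) than they would be to
# their own offspring (r = 1/2) -- the "paradoxical consequence" above.
```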
The earliest fossils of insects have been found in Early Devonian rocks, which preserve only a few varieties of flightless insect. The Mazon Creek lagerstätten from the Late Carboniferous include about 200 species, some gigantic by modern standards, and indicate that insects had occupied their main modern ecological niches as herbivores, detritivores and insectivores. Social termites and ants first appear in the Early Cretaceous, and advanced social bees have been found in Late Cretaceous rocks but did not become abundant until the Mid Cenozoic.
Humans
The idea that, along with other life forms, modern-day humans evolved from an ancient, common ancestor was proposed by Robert Chambers in 1844 and taken up by Charles Darwin in 1871. Modern humans evolved from a lineage of upright-walking apes that has been traced back to Sahelanthropus. The first known stone tools were made, apparently by Australopithecus garhi, and were found near animal bones that bear scratches made by these tools. The earliest hominines had chimp-sized brains, but there has been a fourfold increase in the last 3 Ma; a statistical analysis suggests that hominine brain sizes depend almost completely on the date of the fossils, while the species to which they are assigned has only slight influence. There is a long-running debate about whether modern humans evolved all over the world simultaneously from existing advanced hominines or are descendants of a single small population in Africa, which then migrated all over the world less than 200,000 years ago and replaced previous hominine species. There is also debate about whether anatomically modern humans had an intellectual, cultural and technological "Great Leap Forward" under 100,000 years ago and, if so, whether this was due to neurological changes that are not visible in fossils.
Mass extinctions
Life on Earth has suffered occasional mass extinctions at least since the Cambrian. Although they were disasters at the time, mass extinctions have sometimes accelerated the evolution of life on Earth. When dominance of particular ecological niches passes from one group of organisms to another, it is rarely because the new dominant group is "superior" to the old and usually because an extinction event eliminates the old dominant group and makes way for the new one.
The fossil record appears to show that the gaps between mass extinctions are becoming longer and the average and background rates of extinction are decreasing. Both of these phenomena could be explained in one or more ways:
- The oceans may have become more hospitable to life over the last 500 Ma and less vulnerable to mass extinctions: dissolved oxygen became more widespread and penetrated to greater depths; the development of life on land reduced the run-off of nutrients and hence the risk of eutrophication and anoxic events; and marine ecosystems became more diversified so that food chains were less likely to be disrupted.
- Reasonably complete fossils are very rare, most extinct organisms are represented only by partial fossils, and complete fossils are rarest in the oldest rocks. So paleontologists have mistakenly assigned parts of the same organism to different genera, which were often defined solely to accommodate these finds – the story of Anomalocaris is an example of this. The risk of this mistake is higher for older fossils because these are often unlike parts of any living organism. Many of the "superfluous" genera are represented by fragments which are not found again and the "superfluous" genera appear to become extinct very quickly.
Biodiversity in the fossil record is "the number of distinct genera alive at any given time; that is, those whose first occurrence predates and whose last occurrence postdates that time".
See also
- Constructal law
- Evolution of mammals
- Evolution of sexual reproduction
- Evolutionary history of plants
- Evolution of viruses
- History of evolutionary thought
- On the Origin of Species
- Taxonomy of commonly fossilised invertebrates
- Timeline of evolutionary history of life
- Treatise on Invertebrate Paleontology
Footnotes
- Name given as in Butterfield's paper "Bangiomorpha pubescens ..." (2000). A fossil fish, also from China, has also been named Qingshania. The name of one of these will have to change.
- Myxozoa were thought to be an exception, but are now thought to be heavily modified members of the Cnidaria: Jímenez-Guri, E., Philippe, H., Okamura, B. and Holland, P. W. H. (July 2007). "Buddenbrockia is a cnidarian worm". Science 317 (116): 116–118. Bibcode:2007Sci...317..116J. doi:10.1126/science.1142024. PMID 17615357. Retrieved 2008-09-03.
References
- Beraldi-Campesi, H. "Early life on land and the first terrestrial ecosystems". Ecological Processes 2:1. doi:10.1186/2192-1709-2-1
- Futuyma, Douglas J. (2005). Evolution. Sunderland, Massachusetts: Sinauer Associates, Inc. ISBN 0-87893-187-2.
- Yoko Ohtomo, Takeshi Kakegawa, Akizumi Ishida, Toshiro Nagase, Minik T. Rosing (8 December 2013). "Evidence for biogenic graphite in early Archaean Isua metasedimentary rocks". Nature Geoscience. doi:10.1038/ngeo2025. Retrieved 9 Dec 2013.
- Borenstein, Seth (13 November 2013). "Oldest fossil found: Meet your microbial mom". AP News. Retrieved 15 November 2013.
- Noffke, Nora; Christian, Daniel; Wacey, David; Hazen, Robert M. (8 November 2013). "Microbially Induced Sedimentary Structures Recording an Ancient Ecosystem in the ca. 3.48 Billion-Year-Old Dresser Formation, Pilbara, Western Australia". Astrobiology 13 (12): 1103–24. Bibcode:2013AsBio..13.1103N. doi:10.1089/ast.2013.1030. PMC 3870916. PMID 24205812. Retrieved 15 November 2013.
- Nisbet, E.G., and Fowler, C.M.R. (December 7, 1999). "Archaean metabolic evolution of microbial mats". Proceedings of the Royal Society B 266 (1436): 2375. doi:10.1098/rspb.1999.0934. PMC 1690475. - abstract with link to free full content (PDF)
- Anbar, A.; Duan, Y.; Lyons, T.; Arnold, G.; Kendall, B.; Creaser, R.; Kaufman, A.; Gordon, G.; Scott, C.; Garvin, J.; Buick, R. (2007). "A whiff of oxygen before the great oxidation event?". Science 317 (5846): 1903–1906. Bibcode:2007Sci...317.1903A. doi:10.1126/science.1140325. PMID 17901330.
- Knoll, Andrew H.; Javaux, E.J, Hewitt, D. and Cohen, P. (2006). "Eukaryotic organisms in Proterozoic oceans". Philosophical Transactions of the Royal Society B 361 (1470): 1023–38. doi:10.1098/rstb.2006.1843. PMC 1578724. PMID 16754612.
- Fedonkin, M. A. (March 2003). "The origin of the Metazoa in the light of the Proterozoic fossil record" (PDF). Paleontological Research 7 (1): 9–41. doi:10.2517/prpsj.7.9. Retrieved 2008-09-02.
- Bonner, J.T. (1998) The origins of multicellularity. Integr. Biol. 1, 27–36
- Fedonkin, M. A.; Simonetta, A.; Ivantsov, A. Y. (2007). "New data on Kimberella, the Vendian mollusc-like organism (White Sea region, Russia): palaeoecological and evolutionary implications". Geological Society, London, Special Publication2 286: 157–179. Bibcode:2007GSLSP.286..157F. doi:10.1144/SP286.12. Retrieved May 16, 2013.
- "The oldest fossils reveal evolution of non-vascular plants by the middle to late Ordovician Period (~450-440 m.y.a.) on the basis of fossil spores" Transition of plants to land
- "Early life on land and the first terrestrial ecosystems"
- Algeo, T.J.; Scheckler, S. E. (1998). "Terrestrial-marine teleconnections in the Devonian: links between the evolution of land plants, weathering processes, and marine anoxic events". Philosophical Transactions of the Royal Society B 353 (1365): 113–130. doi:10.1098/rstb.1998.0195.
- Chen, J-Y.; Oliveri, P; Li, CW; Zhou, GQ; Gao, F; Hagadorn, JW; Peterson, KJ; Davidson, EH (2000). "Putative phosphatized embryos from the Doushantuo Formation of China". Proceedings of the National Academy of Sciences 97 (9): 4457–4462. Bibcode:2000PNAS...97.4457C. doi:10.1073/pnas.97.9.4457. PMC 18256. PMID 10781044. Retrieved 2009-04-30.
- Shu et al. (November 4, 1999). "Lower Cambrian vertebrates from south China". Nature 402 (6757): 42–46. Bibcode:1999Natur.402...42S. doi:10.1038/46965.
- Hoyt, Donald F. (1997). "Synapsid Reptiles".
- Barry, Patrick L. (January 28, 2002). "The Great Dying". Science@NASA. Science and Technology Directorate, Marshall Space Flight Center, NASA. Retrieved March 26, 2009.
- Tanner LH, Lucas SG & Chapman MG (2004). "Assessing the record and causes of Late Triassic extinctions" (PDF). Earth-Science Reviews 65 (1–2): 103–139. Bibcode:2004ESRv...65..103T. doi:10.1016/S0012-8252(03)00082-5. Archived from the original on October 25, 2007. Retrieved 2007-10-22.
- Benton, M.J. (2004). Vertebrate Palaeontology. Blackwell Publishers. ISBN 0-632-05614-2.
- Fastovsky DE, Sheehan PM (2005). "The extinction of the dinosaurs in North America". GSA Today 15 (3): 4–10. doi:10.1130/1052-5173(2005)015<4:TEOTDI>2.0.CO;2. ISSN 1052-5173. Retrieved 2007-05-18.
- "Dinosaur Extinction Spurred Rise of Modern Mammals". News.nationalgeographic.com. Retrieved 2009-03-08.
- Van Valkenburgh, B. (1999). "Major patterns in the history of carnivorous mammals". Annual Review of Earth and Planetary Sciences 27: 463–493. Bibcode:1999AREPS..27..463V. doi:10.1146/annurev.earth.27.1.463.
- El Albani, Abderrazak; Bengtson, Stefan; Canfield, Donald E.; Bekker, Andrey; Macchiarelli, Reberto; Mazurier, Arnaud; Hammarlund, Emma U.; Boulvais, Philippe; Dupuy, Jean-Jacques (July 2010). "Large colonial organisms with coordinated growth in oxygenated environments 2.1 Gyr ago". Nature 466 (7302): 100–104. Bibcode:2010Natur.466..100A. doi:10.1038/nature09166. PMID 20596019.
- Dalrymple, G.B. (1991). The Age of the Earth. California: Stanford University Press. ISBN 0-8047-1569-6.
- Newman, W.L. (July 2007). "Age of the Earth". Publications Services, USGS. Retrieved 2008-08-29.
- Dalrymple, G.B. (2001). "The age of the Earth in the twentieth century: a problem (mostly) solved". Geological Society, London, Special Publications 190 (1): 205–221. Bibcode:2001GSLSP.190..205D. doi:10.1144/GSL.SP.2001.190.01.14. Retrieved 2007-09-20.
- Galimov, E.M. and Krivtsov, A.M. (December 2005). "Origin of the Earth-Moon System". J. Earth Syst. Sci. 114 (6): 593–600. Bibcode:2005JESS..114..593G. doi:10.1007/BF02715942.
- Dalrymple, G.B. (1991). The Age of the Earth. California: Stanford University Press. ISBN 0-8047-1569-6.
- Newman, W.L. (July 2007). "Age of the Earth". Publications Services, USGS. Retrieved 2008-08-29.
- Dalrymple, G.B. (2001). "The age of the Earth in the twentieth century: a problem (mostly) solved". Geological Society, London, Special Publications 190 (1): 205–221. Bibcode:2001GSLSP.190..205D. doi:10.1144/GSL.SP.2001.190.01.14. Retrieved 2007-09-20.
- Cohen, B.A., Swindle, T.D. and Kring, D.A. (December 2000). "Support for the Lunar Cataclysm Hypothesis from Lunar Meteorite Impact Melt Ages". Science 290 (5497): 1754–1756. Bibcode:2000Sci...290.1754C. doi:10.1126/science.290.5497.1754. PMID 11099411. Retrieved 2008-08-31.
- "Early Earth Likely Had Continents And Was Habitable". University of Colorado. 2005-11-17. Retrieved 2009-01-11.
- Cavosie, A.J., Valley, J.W., Wilde, S. A. and the Edinburgh Ion Microprobe Facility (July 15, 2005). "Magmatic δ18O in 4400-3900 Ma detrital zircons: A record of the alteration and recycling of crust in the Early Archean". Earth and Planetary Science Letters 235 (3–4): 663–681. Bibcode:2005E&PSL.235..663C. doi:10.1016/j.epsl.2005.04.028.
- Britt, R.R. (2002-07-24). "Evidence for Ancient Bombardment of Earth". Space.com. Retrieved 2006-04-15.
- Valley, J.W., Peck, W.H., King, E.M. and Wilde, S.A. (April 2002). "A cool early Earth" (PDF). Geology 30 (4): 351–354. Bibcode:2002Geo....30..351V. doi:10.1130/0091-7613(2002)030<0351:ACEE>2.0.CO;2. ISSN 0091-7613. Retrieved 2008-09-13.
- Dauphas, N., Robert, F. and Marty, B. (December 2000). "The Late Asteroidal and Cometary Bombardment of Earth as Recorded in Water Deuterium to Protium Ratio". Icarus 148 (2): 508–512. Bibcode:2000Icar..148..508D. doi:10.1006/icar.2000.6489.
- Scalice, Daniella (May 20, 2009). "Microbial Habitability During the Late Heavy Bombardment". Astrobiology (NASA). Retrieved May 18, 2013.
- Brasier, M., McLoughlin, N., Green, O. and Wacey, D. (June 2006). "A fresh look at the fossil evidence for early Archaean cellular life" (PDF). Philosophical Transactions of the Royal Society B 361 (1470): 887–902. doi:10.1098/rstb.2006.1835. PMC 1578727. PMID 16754605. Retrieved 2008-08-30.
- Schopf, J. W. (April 1993). "Microfossils of the Early Archean Apex Chert: New Evidence of the Antiquity of Life". Science 260 (5108): 640–646. Bibcode:1993Sci...260..640S. doi:10.1126/science.260.5108.640. PMID 11539831. Retrieved 2008-08-30.
- Altermann, W. and Kazmierczak, J. (2003). "Archean microfossils: a reappraisal of early life on Earth". Res Microbiol 154 (9): 611–7. doi:10.1016/j.resmic.2003.08.006. PMID 14596897.
- Mojzsis, S.J., Arrhenius, G., McKeegan, K.D., Harrison, T.M., Nutman, A.P. and Friend, C.R.L. (November 1996). "Evidence for life on Earth before 3.8 Ga". Nature 384 (6604): 55–59. Bibcode:1996Natur.384...55M. doi:10.1038/384055a0. PMID 8900275. Retrieved 2008-08-30.
- Grotzinger, J.P. and Rothman, D.H. (1996). "An abiotic model for stromatolite morphogenesis". Nature 383 (6599): 423–425. Bibcode:1996Natur.383..423G. doi:10.1038/383423a0.
- Fedo, C.M. and Whitehouse, M.J. (May 2002). "Metasomatic Origin of Quartz-Pyroxene Rock, Akilia, Greenland, and Implications for Earth's Earliest Life". Science 296 (5572): 1448–1452. Bibcode:2002Sci...296.1448F. doi:10.1126/science.1070336. PMID 12029129. Retrieved 2008-08-30.
- Lepland, A., van Zuilen, M.A., Arrhenius, G., Whitehouse, M.J. and Fedo, C.M. (January 2005). "Questioning the evidence for Earth's earliest life — Akilia revisited". Geology 33 (1): 77–79. Bibcode:2005Geo....33...77L. doi:10.1130/G20890.1. Retrieved 2008-08-30.
- Schopf, J. (2006). "Fossil evidence of Archaean life". Philosophical Transactions of the Royal Society B 361 (1470): 869–85. doi:10.1098/rstb.2006.1834. PMC 1578735. PMID 16754604.
- Ciccarelli, F.D., Doerks, T., von Mering, C., Creevey, C.J. et al. (2006). "Toward automatic reconstruction of a highly resolved tree of life". Science 311 (5765): 1283–7. Bibcode:2006Sci...311.1283C. doi:10.1126/science.1123061. PMID 16513982.
- Mason, S.F. (1984). "Origins of biomolecular handedness". Nature 311 (5981): 19–23. Bibcode:1984Natur.311...19M. doi:10.1038/311019a0. PMID 6472461.
- Orgel, L.E. (October 1994). "The origin of life on the earth" (PDF). Scientific American 271 (4): 76–83. doi:10.1038/scientificamerican1094-76. PMID 7524147. Retrieved 2008-08-30. Also available as a web page
- Needs citation
- O'Leary, M.R. (2008). Anaxagoras and the Origin of Panspermia Theory. iUniverse, Inc. ISBN 0-595-49596-6.
- Arrhenius, S. (1903). "The Propagation of Life in Space". p. 32. Bibcode:1980qel..book...32A. Reprinted in Goldsmith, D., (ed.). The Quest for Extraterrestrial Life. University Science Books. ISBN 0-19-855704-3.
- Hoyle, F. and Wickramasinghe, C. (1979). "On the Nature of Interstellar Grains". Astrophysics and Space Science 66: 77–90. Bibcode:1979Ap&SS..66...77H. doi:10.1007/BF00648361.
- Crick, F.H.; Orgel, L.E. (1973). "Directed Panspermia". Icarus 19 (3): 341–348. Bibcode:1973Icar...19..341C. doi:10.1016/0019-1035(73)90110-3.
- Warmflash, D. and Weiss, B. (November 2005). "Did Life Come From Another World?". Scientific American 293 (5): 64–71. doi:10.1038/scientificamerican1105-64. Retrieved 2008-09-02.
- Wickramasinghe, N. C.; Wickramasinghe, J. T. (2008). "On the possibility of microbiota transfer from Venus to Earth". Astrophysics and Space Science 317 (1–2): 133–137. Bibcode:2008Ap&SS.317..133W. doi:10.1007/s10509-008-9851-2.
- Paul Clancy (Jun 23, 2005). Looking for Life, Searching the Solar System. Cambridge University Press.
- Horneck, Gerda; David M. Klaus and Rocco L. Mancinelli. (March 2010). "Space Microbiology". Microbiology and Molecular Biology Reviews 74 (1): 121–156. doi:10.1128/mmbr.00016-09. Retrieved 2013-07-29.
- Ker, Than (August 2007). "Claim of Martian Life Called 'Bogus'". space.com. Retrieved 2008-09-02.
- Bennett, J. O. (2008). "What is life?". Beyond UFOs: The Search for Extraterrestrial Life and Its Astonishing Implications for Our Future. Princeton University Press. pp. 82–85. ISBN 0-691-13549-5. Retrieved 2009-01-11.
- Schulze-Makuch, D., Irwin, L. N. (April 2006). "The prospect of alien life in exotic forms on other worlds". Naturwissenschaften 93 (4): 155–72. Bibcode:2006NW.....93..155S. doi:10.1007/s00114-005-0078-6. PMID 16525788.
- Peretó, J. (2005). "Controversies on the origin of life" (PDF). Int. Microbiol. 8 (1): 23–31. PMID 15906258. Retrieved 2007-10-07.
- Szathmáry, E. (February 2005). "Life: In search of the simplest cell". Nature 433 (7025): 469–470. Bibcode:2005Natur.433..469S. doi:10.1038/433469a. PMID 15690023. Retrieved 2008-09-01.
- Luisi, P. L., Ferri, F. and Stano, P. (2006). "Approaches to semi-synthetic minimal cells: a review". Naturwissenschaften 93 (1): 1–13. Bibcode:2006NW.....93....1L. doi:10.1007/s00114-005-0056-z. PMID 16292523.
- Joyce, G.F. (2002). "The antiquity of RNA-based evolution". Nature 418 (6894): 214–21. Bibcode:2002Natur.418..214J. doi:10.1038/418214a. PMID 12110897.
- Hoenigsberg, H. (December 2003). "Evolution without speciation but with selection: LUCA, the Last Universal Common Ancestor in Gilbert's RNA world". Genetic and Molecular Research 2 (4): 366–375. PMID 15011140. Retrieved 2008-08-30.(also available as PDF)
- Trevors, J. T. and Abel, D. L. (2004). "Chance and necessity do not explain the origin of life". Cell Biol. Int. 28 (11): 729–39. doi:10.1016/j.cellbi.2004.06.006. PMID 15563395.
- Forterre, P., Benachenhou-Lahfa, N., Confalonieri, F., Duguet, M., Elie, C. and Labedan, B. (1992). "The nature of the last universal ancestor and the root of the tree of life, still open questions". BioSystems 28 (1–3): 15–32. doi:10.1016/0303-2647(92)90004-I. PMID 1337989.
- Cech, T.R. (August 2000). "The ribosome is a ribozyme". Science 289 (5481): 878–9. doi:10.1126/science.289.5481.878. PMID 10960319. Retrieved 2008-09-01.
- Johnston, W. K. et al. (2001). "RNA-Catalyzed RNA Polymerization: Accurate and General RNA-Templated Primer Extension". Science 292 (5520): 1319–1325. Bibcode:2001Sci...292.1319J. doi:10.1126/science.1060786. PMID 11358999.
- Levy, M. and Miller, S.L. (July 1998). "The stability of the RNA bases: Implications for the origin of life". Proc. Natl. Acad. Sci. U.S.A. 95 (14): 7933–8. Bibcode:1998PNAS...95.7933L. doi:10.1073/pnas.95.14.7933. PMC 20907. PMID 9653118.
- Larralde, R., Robertson, M. P. and Miller, S. L. (August 1995). "Rates of decomposition of ribose and other sugars: implications for chemical evolution". Proc. Natl. Acad. Sci. U.S.A. 92 (18): 8158–60. Bibcode:1995PNAS...92.8158L. doi:10.1073/pnas.92.18.8158. PMC 41115. PMID 7667262.
- Lindahl, T. (April 1993). "Instability and decay of the primary structure of DNA". Nature 362 (6422): 709–15. Bibcode:1993Natur.362..709L. doi:10.1038/362709a0. PMID 8469282.
- Orgel, L. (November 2000). "Origin of life. A simpler nucleic acid". Science 290 (5495): 1306–7. doi:10.1126/science.290.5495.1306. PMID 11185405.
- Nelson, K.E., Levy, M., and Miller, S.L. (April 2000). "Peptide nucleic acids rather than RNA may have been the first genetic molecule". Proc. Natl. Acad. Sci. U.S.A. 97 (8): 3868–71. Bibcode:2000PNAS...97.3868N. doi:10.1073/pnas.97.8.3868. PMC 18108. PMID 10760258.
- Martin, W. and Russell, M.J. (2003). "On the origins of cells: a hypothesis for the evolutionary transitions from abiotic geochemistry to chemoautotrophic prokaryotes, and from prokaryotes to nucleated cells". Philosophical Transactions of the Royal Society B 358 (1429): 59–85. doi:10.1098/rstb.2002.1183. PMC 1693102. PMID 12594918.
- Wächtershäuser, G. (August 2000). "Origin of life. Life as we don't know it". Science 289 (5483): 1307–8. doi:10.1126/science.289.5483.1307. PMID 10979855.
- Trevors, J.T. and Psenner, R. (2001). "From self-assembly of life to present-day bacteria: a possible role for nanocells". FEMS Microbiol. Rev. 25 (5): 573–82. doi:10.1111/j.1574-6976.2001.tb00592.x. PMID 11742692.
- Segré, D., Ben-Eli, D., Deamer, D. and Lancet, D. (February–April 2001). "The Lipid World" (PDF). Origins of Life and Evolution of Biospheres 2001 31 (1–2): 119–45. doi:10.1023/A:1006746807104. PMID 11296516. Retrieved 2008-09-01.
- Cairns-Smith, A.G. (1968). "An approach to a blueprint for a primitive organism". In Waddington, C,H. Towards a Theoretical Biology 1. Edinburgh University Press. pp. 57–66.
- Ferris, J.P. (June 1999). "Prebiotic Synthesis on Minerals: Bridging the Prebiotic and RNA Worlds". Biological Bulletin. Evolution: A Molecular Point of View (Biological Bulletin, Vol. 196, No. 3) 196 (3): 311–314. doi:10.2307/1542957. JSTOR 1542957. PMID 10390828.
- Hanczyc, M.M., Fujikawa, S.M. and Szostak, Jack W. (October 2003). "Experimental Models of Primitive Cellular Compartments: Encapsulation, Growth, and Division". Science 302 (5645): 618–622. Bibcode:2003Sci...302..618H. doi:10.1126/science.1089904. PMID 14576428. Retrieved 2008-09-01.
- Hartman, H. (October 1998). "Photosynthesis and the Origin of Life". Origins of Life and Evolution of Biospheres 28 (4–6): 512–521. Bibcode:1998OLEB...28..515H. doi:10.1023/A:1006548904157. Retrieved 2008-09-01.
- Krumbein, W.E., Brehm, U., Gerdes, G., Gorbushina, A.A., Levit, G. and Palinska, K.A. (2003). "Biofilm, Biodictyon, Biomat Microbialites, Oolites, Stromatolites, Geophysiology, Global Mechanism, Parahistology" (PDF). In Krumbein, W.E., Paterson, D.M., and Zavarzin, G.A. Fossil and Recent Biofilms: A Natural History of Life on Earth. Kluwer Academic. pp. 1–28. ISBN 1-4020-1597-6. Archived from the original on January 6, 2007. Retrieved 2008-07-09.
- Risatti, J. B., Capman, W. C. and Stahl, D. A. (October 11, 1994). "Community structure of a microbial mat: the phylogenetic dimension" (PDF). Proceedings of the National Academy of Sciences 91 (21): 10173–10177. Bibcode:1994PNAS...9110173R. doi:10.1073/pnas.91.21.10173. PMC 44980. PMID 7937858. Retrieved 2008-07-09.
- (the editor) (June 2006). "Editor's Summary: Biodiversity rocks". Nature 441 (7094). Retrieved 2009-01-10.
- Allwood, A. C., Walter, M. R., Kamber, B. S., Marshall, C. P. and Burch, I. W. (June 2006). "Stromatolite reef from the Early Archaean era of Australia". Nature 441 (7094): 714–718. Bibcode:2006Natur.441..714A. doi:10.1038/nature04764. PMID 16760969. Retrieved 2008-08-31.
- Blankenship, R.E. (1 January 2001). "Molecular evidence for the evolution of photosynthesis". Trends in Plant Science 6 (1): 4–6. doi:10.1016/S1360-1385(00)01831-8. PMID 11164357. Retrieved 2008-07-14.
- Hoehler, T.M., Bebout, B.M. and Des Marais, D.J. (19 July 2001). "The role of microbial mats in the production of reduced gases on the early Earth". Nature 412 (6844): 324–327. doi:10.1038/35085554. PMID 11460161. Retrieved 2008-07-14.
- Abele, D. (7 November 2002). "Toxic oxygen: The radical life-giver". Nature 420 (27): 27. Bibcode:2002Natur.420...27A. doi:10.1038/420027a. PMID 12422197. Retrieved 2008-07-14.
- "Introduction to Aerobic Respiration". University of California, Davis. Archived from the original on October 29, 2007. Retrieved 2008-07-14.
- Goldblatt, C., Lenton, T.M. and Watson, A.J. (2006). "The Great Oxidation at ~2.4 Ga as a bistability in atmospheric oxygen due to UV shielding by ozone" (PDF). Geophysical Research Abstracts 8 (770). Retrieved 2008-09-01.
- Glansdorff, N., Xu, Y. and Labedan, B. (2008). "The Last Universal Common Ancestor: emergence, constitution and genetic legacy of an elusive forerunner". Biology Direct 3 (29): 29. doi:10.1186/1745-6150-3-29. PMC 2478661. PMID 18613974.
- Brocks, J. J., Logan, G. A., Buick, R. and Summons, R. E. (1999). "Archaean molecular fossils and the rise of eukaryotes". Science 285 (5430): 1033–1036. doi:10.1126/science.285.5430.1033. PMID 10446042. Retrieved 2008-09-02.
- Hedges, S. B., Blair, J. E., Venturi, M. L. and Shoe, J. L (January 2004). "A molecular timescale of eukaryote evolution and the rise of complex multicellular life". BMC Evolutionary Biology 4: 2. doi:10.1186/1471-2148-4-2. PMC 341452. PMID 15005799. Retrieved 2008-07-14.
- Burki, F., Shalchian-Tabrizi, K., Minge, M., Skjæveland, Å., Nikolaev (2007). Butler, Geraldine, ed. "Phylogenomics Reshuffles the Eukaryotic Supergroups". PLoS ONE 2 (8): e790. Bibcode:2007PLoSO...2..790B. doi:10.1371/journal.pone.0000790. PMC 1949142. PMID 17726520.
- Parfrey, L. W., Barbero, E., Lasser, E., Dunthorn, M., Bhattacharya, D., Patterson, D.J. and Katz, L.A. (December 2006). "Evaluating Support for the Current Classification of Eukaryotic Diversity". PLoS Genetics 2 (12): e220. doi:10.1371/journal.pgen.0020220. PMC 1713255. PMID 17194223.
- Margulis, L. (1981). Symbiosis in cell evolution. San Francisco: W.H. Freeman. ISBN 0-7167-1256-3.
- Vellai, T. and Vida, G. (1999). "The origin of eukaryotes: the difference between prokaryotic and eukaryotic cells". Proceedings of the Royal Society B 266 (1428): 1571–1577. doi:10.1098/rspb.1999.0817. PMC 1690172. PMID 10467746.
- Selosse, M-A., Abert, B., and Godelle, B. (2001). "Reducing the genome size of organelles favours gene transfer to the nucleus". Trends in ecology & evolution 16 (3): 135–141. doi:10.1016/S0169-5347(00)02084-X. Retrieved 2008-09-02.
- Pisani, D., Cotton, J.A. and McInerney, J.O. (2007). "Supertrees disentangle the chimerical origin of eukaryotic genomes". Mol Biol Evol. 24 (8): 1752–60. doi:10.1093/molbev/msm095. PMID 17504772.
- Gray, M.W., Burger, G., and Lang, B.F. (1999). "Mitochondrial evolution". Science 283 (5407): 1476–1481. Bibcode:1999Sci...283.1476G. doi:10.1126/science.283.5407.1476. PMID 10066161. Retrieved 2008-09-02.
- Rasmussen, B., Fletcher, I.R., Brocks, J.R. and Kilburn, M.R. (October 2008). "Reassessing the first appearance of eukaryotes and cyanobacteria". Nature 455 (7216): 1101–1104. Bibcode:2008Natur.455.1101R. doi:10.1038/nature07381. PMID 18948954.
- Han, T.M. and Runnegar, B. (July 1992). "Megascopic eukaryotic algae from the 2.1-billion-year-old negaunee iron-formation, Michigan". Science 257 (5067): 232–235. Bibcode:1992Sci...257..232H. doi:10.1126/science.1631544. PMID 1631544. Retrieved 2008-09-02.
- Javaux, E. J., Knoll, A. H. and Walter, M. R. (September 2004). "TEM evidence for eukaryotic diversity in mid-Proterozoic oceans". Geobiology 2 (3): 121–132. doi:10.1111/j.1472-4677.2004.00027.x. Retrieved 2008-09-02.
- Butterfield, N. J. (2005). "Probable Proterozoic fungi". Paleobiology 31 (1): 165–182. doi:10.1666/0094-8373(2005)031<0165:PPF>2.0.CO;2. ISSN 0094-8373. Retrieved 2008-09-02.
- Hedges SB, Blair JE, Venturi ML, Shoe JL (January 2004). "A molecular timescale of eukaryote evolution and the rise of complex multicellular life". BMC Evol. Biol. 4: 2. doi:10.1186/1471-2148-4-2. PMC 341452. PMID 15005799.
- Jokela, J. (2001). "Sex: Advantage". "Encyclopedia of Life Sciences". John Wiley & Sons, Ltd. doi:10.1038/npg.els.0001716. ISBN 0-470-01617-5.
- Holmes, R.K. and Jobling, M.G. (1996). "Genetics: Exchange of Genetic Information". In Baron, S. Baron's Medical Microbiology (4th ed.). Galveston: University of Texas Medical Branch. ISBN 0-9631172-1-1. Retrieved 2008-09-02.
- Christie, P. J. (April 2001). "Type IV secretion: intercellular transfer of macromolecules by systems ancestrally related to conjugation machines". Molecular Microbiology 40 (22): 294–305. doi:10.1046/j.1365-2958.2001.02302.x. PMID 11309113. Retrieved 2008-09-02.
- Michod RE, Bernstein H, Nedelcu AM (May 2008). "Adaptive value of sex in microbial pathogens". Infect. Genet. Evol. 8 (3): 267–85. doi:10.1016/j.meegid.2008.01.002. PMID 18295550.http://www.hummingbirds.arizona.edu/Faculty/Michod/Downloads/IGE%20review%20sex.pdf
- Bernstein H, Bernstein C. (2010) Evolutionary Origin of Recombination during Meiosis. BioScience 60(7) 498-505. doi:10.1525/bio.2010.60.7.5
- Johnsborg O, Eldholm V, Håvarstein LS (December 2007). "Natural genetic transformation: prevalence, mechanisms and function". Res. Microbiol. 158 (10): 767–78. doi:10.1016/j.resmic.2007.09.004. PMID 17997281.
- Bernstein H, Bernstein C, Michod RE (2012). DNA repair as the primary adaptive function of sex in bacteria and eukaryotes. Chapter 1: pp.1-49 in: DNA Repair: New Research, Sakura Kimura and Sora Shimizu editors. Nova Sci. Publ., Hauppauge, N.Y. ISBN 978-1-62100-808-8 https://www.novapublishers.com/catalog/product_info.php?products_id=31918
- Ramesh, M. A., Malik, S-B. and Logsdon, J. M. Jr. (January 2005). "A phylogenomic inventory of meiotic genes; evidence for sex in Giardia and an early eukaryotic origin of meiosis" (PDF). Current Biology 15 (2): 185–91. doi:10.1016/j.cub.2005.01.003. PMID 15668177. Retrieved 2008-12-22.
- Otto, S. P., and Gerstein, A. C. (2006). "Why have sex? The population genetics of sex and recombination". Biochemical Society Transactions 34 (Pt 4): 519–522. doi:10.1042/BST0340519. PMID 16856849. Retrieved 2008-12-22.
- Hanley KA, Fisher RN, Case TJ (1995). "Lower mite infestations in an asexual gecko compared with its sexual ancestors". Evolution 49 (3): 418–426. doi:10.2307/2410266.
- Parker MA (1994). "Pathogens and sex in plants". Evolutionary Ecology 8: 560–584. doi:10.1007/bf01238258.
- Dong, L., Xiao, S., Shen, B. and Zhou, C. (January 2008). "Silicified Horodyskia and Palaeopascichnus from upper Ediacaran cherts in South China: tentative phylogenetic interpretation and implications for evolutionary stasis". Journal of the Geological Society 165: 367–378. doi:10.1144/0016-76492007-074. Retrieved 2008-09-02.
- Birdsell JA, Wills C (2003). The evolutionary origin and maintenance of sexual recombination: A review of contemporary models. Evolutionary Biology Series >> Evolutionary Biology, Vol. 33 pp. 27-137. MacIntyre, Ross J.; Clegg, Michael, T (Eds.), Springer. Hardcover ISBN 978-0306472619, ISBN 0306472619 Softcover ISBN 978-1-4419-3385-0.
- Bernstein H, Hopf FA, Michod RE (1987). "The molecular basis of the evolution of sex" 24. pp. 323–70. doi:10.1016/s0065-2660(08)60012-7. PMID 3324702.
- Bell, G. and Mooers, A.O. (1968). "Size and complexity among multicellular organisms". Biological Journal of the Linnean Society 60 (3): 345–363. doi:10.1111/j.1095-8312.1997.tb01500.x. Retrieved 2008-09-03.
- Kaiser, D. (2001). "Building a multicellular organism". Annual Review of Genetics 35: 103–123. doi:10.1146/annurev.genet.35.102401.090145. PMID 11700279.
- Bonner, J. T. (January 1999). "The Origins of Multicellularity". Integrative Biology 1 (1): 27–36. doi:10.1002/(SICI)1520-6602(1998)1:1<27::AID-INBI4>3.0.CO;2-6. Retrieved 2008-09-03.
- Nakagaki, T., Yamada, H. and Tóth, Á. (September 2000). "Intelligence: Maze-solving by an amoeboid organism". Nature 407 (6803): 470. doi:10.1038/35035159. PMID 11028990. Retrieved 2008-09-03.
- Koschwanez, JH., Foster, KR, and Murray, AW (August 2011). "Sucrose Utilization in Budding Yeast as a Model for the Origin of Undifferentiated Multicellularity". PLoS Biology 9 (8): e1001122. doi:10.1371/journal.pbio.1001122.
- Butterfield, N. J. (September 2000). "Bangiomorpha pubescens n. gen., n. sp.: implications for the evolution of sex, multicellularity, and the Mesoproterozoic/Neoproterozoic radiation of eukaryotes". Paleobiology 26 (3): 386–404. doi:10.1666/0094-8373(2000)026<0386:BPNGNS>2.0.CO;2. ISSN 0094-8373. Retrieved 2008-09-02.
- Dickey, Gwyneth. "African fossils suggest complex life arose early", Science News, Washington, D.C., Wednesday, June 30th, 2010. Retrieved on 2010-07-02.
- Gaidos, E., Dubuc, T., Dunford, M., McAndrew, P., Padilla-gamiño, J., Studer, B., Weersing, K. and Stanley, S. (2007). "The Precambrian emergence of animal life: a geobiological perspective" (PDF). Geobiology 5 (4): 351. doi:10.1111/j.1472-4669.2007.00125.x. Retrieved 2008-09-03.[dead link]
- Davidson, M.W. "Animal Cell Structure". Florida State University. Retrieved 2008-09-03.
- Saupe, S.G. "Concepts of Biology". College of St. Benedict / St. John's University. Retrieved 2008-09-03.
- Hinde, R. T. (1998). "The Cnidaria and Ctenophora". In Anderson, D.T.,. Invertebrate Zoology. Oxford University Press. pp. 28–57. ISBN 0-19-551368-1.
- Chen, J.-Y., Oliveri, P., Gao, F., Dornbos, S.Q., Li, C-W., Bottjer, D.J. and Davidson, E.H. (August 2002). "Precambrian Animal Life: Probable Developmental and Adult Cnidarian Forms from Southwest China" (PDF). Developmental Biology 248 (1): 182–196. doi:10.1006/dbio.2002.0714. PMID 12142030. Retrieved 2008-09-03.
- Grazhdankin, D. (2004). "Patterns of distribution in the Ediacaran biotas: facies versus biogeography and evolution". Paleobiology 30 (2): 203. doi:10.1666/0094-8373(2004)030<0203:PODITE>2.0.CO;2. ISSN 0094-8373.
- Seilacher, A. (1992). "Vendobionta and Psammocorallia: lost constructions of Precambrian evolution" (abstract). Journal of the Geological Society, London 149 (4): 607–613. doi:10.1144/gsjgs.149.4.0607. ISSN 0016-7649. Retrieved 2007-06-21.
- Martin, M.W.; Grazhdankin, D. V., Bowring, S. A., Evans, D. A. D., Fedonkin, M. A. and Kirschvink, J. L. (2000-05-05). "Age of Neoproterozoic Bilaterian Body and Trace Fossils, White Sea, Russia: Implications for Metazoan Evolution" (abstract). Science 288 (5467): 841–5. Bibcode:2000Sci...288..841M. doi:10.1126/science.288.5467.841. PMID 10797002. Retrieved 2008-07-03.
- Fedonkin, M. A. and Waggoner, B. (1997). "The late Precambrian fossil Kimberella is a mollusc-like bilaterian organism" (abstract). Nature 388 (6645): 868–871. Bibcode:1997Natur.388..868F. doi:10.1038/42242. Retrieved 2008-07-03.
- Mooi, R. and Bruno, D. (1999). "Evolution within a bizarre phylum: Homologies of the first echinoderms" (PDF). American Zoologist 38 (6): 965–974. doi:10.1093/icb/38.6.965. Retrieved 2007-11-24.
- McMenamin, M. A. S (2003). "Spriggina is a trilobitoid ecdysozoan" (abstract). Abstracts with Programs (Geological Society of America) 35 (6): 105. Retrieved 2007-11-24.
- Lin, J. P.; Gon, S. M.; Gehling, J. G.; Babcock, L. E.; Zhao, Y. L.; Zhang, X. L.; Hu, S. X.; Yuan, J. L.; Yu, M. Y.; Peng, J. (2006). "A Parvancorina-like arthropod from the Cambrian of South China". Historical Biology 18 (1): 33–45. doi:10.1080/08912960500508689.
- Butterfield, N. J. (2006). "Hooking some stem-group "worms": fossil lophotrochozoans in the Burgess Shale". BioEssays 28 (12): 1161–6. doi:10.1002/bies.20507. PMID 17120226.
- Bengtson, S. (2004). "Early skeletal fossils" (PDF). In Lipps, J.H., and Waggoner, B.M. Neoproterozoic - Cambrian Biological Revolutions. Paleontological Society Papers 10. pp. 67–78. Retrieved 2008-07-18.
- Gould, S. J. (1989). Wonderful Life. Hutchinson Radius. pp. 124–136 and many others. ISBN 0-09-174271-4.
- Gould, S. J. (1989). Wonderful Life: The Burgess Shale and the Nature of History. W.W. Norton & Company. ISBN 0-393-30700-X.
- Budd, G. E. (2003). "The Cambrian Fossil Record and the Origin of the Phyla" (Free full text). Integrative and Comparative Biology 43 (1): 157–165. doi:10.1093/icb/43.1.157. PMID 21680420. Retrieved 2008-07-15.
- Budd, G. E. (1996). "The morphology of Opabinia regalis and the reconstruction of the arthropod stem-group". Lethaia 29 (1): 1–14. doi:10.1111/j.1502-3931.1996.tb01831.x.
- Marshall, C. R. (2006). "Explaining the Cambrian "Explosion" of Animals". Annu. Rev. Earth Planet. Sci. 34: 355–384. Bibcode:2006AREPS..34..355M. doi:10.1146/annurev.earth.33.031504.103001. Retrieved 2007-11-06.
- Janvier, P. (2001). "Vertebrata (Vertebrates)". "Encyclopedia of Life Sciences". Wiley InterScience. doi:10.1038/npg.els.0001531. ISBN 0-470-01617-5.
- Conway Morris, S. (August 2, 2003). "Once we were worms". New Scientist 179 (2406): 34. Retrieved 2008-09-05.
- Chen, Jun-Yuan; Huang, Di-Ying; Peng, Qing-Qing; Chi, Hui-Mei; Wang,Xiu-Qiang; Feng, Man (2003). "The first tunicate from the Early Cambrian of South China". Proceedings of the National Academy of Sciences 100 (14): 8314–8318. doi:10.1073/pnas.1431177100. PMC 166226. PMID 12835415.
- Shu, D-G., Luo, H-L., Conway Morris, S., Zhang, X-L., Hu, S-X., Chen, L., J. Han, J., Zhu, M., Li, Y. and Chen, L-Z. (November 1999). "Lower Cambrian vertebrates from south China" (PDF). Nature 402 (6757): 42–46. Bibcode:1999Natur.402...42S. doi:10.1038/46965. Retrieved 2008-09-05.
- Shu, D.-G., Conway Morris, S., Han, J., Zhang, Z.-F., Yasui, K., Janvier, P., Chen, L., Zhang, X.-L., Liu, J.-N., Li, Y. and Liu, H.-Q. (January 2003). "Head and backbone of the Early Cambrian vertebrate Haikouichthys". Nature 421 (6922): 526–529. Bibcode:2003Natur.421..526S. doi:10.1038/nature01264. PMID 12556891. Retrieved 2008-09-05.
- Sansom I. J., Smith, M. M. and Smith, M. P. (2001). "The Ordovician radiation of vertebrates". In Ahlberg, P.E. Major Events in Early Vertebrate Evolution. Taylor and Francis. pp. 156–171. ISBN 0-415-23370-4.
- Cowen, R. (2000). History of Life (3rd ed.). Blackwell Science. pp. 120–122. ISBN 0-632-04444-6.
- Selden, P. A. (2001). ""Terrestrialization of Animals"". In Briggs, D.E.G., and Crowther, P.R. Palaeobiology II: A Synthesis. Blackwell. pp. 71–74. ISBN 0-632-05149-3. Retrieved 2008-09-05.
- Battistuzzi, F. U.; Feijao, A.; Hedges, S. B. (2004). "A genomic timescale of prokaryote evolution: insights into the origin of methanogenesis, phototrophy, and the colonization of land". BMC Evolutionary Biology 4: 44. doi:10.1186/1471-2148-4-44. PMC 533871. PMID 15535883.
- Shear, W.A. (2000). "The Early Development of Terrestrial Ecosystems". In Gee, H. Shaking the Tree: Readings from Nature in the History of Life. University of Chicago Press. pp. 169–184. ISBN 0-226-28496-4. Retrieved 2008-09-09.
- Venturi, Sebastiano (2011). "Evolutionary Significance of Iodine". Current Chemical Biology- 5 (3): 155–162. doi:10.2174/187231311796765012. ISSN 1872-3136.
- Crockford, S.J. (2009). "Evolutionary roots of iodine and thyroid hormones in cell-cell signaling". Integr Comp Biol 49 (2): 155–166. doi:10.1093/icb/icp053. PMID 21669854.
- Venturi, S.; Donati, F.M.; Venturi, A.; Venturi, M. (2000). "Environmental Iodine Deficiency: A Challenge to the Evolution of Terrestrial Life?". Thyroid 10 (8): 727–9. doi:10.1089/10507250050137851. PMID 11014322.
- Küpper FC, Carpenter LJ, McFiggans GB et al. (2008). "Iodide accumulation provides kelp with an inorganic antioxidant impacting atmospheric chemistry" (Free full text). Proceedings of the National Academy of Sciences of the United States of America 105 (19): 6954–8. Bibcode:2008PNAS..105.6954K. doi:10.1073/pnas.0709959105. PMC 2383960. PMID 18458346.
- Hawksworth, D.L. (2001). "Lichens". "Encyclopedia of Life Sciences". John Wiley & Sons, Ltd. doi:10.1038/npg.els.0000368. ISBN 0-470-01617-5.
- Retallack, G.J.; Feakes, C.R. (1987). "Trace Fossil Evidence for Late Ordovician Animals on Land". Science 235 (4784): 61–63. Bibcode:1987Sci...235...61R. doi:10.1126/science.235.4784.61. PMID 17769314.
- Kenrick, P. and Crane, P. R. (September 1997). "The origin and early evolution of plants on land" (PDF). Nature 389 (6646): 33. Bibcode:1997Natur.389...33K. doi:10.1038/37918. Retrieved 2008-09-05.[dead link]
- Scheckler, S. E. (2001). ""Afforestation – the First Forests"". In Briggs, D.E.G., and Crowther, P.R. Palaeobiology II: A Synthesis. Blackwell. pp. 67–70. ISBN 0-632-05149-3. Retrieved 2008-09-05.
- The phrase "Late Devonian wood crisis" is used at "Palaeos – Tetrapoda: Acanthostega". PALAEOS: The Trace of Life on Earth. Retrieved 2008-09-05.
- Algeo, T. J. and Scheckler, S. E. (1998). "Terrestrial-marine teleconnections in the Devonian: links between the evolution of land plants, weathering processes, and marine anoxic events". Philosophical Transactions of the Royal Society B 353 (1365): 113–130. doi:10.1098/rstb.1998.0195. PMC 1692181.
- Taylor T. N. and Osborn J. M. (1996). "The importance of fungi in shaping the paleoecosystem". Review of Paleobotany and Palynology 90 (3–4): 249–262. doi:10.1016/0034-6667(95)00086-0. Retrieved 2008-09-05.
- Heather M. Wilson & Lyall I. Anderson (2004). "Morphology and taxonomy of Paleozoic millipedes (Diplopoda: Chilognatha: Archipolypoda) from Scotland". Journal of Paleontology 78 (1): 169–184. doi:10.1666/0022-3360(2004)078<0169:MATOPM>2.0.CO;2.
- Selden, Paul; Helen Read (2008). "The Oldest Land Animals: Silurian Millipedes from Scotland". Bulletin of the British Myriapod & Isopod Group 23: 36–37.
- Shear, William A.; Edgecombe, Gregory D. (2010). "The geological record and phylogeny of the Myriapoda". Arthropod Structure & Development 39 (2-3): 174–190. doi:10.1016/j.asd.2009.11.002. PMID 19944188.
- MacNaughton, R. B., Cole, J. M., Dalrymple, R. W., Braddy, S. J., Briggs, D. E. G. and Lukie, T. D. (May 2002). "First steps on land: Arthropod trackways in Cambrian-Ordovician eolian sandstone, southeastern Ontario, Canada". Geology 30 (5): 391–394. Bibcode:2002Geo....30..391M. doi:10.1130/0091-7613(2002)030<0391:FSOLAT>2.0.CO;2. ISSN 0091-7613. Retrieved 2008-09-05.
- Vaccari, N. E., Edgecombe, G. D. and Escudero, C. (2004). "Cambrian origins and affinities of an enigmatic fossil group of arthropods". Nature 430 (6999): 554–557. Bibcode:2004Natur.430..554V. doi:10.1038/nature02705. PMID 15282604.
- Buatois, L. A., Mangano, M. G., Genise, J. F. and Taylor, T. N. (June 1998). "The ichnologic record of the continental invertebrate invasion; evolutionary trends in environmental expansion, ecospace utilization, and behavioral complexity". PALAIOS (PALAIOS, Vol. 13, No. 3) 13 (3): 217–240. doi:10.2307/3515447. JSTOR 3515447. Retrieved 2008-09-05.
- Cowen, R. (2000). History of Life (3rd ed.). Blackwell Science. p. 126. ISBN 0-632-04444-6.
- Grimaldi, D. and Engel, M. (2005). "Insects Take to the Skies". Evolution of the Insects. Cambridge University Press. pp. 155–160. ISBN 0-521-82149-5. Retrieved 2009-01-11.
- Grimaldi, D. and Engel, M. (2005). "Diversity of evolution". Evolution of the Insects. Cambridge University Press. p. 12. ISBN 0-521-82149-5. Retrieved 2009-01-11.
- Clack, J. A. (November 2005). "Getting a Leg Up on Land". Scientific American. Retrieved 2008-09-06.
- Ahlberg, P. E. and Milner, A. R. (April 1994). "The Origin and Early Diversification of Tetrapods". Nature 368 (6471): 507–514. Bibcode:1994Natur.368..507A. doi:10.1038/368507a0. Retrieved 2008-09-06.
- Gordon, M. S., Graham, J. B. and Wang, T. (September–October 2004). "Revisiting the Vertebrate Invasion of the Land". Physiological and Biochemical Zoology 77 (5): 697–699. doi:10.1086/425182.
- Daeschler, E. B., Shubin, N. H. and Jenkins, F. A. (April 2006). "A Devonian tetrapod-like fish and the evolution of the tetrapod body plan" (PDF). Nature 440 (7085): 757–763. Bibcode:2006Natur.440..757D. doi:10.1038/nature04639. PMID 16598249. Retrieved 2008-09-06.
- Debraga, M. and Rieppel, O. (July 1997). "Reptile phylogeny and the interrelationships of turtles". Zoological Journal of the Linnean Society 120 (3): 281–354. doi:10.1111/j.1096-3642.1997.tb01280.x. Retrieved 2008-09-07.
- Benton M. J. and Donoghue, P. C. J. (2007). "Paleontological Evidence to Date the Tree of Life". Molecular Biology and Evolution 24 (1): 26–53. doi:10.1093/molbev/msl150. PMID 17047029. Retrieved 2008-09-07.
- Benton, M. J. (May 1990). "Phylogeny of the Major Tetrapod Groups: Morphological Data and Divergence Dates". Journal of Molecular Evolution 30 (5): 409–424. doi:10.1007/BF02101113. PMID 2111854. Retrieved 2008-09-07.
- Sidor, C. A., O'Keefe, F. R., Damiani, R., Steyer, J. S., Smith, R. M. H., Larsson, H. C. E., Sereno, P. C., Ide, O., and Maga, A. (April 2005). "Permian tetrapods from the Sahara show climate-controlled endemism in Pangaea". Nature 434 (7035): 886–889. Bibcode:2005Natur.434..886S. doi:10.1038/nature03393. PMID 15829962. Retrieved 2008-09-08.
- Smith, R. and Botha, J. (September–October 2005). "The recovery of terrestrial vertebrate diversity in the South African Karoo Basin after the end-Permian extinction". Comptes Rendus Palevol 4 (6–7): 623–636. doi:10.1016/j.crpv.2005.07.005. Retrieved 2008-09-08.
- Benton, M. J. (2005). When Life Nearly Died: The Greatest Mass Extinction of All Time. Thames & Hudson. ISBN 978-0-500-28573-2.
- Sahney, S. and Benton, M.J. (2008). "Recovery from the most profound mass extinction of all time" (PDF). Proceedings of the Royal Society B 275 (1636): 759–65. doi:10.1098/rspb.2007.1370. PMC 2596898. PMID 18198148.
- Gauthier, J., Cannatella, D. C., de Queiroz, K., Kluge, A. G. and Rowe, T. (1989). "Tetrapod Phylogeny" (PDF). In B. Fernholm, B., Bremer K., and Jörnvall, H. The Hierarchy of Life. Elsevier Science. p. 345. Retrieved 2008-09-08.
- Benton, M. J. (March 1983). "Dinosaur Success in the Triassic: a Noncompetitive Ecological Model" (PDF). Quarterly Review of Biology 58 (1). Retrieved 2008-09-08.
- Padian, K. (2004). "Basal Avialae". In Weishampel, David B.; Dodson, Peter; & Osmólska, Halszka (eds.). The Dinosauria (Second ed.). Berkeley: University of California Press. pp. 210–231. ISBN 0-520-24209-2.
- Hou, L., Zhou, Z., Martin, L. D. and Feduccia, A. (October 2002). "A beaked bird from the Jurassic of China". Nature 377 (6550): 616–618. Bibcode:1995Natur.377..616H. doi:10.1038/377616a0. Retrieved 2008-09-08.
- Clarke, J. A., Zhou, Z. and Zhang, F. (2006). "Insight into the evolution of avian flight from a new clade of Early Cretaceous ornithurines from China and the morphology of Yixianornis grabaui". Journal of Anatomy 208 (3): 287–308. doi:10.1111/j.1469-7580.2006.00534.x. PMC 2100246. PMID 16533313. Retrieved 2008-09-08.
- Ruben, J. A. and Jones, T. D. (2000). "Selective Factors Associated with the Origin of Fur and Feathers". American Zoologist 40 (4): 585–596. doi:10.1093/icb/40.4.585.
- Luo, Z-X., Crompton, A. W. and Sun, A-L. (May 2001). "A New Mammaliaform from the Early Jurassic and Evolution of Mammalian Characteristics". Science 292 (5521): 1535–1540. Bibcode:2001Sci...292.1535L. doi:10.1126/science.1058476. PMID 11375489. Retrieved 2008-09-08.
- Cifelli, R.L. (November 2001). "Early mammalian radiations". Journal of Paleontology 75 (6): 1214. doi:10.1666/0022-3360(2001)075<1214:EMR>2.0.CO;2. ISSN 0022-3360.
- Flynn, J. J., Parrish, J. M. Rakotosamimanana, B., Simpson, W. F. and Wyss, A.R. (September 1999). "A Middle Jurassic mammal from Madagascar". Nature 401 (6748): 57–60. Bibcode:1999Natur.401...57F. doi:10.1038/43420. Retrieved 2008-09-08.
- MacLeod, N., Rawson, P. F., Forey, P. L., Banner. F. T., Boudagher-Fadel, M. K., Bown, P. R., Burnett, J. A., Chambers, P., Culver, S., Evans, S. E., Jeffery, C., Kaminski, M. A., Lord, A. R., Milner, A. C., Milner, A. R., Morris, N., Owen, E., Rosen, B. R., ,Smith, A. B., Taylor, P. D., Urquhart, E. and Young, J. R. (1997). "The Cretaceous–Tertiary biotic transition". Journal of the Geological Society 154 (2): 265–292. doi:10.1144/gsjgs.154.2.0265.
- Alroy, J. (March 1999). "The fossil record of North American mammals: evidence for a Paleocene evolutionary radiation". Systematic Biology 48 (1): 107–18. doi:10.1080/106351599260472. PMID 12078635.
- Archibald, J. D. and Deutschman, D. H. (June 2001). "Quantitative Analysis of the Timing of the Origin and Diversification of Extant Placental Orders". Journal of Mammalian Evolution 8 (2): 107–124. doi:10.1023/A:1011317930838. Retrieved 2008-09-24.
- Simmons, N. B., Seymour, K. L., Habersetzer, J. and Gunnell, G. F. (February 2008). "Primitive Early Eocene bat from Wyoming and the evolution of flight and echolocation". Nature 451 (7180): 818–821. Bibcode:2008Natur.451..818S. doi:10.1038/nature06549. PMID 18270539.
- Thewissen, J. G. M., Madar, S. I. and Hussain, S. T. (1996). "Ambulocetus natans, an Eocene cetacean (Mammalia) from Pakistan". Courier Forschungsinstitut Senckenberg 191: 1–86. ISBN 978-3-510-61084-6.
- Crane, P. R., Friis, E. M. and Pedersen, K. R. (2000). "The Origin and Early Diversification of Angiosperms". In Gee, H. Shaking the Tree: Readings from Nature in the History of Life. University of Chicago Press. pp. 233–250. ISBN 0-226-28496-4. Retrieved 2008-09-09.
- Crepet, W. L. (November 2000). "Progress in understanding angiosperm history, success, and relationships: Darwin's abominably "perplexing phenomenon"". Proceedings of the National Academy of Sciences 97 (24): 12939–12941. Bibcode:2000PNAS...9712939C. doi:10.1073/pnas.97.24.12939. PMC 34068. PMID 11087846. Retrieved 2008-09-09.
- Hughes, W. O. H., Oldroyd, B. P., Beekman, M. and Ratnieks, F. L. W. (2008-05-30). "Ancestral Monogamy Shows Kin Selection Is Key to the Evolution of Eusociality". Science (American Association for the Advancement of Science) 320 (5880): 1213–1216. Bibcode:2008Sci...320.1213H. doi:10.1126/science.1156108. PMID 18511689. Retrieved 2008-08-04.
- Lovegrove, B. G. (January 1991). "The evolution of eusociality in molerats (Bathyergidae): a question of risks, numbers, and costs". Behavioral Ecology and Sociobiology 28 (1): 37–45. doi:10.1007/BF00172137. Retrieved 2008-09-07.
- Labandeira, C. and Eble, G. J. (2000). "The Fossil Record of Insect Diversity and Disparity" (PDF). In Anderson, J., Thackeray, F., van Wyk, B., and de Wit, M. Gondwana Alive: Biodiversity and the Evolving Biosphere. Witwatersrand University Press. Retrieved 2008-09-07.
- Brunet, M., Guy, F., Pilbeam, D., Mackaye, H. T. et al. (July 2002). "A new hominid from the Upper Miocene of Chad, Central Africa". Nature 418 (6894): 145–151. doi:10.1038/nature00879. PMID 12110880. Retrieved 2008-09-09.
- de Heinzelin, J., Clark, J. D., White, T. et al. (April 1999). "Environment and Behavior of 2.5-Million-Year-Old Bouri Hominids". Science 284 (5414): 625–629. doi:10.1126/science.284.5414.625. PMID 10213682. Retrieved 2008-09-09.
- De Miguel, C. and Henneberg, M. (2001). "Variation in hominid brain size: How much is due to method?". HOMO - Journal of Comparative Human Biology 52 (1): 3–58. doi:10.1078/0018-442X-00019. Retrieved 2008-09-09.
- Leakey, Richard (1994). The Origin of Humankind. Science Masters Series. New York, NY: Basic Books. pp. 87–89. ISBN 0-465-05313-0.
- Mellars, Paul (2006). "Why did modern human populations disperse from Africa ca. 60,000 years ago? A new model". Proceedings of the National Academy of Sciences 103 (25): 9381–6. Bibcode:2006PNAS..103.9381M. doi:10.1073/pnas.0510792103. PMC 1480416. PMID 16772383.
- Benton, M. J. (2004). "6. Reptiles Of The Triassic". Vertebrate Palaeontology (3rd ed.). Blackwell. ISBN 978-0-632-05637-8.
- MacLeod, N. (2001-01-06). "Extinction!". Retrieved 2008-09-11.
- Martin, R. E. (1995). "Cyclic and secular variation in microfossil biomineralization: clues to the biogeochemical evolution of Phanerozoic oceans". Global and Planetary Change 11 (1): 1. Bibcode:1995GPC....11....1M. doi:10.1016/0921-8181(94)00011-2.
- Martin, R.E. (1996). "Secular increase in nutrient levels through the Phanerozoic: Implications for productivity, biomass, and diversity of the marine biosphere". PALAIOS (PALAIOS, Vol. 11, No. 3) 11 (3): 209–219. doi:10.2307/3515230. JSTOR 3515230.
- Rohde, R. A. and Muller, R. A. (March 2005). "Cycles in fossil diversity" (PDF). Nature 434 (7030): 208–210. Bibcode:2005Natur.434..208R. doi:10.1038/nature03339. PMID 15758998. Retrieved 2008-09-22.
- Cowen, R. (2004). History of Life (4th ed.). Blackwell Publishing Limited. ISBN 978-1-4051-1756-2.
- Richard Dawkins (2004). The Ancestor's Tale, A Pilgrimage to the Dawn of Life. Boston: Houghton Mifflin Company. ISBN 0-618-00583-8.
- Richard Dawkins (1990). The Selfish Gene. Oxford University Press. ISBN 0-19-286092-5.
- Smith, John Maynard; Eörs Szathmáry (1997). The Major Transitions in Evolution. Oxfordshire: Oxford University Press. ISBN 0-19-850294-X.
- Ruse, Michael; Travis, Joseph (eds) (2009). Evolution: The First Four Billion Years. Cambridge, Massachusetts: Belknap Press of Harvard University Press. ISBN 978-0-674-03175-3. Retrieved 24 November 2012.
- General information on evolution- Fossil Museum nav.
- Understanding Evolution from University of California, Berkeley
- National Academies Evolution Resources
- Evolution poster- PDF format "tree of life"
- Everything you wanted to know about evolution by New Scientist
- Howstuffworks.com — How Evolution Works
- Synthetic Theory Of Evolution: An Introduction to Modern Evolutionary Concepts and Theories
History of evolutionary thought
- The Complete Work of Charles Darwin Online
- Understanding Evolution: History, Theory, Evidence, and Implications |
Mass is a property of matter that measures an object’s resistance to changes in either the speed or direction of its motion. The mass of an object does not depend on gravity and is therefore different from, though proportional to, its weight.
Speed is the time rate of change of position of a body without regard to direction. Linear speed is commonly measured in units such as meters per second, miles per hour, or feet per second. Velocity is speed in a stated direction, making it a vector quantity. We can calculate speed from a distance-time graph as the gradient, dy/dx.
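To make the dy/dx idea concrete, here is a short Python sketch that estimates speed from distance-time readings; the numbers are invented purely for illustration:

```python
# Estimate speed as the gradient (dy/dx) of a distance-time graph.
# The sample data below are hypothetical, for illustration only.
distances = [0.0, 0.5, 2.0, 4.5, 8.0]   # metres
times     = [0.0, 1.0, 2.0, 3.0, 4.0]   # seconds

# Average speed over each interval: change in distance / change in time
speeds = [
    (distances[i + 1] - distances[i]) / (times[i + 1] - times[i])
    for i in range(len(times) - 1)
]
print(speeds)  # [0.5, 1.5, 2.5, 3.5] m/s - the speed is rising, so the body accelerates
```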
Acceleration is the time rate at which velocity changes. The relationship between acceleration and velocity is like the relationship between velocity and displacement. Acceleration is a vector quantity. For uniform velocity, a = 0. If ‘a’ is a non-zero constant, the object is said to be uniformly accelerated. The average acceleration of an object is defined as:
Average acceleration = change in velocity / time taken
In my investigation, I will aim to find the relationship between mass and acceleration.
I will do this by setting up an apparatus which will measure the rate of acceleration. First, I will set up a ramp 227 cm long raised to a height of 15 cm. At this height, I do not have to apply a force to the trolley to accelerate it, because it will slide down under the force of gravity alone; this way, the force of gravity can be kept constant. Then, I will use a ticker machine and ticker tape to measure the rate of acceleration. I will attach the ticker tape to a trolley of 850 g and let it run down the ramp. Every 10 marks on the ticker tape represent 0.2 seconds, so I will cut the ticker tape into strips of 10 marks. Plotting the strips onto a graph tells us the speed at which the trolley travelled during each interval. From this, we can calculate the acceleration of the trolley:
Acceleration = (final velocity – initial velocity) / time taken = Δv / Δt
I used a ticker machine to calculate the rate of acceleration because it shows the rate at which the acceleration changes. If we just timed how long it takes for the trolley to reach the end of the ramp, it would only give us the average acceleration; it would not be possible to measure the change in acceleration.
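To illustrate the method, here is a minimal sketch of the strip calculation; it assumes a standard 50 Hz ticker timer (so a 10-mark strip covers 0.2 s), and the strip lengths are hypothetical example values:

```python
# A minimal sketch of the ticker-tape calculation described above.
# Assumes a 50 Hz ticker timer, so each 10-mark strip covers 0.2 s.
# The strip lengths (in cm) are hypothetical example values.
STRIP_TIME = 0.2  # seconds per 10-mark strip

strip_lengths_cm = [1.0, 2.1, 3.0, 4.2, 5.1]

# Average speed of the trolley during each strip (cm/s)
speeds = [length / STRIP_TIME for length in strip_lengths_cm]

# Acceleration between consecutive strips: change in speed / time taken
accelerations = [
    (speeds[i + 1] - speeds[i]) / STRIP_TIME
    for i in range(len(speeds) - 1)
]

print("speeds (cm/s):", speeds)
print("accelerations (cm/s^2):", accelerations)
```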
I chose a ramp height of 15 cm because our preliminary results showed that the marks on the ticker tape appeared most clearly at this height. Previously, the height of the ramp was 43 cm; it was too high for the marks to appear clearly, and because of this my results were not as accurate.
The average angle of the ramp was 3.87°. I chose this angle because I found from preliminary results that if the angle is too high, the marks on the ticker tape do not print accurately. Before, the average angle was 10.7° and we found it difficult to read the ticker tape.
I clamped the ramp in place because this way the height of the ramp is less prone to change, so the acceleration will only be affected by the mass of the trolley. This will make our results more accurate.
I added 400 g of mass each time because preliminary tests showed that, when we added only 100 g each time, the results were too close together to see a correlation. So, to make it clearer whether mass affects acceleration, I decided to add larger weights; this way there would be a greater difference between results and any correlation would be easier to distinguish.
I chose a trolley of 850 g because it was lightweight and its wheels were fairly smooth. Because it was lightweight, it would be easy to add mass to and would be less affected by friction. Because the wheels were smooth, the frictional force would be small. This will make our results more accurate.
To keep my investigation fair, I will change only one factor: the trolley’s mass. I will keep everything else the same, such as the height of the ramp and the ramp itself, because these factors would affect the results if they were not kept the same.
I predict that the mass of the trolley will not affect the rate of acceleration. This is because, according to Galileo’s laws of motion, all bodies accelerate at the same rate regardless of their size or mass. For example, the fact that a feather falls more slowly than a steel ball is due to the amount of air resistance that a feather experiences (a lot) versus a steel ball (very little).
Also, according to Newton’s second law, the force on a body is directly proportional to its acceleration. He adds to Galileo’s law of motion by showing that, near the Earth’s surface, everything falls at a rate of 9.8 m/s².
He calculates this by:
(F = force, M = mass of the Earth, m = mass of the falling body, a = acceleration, r = radius of the Earth, G = gravitational constant (6.7 × 10⁻¹¹ N m²/kg²), g = acceleration due to gravity)
If F = ma and F = mg,
you can cancel m to get a = g, where g = GM/r².
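As a quick numerical check of a = g = GM/r², the sketch below plugs in textbook values for G, M and r; these constants are standard reference figures, not measurements from this investigation:

```python
# Hedged check of the a = g result using standard reference values
# (G, the Earth's mass M and radius r are textbook figures, not from this report).
G = 6.674e-11      # gravitational constant, N m^2 / kg^2
M = 5.972e24       # mass of the Earth, kg
r = 6.371e6        # mean radius of the Earth, m

# Equate F = ma with F = G*M*m/r^2; the body's mass m cancels:
a = G * M / r**2
print(f"a = {a:.2f} m/s^2")  # ~9.82 m/s^2, close to the quoted 9.8 m/s^2
```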
Factors which affect the rate of acceleration:
Friction would affect the rate of acceleration because it increases the resistive force by gripping the wheels, increasing the time it takes for them to turn. Sometimes this can be good, because it makes cars easier to manoeuvre. To show that friction affects the acceleration, we could carry out the same experiment but, instead of changing the mass, add different materials to the ramp. This would show us how the surface affects acceleration.
The gradient along which the body travels would also affect the acceleration, because on a shallower slope a smaller part of the gravitational force acts along the direction of motion, so the body takes longer to descend. We can show this in our experiment by increasing the angle of the ramp instead of the mass.
The shape of the body will also affect its acceleration, because the wider it is, the more air resistance (drag) it will have. Air resistance slows an object down because it acts in the opposite direction to the motion, so the resultant force is less. We can show this by changing the surface area of the trolley while keeping its mass the same.
From the graph, we can see that, generally, as the mass increases so does the acceleration. There is a steep linear gradient from 850 g to 1650 g, over which the acceleration increased by 4.82 m/s². Even though the actual results show a decrease in acceleration of 0.53 m/s² between 1650 g and 2100 g, the line of best fit tells us it is actually increasing. Overall, the acceleration increased by 0.2 m/s² for every 100 g that was added.
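To show how such a line of best fit can be computed, here is a least-squares sketch in Python; the (mass, acceleration) readings are invented for illustration and are not the values reported above:

```python
# Fitting a line of best fit through hypothetical (mass, acceleration)
# readings by least squares; the numbers are invented for illustration.
import numpy as np

mass_g = np.array([850, 1250, 1650, 2050, 2450])
accel = np.array([2.0, 4.1, 6.8, 6.3, 7.5])     # hypothetical accelerations

slope, intercept = np.polyfit(mass_g, accel, 1)  # degree-1 (straight line) fit
print(f"best-fit gradient: {slope * 100:.2f} per 100 g added")
```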
The average speed graph shows that as the mass increased, so did the speed. There is a linear gradient between 850 g and 1250 g, over which the speed increased by 1.7 cm/s. From 1250 g to 2050 g, the speed decreases by 0.75 cm/s. However, from 2050 g to 2450 g, the speed increases again by 0.66 cm/s. Overall, although it dips, the line of best fit shows that the speed increases greatly from 850 g to 1250 g and then starts levelling out from 1250 g to 1450 g.
The accuracy rating generally shows that as the mass increases, the level of accuracy also increases. On this graph, the higher the accuracy number, the lower the level of accuracy. There is a large fall in the accuracy number between 850 g and 2050 g: it went from 38.67 to 29, a difference of 9.67. From 850 g to 2050 g the number kept decreasing and, overall, it decreased by 14.3. However, from 2050 g to 2450 g, it increased by 2.
This may be because as the mass increases, the friction on the wheels becomes bigger. The larger the friction, the better the wheels can grip the surface, so the trolley travels more consistently and is less likely to skid. This tells us the acceleration and speed results for 850 g are very likely to be outliers, because their level of accuracy is very low.
When we compare the results for average acceleration with those for speed, we can see they are directly proportional: as the acceleration increased, so did the speed. This is because acceleration describes how speed changes.
When we compare the level of accuracy with the acceleration and speed, it tells us the result for 850 g is very likely to be an anomaly, and possibly the one for 1250 g as well. If that were true, the graphs would show that there is no connection between an object’s mass and its acceleration. This would support Galileo’s law of motion and Newton’s second law, which say that the rate of acceleration is constant and is not affected by size or mass.
In fact, our experiment does support their theories, because it shows that the smaller the resistive forces opposing the motion (friction, in this case), the faster the body accelerates, and that the acceleration does not depend on the body’s mass.
I believe my experiment went fairly well because I felt I could justify the reasons why I obtained these results and although I have some anomalies, most of the results were fairly accurate.
However, there were some flaws in my experiment such as:
I found it hard to set off the trolley from the same position on the ramp each time because it was not marked clearly.
I did not wipe or grease the ramp after each experiment; doing this would have made the friction of the ramp more consistent.
When I plotted the strips of ticker tape on the graph, I did not line them up accurately on the squares. This made some of my results inaccurate.
To improve my experiment, I would have made the height of the ramp lower, because at a lower speed the wheels would have more grip. I would also have used trolleys with different masses but the same density; this way, drag and air resistance would be more likely to stay the same, so only one factor would affect the results. This would make our results more accurate.
To obtain accurate results, we could perform this experiment in a vacuum, because in a vacuum the motion would not be opposed by air resistance as it is on Earth, so we could calculate the acceleration accurately. However, we can only achieve such a vacuum in space.
On Earth, to decrease the resistive forces, we can carry out this experiment in:
Airtight conditions
In signal processing, data compression, source coding, or bit-rate reduction is the process of encoding information using fewer bits than the original representation. Any particular compression is either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in lossless compression. Lossy compression reduces bits by removing unnecessary or less important information. Typically, a device that performs data compression is referred to as an encoder, and one that performs the reversal of the process (decompression) as a decoder.
The process of reducing the size of a data file is often referred to as data compression. In the context of data transmission, it is called source coding; encoding done at the source of the data before it is stored or transmitted. Source coding should not be confused with channel coding, for error detection and correction or line coding, the means for mapping data onto a signal.
Compression is useful because it reduces the resources required to store and transmit data. Computational resources are consumed in the compression and decompression processes. Data compression is subject to a space–time complexity trade-off. For instance, a compression scheme for video may require expensive hardware for the video to be decompressed fast enough to be viewed as it is being decompressed, and the option to decompress the video in full before watching it may be inconvenient or require additional storage. The design of data compression schemes involves trade-offs among various factors, including the degree of compression, the amount of distortion introduced (when using lossy data compression), and the computational resources required to compress and decompress the data.
Lossless data compression algorithms usually exploit statistical redundancy to represent data without losing any information, so that the process is reversible. Lossless compression is possible because most real-world data exhibits statistical redundancy. For example, an image may have areas of color that do not change over several pixels; instead of coding "red pixel, red pixel, ..." the data may be encoded as "279 red pixels". This is a basic example of run-length encoding; there are many schemes to reduce file size by eliminating redundancy.
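As a minimal illustration of the idea (a hypothetical Python sketch, not any particular codec's format), run-length encoding collapses each run of identical symbols into a (count, symbol) pair, and the decoder expands them back exactly:

```python
from itertools import groupby

def rle_encode(data):
    # Collapse each run of identical symbols into a (count, symbol) pair.
    return [(len(list(group)), symbol) for symbol, group in groupby(data)]

def rle_decode(pairs):
    # Expand each pair back into its run of symbols.
    return "".join(symbol * count for count, symbol in pairs)

pixels = "RRRRRRRGGBB"
encoded = rle_encode(pixels)          # [(7, 'R'), (2, 'G'), (2, 'B')]
assert rle_decode(encoded) == pixels  # lossless: the round trip is exact
```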
The Lempel–Ziv (LZ) compression methods are among the most popular algorithms for lossless storage. DEFLATE is a variation on LZ optimized for decompression speed and compression ratio, but compression can be slow. In the mid-1980s, following work by Terry Welch, the Lempel–Ziv–Welch (LZW) algorithm rapidly became the method of choice for most general-purpose compression systems. LZW is used in GIF images, programs such as PKZIP, and hardware devices such as modems. LZ methods use a table-based compression model where table entries are substituted for repeated strings of data. For most LZ methods, this table is generated dynamically from earlier data in the input. The table itself is often Huffman encoded. Grammar-based codes can compress highly repetitive input extremely effectively, for instance, a biological data collection of the same or closely related species, a huge versioned document collection, internet archives, etc. The basic task of grammar-based codes is constructing a context-free grammar deriving a single string. Practical grammar compression algorithms include Sequitur and Re-Pair.
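The table-based substitution idea can be sketched compactly (a simplified Python rendering of the LZW encoder, not a production implementation; real versions emit fixed- or variable-width codes and bound the table size):

```python
def lzw_encode(text):
    # Start the table with all single characters; grow it as the input repeats.
    table = {chr(i): i for i in range(256)}
    current, output = "", []
    for ch in text:
        candidate = current + ch
        if candidate in table:
            current = candidate            # keep extending the current match
        else:
            output.append(table[current])  # emit the code for the longest match
            table[candidate] = len(table)  # add the new string to the table
            current = ch
    if current:
        output.append(table[current])
    return output

codes = lzw_encode("TOBEORNOTTOBEORTOBEORNOT")
print(codes)  # repeated substrings come out as single table indices
```

Note that a decoder can rebuild the same table from the codes alone, which is why the table itself never needs to be transmitted.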
The strongest modern lossless compressors use probabilistic models, such as prediction by partial matching. The Burrows–Wheeler transform can also be viewed as an indirect form of statistical modelling. In a further refinement of the direct use of probabilistic modelling, statistical estimates can be coupled to an algorithm called arithmetic coding. Arithmetic coding is a more modern coding technique that uses the mathematical calculations of a finite-state machine to produce a string of encoded bits from a series of input data symbols. It can achieve superior compression compared to other techniques such as the better-known Huffman algorithm. It uses an internal memory state to avoid the need to perform a one-to-one mapping of individual input symbols to distinct representations that use an integer number of bits, and it clears out the internal memory only after encoding the entire string of data symbols. Arithmetic coding applies especially well to adaptive data compression tasks where the statistics vary and are context-dependent, as it can be easily coupled with an adaptive model of the probability distribution of the input data. An early example of the use of arithmetic coding was in an optional (but not widely used) feature of the JPEG image coding standard. It has since been applied in various other designs including H.263, H.264/MPEG-4 AVC and HEVC for video coding.
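The interval-narrowing idea behind arithmetic coding can be shown in miniature (a toy Python model with a fixed, made-up symbol distribution and exact fractions; a real coder works with an adaptive model and emits bits incrementally with renormalization):

```python
from fractions import Fraction

# Hypothetical fixed model: each symbol owns a sub-interval of [0, 1).
MODEL = {"a": (Fraction(0), Fraction(1, 2)),
         "b": (Fraction(1, 2), Fraction(3, 4)),
         "c": (Fraction(3, 4), Fraction(1))}

def encode(message):
    low, high = Fraction(0), Fraction(1)
    for sym in message:
        span = high - low
        lo, hi = MODEL[sym]
        low, high = low + span * lo, low + span * hi  # narrow the interval
    return (low + high) / 2  # any number inside the final interval identifies the message

def decode(value, length):
    low, high = Fraction(0), Fraction(1)
    out = []
    for _ in range(length):
        point = (value - low) / (high - low)
        for sym, (lo, hi) in MODEL.items():
            if lo <= point < hi:  # which sub-interval does the value fall in?
                out.append(sym)
                span = high - low
                low, high = low + span * lo, low + span * hi
                break
    return "".join(out)

code = encode("abca")
assert decode(code, 4) == "abca"  # a single number encodes the whole string
```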
In the late 1980s, digital images became more common, and standards for lossless image compression emerged. In the early 1990s, lossy compression methods began to be widely used. In these schemes, some loss of information is accepted as dropping nonessential detail can save storage space. There is a corresponding trade-off between preserving information and reducing size. Lossy data compression schemes are designed by research on how people perceive the data in question. For example, the human eye is more sensitive to subtle variations in luminance than it is to the variations in color. JPEG image compression works in part by rounding off nonessential bits of information. A number of popular compression formats exploit these perceptual differences, including psychoacoustics for sound, and psychovisuals for images and video.
Most forms of lossy compression are based on transform coding, especially the discrete cosine transform (DCT). It was first proposed in 1972 by Nasir Ahmed, who then developed a working algorithm with T. Natarajan and K. R. Rao in 1973, before introducing it in January 1974. DCT is the most widely used lossy compression method, and is used in multimedia formats for images (such as JPEG and HEIF), video (such as MPEG, AVC and HEVC) and audio (such as MP3, AAC and Vorbis).
Lossy image compression is used in digital cameras, to increase storage capacities. Similarly, DVDs, Blu-ray and streaming video use lossy video coding formats. Lossy compression is extensively used in video.
In lossy audio compression, methods of psychoacoustics are used to remove non-audible (or less audible) components of the audio signal. Compression of human speech is often performed with even more specialized techniques; speech coding is distinguished as a separate discipline from general-purpose audio compression. Speech coding is used in internet telephony; audio compression, for example, is used for CD ripping and is decoded by audio players.
Lossy compression can cause generation loss.
The theoretical basis for compression is provided by information theory and, more specifically, algorithmic information theory for lossless compression and rate–distortion theory for lossy compression. These areas of study were essentially created by Claude Shannon, who published fundamental papers on the topic in the late 1940s and early 1950s. Other topics associated with compression include coding theory and statistical inference.
There is a close connection between machine learning and compression. A system that predicts the posterior probabilities of a sequence given its entire history can be used for optimal data compression (by using arithmetic coding on the output distribution). An optimal compressor can be used for prediction (by finding the symbol that compresses best, given the previous history). This equivalence has been used as a justification for using data compression as a benchmark for "general intelligence".
An alternative view shows that compression algorithms implicitly map strings into implicit feature space vectors, and that compression-based similarity measures compute similarity within these feature spaces. For each compressor C(.), an associated vector space ℵ can be defined, such that C(.) maps an input string x to a vector whose norm is ||~x||. An exhaustive examination of the feature spaces underlying all compression algorithms is impractical; instead, one can examine three representative lossless compression methods: LZW, LZ77, and PPM.
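The flavor of such compression-based similarity measures can be demonstrated with the normalized compression distance, here approximated with zlib (an off-the-shelf stand-in, not one of the three compressors named above):

```python
import zlib

def csize(data: bytes) -> int:
    # Compressed size as a crude, computable stand-in for information content.
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: smaller when x and y share structure.
    cx, cy, cxy = csize(x), csize(y), csize(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

s1 = b"the quick brown fox jumps over the lazy dog" * 4
s2 = b"the quick brown fox leaps over the lazy cat" * 4
s3 = bytes(range(256))  # unrelated content
print(ncd(s1, s2))  # relatively small: the strings share most of their structure
print(ncd(s1, s3))  # larger: little shared structure for the compressor to exploit
```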
According to AIXI theory, a connection more directly explained in Hutter Prize, the best possible compression of x is the smallest possible software which generates x. For example, in that model, a zip file's compressed size includes both the zip file and the unzipping software, since you can't unzip it without both, but there may be an even smaller combined form.
Data compression can be viewed as a special case of data differencing. Data differencing consists of producing a difference given a source and a target, with patching reproducing the target given a source and a difference. Since there is no separate source and target in data compression, one can consider data compression as data differencing with empty source data, the compressed file corresponding to a difference from nothing. This is the same as considering absolute entropy (corresponding to data compression) as a special case of relative entropy (corresponding to data differencing) with no initial data.
The term differential compression is used to emphasize the data differencing connection.
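zlib's preset-dictionary feature makes the connection easy to see (a sketch with made-up strings: compressing a target relative to a source "dictionary" is effectively data differencing, and an empty source degenerates to plain compression):

```python
import zlib

source = b"The quick brown fox jumps over the lazy dog. " * 20  # the old version
target = b"The quick brown fox jumps over the lazy cat. " * 20  # the new version

def size_alone(data: bytes) -> int:
    return len(zlib.compress(data, 9))  # a "difference from nothing"

def size_given(data: bytes, dictionary: bytes) -> int:
    # The dictionary plays the role of the source; the output describes the
    # target relative to it. Decoding needs decompressobj(zdict=dictionary).
    comp = zlib.compressobj(level=9, zdict=dictionary)
    return len(comp.compress(data) + comp.flush())

print(size_alone(target))          # plain compression
print(size_given(target, source))  # differencing: usually smaller here
```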
Entropy coding originated in the 1940s with the introduction of Shannon–Fano coding, the basis for Huffman coding which was developed in 1950. Transform coding dates back to the late 1960s, with the introduction of fast Fourier transform (FFT) coding in 1968 and the Hadamard transform in 1969.
An important image compression technique is the discrete cosine transform (DCT), a technique developed in the early 1970s. DCT is the basis for JPEG, a lossy compression format which was introduced by the Joint Photographic Experts Group (JPEG) in 1992. JPEG greatly reduces the amount of data required to represent an image at the cost of a relatively small reduction in image quality and has become the most widely used image file format. Its highly efficient DCT-based compression algorithm was largely responsible for the wide proliferation of digital images and digital photos.
Lempel–Ziv–Welch (LZW) is a lossless compression algorithm developed in 1984. It is used in the GIF format, introduced in 1987. DEFLATE, a lossless compression algorithm specified in 1996, is used in the Portable Network Graphics (PNG) format.
Wavelet compression, the use of wavelets in image compression, began after the development of DCT coding. The JPEG 2000 standard was introduced in 2000. In contrast to the DCT algorithm used by the original JPEG format, JPEG 2000 instead uses discrete wavelet transform (DWT) algorithms. JPEG 2000 technology, which includes the Motion JPEG 2000 extension, was selected as the video coding standard for digital cinema in 2004.
Audio data compression, not to be confused with dynamic range compression, has the potential to reduce the transmission bandwidth and storage requirements of audio data. Audio compression algorithms are implemented in software as audio codecs. In both lossy and lossless compression, information redundancy is reduced, using methods such as coding, quantization, discrete cosine transform and linear prediction to reduce the amount of information used to represent the uncompressed data.
Lossy audio compression algorithms provide higher compression and are used in numerous audio applications including Vorbis and MP3. These algorithms almost all rely on psychoacoustics to eliminate or reduce fidelity of less audible sounds, thereby reducing the space required to store or transmit them.
The acceptable trade-off between loss of audio quality and transmission or storage size depends upon the application. For example, one 640 MB compact disc (CD) holds approximately one hour of uncompressed high fidelity music, less than 2 hours of music compressed losslessly, or 7 hours of music compressed in the MP3 format at a medium bit rate. A digital sound recorder can typically store around 200 hours of clearly intelligible speech in 640 MB.
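Those figures follow directly from the CD's PCM parameters (a back-of-the-envelope check, assuming 44.1 kHz, 16-bit, stereo audio):

```python
bytes_per_second = 44_100 * 2 * 2           # sample rate x channels x bytes per sample
megabytes_per_hour = bytes_per_second * 3600 / 1e6
print(f"{megabytes_per_hour:.0f} MB/hour")  # ~635 MB: about one hour fits on a 640 MB CD
```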
Lossless audio compression produces a representation of digital data that can be decoded to an exact digital duplicate of the original. Compression ratios are around 50–60% of the original size, which is similar to those for generic lossless data compression. Lossless codecs use curve fitting or linear prediction as a basis for estimating the signal. Parameters describing the estimation and the difference between the estimation and the actual signal are coded separately.
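A minimal sketch of that estimate-and-residual idea (a trivial order-1 predictor on made-up samples; real codecs fit higher-order predictors and entropy-code the residual):

```python
samples = [100, 102, 105, 107, 106, 104, 103, 105]

# Predict each sample as the previous one; store only the small residuals.
residuals = [samples[0]] + [samples[i] - samples[i - 1]
                            for i in range(1, len(samples))]
print(residuals)  # [100, 2, 3, 2, -1, -2, -1, 2]: fewer bits needed per value

# Decoding reverses the prediction exactly, so the scheme is lossless.
decoded = [residuals[0]]
for r in residuals[1:]:
    decoded.append(decoded[-1] + r)
assert decoded == samples
```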
A number of lossless audio compression formats exist. See list of lossless codecs for a listing. Some formats are associated with a distinct system, such as Direct Stream Transfer, used in Super Audio CD and Meridian Lossless Packing, used in DVD-Audio, Dolby TrueHD, Blu-ray and HD DVD.
Some audio file formats feature a combination of a lossy format and a lossless correction; this allows stripping the correction to easily obtain a lossy file. Such formats include MPEG-4 SLS (Scalable to Lossless), WavPack, and OptimFROG DualStream.
When audio files are to be processed, either by further compression or for editing, it is desirable to work from an unchanged original (uncompressed or losslessly compressed). Processing of a lossily compressed file for some purpose usually produces a final result inferior to the creation of the same compressed file from an uncompressed original. In addition to sound editing or mixing, lossless audio compression is often used for archival storage, or as master copies.
Lossy audio compression
Lossy audio compression is used in a wide range of applications. In addition to standalone audio-only applications of file playback in MP3 players or computers, digitally compressed audio streams are used in most video DVDs, digital television, streaming media on the Internet, satellite and cable radio, and increasingly in terrestrial radio broadcasts. Lossy compression typically achieves far greater compression than lossless compression, by discarding less-critical data based on psychoacoustic optimizations.
Psychoacoustics recognizes that not all data in an audio stream can be perceived by the human auditory system. Most lossy compression reduces redundancy by first identifying perceptually irrelevant sounds, that is, sounds that are very hard to hear. Typical examples include high frequencies or sounds that occur at the same time as louder sounds. Those irrelevant sounds are coded with decreased accuracy or not at all.
Due to the nature of lossy algorithms, audio quality suffers a digital generation loss when a file is decompressed and recompressed. This makes lossy compression unsuitable for storing the intermediate results in professional audio engineering applications, such as sound editing and multitrack recording. However, lossy formats such as MP3 are very popular with end users as the file size is reduced to 5-20% of the original size and a megabyte can store about a minute's worth of music at adequate quality.
To determine what information in an audio signal is perceptually irrelevant, most lossy compression algorithms use transforms such as the modified discrete cosine transform (MDCT) to convert time domain sampled waveforms into a transform domain, typically the frequency domain. Once transformed, component frequencies can be prioritized according to how audible they are. Audibility of spectral components is assessed using the absolute threshold of hearing and the principles of simultaneous masking—the phenomenon wherein a signal is masked by another signal separated by frequency—and, in some cases, temporal masking—where a signal is masked by another signal separated by time. Equal-loudness contours may also be used to weight the perceptual importance of components. Models of the human ear-brain combination incorporating such effects are often called psychoacoustic models.
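The transform-then-prioritize step can be caricatured in a few lines (a toy illustration using numpy's FFT and a single flat threshold; an actual codec uses the MDCT and computes per-band masking thresholds rather than one global cutoff):

```python
import numpy as np

fs = 8000                      # assumed sample rate in Hz
t = np.arange(fs) / fs
# A loud tone at 440 Hz plus a much quieter one at 3000 Hz.
signal = np.sin(2 * np.pi * 440 * t) + 0.01 * np.sin(2 * np.pi * 3000 * t)

spectrum = np.fft.rfft(signal)
mask = np.abs(spectrum) > 0.05 * np.abs(spectrum).max()  # keep only strong components
print(f"kept {np.count_nonzero(mask)} of {spectrum.size} coefficients")

# Reconstruction drops the quiet tone; most coefficients need not be stored.
reconstructed = np.fft.irfft(spectrum * mask, n=signal.size)
```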
Other types of lossy compressors, such as the linear predictive coding (LPC) used with speech, are source-based coders. LPC uses a model of the human vocal tract to analyze speech sounds and infer the parameters used by the model to produce them moment to moment. These changing parameters are transmitted or stored and used to drive another model in the decoder which reproduces the sound.
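A rough sketch of the analysis side (recovering predictor coefficients from a synthetic signal by least squares; actual LPC uses the autocorrelation method and drives a vocal-tract model with the coefficients):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "speech-like" AR(2) signal: each sample is a fixed combination
# of the two previous samples plus a small excitation term.
true_a = np.array([1.3, -0.6])
x = np.zeros(500)
for n in range(2, x.size):
    x[n] = true_a @ x[n - 2:n][::-1] + 0.01 * rng.standard_normal()

# Analysis: infer the model parameters from the signal alone.
X = np.column_stack([x[1:-1], x[:-2]])       # lagged samples as predictors
a_hat, *_ = np.linalg.lstsq(X, x[2:], rcond=None)
print(a_hat)  # close to [1.3, -0.6]; only these (plus excitation) need be sent
```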
Lossy formats are often used for the distribution of streaming audio or interactive communication (such as in cell phone networks). In such applications, the data must be decompressed as the data flows, rather than after the entire data stream has been transmitted. Not all audio codecs can be used for streaming applications.
Latency is introduced by the methods used to encode and decode the data. Some codecs will analyze a longer segment, called a frame, of the data to optimize efficiency, and then code it in a manner that requires a larger segment of data at one time to decode. The inherent latency of the coding algorithm can be critical; for example, when there is a two-way transmission of data, such as with a telephone conversation, significant delays may seriously degrade the perceived quality.
In contrast to the speed of compression, which is proportional to the number of operations required by the algorithm, here latency refers to the number of samples that must be analyzed before a block of audio is processed. In the minimum case, latency is zero samples (e.g., if the coder/decoder simply reduces the number of bits used to quantize the signal). Time domain algorithms such as LPC also often have low latencies, hence their popularity in speech coding for telephony. In algorithms such as MP3, however, a large number of samples have to be analyzed to implement a psychoacoustic model in the frequency domain, and latency is on the order of 23 ms.
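The arithmetic is simply frame length over sample rate; for instance, a hypothetical 1024-sample analysis frame at 44.1 kHz gives a delay in that 23 ms range:

```python
frame_samples, sample_rate = 1024, 44_100              # assumed values for illustration
print(f"{1000 * frame_samples / sample_rate:.1f} ms")  # ~23.2 ms of inherent latency
```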
Speech encoding is an important category of audio data compression. The perceptual models used to estimate what aspects of speech a human ear can hear are generally somewhat different from those used for music. The range of frequencies needed to convey the sounds of a human voice is normally far narrower than that needed for music, and the sound is normally less complex. As a result, speech can be encoded at high quality using a relatively low bit rate.
This is accomplished, in general, by some combination of two approaches:
- Only encoding sounds that could be made by a single human voice.
- Throwing away more of the data in the signal—keeping just enough to reconstruct an "intelligible" voice rather than the full frequency range of human hearing.
Early audio research was conducted at Bell Labs. There, in 1950, C. Chapin Cutler filed the patent on differential pulse-code modulation (DPCM). In 1973, Adaptive DPCM (ADPCM) was introduced by P. Cummiskey, Nikil S. Jayant and James L. Flanagan.
Perceptual coding was first used for speech coding compression, with linear predictive coding (LPC). Initial concepts for LPC date back to the work of Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in 1966. During the 1970s, Bishnu S. Atal and Manfred R. Schroeder at Bell Labs developed a form of LPC called adaptive predictive coding (APC), a perceptual coding algorithm that exploited the masking properties of the human ear, followed in the early 1980s with the code-excited linear prediction (CELP) algorithm which achieved a significant compression ratio for its time. Perceptual coding is used by modern audio compression formats such as MP3 and AAC.
Discrete cosine transform (DCT), developed by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974, provided the basis for the modified discrete cosine transform (MDCT) used by modern audio compression formats such as MP3 and AAC. MDCT was proposed by J. P. Princen, A. W. Johnson and A. B. Bradley in 1987, following earlier work by Princen and Bradley in 1986. The MDCT is used by modern audio compression formats such as Dolby Digital, MP3, and Advanced Audio Coding (AAC).
The world's first commercial broadcast automation audio compression system was developed by Oscar Bonello, an engineering professor at the University of Buenos Aires. In 1983, using the psychoacoustic principle of the masking of critical bands first published in 1967, he started developing a practical application based on the recently developed IBM PC computer, and the broadcast automation system was launched in 1987 under the name Audicom. Twenty years later, almost all the radio stations in the world were using similar technology manufactured by a number of companies.
A literature compendium for a large variety of audio coding systems was published in the IEEE's Journal on Selected Areas in Communications (JSAC), in February 1988. While there were some papers from before that time, this collection documented an entire variety of finished, working audio coders, nearly all of them using perceptual (i.e. masking) techniques and some kind of frequency analysis and back-end noiseless coding. Several of these papers remarked on the difficulty of obtaining good, clean digital audio for research purposes. Most, if not all, of the authors in the JSAC edition were also active in the MPEG-1 Audio committee, which created the MP3 format.
Video compression is a practical implementation of source coding in information theory. In practice, most video codecs are used alongside audio compression techniques to store the separate but complementary data streams as one combined package using so-called container formats.
Uncompressed video requires a very high data rate. Although lossless video compression codecs perform at a compression factor of 5 to 12, a typical H.264 lossy compression video has a compression factor between 20 and 200.
The two key video compression techniques used in video coding standards are the discrete cosine transform (DCT) and motion compensation (MC). Most video coding standards, such as the H.26x and MPEG formats, typically use motion-compensated DCT video coding (block motion compensation).
Video data may be represented as a series of still image frames. Such data usually contains abundant amounts of spatial and temporal redundancy. Video compression algorithms attempt to reduce redundancy and store information more compactly.
Most video compression formats and codecs exploit both spatial and temporal redundancy (e.g. through difference coding with motion compensation). Similarities can be encoded by only storing differences between e.g. temporally adjacent frames (inter-frame coding) or spatially adjacent pixels (intra-frame coding). Inter-frame compression (a temporal delta encoding) (re)uses data from one or more earlier or later frames in a sequence to describe the current frame. Intra-frame coding, on the other hand, uses only data from within the current frame, effectively being still-image compression.
The intra-frame video coding formats used in camcorders and video editing employ simpler compression that uses only intra-frame prediction. This simplifies video editing software, as it prevents a situation in which a P or B frame refers to data that the editor has deleted.
Usually video compression additionally employs lossy compression techniques like quantization that reduce aspects of the source data that are (more or less) irrelevant to the human visual perception by exploiting perceptual features of human vision. For example, small differences in color are more difficult to perceive than are changes in brightness. Compression algorithms can average a color across these similar areas to reduce space, in a manner similar to those used in JPEG image compression. As in all lossy compression, there is a trade-off between video quality and bit rate, cost of processing the compression and decompression, and system requirements. Highly compressed video may present visible or distracting artifacts.
Other methods than the prevalent DCT-based transform formats, such as fractal compression, matching pursuit and the use of a discrete wavelet transform (DWT), have been the subject of some research, but are typically not used in practical products (except for the use of wavelet coding as still-image coders without motion compensation). Interest in fractal compression seems to be waning, due to recent theoretical analysis showing a comparative lack of effectiveness of such methods.
Inter-frame coding works by comparing each frame in the video with the previous one. Individual frames of a video sequence are compared from one frame to the next, and the video compression codec sends only the differences to the reference frame. If the frame contains areas where nothing has moved, the system can simply issue a short command that copies that part of the previous frame into the next one. If sections of the frame move in a simple manner, the compressor can emit a (slightly longer) command that tells the decompressor to shift, rotate, lighten, or darken the copy. This longer command still remains much shorter than intraframe compression. Usually the encoder will also transmit a residue signal which describes the remaining more subtle differences to the reference imagery. Using entropy coding, these residue signals have a more compact representation than the full signal. In areas of video with more motion, the compression must encode more data to keep up with the larger number of pixels that are changing. Commonly during explosions, flames, flocks of animals, and in some panning shots, the high-frequency detail leads to quality decreases or to increases in the variable bitrate.
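A toy version of difference coding shows why this pays off (numpy and zlib standing in for a real motion-compensated codec; the "frames" are synthetic):

```python
import numpy as np
import zlib

rng = np.random.default_rng(1)
frame1 = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)  # a busy "image"
frame2 = frame1.copy()
frame2[40:60, 50:90] += 3  # only a small region changes (uint8 wrap-around is fine here)

raw_size = len(zlib.compress(frame2.tobytes(), 9))
residual = frame2.astype(np.int16) - frame1.astype(np.int16)    # mostly zeros
delta_size = len(zlib.compress(residual.tobytes(), 9))
print(raw_size, delta_size)  # the sparse residual compresses far better than the frame
```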
Hybrid block-based transform formats
Today, nearly all commonly used video compression methods (e.g., those in standards approved by the ITU-T or ISO) share the same basic architecture that dates back to H.261 which was standardized in 1988 by the ITU-T. They mostly rely on the DCT, applied to rectangular blocks of neighboring pixels, and temporal prediction using motion vectors, as well as nowadays also an in-loop filtering step.
In the prediction stage, various deduplication and difference-coding techniques are applied that help decorrelate data and describe new data based on already transmitted data.
Then rectangular blocks of (residue) pixel data are transformed to the frequency domain to ease targeting irrelevant information in quantization and for some spatial redundancy reduction. The discrete cosine transform (DCT) that is widely used in this regard was introduced by N. Ahmed, T. Natarajan and K. R. Rao in 1974.
In the main lossy processing stage that data gets quantized in order to reduce information that is irrelevant to human visual perception.
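In miniature, the transform and quantization stages look like this (an 8×8 DCT built directly from its definition, with one uniform quantization step; real codecs apply per-frequency quantization matrices):

```python
import numpy as np

N = 8
# Orthonormal DCT-II basis matrix, straight from the definition.
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
D = np.sqrt(2 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
D[0, :] /= np.sqrt(2)

block = np.outer(np.linspace(0, 255, N), np.ones(N))  # a smooth 8x8 pixel block
coeffs = D @ block @ D.T       # 2-D DCT: energy concentrates in few coefficients

step = 16
quantized = np.round(coeffs / step)  # the lossy step: most entries become zero
print(np.count_nonzero(quantized), "of", N * N, "coefficients survive")

reconstructed = D.T @ (quantized * step) @ D  # inverse DCT of the dequantized block
print(np.abs(reconstructed - block).max())    # small error despite all the zeros
```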
In the last stage statistical redundancy gets largely eliminated by an entropy coder which often applies some form of arithmetic coding.
In an additional in-loop filtering stage various filters can be applied to the reconstructed image signal. By computing these filters also inside the encoding loop they can help compression because they can be applied to reference material before it gets used in the prediction process and they can be guided using the original signal. The most popular example is deblocking filters, which blur out blocking artefacts from quantization discontinuities at transform block boundaries.
In 1967, A.H. Robinson and C. Cherry proposed a run-length encoding bandwidth compression scheme for the transmission of analog television signals. Discrete cosine transform (DCT), which is fundamental to modern video compression, was introduced by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974.
H.261, which debuted in 1988, commercially introduced the prevalent basic architecture of video compression technology. It was the first video coding format based on DCT compression, which would subsequently become the standard for all of the major video coding formats that followed. H.261 was developed by a number of companies, including Hitachi, PictureTel, NTT, BT and Toshiba.
The most popular video coding standards used for codecs have been the MPEG standards. MPEG-1 was developed by the Moving Picture Experts Group (MPEG) in 1991, and it was designed to compress VHS-quality video. It was succeeded in 1994 by MPEG-2/H.262, which was developed by a number of companies, primarily Sony, Thomson and Mitsubishi Electric. MPEG-2 became the standard video format for DVD and SD digital television. In 1999, it was followed by MPEG-4/H.263, which was a major leap forward for video compression technology. It was developed by a number of companies, primarily Mitsubishi Electric, Hitachi and Panasonic.
The most widely used video coding format is H.264/MPEG-4 AVC. It was developed in 2003 by a number of organizations, primarily Panasonic, Godo Kaisha IP Bridge and LG Electronics. AVC commercially introduced the modern context-adaptive binary arithmetic coding (CABAC) and context-adaptive variable-length coding (CAVLC) algorithms. AVC is the main video encoding standard for Blu-ray Discs, and is widely used by video sharing websites and streaming internet services such as YouTube, Netflix, Vimeo, and iTunes Store, web software such as Adobe Flash Player and Microsoft Silverlight, and various HDTV broadcasts over terrestrial and satellite television.
Genetic compression algorithms are the latest generation of lossless algorithms that compress data (typically sequences of nucleotides) using both conventional compression algorithms and genetic algorithms adapted to the specific datatype. In 2012, a team of scientists from Johns Hopkins University published a genetic compression algorithm that does not use a reference genome for compression. HAPZIPPER was tailored for HapMap data and achieves over 20-fold compression (95% reduction in file size), providing 2- to 4-fold better compression, much faster, than the leading general-purpose compression utilities. For this, Chanda, Elhaik, and Bader introduced MAF-based encoding (MAFE), which reduces the heterogeneity of the dataset by sorting SNPs by their minor allele frequency, thus homogenizing the dataset. Other algorithms, developed in 2009 and 2013 (DNAZip and GenomeZip), have compression ratios of up to 1200-fold, allowing 6 billion basepair diploid human genomes to be stored in 2.5 megabytes (relative to a reference genome or averaged over many genomes). For a benchmark in genetics/genomics data compressors, see
Outlook and currently unused potential
It is estimated that the total amount of data stored on the world's storage devices could be further compressed with existing compression algorithms by a remaining average factor of 4.5:1. It is estimated that the combined technological capacity of the world to store information provided 1,300 exabytes of hardware digits in 2007, but that when the corresponding content is optimally compressed, this represents only 295 exabytes of Shannon information.
Calculating values using built-in functions and variables, part of Learning Octave with Curt Frye.
In the previous movie, I showed you how to use built-in mathematical and logical operators to evaluate expressions. In this movie, I will show you some useful built-in functions that you can use to calculate values. I'll start out with a very quick overview in PowerPoint, and then I'll move over to Octave so that you can see how it works inside of the programming environment. The first functions I'll cover deal with roots. We have sqrt(x), which finds the square root of x; so the square root of 9 is 3. Then we have nthroot(x, n), which finds the nth root of x.
So, for example, nthroot(27, 3) finds the third or cube root of 27, and that is 3. The next four functions all deal with numbers that have a decimal component. fix truncates the number; that is, it just removes the decimal component. ceil, which is short for ceiling, rounds the number up, regardless of the decimal component; so, for example, 4.001 would be rounded up to 5. floor does exactly the opposite: it rounds a number down, so 4.99 would be rounded down to 4.
And finally, round rounds a number to the closest integer. The rule that it uses is that any decimal component of 0.5 or higher is rounded up to the next integer. So, for example, 4.49 would be rounded down to 4, and 4.5 would be rounded up to 5. Other useful functions include max, which finds the largest number in a set, and min, which finds the smallest number. You can also calculate a factorial; for example, 5 factorial returns 120 because it's five times four times three times two times one. And you can also work with primes by generating a list of primes.
There are two ways to do that. The first is using the primes function, which returns the primes up to a particular number. So, for example, if you wanted to list all the prime numbers less than eight, you would say primes and then enclose eight in parentheses, and you would get a list of two, three, five, seven. If you want to list the first x primes, then you can use list_primes(x). So, for example, list_primes(3) would return two, three and five. That's a quick overview of the functions. Now let me jump over to Octave and show you how to use them.
Now that I'm in Octave, I'll type in the expressions using built-in functions. So, for example, sqrt of 16, with the 16 in parentheses. If I press Enter, I get 4. And it doesn't have to be a whole number: if I type in sqrt(15) and press Enter, I get 3.8730. nthroot works in a similar fashion.
So nthroot(81, 4) means that I am asking Octave to calculate the fourth root of 81, and when I press Enter, we get the answer 3. And again, the next four functions all deal with numbers that have decimal components. So if I type fix(3.5), enclosing 3.5 in parentheses, the fix function removes the decimal component, so if I press Enter, it returns a value of 3. Ceiling, ceil, rounds up regardless of the decimal component, so if I type in ceil(6.001) and press Enter, I get the number 7.
floor does exactly the opposite, so in a way it's like fix. I type in floor(6.999) and Enter, and I get the answer of 6. And finally, round. I'll do two examples for this: round(4.49) rounds down to 4, and round(4.5) rounds up to 5. max and min work on sets of values. So, for example, I can assign the variable V a series of values; this is a one-dimensional matrix, or a vector.
So I would assign it the values 1, 2, 3 and 4, enclosed in square brackets, and press Enter. You'll see that those numbers have now been assigned to V. If I type in max(V), I get 4. And if I type in min(V), then press Enter, I get 1. So max finds the largest value, 4, and min finds the smallest value in that set, which is 1. And don't worry, I will cover matrices and vectors in more detail. The next function that I mentioned is factorial.
So, for example, factorial(5) calculates 5 times 4 times 3 times 2 times 1, and that value is 120. The last two functions deal with prime numbers. So if I type in primes(23) and press Enter, I get a list of all the prime numbers up to and including 23, and if I type list_primes(23), then I will get a list of the first 23 prime numbers, which you see here.
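Collected in one place, the session above looks like this (an Octave sketch assembled from the narration; the commented values are the ones quoted in the video):

```octave
% Roots
sqrt(16)          % ans = 4
sqrt(15)          % ans = 3.8730
nthroot(81, 4)    % ans = 3, the fourth root of 81

% Functions for decimal components
fix(3.5)          % ans = 3, truncates the decimal component
ceil(6.001)       % ans = 7, always rounds up
floor(6.999)      % ans = 6, always rounds down
round(4.49)       % ans = 4
round(4.5)        % ans = 5

% Sets, factorials and primes
V = [1, 2, 3, 4]; % a vector (one-dimensional matrix)
max(V)            % ans = 4
min(V)            % ans = 1
factorial(5)      % ans = 120
primes(23)        % all primes up to and including 23
list_primes(23)   % the first 23 primes
```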
Those are the built-in functions that I have found to be the most useful. You'll come back to them again and again, when you build your Octave scripts.
Temporal range: Pleistocene–Present
The snowy owl (Bubo scandiacus), also known as the polar owl, the white owl and the Arctic owl, is a large, white owl of the true owl family. Snowy owls are native to the Arctic regions of both North America and the Palearctic, breeding mostly on the tundra. It has a number of unique adaptations to its habitat and lifestyle, which are quite distinct from those of other extant owls. One of the largest species of owl, it is the only owl with largely white plumage. Males tend to be a purer white overall, while females tend to have more extensive flecks of dark brown. Juvenile male snowy owls have dark markings that may appear similar to females' until maturity, at which point they typically turn whiter. The composition of brown markings about the wing, although not foolproof, is the most reliable technique for ageing and sexing individual snowy owls.
Most owls sleep during the day and hunt at night, but the snowy owl is often active during the day, especially in the summertime. The snowy owl is both a specialized and a generalist hunter. Its breeding efforts and its entire global population are closely tied to the availability of tundra-dwelling lemmings, but in the non-breeding season, and occasionally during breeding, the snowy owl can adapt to almost any available prey, most often other small mammals and northerly water birds (as well as, opportunistically, carrion). Snowy owls typically nest on a small rise on the ground of the tundra. The snowy owl lays a very large clutch of eggs, often from about 5 to 11, with the laying and hatching of eggs considerably staggered. Despite the short Arctic summer, the development of the young takes a relatively long time and independence is sought in autumn.
The snowy owl is a nomadic bird, rarely breeding at the same locations or with the same mates on an annual basis, and often not breeding at all if prey is unavailable. A largely migratory bird, snowy owls can wander almost anywhere close to the Arctic, sometimes unpredictably irrupting to the south in large numbers. Given the difficulty of surveying such an unpredictable bird, there was historically little in-depth knowledge about the snowy owl's status. However, recent data suggest the species is declining precipitously. Whereas the global population was once estimated at over 200,000 individuals, recent data suggest that there are probably fewer than 100,000 individuals globally and that the number of successful breeding pairs is 28,000 or even considerably fewer. While the causes are not well understood, numerous complex environmental factors, often correlated with global warming, are probably at the forefront of the fragility of the snowy owl's existence.
The snowy owl was one of the many bird species originally described by Carl Linnaeus in his landmark 1758 10th edition of Systema Naturae, where it was given the binomial name Strix scandiaca. The genus name Bubo is Latin for a horned owl, and scandiacus is New Latin for "of Scandinavia". The former genus name Nyctea derives from the Greek for "night". Linnaeus originally described the different plumages of this owl as separate species, with the male specimens of snowy owls being considered Strix scandiaca and the likely females considered Strix nyctea. Until recently, the snowy owl was regarded as the sole member of a distinct genus, as Nyctea scandiaca, but mtDNA cytochrome b sequence data show that it is very closely related to the horned owls in the genus Bubo, and the species is now often placed in that genus. However, some authorities debate this classification, still preferring Nyctea. Authorities who retain the separate genus are often motivated by osteological distinctions.
Genetic testing revealed a reasonably distinct genetic makeup for snowy owls, which are about 8% genetically distinct from other Bubo owls, perhaps giving credence to those who count the species as separate under Nyctea. However, a fairly recent shared origin in evolutionary history has been illustrated through a combination of genetic study and fossil review, and there is little, other than the osteology of the tarsometatarsus, to outright distinguish the snowy owl from other modern species like the Eurasian eagle-owl (Bubo bubo). Genetic testing has indicated that the snowy owl may have diverged from related species around 4 million years ago. Furthermore, it has determined that the living species genetically most closely related to the snowy owl is the great horned owl (Bubo virginianus). On a broader scale, owls in general have, through genetic materials, been determined to be a highly distinct group, with outwardly similar groups such as the Caprimulgiformes revealed to not be at all closely related. Within the owl order, typical owls are highly divergent from barn-owls. Furthermore, the Bubo genus likely clustered at some point during the evolutionary process with other largish owls, such as Strix, Pulsatrix and Ciccaba, based on broad similarities in their voices, reproductive behaviors (i.e. hooting postures) and a similar number and structure of chromosomes and autosomes. A number, but not all, of the extant typical owls seem to have evolved from an ancient common ancestor shared with the Bubo owls. In addition to the question of the relationship of the traditional Bubo owls to the snowy owls, ambiguity about the relationship of other similarly large-sized owls has persisted. These have sometimes been included in the genus or placed within separate genera, i.e. the Ketupa or fish owls and the Scotopelia or fishing owls. Despite the adaptive distinctions, the grouping of these large owls (i.e. Bubo, snowy, fish and perhaps fishing owls) appears to be borne out via research of karyotypes.
The fossil history of snowy owls is fairly well documented, despite some early confusion on how to distinguish the skeletal structure of snowy owls from that of eagle-owls. It has been determined that the snowy owl was once distributed much more widely and far farther to the south during the Quaternary glaciation, when much of the Northern Hemisphere was in the midst of an ice age. Fossil records show that snowy owls once could be found in Austria, Azerbaijan, Czechoslovakia, England, France, Germany, Hungary, Italy, Poland, Sardinia and Spain, as well as in the Americas in Cape Prince of Wales, Little Kiska Island, St. Lawrence Island, and in Illinois. In the Late Pleistocene the range expanded southward even further, to Bulgaria (80,000–16,000 years ago; Kozarnika Cave, western Bulgaria) and much of the Italian Peninsula. Pleistocene-era fossils from France, i.e. B. s. gallica, show that the snowy owls of the time were somewhat bulkier (though still notably smaller than the contemporary eagle-owls, which were larger than the eagle-owls of today) and osteologically more sexually dimorphic in size than the modern form (9.9% dimorphism in favor of females in the fossils, against 4.8% in the same features today). There are no subspecific or other geographical variations reported in modern snowy owls, with individuals of vastly different origins breeding together readily due to their nomadic habits. Despite apparent variations in body size, environmental conditions, rather than genetics, are the likely cause. No evidence of phylogeographic variation was found in snowy owls upon testing. Furthermore, the snowy owl appears to have a similar level of genetic diversity to other European owls.
Snowy owls are not known to interbreed with other owl species in the wild, and accordingly, no hybrids of snowy owls and other owl species have yet been sighted in the wild. However, a hobby falconer in Kollnburg, Germany, managed to successfully breed hybrids from a male snowy owl and a female Eurasian eagle-owl (Bubo bubo) in 2013. The two resulting male hybrid owls possessed the prominent ear tufts (generally absent in snowy owls), general size, orange eyes, and the same pattern of black markings on their plumage from their Eurasian eagle-owl mother, while retaining the generally black-and-white plumage colours from their snowy owl father. The hybrids were dubbed "Schnuhus", a portmanteau of the German words for snowy owl and Eurasian eagle-owl (Schnee-Eule and Uhu, respectively). As of 2014, the hybrids had grown to maturity and were healthy.
The snowy owl is mostly white, purer white than predatory mammals like polar bears (Ursus maritimus) and the Arctic fox (Vulpes lagopus). Often when seen in the field, these owls can resemble a pale rock or a lump of snow on the ground. It usually appears to lack ear tufts, but very short (and probably vestigial) tufts can be erected in some situations, perhaps most frequently by the female when she is sitting on the nest. The ear tufts measure about 20 to 25 mm (0.79 to 0.98 in) and consist of about 10 small feathers. The snowy owl has bright yellow eyes. The head is relatively small and, even for the relatively simply adapted hearing mechanism of a Bubo owl, the facial disc is shallow and the ear is uncomplicated. One male had ear slits of merely 21 mm × 14 mm (0.83 in × 0.55 in) on the left and 21 mm × 14.5 mm (0.83 in × 0.57 in) on the right. Females are almost invariably more duskily patterned than like-age males. In mature males, the upper parts are plain white with usually a few dark spots on the miniature ear-tufts, about the head and the tips of some primaries and secondaries, whilst the underside is often pure white. Despite their reputation for being purely white, only 3 out of 129 Russian museum specimens of adult males showed an almost complete absence of darker spots. The adult female is usually considerably more spotted and often slightly barred with dark brown on the crown and the underparts. Her flight and tail feathers are faintly barred brown, while the underparts are white in base color with brown spotting and barring on the flanks and upper breast. In confusingly plumaged snowy owls, the sex can be determined by the shape of the wing markings, which tend to manifest as bars in females and spots in males. However, the very darkest males and the lightest females are nearly indistinguishable by plumage. On rare occasion, a female can appear almost pure white, as has been recorded both in the field and in captivity. There is some evidence that some individuals of the species grow paler with age after maturity. One study concluded that males were usually, but not always, lighter, and that correct aging is extremely difficult: individuals may get lighter, get darker or not change their appearance with age. On the other hand, with close study, it is possible to visually identify even individual snowy owls using the pattern of markings on the wing, which can be unique to each individual. After a fresh moult, some adult females that previously appeared relatively pale showed new dark, heavy markings. Conversely, some banded individuals observed over at least four years remained almost entirely unchanged in the extent of their markings. In another very pale owl, the barn owl (Tyto alba), the sexual dimorphism of spotting appears to be driven by genetics while, in snowy owls, environment may be the dictating factor instead.
The chicks are initially grayish white but quickly transition to dark gray-brown in the mesoptile plumage. This plumage camouflages them effectively against the variously colored lichens that dot the tundra ground. It is gradually replaced by plumage showing dark barring on white. At the point of fledging, the plumage often becomes irregularly mottled or blotched with dark and is mostly solidly dark gray-brown above, with white eyebrows and other areas of the face white. Recently fledged young can already be sexed to a semi-reliable degree by the dark marking patterns on their wings. The juvenile plumage resembles that of adult females but averages slightly darker. By the second moult, the wing bars are usually fewer or more broken. The extent of white and the composition of wing patterns become more dimorphic by sex with each juvenile moult, culminating in the 4th or 5th pre-basic moult, after which the owls are hard to distinguish from mature adults. Moults usually occur from July to September, non-breeding birds moulting later and more extensively, and are never extensive enough to render the owls flightless. Evidence indicates that snowy owls may attain adult plumage at 3 to 4 years of age, but fragmentary information suggests that some males do not attain full maturity, or the whitest plumage they can achieve, until their 9th or 10th year. Generally speaking, moults of snowy owls occur more quickly than do those of Eurasian eagle-owls.
The toes of the snowy owl are extremely thickly feathered in white, while the claws are black. The toe feathers are the longest known of any owl, averaging 33.3 mm (1.31 in), against the great horned owl, which has the second longest toe feathers at a mean of 13 mm (0.51 in). Occasionally, snowy owls may show a faint blackish edge to the eyes and have a dark gray cere, though this is often not visible due to the feather coverage, and a black bill. Unlike many other whitish birds, the snowy owl does not possess black wingtips, which are theorized to minimize wear-and-tear on the wing feathers of the other whitish bird types. The conspicuously notched primaries of the snowy owl appear to give it an advantage over similar owls in long-distance flight and more extensive flapping flight. The snowy owl does have some of the noise-canceling serrations and comb-like wing feathers that render the flight of most owls functionally silent, but it has fewer than most related Bubo owls. Therefore, in combination with its less soft feathers, the flight of a snowy owl can be somewhat audible at close range. The flight of snowy owls tends to be steady and direct and is reminiscent to some of the flight of a large, slow-flying falcon. Though capable of occasional gliding flight, there is no evidence that snowy owls will soar. It is said that the species seldom exceeds a flying height of around 150 m (490 ft) even during passage. While the feet are sometimes described as "enormous", the tarsus is in osteological terms relatively short, at 68% the length of that of a Eurasian eagle-owl, but the claws are nearly as large, at 89% of the size of those of the eagle-owl. Despite its relatively short length, the tarsus is of similar circumference to that of other Bubo owls. Also compared to an eagle-owl, the snowy owl has a relatively short decurved rostrum, a proportionately greater length to the interorbital roof and a much longer sclerotic ring surrounding the eyes, while the anterior opening is the largest known in any owl. Owls have extremely large eyes, which in large species such as the snowy owl are nearly the same size as those of humans. The snowy owl's eye, at about 23.4 mm (0.92 in) in diameter, is slightly smaller than those of great horned and Eurasian eagle-owls but slightly larger than those of some other large owls. Snowy owls must be able to see from great distances and in highly variable conditions but probably possess less acute night vision than many other owls. Based on the study of dioptres in different owl species, the snowy owl was determined to have eyesight better suited to long-range perception than to close discrimination, while some related species such as great horned owls could probably perceive closer objects more successfully. Despite their visual limits, snowy owls may have up to 1.5 times greater visual acuity than humans. Like other owls, snowy owls can probably perceive all colors but, lacking ultraviolet visual pigments, cannot perceive ultraviolet light. Owls have the largest brains of any bird (increasing in sync with the size of the owl species), with the size of the brain and eye related less to intelligence than perhaps to increased nocturnality and predatory behavior.
The snowy owl is a very large owl. It is the largest avian predator of the High Arctic and one of the largest owls in the world. Snowy owls are about the sixth or seventh heaviest living owl on average, around the fifth longest and perhaps the third longest-winged. This species is the heaviest and longest-winged owl (as well as the second longest) in North America and the second heaviest and longest-winged owl in Europe (and third longest), but is outsized in bulk by about 3 to 4 other species in Asia. Despite being sometimes described as of similar size, the snowy owl is somewhat larger in all aspects of average size than the great horned owl, while the similarly specialized, taiga-dwelling great grey owl (Strix nebulosa) is longer in total length and of similar dimensions in standard measurements, but is shorter-winged and much less heavy than the snowy owl. In Eurasia, the Eurasian eagle-owl is larger in all standard measurements than the snowy owl, not to mention two additional species each from Africa and Asia that are slightly to considerably heavier on average. Like most birds of prey, the snowy owl shows reverse sexual dimorphism relative to most non-raptorial birds, in that females are larger than males. Sexual dimorphism that favors the female may have some correlation with the ability to more effectively withstand food shortages, such as during brooding, as well as the rigors associated with incubating and brooding. Females are sometimes described as "giant" whereas males appear relatively "neat and compact". However, the sexual dimorphism is relatively less pronounced compared to some other Bubo species.
Male snowy owls have been known to measure from 52.5 to 64 cm (20.7 to 25.2 in) in total length, with an average from four large samples of 58.7 cm (23.1 in) and a maximum length, perhaps in need of verification, of reportedly 70.7 cm (27.8 in). In wingspan, males may range from 116 to 165.6 cm (3 ft 10 in to 5 ft 5 in), with a mean of 146.6 cm (4 ft 10 in). In females, total length has been known to range from 54 to 71 cm (21 to 28 in), with a mean of 63.7 cm (25.1 in) and an unverified maximum length of perhaps 76.7 cm (30.2 in) (if accurate, this would be the second longest maximum length of any living owl, after only the great grey owl). Female wingspans have reportedly measured from 146 to 183 cm (4 ft 9 in to 6 ft 0 in), with a mean of 159 cm (5 ft 3 in). Despite one study claiming that the snowy owl had the highest wing loading (i.e. grams per square cm of wing area) of any of 15 well-known owl species, more extensive sampling demonstrated that the wing loading of snowy owls is notably lower than that of Eurasian eagle- and great horned owls. The conspicuously long-winged profile of a flying snowy owl compared to these related species may cause some to compare its flight profile to a bulkier version of an enormous Buteo or a large falcon. Body mass in males can average from 1,465 to 1,808.3 g (3.230 to 3.987 lb), with a median of 1,658.2 g (3.656 lb) and a full weight range of 1,300 to 2,500 g (2.9 to 5.5 lb) from six sources. Body mass in females can average from 1,706.7 to 2,426 g (3.763 to 5.348 lb), with a median of 2,101.8 g (4.634 lb) and a full weight range of 1,330 to 2,951 g (2.932 to 6.506 lb). Larger than the aforementioned body mass studies, a massive pooled dataset from six wintering sites in North America showed that 995 males averaged 1,636 g (3.607 lb), while 1,189 females averaged 2,109 g (4.650 lb). Reported weights of down to 710 g (1.57 lb) for males and of 780 to 1,185 g (1.720 to 2.612 lb) for females are probably in reference to owls in a state of starvation. Such emaciated individuals are known to be highly impaired, and starvation deaths are probably not infrequent in winters with poor food access.
Standard measurements have been even more widely reported than length and wingspan. The wing chord of males can vary from 351 to 439 mm (13.8 to 17.3 in), averaging from 380.1 to 412 mm (14.96 to 16.22 in) with a median of 402.8 mm (15.86 in). The wing chord of females can vary from 380 to 477.3 mm (14.96 to 18.79 in), averaging from 416.2 to 445 mm (16.39 to 17.52 in) with a median of 435.5 mm (17.15 in). The tail length of males can vary on average from 209.6 to 235.4 mm (8.25 to 9.27 in), with a full range of 188 to 261 mm (7.4 to 10.3 in) and a median of 227 mm (8.9 in). The tail length of females can average from 228.5 to 254.4 mm (9.00 to 10.02 in), with a full range of 205 to 288 mm (8.1 to 11.3 in) and a median of 244.4 mm (9.62 in). Data indicates that slightly longer wing chord and tail lengths were reported on average in Russian data than in American research; however, the weights were not significantly different between the two regions. Less widely taken measurements include the culmen, which can measure from 24.6 to 29 mm (0.97 to 1.14 in), with a median average of 26.3 mm (1.04 in) in males and 27.9 mm (1.10 in) in females, and the total bill length, which is from 25 to 42 mm (0.98 to 1.65 in), with an average in both sexes of 35.6 mm (1.40 in). Tarsal length in males averages about 63.6 mm (2.50 in), with a range of 53 to 72 mm (2.1 to 2.8 in), and in females averages about 66 mm (2.6 in), with a range of 54 to 75 mm (2.1 to 3.0 in).
The snowy owl is certainly one of the most unmistakable owls (or perhaps even animals) in the world. No other species attains this bird's signature coloring of white stippled sparsely with black-brown, which renders the bright yellow eyes all the more detectable, nor possesses its extremely long feathering. The only other owl to breed in the High Arctic is the short-eared owl (Asio flammeus). Both species inhabit open country, overlap in range and are often seen by day, but the short-eared is much smaller and more tan or straw-colored, with streaked brown on the chest. Even the palest short-eared owls conspicuously differ from and are darker than the snowy owl; additionally, the short-eared most often hunts in extended flights. More similar owls such as the Eurasian eagle-owl and the great horned owl attain a fairly pale, sometimes white-washed look in their northernmost races. These species do not normally breed nearly as far north as snowy owls, but overlaps certainly do occur when the snowy owl comes south in winter. However, even the palest great horned and Eurasian eagle-owls are still considerably more heavily marked with darker base colors than snowy owls (the whitest eagle-owls are paler than the whitest great horned owls), possess much larger and more conspicuous ear tufts and lack the bicolored appearance of the darkest snowy owls. While the great horned owl has yellow eyes like the snowy owl, the Eurasian eagle-owl tends to have bright orange eyes. The open terrain habitats normally used by wintering snowy owls are also distinct from the typical edge and rocky habitats usually favored by the great horned and Eurasian eagle-owls, respectively.
The snowy owl differs in its calls from other Bubo owls, having a much more barking quality to its version of a hooting song. Perhaps as many as 15 different calls by mature snowy owls have been documented. The main vocalization is a monotonous sequence that normally contains 2–6 (but occasionally more) rough notes, similar in rhythm to a barking dog: krooh krooh krooh krooh... The call may end with an emphatic aaoow, which is somewhat reminiscent of the deep alarm call of a great black-backed gull (Larus marinus). They call mainly from a perch but sometimes also do so in flight. The krooh call of the male snowy owl may perform multiple functions, such as competitive exclusion of other males and advertising to females. The calls of this species may carry exceptionally far in the thin air of the Arctic, certainly over more than 3 km (1.9 mi), and perhaps even as much as 10 to 11 km (6.2 to 6.8 mi) away. The female has a similar call to the male's, but it can be higher-pitched and/or more guttural, as well as single notes which are often disyllabic, khuso. Female snowy owls have also been known to utter chirps and high screaming notes, similar to those of the nestlings. Both sexes may at times give a series of clucks, squeals, grunts, hisses and cackles, perhaps in circumstances when they are excited. The alarm call is a loud, grating, hoarse keeea. Another, raspier bark is recorded, sometimes called a "watchman's rattle" call, and may be transcribed as rick, rick, rick, ha, how, quack, quock or kre, kre, kre, kre, kre. A female attacking to protect her nest was recorded to let out a crowing ca-ca-oh call, whilst other owls attacking to protect the nest gave a loud version of the typical call while circling before dropping down. They may also clap their beak in response to threats or annoyances. While called clapping, it is believed this sound may actually be a clicking of the tongue, not the beak. Though largely only vocal in the breeding season, leading to some erroneous older accounts describing the snowy owl as completely silent, some vocalizations have been recorded in winter in the northern United States. Initially, the young of the snowy owl have a high-pitched, soft begging call, which develops into a strong, wheezy scream at around 2 weeks. At the point when the young owls leave the nest, around 3 weeks, the shrill squeals they emit may allow the mothers to locate them.
Distribution and habitat
The snowy owl is typically found in the northern circumpolar region, where it makes its summer home north of latitude 60° N, though sometimes down to 55° N. However, it is a particularly nomadic bird, and because population fluctuations in its prey species can force it to relocate, it has been known to breed at more southerly latitudes. Although the total breeding range includes a little over 12,000,000 km2 (4,600,000 sq mi), only about 1,300,000 km2 (500,000 sq mi) have a high probability of breeding, i.e. breeding at no more than 3–9-year intervals. Snowy owls nest in the Arctic tundra of the northernmost stretches of Alaska, northern Canada and Eurosiberia.
Between 1967 and 1975, snowy owls bred on the remote island of Fetlar in the Shetland Isles north of mainland Scotland, discovered by the Shetland RSPB warden, Bobby Tulloch. Females summered as recently as 1993, but their status in the British Isles is now that of a rare winter visitor to Shetland, the Outer Hebrides and the Cairngorms. Older records show that snowy owls may once have semi-regularly bred elsewhere in the Shetlands. They range in northern Greenland (mostly Peary Land) and, rarely, in "isolated parts of the highlands" of Iceland. They are also found breeding at times across northern Eurasia, such as in Spitsbergen and western and northern Scandinavia. In Norway, they normally breed in Troms og Finnmark, seldom down as far south as Hardangervidda, and in Sweden perhaps down to the Scandinavian Mountains, while breeding is very inconsistent in Finland.
They also range in much of northern Russia, including northern Siberia, Anadyr, Koryakland, the Taymyr Peninsula, the Yugorsky Peninsula, Sakha (especially the Chukochya River) and Sakhalin. Breeding has also been reported sporadically to the south in the Komi Republic and even the Kama River in southern Perm Krai. Although the Kola Peninsula is considered part of the regular range, snowy owls have not bred there since the early 1980s; similarly, breeding maps show the species in Arkhangelsk Oblast and the Pay-Khoy Ridge, but no breeding has been recorded in either for at least 30 years. They range throughout most of the Arctic isles of Russia, such as Novaya Zemlya, Severnaya Zemlya, the New Siberian Islands, Wrangel Island, and the Commander and Hall Islands.
In North America, the breeding range has been known in modern times to include the Aleutians (i.e. Buldir and Attu) and much of northern Alaska, most frequently from the Arctic National Wildlife Refuge to Utqiaġvik, and more sporadically down along the coastal-western parts, such as through Nome, Hooper Bay and the Yukon Delta National Wildlife Refuge, and rarely even south to the Shumagin Islands. The snowy owl may breed extensively in northern Canada, largely making its home in the Arctic Archipelago. Their Canadian breeding range can broadly include Ellesmere Island up to Cape Sheridan, north coastal Labrador, the northern Hudson Bay, perhaps all of Nunavut (especially the Kivalliq Region), northeastern Manitoba, most of the northern mainland and insular Northwest Territories (including the delta of the Mackenzie River) and the northern Yukon Territory (where breeding is mostly confined to Herschel Island). Since breeding in northern Europe is very small-scale, local and inconsistent, northern Canada and northern Alaska, along with several parts of northern and northeastern/coastal Russia, represent the core of the breeding range for snowy owls.
Regular wintering range
During winter, many snowy owls leave the dark Arctic to migrate to regions further south. The southern limits of the regular winter range are difficult to delineate given the inconsistency of appearances south of the Arctic. Furthermore, not infrequently, many snowy owls will overwinter somewhere in the Arctic, though they seldom appear to do so in the same sites where they have bred. Due in no small part to the difficulty and hazardousness of observation for biologists during these harsh times, there is very limited data on overwintering snowy owls in the tundra, including how many occur, where they winter and what their ecology is at this season. The regular wintering range has at times been thought to include Iceland, Ireland and Scotland and, across northern Eurasia, southern Scandinavia, the Baltics, central Russia, southwestern Siberia, Sakhalin, southern Kamchatka and, rarely, north China and sometimes the Altai Republic. In North America, they winter occasionally in the Aleutian island chain and do so broadly, and with a fair amount of consistency, in much of southern Canada, from British Columbia to Labrador. Recent research has indicated that snowy owls regularly winter in several of the northern seas, following the leads of sea ice as perching sites and presumably hunting mostly seabirds in polynyas. In February 1886, a snowy owl landed on the rigging of the Nova Scotia steamship Ulunda on the edge of the Grand Banks of Newfoundland, over 800 km (500 mi) from the nearest land. It was captured and later preserved at the Nova Scotia Museum. Surprisingly, some studies have determined that after a high lemming year in North America, a higher percentage of snowy owls were using marine environments rather than inland ones.
Large winter irruptions at temperate latitudes are thought to be due to good breeding conditions resulting in more juvenile migrants, and lead to irruptions occurring further south than the typical snowy owl range in some years. Irruptive owls have been reported in all the northerly states of the contiguous United States, and as far south as Georgia, Kentucky, South Carolina, nearly all of the Gulf Coast of the United States, Colorado, Nevada, Texas, Utah, California and even Hawaii. In January 2009, a snowy owl appeared in Spring Hill, Tennessee, the first reported sighting in the state since 1987. Also notable is the mass southern migration in the winter of 2011/2012, when thousands of snowy owls were spotted in various locations across the United States. This was then followed by an even larger mass southern migration in 2013/2014, with the first snowy owls seen in Florida for decades. The nature of irruptions is less well documented in Eurasia, in part due to the paucity of this owl on the European side, but accidental occurrence, presumably during irruptions, has been described in the Mediterranean area, France, Crimea, the Caspian part of Iran, Kazakhstan, northern Pakistan, northwestern India, Korea and Japan. Stragglers may also turn up as far south as the Azores and Bermuda.
Snowy owls are one of the best-known inhabitants of the open Arctic tundra. Frequently, the earth in snowy owl breeding grounds is covered with mosses, lichens and some rocks. Often the species preferentially occurs in areas with some rising elevation, such as hummocks, knolls, ridges, bluffs and rocky outcrops. Some of these rises in the tundra are created by glacial deposits. The ground is usually rather dry in tundra, but in some areas of the southern tundra it can also be quite marshy. Not infrequently, they will also use areas of varied coastal habitat, often tidal flats, as a breeding site. Breeding sites are usually at low elevations, usually less than 300 m (980 ft) above sea level, but when breeding to the south in inland mountains, such as in Norway, they may nest as high as 1,000 m (3,300 ft). Outside the breeding season, snowy owls may inhabit nearly any open landscape. Typical wintering sites are rather windswept with meager cover. These open areas can include coastal dunes, other coastal spots, lakeshores, islands, moorlands, steppes, meadows, prairies, other extensive grasslands and rather shrubby areas of the Subarctic. These may be favored due to their vague similarity to the flat openness of the tundra. Manmade open sites are now perhaps even more used than natural ones, often agricultural fields and rangeland, as well as large areas of cleared forests. During irruption years when they are found in the Northeastern United States, juveniles frequent developed areas, including urban areas and golf courses, as well as the expected grasslands and agricultural areas that older birds primarily use. On the plains of Alberta, observed snowy owls spent 30% of their time in stubble fields, 30% in summer fallow, 14% in hayfields and the remainder of the time in pasture, natural grasslands and sloughs. The agricultural areas, largely untouched by the farmers in winter, may have had more concentrated prey than the other habitats in Alberta. Perhaps the most consistently attractive habitat in North America to wintering snowy owls in modern times may be airports, which not only tend to have the flat, grassy characteristics of their preferred habitats but also, by winter, host a particular diversity of prey, both pests which rely on humans and wildlife attracted to the extensively grassy and marshy strips that dot the large airport vicinities. For example, Logan International Airport in Massachusetts hosts one of the most reliable annual wintering populations known in the United States. All ages spend a fair amount of their time over water in the Bering Sea, the Atlantic Ocean and even the Great Lakes, mostly on ice floes. These marine and ocean-like freshwater areas were observed to account for 22–31% of habitat used by 34 radio-tagged American snowy owls over two irruptive years, with the tagged owls occurring a mean of 3 km (1.9 mi) from the nearest land (while 35–58% used the expected preferred habitats of grassland, pasture and other agriculture).
Snowy owls may be active to some extent by both day, from dawn to dusk, and night. They have been seen to be active even during the very brief daytime of the northern winter. During the Arctic summer, snowy owls may tend to peak in activity during the twilight that is the darkest time available given the lack of full nightfall. Reportedly, the peak time of activity during summer is between 9:00 pm and 3:00 am in Norway. The peak time of activity for the owls that once nested on Fetlar was reported between 10:00 and 11:00 pm. According to one authority, the least active times are at noon and midnight. As days become longer near autumn in Utqiaġvik, the snowy owls in the tundra become more active around nightfall and can often be seen resting during the day, especially if it is raining. During winter in Alberta, snowy owls were tracked in the daytime, despite being also active at night (when they were deemed too difficult to track). In the study, they were most active from 8:00–10:00 am and 4:00–6:00 pm and rested mostly from 10:00 am to 4:00 pm. The owls were perched for 98% of observed daylight and seemed to time their activity to peak times for rodents. The variation of activity probably corresponds with that of their primary prey, the lemmings, and like them, the snowy owl may be considered cathemeral. This species can withstand extremely cold temperatures, having been recorded in temperatures as low as −62.5 °C (−80.5 °F) with no obvious discomfort, and has also withstood a 5-hour exposure to −93 °C (−135.4 °F), though it may have struggled with oxygen consumption by the end of this period. The snowy owl has perhaps the second lowest thermal conductance of plumage on average of any bird, after only the Adélie penguin (Pygoscelis adeliae), and rivals the best-insulated mammals, such as the Dall sheep (Ovis dalli) and Arctic fox, as the best-insulated polar creature. Presumably as many as 7 rodents would need to be eaten daily to survive an extremely cold winter's day. Adults and young both have been seen to shelter behind rocks to shield themselves from particularly harsh winds or storms. Snowy owls often spend a majority of their time on the ground, perched mostly on a slight rise of elevation. It has been interpreted from the morphology of their skeletal structure (i.e. their short, broad legs) that snowy owls are not well-suited to perching extensively in trees or on rocks and prefer a flat surface to sit upon. However, they may perch more in winter, though they do so mainly when hunting, at times on hummocks, fenceposts, telegraph poles by roads, radio and transmission towers, haystacks, chimneys and the roofs of houses and large buildings. Rocks may be used as perches at times in all seasons. Though often relatively sluggish owls, like most related species, they are capable of sudden dashing movements in various contexts. Snowy owls can walk and run quite quickly, using outstretched wings for balance if necessary. This owl flies with fairly rowing wingbeats, occasionally interrupted by gliding on stretched wings. The flight is fairly buoyant for a Bubo owl. When displaying, the male may engage in an undulating flight with interspersed wingbeats and gliding in a slight dihedral, finally dropping rather vertically to the ground. They are capable of swimming but do not usually do so. Some seen swimming were previously injured, but young have been seen to swim into water to escape predators when they cannot yet fly. They will also drink when unfrozen water is available.
Snowy owl mothers have been observed to preen their young in the wild, while pairs in captivity have been observed to allopreen. In the period leading up to breeding, snowy owls switched regularly between searching (for nesting grounds) and loafing, often searching less when snow cover was less extensive.
Snowy owls will occasionally fight with conspecifics in all seasons, though this is relatively infrequent during breeding and rarer still during winter. Dogfights and talon interlocking may ensue if a fight between two snowy owls escalates. A study determined that snowy owls are able to orient the whitest parts of their plumage towards the sun, spending about 44% of their time oriented as such during sunny days and much less on cloudy days. Some authors interpret this as a presumed signal to conspecifics, but thermoregulation could also be a factor. It is known that, during winter in Alberta, female snowy owls are territorial towards one another and may not leave an area for up to 80 days, but males are nomadic, usually only staying 1–2 days in an area (seldom up to 3–17 days). The females spent on average seven times as long in a given area as did males. During threat displays, individuals will lower the front of the body, stretch the head low and forward with partially extended wings, and raise the feathers of the head and back. If continuously threatened or cornered, the posture in the threat display may become still more contorted and, if pressed, the owl will lie back and attempt to slash with its large talons. The threat displays of males are generally more emphatic than those of females. Although snowy owls have been considered semi-colonial, they do not appear to fit this mold well. Nesting sites can be loosely clustered, but this is a coincidental response to concentrated prey, and pairs tend to be somewhat intolerant of each other. During winter, snowy owls are usually solitary, but some aggregations have been recorded, especially nearer the Arctic, where narrower food selection can lead to up to 20–30 owls gathering in an area of about 20 to 30 ha (49 to 74 acres). Congregations were also recorded in winter in Montana, where 31–35 owls wintered in a 2.6 km2 (1.0 sq mi) area, the owls mostly grouped in loose aggregations of 5–10 each, occasionally side-by-side or about 20 m (66 ft) apart. In extreme cases in Utqiaġvik, active nests may be exceptionally close, down to only 800 to 1,600 m (2,600 to 5,200 ft) apart. Juvenile males appear to be especially prone to loose associations with one another, appearing to be non-territorial and able to hunt freely in front of one another. In a 213 km2 (82 sq mi) area in and around Utqiaġvik, productive years may have about 54 nests while none may be found in poor years. Utqiaġvik may have about 5 owls in early summer every 1.6 km (0.99 mi), a nest spacing of 1.6 to 3.2 km (0.99 to 1.99 mi) and a territory size of about 5.2 to 10.2 km2 (2.0 to 3.9 sq mi). In Churchill, Manitoba, nest spacing averaged about 3.2 km (2.0 mi). On Southampton Island, in a year when the owls nested there, nest spacing averaged 3.5 km (2.2 mi), with the closest two 1 km (0.62 mi) apart, and density was one nest per 22 km2 (8.5 sq mi). In Nunavut, densities could go from 1 owl per 2.6 km2 (1.0 sq mi) in a productive year to 1 owl per 26 km2 (10 sq mi) in a poor year, and from 36 nests in a 100 km2 (39 sq mi) area to none at all. Owl density on Wrangel Island in Russia was observed to be a single bird per 0.11 to 0.72 km2 (0.042 to 0.278 sq mi). The first known study of winter territories took place in Horicon Marsh, where owl territories ranged from 0.5 to 2.6 km2 (0.19 to 1.00 sq mi) each. In Calgary, Alberta, the mean winter territory size of juvenile females was 407.5 ha (1,007 acres) and that of adult females was 195.2 ha (482 acres).
Wintering owls in central Saskatchewan were radio-monitored, determining that 11 males had an average range of 54.4 km2 (21.0 sq mi), while that of 12 females was 31.9 km2 (12.3 sq mi) with the combined average being 53.8 km2 (20.8 sq mi).
It is fair to say that the snowy owl is a partial, if fairly irregular, migrant, having a very broad but patchy wintering range. First-year birds tend to disperse farther south in winter than older owls, with males usually wintering somewhat farther south than females of equivalent ages; adult females often winter the farthest north. The snowy owl likely covers more ground in its movements than almost any other owl, but many complex individual variations are known, and movements often do not take the traditional north–south direction that might be assumed. Migratory movements appear to be somewhat more common in America than in Asia. A study of wintering owls in the Kola Peninsula determined that the mean date of arrival of owls was 10 November, with a departure date of 13 April, covering an average of 991 km (616 mi) during the course of the wintering period and clustering where prey was more concentrated. Some variety of movement is recorded each autumn, and snowy owls winter annually in the plains of Siberia and Mongolia and the prairies and marshlands of Canada. The Great Plains area of southern Canada hosts wintering snowy owls about 2 to 10 times more frequently than other areas of the continent. A weak correlation has been made with individuals having some level of fidelity to certain wintering sites. A total of 419 wintering snowy owls recorded in Duluth, Minnesota, from 1974 to 2012 occurred in larger numbers in years when rats were more plentiful. The number of individual returns among 43 Duluth-wintering owls was fairly low in subsequent winters (8 for 1 year, a small handful in the next few years, and 9 in non-consecutive years). Surveys have sometimes revealed hundreds of wintering snowy owls on coastal sea ice during an irruptive year. Three siblings that hatched in the same nest in Cambridge Bay were recovered in drastically different spots at least a year later: one in eastern Ontario, one in Hudson Bay and one on Sakhalin Island. A nestling banded in Hordaland was recovered 1,380 km (860 mi) to the northeast in Finnmark. At Logan Airport, 17 of 452 owls were recorded to return: eleven the following year, three 2 years later, and then singles variously 6, 10 and 16 years later. A banded female from Utqiaġvik was recorded to migrate over 1,928 km (1,198 mi) along the seacoast down to Russia, returning over 1,528 km (949 mi) and covering at least 3,476 km (2,160 mi) in total. Another banded young female from Utqiaġvik went to the same Russian areas, returned to Utqiaġvik and then on to Victoria Island, but did not appear to breed, while another covered a similar route but ended up nesting on Banks Island. Another female migrated to the Canada–United States border, then moved back to the Gulf of Alaska, then wintered in the same border areas and finally moved to both Banks and Victoria Islands. Snowy owls from the Canadian Arctic were monitored to have covered an average of 1,100 km (680 mi) in one autumn, then covered an average of 2,900 km (1,800 mi) a year later. In late winter, owls from the same area were found to have covered a mean of 4,093 km (2,543 mi) of ground in the tundra and spent a mean of 108 days apparently searching for a suitable nesting situation the entire time.
In no fewer than 24 winters between 1882 and 1988, large numbers occurred in Canada and the United States; these were irruption years. Record-breaking irruptive years were recorded in the winters of 2011–2012 and 2014–2015. In the 1940s, it was calculated that the mean gap in time between large irruptions was 3.9 years. Southbound movements as such are much more conspicuous after peak vole years, once thought to be separated by periods of around 3–7 years. However, more extensive research has weakened the argument that irruptions are entirely food-based, and the data indicates that irruptive movements are far from predictable; a survey in Alaska, for example, found no statewide synchrony in lemming numbers. Therefore, rather than a decline of lemmings, it is the successful productivity of several pairs, resulting in a large number of young owls, that drives an irruption. However, the snowy owls cannot breed in high numbers unless lemmings are widely available on the tundra. This connection of irruptions to high years of productivity was confirmed in a study by Robillard et al. (2016). About 90% of the snowy owls seen in irruptive years from 1991 to 2016 that were ageable were identified as juveniles.
Snowy owls may hunt at nearly any time of the day or night, but may not attempt to do so during particularly severe weather. During the summer solstice, the owls appear to hunt during "theoretical nightfall". Night-vision devices have allowed biologists to observe that snowy owls hunt quite often during the extended nighttime of the northern winter. Prey are both taken and eaten on the ground. Snowy owls, like other carnivorous birds, often swallow their small prey whole. Strong stomach juices digest the flesh, while the indigestible bones, teeth, fur and feathers are compacted into oval pellets that the bird regurgitates 18 to 24 hours after feeding. Regurgitation often takes place at regular perches, where dozens of pellets may be found. Biologists frequently examine these pellets to determine the quantity and types of prey the birds have eaten. When large prey are eaten in small pieces, pellets will not be produced. Larger prey is often torn apart, sometimes including removal of the head, with the large muscles, such as those of the humerus or breast, typically eaten first. The scattering of remains that results from this piecemeal feeding on larger prey is thought to result in their under-identification compared to smaller prey items. The aptitude for hunting by day, hunting from the ground and hunting in almost always completely open and treeless areas are the primary ways in which the snowy owl differs in hunting from other Bubo owls. Otherwise, the hunting habits are similar. It is thought that, due to their less refined hearing compared to other owls, prey is usually perceived via vision and movement. Experiments indicate that snowy owls can detect prey from as far as 1.6 km (0.99 mi) away. Snowy owls generally use a rise or, occasionally, a perch while hunting. 88% of 34 observed hunts in Utqiaġvik were undertaken from an elevated watch-site (56% mounds or rises, 37% telephone poles). Their hunting style may recall that of buzzards, with the hunting owl sitting rather low and perching immobile for a long spell. Although their usual flight is a slow, deliberate downbeat on the broad, fingered wings, when prey is detected from a perch, flight may be undertaken in a sudden, surprisingly quick accelerated style with interspersed wing beats. In Utqiaġvik, snowy owls may most frequently engage in a brief pursuit hunting style. In high winds capable of keeping their bulk aloft, snowy owls may also engage in a brief hovering flight before dropping onto prey. When hunting fish, apparently, some snowy owls will hover in a style reminiscent of the osprey (Pandion haliaetus), although in at least one other case a snowy owl was observed to capture fish by lying on its belly upon a rock by a fishing hole. A dashing stoop or pounce down onto prey, ending in a high-impact "wallop", is fairly commonly recorded. Another common technique is the "sweep", wherein they fly by and grasp the prey while continuing to fly. In winter, snowy owls have been shown to be able to "snow plunge" to capture prey in the subnivean zone, under at least 20 cm (7.9 in) of snow. Perhaps least frequently, snowy owls may pursue their prey on foot, never taking wing. Snowy owls have been known to capture night-migrating passerines and shorebirds, sometimes perhaps on the wing, as well as large and/or potentially dangerous birds caught in the air during daylight.
Pursuits on the wing against various other carnivorous birds are sometimes undertaken as well, to kleptoparasitize the prey caught by the other birds. Few variations of hunting technique were observed in winter observations from Alberta, almost all of the hunts being with the sit-and-wait method (also known as still-hunting). Adult females in Alberta had a considerably better hunting rate than juvenile females. Much as in Alberta, in Syracuse, New York, 90% of 51 hunts were still-hunts, with the sweep variant used after perch departure in 31% of hunts and the pounce method in 45% of hunts. The Syracuse-wintering owls used tall perches, a mixture of manmade objects and trees around 6 m (20 ft) high, in nearly 61% of hunts, while nearly 14% were from low perches (i.e. fence-posts, snow banks and scrap piles) about half as high as the tall perches, and nearly 10% started from a ground position. In Sweden, males hunted from a perch more so than did females, and adults of both sexes focused on significantly smaller prey (small mammals) and may have had more hunting success than juvenile snowy owls. Some snowy owls can survive a fast of up to about 40 days off of fat reserves. These owls were found to have extremely thick subcutaneous fat deposits of around 19 to 22 mm (0.75 to 0.87 in), and it is likely that owls that overwinter in the Arctic rely heavily on these to survive during this scarce time, in combination with lethargic, energy-conserving behavior.
Snowy owls may not infrequently exploit prey inadvertently provided or compromised by human activities, including ducks injured by duck hunters, birds maimed by antenna wires, various animals caught in human traps and traplines, as well as domestic or wild prey being bred or farmed by humans in enclosures. A wide variety of accrued reports show that scavenging on carrion is not uncommon in the snowy owl (despite having once been thought to be very rare in all owls), including instances of reindeer (Rangifer tarandus) body parts brought to nests and owls following polar bears to secondarily feed on their kills. Even huge marine mammals such as walrus (Odobenus rosmarus) and whales can be fed upon by these owls when the opportunity occurs. Snowy owls produce pellets that in different areas average about 80 mm × 30 mm (3.1 in × 1.2 in), averaging up to 92 mm (3.6 in) in length as in Europe.
The snowy owl is primarily a hunter of mammals, most especially the northerly lemmings, which they often live off of. Other similar rodents such as voles can also be found frequently in the snowy owl's foods. It is r-selected, meaning that it is an opportunistic breeder capable of taking advantage of increases in prey numbers and diversity, despite its apparent specialization. Birds are commonly taken as well, and may regularly include passerines, northern seabirds, ptarmigan and ducks, among others. Infrequent consumption of other prey such as beetles, crustaceans and occasionally amphibians and fish is sometimes reported (of these, only fish are known to have been identified to prey species). All told, more than 200 prey species have been known to be taken by snowy owls around the world. Generally, like other large owls (including even bigger owls like the Eurasian eagle-owl), prey selection tends toward quite small prey, usually small mammals, but they can alternate freely with prey that is much larger than typical given the opportunity, even bigger than themselves, including relatively large mammals and several types of large bird of almost any age. One study estimated that, for the biomes of Alaska and Canada, the mean prey size for snowy owls was 49.1 g (1.73 oz); in western North America, the mean prey size was 506 g (1.116 lb) and in eastern North America 59.7 g (2.11 oz), while the mean prey size in northern Fennoscandia was similar (at 55.4 g (1.95 oz)). The mean number of prey species for snowy owls per biome ranged from 12 to 28. The opportunistic nature of snowy owls has long been known from their feeding habits as observed primarily in winter (leading to their unpopular reputation and frequent persecution well into the 20th century).
The snowy owl's biology is closely tied to the availability of lemmings. These herbivorous rodents are largish members of the vole clan that are the predominant mammal of the tundra ecosystem alongside the reindeer, and they probably make up the majority of the mammalian biomass of the ecosystem. Lemmings are key architects of the soil, microtopography and plant life of the entire tundra. In the American lower Arctic areas, brown lemmings of the Lemmus genus are predominant and tend to be found in lower, wetter habitats (feeding by preference on grasses, sedges and mosses), while collared lemmings of the Dicrostonyx genus occur in more arid, often higher-elevation habitats with heathland and eat by preference willow leaves and forbs. The southerly brown lemmings behave differently from the more northern collared lemming type, increasing almost limitlessly within preferred habitat, whereas the collared type tends to spread to suboptimal habitats and therefore does not appear to reach the high regional densities of the brown. Authorities now generally agree that there appears to be no synchrony between the brown and collared lemmings, and the feeding access of snowy owls is irregular as a result, but snowy owls can likely alternate between the two lemming types as one or the other increases, as they nomadically use different parts of the Arctic. It is possible that the rare coincidental mutual peak of both lemming types within a year results in the erratic high productivity that leads to irruptions. Within individual Arctic lemming species, historically, populations can vary in rough 4- to 5-year trends. As a result, in areas such as Banks Island, the breeding rate of snowy owls can vary within a decade by about tenfold. Weights of lemmings taken can range from 30 to 95 g (1.1 to 3.4 oz) on Baffin Island, while those taken in Utqiaġvik averaged 70.3 and 77.8 g (2.48 and 2.74 oz) for female and male lemmings, respectively. It was estimated, based on captive daily food intake, that a snowy owl may consume about 326 g (11.5 oz) of lemmings a day, though other estimates using voles show a daily need for about 145 to 150 g (5.1 to 5.3 oz). On Southampton Island, 97% of the diet was lemmings. A very similar proportion of lemmings (nearly 100%) was found over 25 years of study in Utqiaġvik, amongst 42,177 cumulative prey items. Of 76 lemmings that could be identified to sex at a cache, male lemmings were found twice as often as females. While initial findings on Wrangel Island indicated that female lemmings outnumbered males in prey remains, osteology indicated that, as in Utqiaġvik, males were more often taken. However, the slightly larger, slower-moving females may be preferred when available.
In some areas, snowy owls can breed where lemmings are uncommon to essentially absent. Even in Utqiaġvik, where the diet is quite homogeneously based on lemmings, the hatching of passerines, shorebirds and waterfowl can provide a key resource when lemmings are not found regularly and may be the only means by which the young can survive at such lean times. In the Nome, Alaska area, the locally nesting snowy owls reportedly switched from lemmings to ptarmigans when the latter's chicks hatched. A somewhat varying diet was also reported on Prince of Wales Island, Nunavut, where 78.3% of the biomass was lemmings, with 17.8% from waterfowl, 3.3% from weasels and about 1% from other birds. In Fennoscandia, among 2,700 prey items, only a third were Norway lemmings (Lemmus lemmus) and a majority, 50.6%, were voles, probably largely the tundra vole (Microtus oeconomus). A more detailed glance at Finnish Lapland showed that, amongst 2,062 prey items, 32.5% of the foods were Norway lemmings (though in some years the balance could range up to 58.1%), 28% were grey red-backed voles (Myodes rufocanus) and 12.6% were tundra voles, with birds constituting a very small amount of the prey balance (1.1%). In northern Sweden, a more homogenous diet was found, with the Norway lemming constituting about 90% of the foods. In the Yamal Peninsula, 40% of the diet was collared lemmings, 34% were Siberian brown lemmings (Lemmus sibiricus) and 13% were Microtus voles, with ptarmigan and ducks both constituting 8% and other birds making up much of the remaining balance. In some parts of the tundra, snowy owls may opportunistically prey upon Arctic ground squirrels (Spermophilus parryii). In the Hooper Bay area (much farther south than they usually nest), various rodents were taken in highland areas, and waterfowl in marshland, while breeding. When historically breeding on Fetlar in Shetland, the main prey for snowy owls was European rabbits (Oryctolagus cuniculus), Eurasian oystercatchers (Haematopus ostralegus), parasitic jaegers (Stercorarius parasiticus) and Eurasian whimbrels (Numenius phaeopus), in roughly that order, followed by other bird species; most prey (rabbits and secondary birds) were taken as adults, but the oystercatchers and jaegers were taken largely as fully grown but only recently fledged juveniles. An estimated 22–26% of oystercatcher and jaeger young on the island were taken by snowy owls.
Bird predation by nesting snowy owls is highly opportunistic. Willow (Lagopus lagopus) and rock ptarmigan (Lagopus muta) of any age are often fairly regular in the diet of breeding snowy owls, but the owls cannot be said to particularly specialize on them. Evidence was found in the Yamal Peninsula that snowy owls became the primary predator of willow ptarmigan and that the predation was so frequent that it may have caused the local ptarmigan to change their habitat usage to willow thickets. The reliance on ptarmigan has caused some trickle-down conservation concern for the owls, because ptarmigan are hunted in large numbers, with the hunters of Norway permitted to cull up to 30% of the regional population. In North America, avian prey on the breeding ground regularly varies from small passerines like snow buntings (Plectrophenax nivalis) and Lapland longspurs (Calcarius lapponicus) to large waterfowl like king (Somateria spectabilis) and common eiders (Somateria mollissima) and geese, usually the goslings but occasionally also adults, such as brants (Branta bernicla), snow geese (Anser caerulescens) and cackling geese (Branta hutchinsii). Drake eiders, often of similar size to the owls themselves, are not infrequently the largest prey amongst remains around the nest mound. One nest had around it the bodies of all the eiders that had attempted to nest in the vicinity. The threatened and declining Steller's eider (Polysticta stelleri), when nesting in the Utqiaġvik area, would appear to avoid the vicinity of snowy owl nests when selecting its own nesting sites, due to the predation risk. Intermediately sized seabirds are often focused on in lieu of available lemmings. Foods were studied intensively in Iceland. Among 257 prey items found, with a total prey mass of 73.6 kg (162 lb), birds made up 95% of the diet. The leading prey were adult rock ptarmigan, at 29.6% by number and 55.4% by biomass, and adult European golden plovers (Pluvialis apricaria), at 10.5% by number and 7.2% by biomass. The rest of the balance was largely other shorebirds, which were taken slightly more often as chicks than adults. Pink-footed geese (Anser brachyrhynchus) were taken in equal numbers as goslings and adults, with estimated average weights at these ages of 800 and 2,470 g (1.76 and 5.45 lb), respectively. On the isle of Agattu, the diet consisted entirely of birds, as there are no mammals found there. The much favored food on Agattu was the ancient murrelet (Synthliboramphus antiquus), at 68.4% of the biomass and 46% by number, while the secondary prey were, numerically, the smaller Leach's storm-petrel (Oceanodroma leucorhoa) (20.8%) and Lapland longspur (10%), and, in biomass, smallish ducks, the green-winged teal (Anas carolinensis) and harlequin duck (Histrionicus histrionicus) (13.4% of biomass collectively). On the Murman Coast of Russia, also in the absence of lemmings, seabirds formed the largest part of the diet.
On the wintering grounds, mammals often predominate in the snowy owl's food inland, doing so less in coastal areas. Overall, wintering snowy owls eat more diverse foods than they do whilst breeding; furthermore, coastal wintering snowy owls had more diverse diets than inland ones. As in summer, moderately sized water birds such as teal, northern pintail (Anas acuta) and numerous alcids and the like are often focused on when hunting birds. The diet in 62 pellets, amongst at least 75 prey items, from coastal Oregon showed the main foods as black rat (Rattus rattus) (at an estimated 40%), red phalarope (Phalaropus fulicarius) (31%) and bufflehead (Bucephala albeola) (19%). Witnessed attacks in Oregon were mostly upon buffleheads. In coastal southwestern British Columbia, the diet among 139 prey items was 100% avian. The predominant prey were water birds, mostly snatched directly from the surface of the water and largely weighing 400 to 800 g (0.88 to 1.76 lb), i.e. buffleheads (at 24% by number and 17.4% by biomass of foods) and horned grebes (Podiceps auritus) (at 34.9% by number and 24.6% by biomass), followed by various other water birds, often the slightly larger glaucous-winged gull (Larus glaucescens) and American wigeon (Mareca americana). A different study of this area also showed the predominance of ducks and other water birds for wintering snowy owls here, although Townsend's vole (Microtus townsendii) (10.65%) and snowshoe hare (Lepus americanus) (5.7%) were also notable in a sample of 122 prey items.
During winter, snowy owls consume more strongly nocturnal prey than lemmings, such as Peromyscus mice and northern pocket gophers (Thomomys talpoides). In southern Alberta, 248 prey items were found, with the North American deermouse (Peromyscus maniculatus), at 54.8% by number, and the meadow vole (Microtus pennsylvanicus), at 27% by number, as the main foods of snowy owls over 2 years. Other prey in Alberta were grey partridge (Perdix perdix) (at 5.79% of the total), jackrabbits, weasels and owls. Richardson's ground squirrels (Urocitellus richardsonii) were consumed heavily in the Alberta study during a brief period when their emergence from hibernation overlapped with overwintering snowy owls. Sexual dimorphism in prey selection was also studied here: male owls focused almost exclusively on the small rodents, while females took the same rodents but supplemented the diet with alternate and larger prey. Overall, meadow and montane voles (Microtus montanus) constituted 99% of over 4,500 prey items in Montana. In Horicon Marsh in winter, 78% of the diet was meadow vole, with 14% being muskrats (Ondatra zibethicus), 6% ducks and smaller balances of rats and other birds. Snowy owls found in Michigan took meadow voles for 86% of the diet, white-footed mice (Peromyscus leucopus) for 10.3% and northern short-tailed shrews (Blarina brevicauda) for 3.2%. Of 155 prey items from 127 stomachs collected in New England during four irruptive winters from 1927 to 1942, 24.5% were brown rats, 11.6% were meadow voles and 10.3% were dovekies (Alle alle), with a smaller balance of snowshoe hares and birds from snow buntings to American black ducks (Anas rubripes). During the same years, stomach contents in Ontario included 40 identified prey items, led by brown rats (20%), white-footed mice (17.5%) and meadow voles (15%); of 81 prey items from 60 non-empty stomachs in Pennsylvania, eastern cottontail (Sylvilagus floridanus) (32%), meadow vole (11.1%), domestic chicken (Gallus gallus domesticus) (11.1%) and northern bobwhite (Colinus virginianus) (5%) were the most often identified prey species. Introduced common pheasants were found to be somewhat more vulnerable than native American gamebirds like the ruffed grouse, due to their tendency to crouch rather than flush when approached by a flighted predator like the snowy owl in a glade or field. Some snowy owls wintering on rocky coasts and jetties in New England were known to live almost entirely off of purple sandpipers (Calidris maritima). The availability of brown rats may draw snowy owls to seemingly unattractive settings such as garbage dumps and under bridges. Meanwhile, snowy owls wintering in Lowell, Massachusetts, were seen to live largely off of rock doves (Columba livia) caught off of buildings. Of 87 prey items from stomachs in Maine, 35% were rats or mice, 20% were snowshoe hares and 10% were passerines. A small study of 20 prey items in an irruptive winter in Kansas found that 35% of the prey were red-winged blackbirds (Agelaius phoeniceus), 15% prairie voles (Microtus ochrogaster) and 10% each American coots (Fulica americana) and hispid cotton rats (Sigmodon hispidus).
On the isle of St. Kilda, 24 pellets were collected from non-breeding snowy owls that stayed through the early summer. Of 46 prey items, the St Kilda field mouse (Apodemus sylvaticus hirtensis) was predominant by number at 69.6% but constituted only 16.8% of biomass, while adult Atlantic puffins (Fratercula arctica) constituted 63.5% of the prey biomass and 26% by number (the rest of the balance being juvenile puffins and great skuas (Stercorarius skua)). The main subspecies of wood mouse was similarly dominant in the diet in County Mayo, Ireland, where the mice were presumably snatched at night due to their strict nocturnality. In Knockando, the winter diet was led by European rabbits (40.1%), red grouse (Lagopus lagopus scotica) (26.4%) and adult mountain hares (Lepus timidus) (20.9%) (in 156 pellets); in Ben Macdui, the diet was led by rock ptarmigan (72.3%), field voles (Microtus agrestis) and juvenile mountain hares (8.5%) (33 pellets); in Cabrach, the diet was led by red grouse (40%), mountain hare (20%) and European rabbit (15%) (16 pellets). Among 110 prey items found for snowy owls wintering during an irruption in southern Finland, all but one were field voles (the only other prey being a single long-tailed duck (Clangula hyemalis)). Far to the east, wintering owls in the Irkutsky District were found to subsist mostly on narrow-headed voles (Microtus gregalis). In a wintering population in Kurgaldga Nature Reserve of Kazakhstan, the main foods were grey red-backed voles at 47.4%, winter white dwarf hamster (Phodopus sungorus) at 18.4%, steppe pika (Ochotona pusilla) at 7.9%, muskrat at 7.9%, Eurasian skylark (Alauda arvensis) at 7.9%, grey partridge at 5.3%, and both steppe polecat (Mustela eversmanii) and yellowhammer (Emberiza citrinella) at 2.6%. On the Kuril Islands, the main foods of wintering snowy owls were reported as tundra voles, brown rats, ermines and whimbrels, in roughly that order.
Data from over 6,000 pellets at Logan Airport shows that meadow voles and brown rats predominated in the diet there, supplemented by assorted birds both small and large. American black ducks were the bird species primarily taken, with other birds taken here including such relatively large and diverse species as Canada geese (Branta canadensis), brants, American herring gulls (Larus argentatus), double-crested cormorants (Phalacrocorax auritus) and great blue herons (Ardea herodias), in addition to some formidable mammals such as house cats, American mink (Mustela vison) and striped skunks (Mephitis mephitis). Given the large size of some of this prey, it can be projected that the snowy owl can kill adult prey of around twice its own weight (i.e. geese, cats, skunks, etc.). Other large prey sometimes taken by snowy owls, all roughly within the 2 to 5 kg (4.4 to 11.0 lb) weight range, includes adults of large leporids such as Arctic hares (Lepus arcticus), Alaskan hares (Lepus othus), mountain hares and white-tailed jackrabbits (Lepus townsendii), as well as several species of geese, probable cygnets of Bewick's swans (Cygnus columbianus bewickii) and adults of the following: western capercaillie (Tetrao urogallus) (of both sexes), greater sage-grouse (Centrocercus urophasianus) and yellow-billed loon (Gavia adamsii). At the other end of the scale, the snowy owl has been known to take birds down to the size of the 19.5 g (0.69 oz) dark-eyed junco (Junco hyemalis) and mammals down to the size of the 8.1 g (0.29 oz) common shrew (Sorex araneus). Fish are rarely taken anywhere, but the snowy owl has been known to prey upon Arctic char (Salvelinus alpinus) and lake trout (Salvelinus namaycush).
Interspecific predatory relationships
The snowy owl is in many ways unique among owls and differs from other species in its ecological niche. Only one other owl, the short-eared owl, is known to breed in the High Arctic. However, the snowy owl shares its primary prey, the brown and collared lemmings, with a number of other avian predators. In sometimes differing parts of the Arctic, competing predators for lemmings are, in addition to short-eared owls, pomarine jaegers (Stercorarius pomarinus), long-tailed jaegers (Stercorarius longicaudus), rough-legged buzzards (Buteo lagopus), hen harriers (Circus cyaneus), northern harriers (Circus hudsonius) and the generally less specialized gyrfalcons (Falco rusticolus), peregrine falcons (Falco peregrinus), glaucous gulls (Larus hyperboreus) and common ravens (Corvus corax). Certain carnivorous mammals, especially the Arctic fox and, in this region, the ermine, are also specialized to hunt lemmings. Most of the lemming predators are intolerant of the competition given the scattered nature of lemming populations and will displace and/or kill one another given the chance. However, given the need to conserve energy in the extreme environment, the predators may react passively to one another. When breeding unusually far south in the Subarctic, such as in western Alaska, Scandinavia and central Russia, the predators with which snowy owls are obligated to share prey and compete may be too numerous to name. The taking of the young and eggs of snowy owls has been committed by a large number of predators: hawks and eagles, the northern jaegers, peregrines and gyrfalcons, glaucous gulls, common ravens, Arctic wolves (Canis lupus arctos), polar bears, brown bears (Ursus arctos), wolverines (Gulo gulo) and perhaps especially the Arctic fox. Adult snowy owls on the breeding grounds are far less vulnerable and can be justifiably qualified as apex predators. Witnessed killings of adult snowy owls on the breeding grounds include an attack by a pair of pomarine jaegers on an incubating adult female (possibly merely a competitive attack, as she was left uneaten) and the killing of an adult male by an Arctic fox.
When it goes south to winter outside of the Arctic, the snowy owl has the potential to interact with a number of additional predators. By necessity, it shares its diverse wintertime prey with a number of formidable predators. These are known to include its cousins, the great horned owl and the Eurasian eagle-owl. It is relieved of heavy competition from these related species by differing temporal activity, i.e. being more likely to actively hunt in daytime, and by habitat, using rather more open (quite often nearly treeless) habitats than they do. During a study of wintering snowy owls in Saskatchewan, the authors indicated that the snowy owls may avoid areas inhabited and defended by great horned owls. Although they usually occurred outside an 800 m (2,600 ft) radius of central great horned owl ranges, they did not avoid a 1,600 m (5,200 ft) radius, and differing habitat usage may be a dictating factor. Given the great horned owl's slightly smaller size, it is unlikely that it (unlike the larger eagle-owl) would regularly dominate snowy owls in interactions, and either species may give way to the other depending on the size and disposition of the owls involved. Little study has been undertaken into the trophic competition of snowy owls with other predators during winter and, due to their scarcity, few predators are likely to expend much energy on competitive interactions with them, although many other predators will engage in anti-predator mobbing of snowy owls. Largely in winter, snowy owls have fallen victim to a number of larger avian predators, though attacks are likely to be singular and rare. Eurasian eagle-owls are known to have preyed on snowy owls several times, though only in winter. Additionally, golden eagles (Aquila chrysaetos) have been known to prey on snowy owls, as have all the northern sea eagles: the bald (Haliaeetus leucocephalus), white-tailed (Haliaeetus albicilla) and Steller's sea eagles (Haliaeetus pelagicus). Snowy owls are also sometimes killed by birds that are mobbing them. In one instance, a peregrine falcon killed a snowy owl in a stoop after the owl had itself killed a fledgling falcon. Anecdotal reports indicate predation by gyrfalcons (on snowy owls of unknown age and condition), but this was possibly also an act of mobbing. In another instance, a huge throng of Arctic terns (Sterna paradisaea) relentlessly swarmed and attacked a snowy owl until it met its demise.
Almost certainly more often than being the victim of other predators, snowy owls are known to dominate, kill and feed on a large diversity of other predators. Snowy owls, much like other Bubo owls, will opportunistically kill other birds of prey and predators. Although they will readily plunder the nests of other raptorial birds given the opportunity, most predation is on full-grown raptorial birds during winter due to the scarcity of raptor nests in the open tundra. In addition, most competing predators of the Arctic, excepting the very large mammals, are probably vulnerable to a hungry snowy owl. In data from Logan Airport alone over different winters, the snowy owls were observed to have preyed upon an impressive diversity of other raptorial birds: rough-legged buzzards, American kestrels (Falco sparverius), peregrine falcons, barn owls, other snowy owls, barred owls (Strix varia), northern saw-whet owls (Aegolius acadicus) and short-eared owls. While owls are likely encountered during corresponding hunting times, it is likely that the swift falcons are usually ambushed at night (much as other Bubo owls do). In both the tundra and the wintering grounds, there are several accounts of predation by snowy owls on short-eared owls. In addition, snowy owls have been known to prey on northern harriers, northern goshawks (Accipiter gentilis) and gyrfalcons. In a few cases, both juvenile and adult Arctic foxes have been known to fall prey to snowy owls. A wintering snowy owl in Saskatchewan was observed to have preyed on an adult red fox (Vulpes vulpes). Predation by snowy owls on red foxes was also reported in the Irkutsky District of Russia. With an adult weight of around 6 kg (13 lb) (and far from defenseless), the red fox may be the largest prey known for snowy owls. Besides the aforementioned predation on domestic cats and skunks, several members of the weasel family, both small and relatively large, are known to be opportunistically hunted by snowy owls. As a result of its potential predator status, the snowy owl is frequently mobbed at all times of the year by other predatory birds, including fierce dive-bombing by several of the northern falcons on the wintering grounds, even by the relatively tiny but fierce and very agile merlin (Falco columbarius). The much bulkier snowy owl cannot match the speed and flight ability of a falcon and may be almost relentlessly tormented by some birds such as peregrines.
Pair bond and breeding territory
In Utqiaġvik, of 239 recorded breeding attempts, 232 were monogamous and the other 7 involved social bigamy. On Baffin Island, 1 male bred with 2 females and sired 11 total fledged young. Another case of bigamy was reported in Norway, where the 2 females bred to one male were 1.3 km (0.81 mi) apart in nest site location. On Fetlar from 1967 to 1975, a male bred with two females, one of them younger and possibly his own daughter. In the male's first season breeding with both females, he did not bring food to the younger female. However, when the older female disappeared the following year, the male and the younger female produced 4 young, but both disappeared altogether the subsequent year, in 1975. There are also unconfirmed cases of polyandry, with 1 female being fed by 2 males. Snowy owls can breed once per year, but when food is scarce many do not even attempt to breed. Despite frequent wandering in search of food, they generally adhere more strictly to a breeding season than do short-eared owls nesting in the tundra. Nine radio-tagged female snowy owls around Bylot Island were tracked to study how pre-laying snow cover affects their search for a breeding area. These tracked females searched an average of 36 days and covered an average of 1,251 km (777 mi). It is thought that the male and female independently find an attractive breeding spot and converge. The breeding territory normally averages about 2.6 km2 (1.0 sq mi), as on both Baffin Island and Ellesmere Island, but varies in accordance with the abundance of food and the density of owls. On Baffin Island, nesting territories average in the range of 8 to 10 km2 (3.1 to 3.9 sq mi) during poor lemming years. Nesting territories may reach up to 22 km2 (8.5 sq mi) on Southampton Island, where there was a mean distance of 4.5 km (2.8 mi) between active nests. In Utqiaġvik, nesting pairs can vary from none to at least 7 and the territories average 5 to 10 km2 (1.9 to 3.9 sq mi), with mean nest distances of 1.5 to 6 km (0.93 to 3.73 mi). In the Norwegian highlands, nesting occurs only at times of plenty, with distances of 1.2 to 3.7 km (0.75 to 2.30 mi) between nests, averaging 2.1 km (1.3 mi). The male marks territory with singing and display flights and likely always initiates courtship. During the display, he engages in exaggerated wing beats in a shallow, undulating and bouncy courtship flight with wings held in a dihedral. He often drops to the ground but then flies again, only to glide gently back down. Overall, the flight is somewhat reminiscent of that of a moth. The female answers her mate with her own song during courtship. While courting, the male often also carries a lemming in his bill, then bows with cocked tail, similarly to related owls (seldom displaying some other prey like snow buntings). He then flaps his wings open in an emphatic manner, the ground display being relatively brief (about 5 minutes). The female may possibly refuse to breed if the ritual is not performed. A male in southern Saskatchewan engaged in a possible courtship display when a female was sighted. On Southampton Island, at least 20 males were observed in late May in a "lemming year". Nesting territory defense displays, not highly different from courtship displays, include undulating flight and stiffly raised wings with bouts of exaggerated, delayed wing beats, the males looking like enormous white moths exposing their white wings under the sun. At times, competing males will interlock claws in mid-air.
Territorial and nuptial displays are followed by a ground display by the male with the wings arched up in an "angel" posture, visible for well over a mile.
Most individuals arrive at the nest site by April or May, with a few overwintering Arctic exceptions. The male advertises potential nest sites to his mate by scratching the ground and spreading his wings over it. The nest is usually a shallow depression on a windswept eminence in the open tundra. There seems to be a variety of qualifiers for appropriate nest sites. The nest site is typically snow-free and dry relative to the surrounding environment, usually with a good view of the surrounding landscape. The nest may be placed on ridges, elevated mounds, high polygons, hummocks, hills, man-made mounds and occasionally rocky outcrops. If the site is covered with vegetation, taller plants that may obstruct the view are sometimes plucked away. Nest sites are often long-established and naturally created by the freeze-thaw process of the tundra. Gravel bars may be used as well. The female may take the most active role in the nest's condition of any owl species. No owls build their own nests, but the female snowy owl takes about three days to construct a scrape, digging with her claws and rotating until a fairly circular bowl is formed. She will still not construct or add foreign materials to the nest (despite some circumstantial evidence of moss and grass from outside the nest mound being found). In two separate cases in Utqiaġvik, females dug out a second scrape to the side of and below the main nest and appeared to have called all the chicks to the more secluded scrape to ride out severe weather until the skies cleared. The Utqiaġvik nest scrapes averaged 47.7 cm × 44 cm (18.8 in × 17.3 in) in a sample of 91, with a mean depth of 9.8 cm (3.9 in), while the scrapes were smaller in Hooper Bay, reportedly 25 to 33 cm (9.8 to 13.0 in) in diameter and 4 to 9 cm (1.6 to 3.5 in) in depth. Occasionally, in the lower tundra, snowy owls may also use old nests of rough-legged buzzards as well as abandoned eagle nests. Unlike other northerly breeding raptorial birds, the snowy owl is not known to nest on cliffs and the like, and so does not enter into direct competition with eagles, falcons, ravens or other Bubo owls when nesting relatively far south. The area of the nest mound often has relatively rich plant life, which attracts the lemmings, which may tunnel right under and around the owl's nest. Geese, ducks and shorebirds of several species are known to gain incidental protection by nesting close to snowy owls. Conversely, the snowy owls will sometimes kill and eat both young and adults of these birds, which implies a trade-off in the benefits.
Egg-laying normally begins from early May to the first 10 days of June. Late thaws are harmful since they allow too little time for the full breeding process; a good food supply in May is particularly important for adults, even more so apparently than the food supply in July when the young are being fed. Late nests are possible cases of inexperienced pairs, low food supplies, bigamy or even replacement clutches. The clutch is extremely variable in size, averaging around 7–9 eggs, with up to 15 or 16 eggs recorded in extreme cases. The clutch size is very large relative to that of related species. Mean clutch sizes were 7.5 in a sample of 24 in Hooper Bay (range of 5–11); 6.7 in a sample of seven from Utqiaġvik (4–9); 9 in a sample of 5 on Baffin Island; 9.8 on Victoria Island; 8.4 (in a sample of 14) on Ellesmere Island; 7.4 on Wrangel Island and 7.74 in Finnish Lapland. The average clutch size was 9.8 in a good year on Victoria Island, while in a good year in Utqiaġvik the mean was 6.5. The eggs are laid directly onto the ground and are pure, glossy white. An average egg is around 56.4 mm × 44.7 mm (2.22 in × 1.76 in), with a range in length from 50 to 70.2 mm (1.97 to 2.76 in) and in diameter from 41 to 49.3 mm (1.61 to 1.94 in). Egg weights are around 47.5 to 68 g (1.68 to 2.40 oz), the median or average being 53 and 60.3 g (1.87 and 2.13 oz) in different datasets. The average egg size is relatively small, about 20% smaller than Eurasian eagle-owl eggs and 8% smaller than great horned owl eggs. Laying intervals are normally 2 days (41–50 hours mostly) but can extend to 3–5 days in inclement weather. The laying of a clutch of 11 eggs can take 20–30 days, while a more typical clutch of around 8 takes up to about 16 days. The interval between the 8th and 9th eggs can be up to about 4 days. Incubation begins with the first egg and is by the female alone, while she is fed by her mate.
Food is brought to the nest by the male and surplus food is stored nearby. Females in the breeding season often develop a very extensive brood patch, which in this species is a fairly enormous, highly vascularized featherless area of pink belly skin. Incubation lasts 31.8–33 days (with unconfirmed and possibly dubious reports of incubations as short as 27 days and as long as 38). The female alone broods the young, often while simultaneously incubating still unhatched eggs. Sometimes older chicks incidentally brood their younger siblings, and the female may shelter the young under her wings during inclement weather. When first feeding the young, the female may dismantle prey to feed them only the softer body parts, then gradually ramp up the size of portions until they can eat a whole prey item. Aggressive encounters with parent snowy owls are said to be "genuinely dangerous", and one resource claimed the snowy owl to be the bird species with the most formidable nest defense displays towards humans. The usual response to sighted humans near the nest is mild, but continued approach begins to increasingly irritate the parents. At times, humans are forcefully dive-bombed, while other potential threats are dealt with in a "forward-threat" display in which the male walks towards the intruder, engaging in impressive feather-raising and fanning out of half-spread wings, until he runs forward and slashes with both feet and bill. Fairly serious injuries, including cranial trauma requiring researchers to make the long trek back to medical care, have been sustained in the worst of snowy owl defensive attacks, although human fatalities are not known. Snowy owl parents have been seen to aggressively attack glaucous gulls, Arctic foxes and dogs on the breeding grounds in Utqiaġvik. Non-predatory animals like caribou in Utqiaġvik and sheep (Ovis aries) on Fetlar are attacked as well, possibly to avoid potential trampling of the eggs or the young. Males are said to do the majority of nest defense, but the female will often become involved as well. Analysis in Lapland, Sweden, showed that females in nest defense against people engaged in vocal displays (warning and mewing calls) and that males did not engage in mewing but did engage in most hooting calls, many warning calls and almost all physical attacks. In other instances, distraction displays are engaged in against predators, with a "broken-wing act" including high, thin squeals interspersed with weird squeaks, the owl often taking flight only to quickly fall from the sky and imitate a struggle. One author recorded a male drawing him about 2 km (1.2 mi) from the nest before ceasing. 77% of 45 distraction displays in Lapland, Sweden, were by females.
Development of young
Hatching intervals are generally from 1 to 3 days, quite often 37–45 hours apart. New chicks are semi-altricial (i.e. typically helpless and blind), initially being white and rather wet but dry by the end of the first day. The weight of 7 hatchlings was 35 to 55 g (1.2 to 1.9 oz), with an average of 46 g (1.6 oz), while another 3 averaged 44.7 g (1.58 oz). Due to the pronounced asynchrony of egg-laying and hatching, the size difference between siblings can be enormous; in some cases, when the smallest chick weighs only 20 to 50 g (0.71 to 1.76 oz), the biggest chick has already attained a weight of around 350 to 380 g (12 to 13 oz). When the oldest chick is about 3 weeks old, the female starts to hunt as well as the male and both may directly feed the young, although in some cases they may not need to hunt very much if lemmings are particularly numerous. Caches around a nest may include more than 80 lemmings that can support the family. Unlike many owls, the chicks of snowy owls are not known to behave aggressively toward one another or to engage in siblicide, perhaps in part due to the need for energy conservation. Some cases of cannibalism of chicks by the family group were thought to involve chicks that had died from other causes. When they are about 2 weeks old, the chicks may begin to walk around the nest site, which they leave by 18–28 days; although still unable to fly, they may find safety in nooks and crannies of vegetation and rocks, usually only about 1 to 2 m (3.3 to 6.6 ft) from the nest mound, as well as through their parents' defense. Leaving the nest is thought to likely be an anti-predator strategy. The male may drop fresh prey deliveries directly on the ground near the wandering young. After about three weeks of age, the young may wander fairly widely, rarely to 1 km (0.62 mi), but usually stay within 500 m (1,600 ft) of the nest mound. Threat postures by young in reaction to researchers were first noticeable at about 20–25 days of age and common at about 28 days, and the chicks can be impressively quick and agile-footed. The first flight occurs at around 35–50 days, and by 50–60 days the young can fly well and hunt on their own. The total care period lasts 2–3.5 months, increasing in length with the size of the brood. Independence was once thought to be attained by late August or early September but is more likely achieved by late September to October, when the migration season for the species begins. The nesting cycle is similar in length to that of Arctic short-eared owls and faster than that of Eurasian eagle-owls by up to 2 months.
Maturity and nesting success
Sexual maturity is reached the following year, but first breeding normally occurs no sooner than the end of the second year of life. There is little strong evidence of the typical age of first breeding, but the age of initial breeding by males in Utqiaġvik could be inferred from plumage. At the stage at which males were essentially pure white, most were aged about 3 to 4 years old. The snowy owl seems to be markedly inconsistent in regard to breeding every year, often taking at least up to two years between attempts and sometimes as much as nearly a decade. Seven satellite-marked females in Canada proved that they did breed in consecutive years, with 1 breeding over 3 consecutive years. In 23 years at Utqiaġvik, snowy owls bred in 13 of them. Nesting success can reach 90–100% in even the largest clutches in high lemming years. Over the course of 21 years, 260 total nests were recorded in Utqiaġvik, with 4–54 nests recorded annually. The Utqiaġvik nests bore clutches of 3 to 10, with a mean of 6 eggs per nest and an annual mean hatching success of 39 to 91%. 31–87% of chicks were able to depart on foot and 48–65% were annually estimated to survive to fledge; elsewhere, 40% survived to fledge. In another set, 97% of observed eggs both hatched and produced fledged young. In Norway, the fledging success from 10 nests was much lower, at about 46%. Norwegian data, which previously indicated the species to be an almost accidental breeder in northern Norway, nonetheless indicate that it is a more regular breeder there than expected. Three good years were found for snowy owls between 1968 and 2005: 1974 (when there were 12 pairs), 1978 (22 pairs) and 1985 (20 pairs), with 14 additional locations where potential (but not confirmed) breeding occurred. The main determinable causes of nest failure were deemed to be starvation and exposure. A number of Norwegian and Finnish nests were known to fail due to severe black fly parasitism.
The snowy owl can live a long life for a bird. Records show that the oldest snowy owls in captivity can live to 25 or even 30 years of age. Typical lifespans probably reach around 10 years in the wild. The longest known lifespan in the wild was of an owl initially banded (possibly in its first winter) in Massachusetts and recovered dead in Montana 23 years and 10 months later. The annual survival rate for twelve females on Bylot Island was estimated at around 85–92.3%. It is often reputed that snowy owls frequently die from starvation, with historical accounts frequently opining that they "had to" leave their breeding grounds due to lemming "crashes" only to starve to the south. However, it was proven fairly early on that snowy owls often do survive throughout the winter. This is reinforced somewhat by small radio-tracking and banding studies in the northern Great Plains and the intermountain valleys of the northwestern United States. More circumstantial evidence shows a lack of starvation in the eastern part of North America as well. There is evidence that some adults return to the same wintering areas in ensuing years, areas far south of their breeding range. At Logan Airport, most snowy owls that are seen appear to be in good condition. Of 71 dead snowy owls found in winter in the northern Great Plains, 86% died from assorted traumas, including collisions with automobiles and other, usually manmade, objects as well as electrocutions and shootings. Only 14% of the 71 deaths were due to apparent starvation. Data showed some owls appeared to incur injuries but healed and survived. More evidence of healed fractures was found in wintering snowy owls in New York, though some injuries may require surgery to recover from. 537 wintering birds in Saskatchewan were studied based on fat reserves, which were superior in females over males and in adults over juveniles; while 31% of females lacked fat reserves, at least 45% of the owls found starving or in a state of infirmity were males, and 63% of those turned in to wildlife rehabilitation centres were also males. In British Columbia, of 177 snowy owl deaths, only a small percentage were due to natural causes, with assumed starvation at 13% and another 12% simply "found dead". One fledgling on Fetlar died due to pneumonia and Staphylococcus infection, while a second died from aspergillosis. Evidence shows that in Utqiaġvik, during exceptionally prolonged rains (i.e. 2 to 3 days), nest-departed young were vulnerable to starvation, leading to hypothermia and pneumonia. Due to its natural history, the snowy owl may be affected more severely by blood parasitism than other raptors, due to lowered immunity. Conversely, they appear to have lower levels of ectoparasites such as chewing lice than other large owls, per large samples from Manitoba. The snowy owls averaged about 3.9 chewing lice per host against 7.5 for great grey owls and 10.5 for great horned owls.
This species' presence and numbers are dependent on the amount of food available. In "lemming years", snowy owls can appear to be quite abundant in suitable habitat. Numbers of snowy owls are difficult to estimate, even within studies that take place over decades, due to the nomadic nature of adults. The population of Scandinavia has long been perceived as very small and ephemeral, with Finland holding 0–100 pairs, Norway 1–20 pairs and Sweden 1–50 pairs. A low breeding population within European Russia has been estimated at 1,300–4,500 pairs, and Greenland is estimated to have 500–1,000 pairs. Other than the northern part of the North American continent, the majority of the snowy owl's breeding range is in northern Russia, but overall estimates there are not known. An exact count of 4,871 individuals was made on surveys between the Indigirka and Kolyma rivers. The estimate by Partners in Flight and other authors by the 2000s was that North America held about 72,500 snowy owls, about 30% of which were juveniles. The Canadian population of snowy owls was estimated at 10,000–30,000 (in the 1990s) or even at 50,000–100,000 individuals, perhaps improbably. Within Canada, the population on Banks Island was once claimed at up to 15,000–25,000 in productive years and that in the Queen Elizabeth Islands at about 932 individuals. Alaska is the only American state with breeding snowy owls but probably holds quite a bit fewer breeding owls than does Canada. Furthermore, Partners in Flight and the IUCN estimated that the world population was roughly 200,000–290,000 individuals as recently as the 2000s. However, in the 2010s, it was discovered that all prior estimates were extremely excessive and that more precise numbers could be estimated with better surveying, phylogeographic data and more insight into the owl's free-wheeling wanderings. It is now believed that there are only 14,000–28,000 mature breeding pairs of snowy owls in the world. During lemming declines, the number of nesting females may drop to as low as 1,700 worldwide, a dangerously low number, and the number of snowy owls worldwide is less than 10% of what it was once thought to be. Due to the small and rapidly declining population, the snowy owl was uplisted by the IUCN to vulnerable species status in 2017. A 52% decline has been inferred for the North American population since the 1960s, with another, even more drastic, estimate placing the decline from 1970 to 2014 at 64%. Trends are harder to delineate in Scandinavia, but a similar downward trend is thought to be occurring.
Anthropogenic mortality and persecution
Of 438 band encounters in the USGS banding laboratory, almost all causes of death that could be determined, whether intentional or not, were correlated with human interference. 34.2% (150 birds) were dead due to unknown causes, 11.9% were shot, 7.1% were hit by automobiles, 5.5% were found dead or injured on highways, 3.9% died in collisions with towers or wires, 2.7% were caught in animal traps, 2.1% died in airplane birdstrikes and 0.6% were entangled, while the remaining 33.3% were recovered injured due to assorted or unknown causes. Snowy owls' heavy usage of airports puts them at risk of birdstrikes. Many such collisions are known in Canada and likely also occur in Siberia and Mongolia. Despite their danger to planes, no human fatalities have been recorded in collisions with this species. Snowy owls are always far outnumbered at Canadian airports in winter by short-eared owls. However, relative to its scarcity, the snowy owl accounts for a very large share of the birdstrikes recorded at American airports due to the attractiveness of the habitat, with 4.6% of 2,456 recorded collisions (the barn owl is the species most frequently involved in birdstrikes). The species is locally vulnerable to pesticides. The placement of buildings in Utqiaġvik is now thought to have displaced some snowy owls. In Norway, potential sources of disturbance near the nests include tourism, recreation, reindeer husbandry, motorized traffic, dogs, photographers, ornithologists and scientists. Some biologists have expressed concern that radio-tagging of snowy owls may have some unclear detrimental effect, but there is little evidence that tags actually make the owls more susceptible to death.
Snowy owls can be quite wary, as they are not infrequently hunted by circumpolar peoples. Historically, the snowy owl was one of the most persecuted owl species. In the irruption of 1876–77, an estimated 500 snowy owls were shot, with similar numbers in 1889–90, an estimated 500–1,000 killed in Ontario alone during the 1901–02 invasion and about 800 killed in the 1905–06 invasion. Indigenous people of the Arctic historically killed snowy owls for food, but many communities in northern Alaska are now fairly modernized, and biologists therefore feel that the permitted killing of snowy owls by indigenous people is outdated. The consumption of snowy owls by humans has been proven as far back as ancient cave deposits in France and elsewhere, and they have even been considered one of the most frequent food species for early humans. They do not shun developed areas, especially those with old fields that hold rodents, and, due to lack of experience with humans, can be extremely tame and unable to escape armed humans. In British Columbia, of 177 snowy owl deaths, the most often diagnosed cause of death was shooting, at 25%, often well after legal protection of the species. The number of poached snowy owls in Ontario is opined to be unusually high considering their scarcity. While the species was once killed as food and later shot out of resentment for perceived threats against domestic and favored game stock, the reasoning behind the ongoing shooting of snowy owls into the 21st century is not well understood. Siberian snowy owls frequently fall victim to baited fox traps, with possibly up to around 300 killed in a year based upon very rough estimates. Warfarin, in use as a rodenticide, is known to have killed some wintering snowy owls, including up to six at Logan Airport alone. Mercury concentrations, most likely through bioaccumulation, have been detected in snowy owls in the Aleutian Islands, but it is not known whether fatal mercury poisoning has occurred. PCBs in sufficient concentration may have killed some snowy owls. Some airports have advocated and instituted the practice of shooting owls to avoid birdstrikes, but successful translocation is possible and preferred given the species' protected status.
Climate change is now widely perceived as perhaps the primary driver of the snowy owl's decline. As temperatures continue to rise, abiotic factors such as increased rain and reduced snow are likely to affect lemming populations and, in turn, snowy owls. These and potentially many other issues (possibly including modified migration behavior, changing vegetation composition, increased insect, disease and parasite activity, and the risk of hyperthermia) are a matter of concern. Additionally, the reduction of sea ice, on which snowy owls are now known to rely extensively, as a result of warming climates could have significant impacts. The effect of climate change was essentially confirmed in northern Greenland, where a perhaps irrevocable collapse of the lemming population was observed. From 1998 to 2000, lemming numbers appeared to decline quickly. The number of lemmings per hectare (ha) is less than one-fifth of what it once was in Greenland (i.e. from 12 lemmings per ha to less than 2 per ha at peak). This is almost certainly correlated with a 98% decline in owl productivity as well as that of the local stoats (long-tailed jaegers and Arctic foxes, though previously thought to be almost as reliant on lemmings, seem to be more loosely coupled and more generalized and did not decline as much). The number of lemming mounds is much lower than it once was in northern Greenland, and any variety of population cycle has apparently been abandoned by what remains of the lemmings.
In popular culture
- The Harry Potter books by J. K. Rowling, and the subsequent films of the same name, feature a female snowy owl named Hedwig. Concern was expressed by some in the media that the popularity of the Harry Potter films would cause an increase in the illicit trade of snowy owls. However, there was no strong evidence of an increase in snowy owls confiscated from the black market, despite a larger than typical number of snowy owls being reported at wildlife centres.
- The EADS Harfang, a drone aircraft developed for the French Air Force, is named after the French name for the snowy owl (harfang des neiges).
- The snowy owl (harfang des neiges in French) is the avian symbol of Quebec and French-Canadians.
- "Bubo scandiacus Linnaeus 1758 (snowy owl)". PBDB.
- BirdLife International (2020). " Bubo scandiacus". IUCN Red List of Threatened Species. 2020: e.T22689055A181375387. doi:10.2305/IUCN.UK.2020-3.RLTS.T22689055A181375387.en.
- Potapov, Eugene & Sale, Richard (2013). The Snowy Owl. T&APoyser. ISBN 978-0713688177.
- König, Claus; Weick, Friedhelm (2008). Owls of the World (2nd ed.). London: Christopher Helm. ISBN 9781408108840.
- Voous, Karel H.; Cameron, Ad (illustrator) (1988). Owls of the Northern Hemisphere. London, Collins. pp. 209–219. ISBN 978-0-00-219493-8.
- Holt, D. W., M. D. Larson, N. Smith, D. L. Evans, and D. F. Parmelee (2020). Snowy Owl (Bubo scandiacus), version 1.0. In Birds of the World (S. M. Billerman, Editor). Cornell Lab of Ornithology, Ithaca, NY, USA.
- Solheim, R. (2012). Wing feather moult and age determination of Snowy Owls Bubo scandiacus. Ornis Norvegica (2012), 35: 48–67
- Hume, R. (1991). Owls of the world. Running Press, Philadelphia.
- Sindelar Jr., C. (1966). A comparison of five consecutive Snowy Owl invasions in Wisconsin. Passenger Pigeon, 28(10), 108.
- Bent, A. C. (1938). Life Histories of North American Birds of Prey (part 2), Orders Falconiformes and Stringiformes (Vol. 170). US Government Printing Office.
- Marthinsen, G., Wennerberg, L., Solheim, R. & Lifjeld, J.T. (2009). No phylogeographic structure in the circumpolar snowy owl (Bubo scandiacus). Conservation Genetics. 10(4): 923–933.
- Linnaeus, Carl (1758). Systema Naturae per Regna Tria Naturae, Secundum Classes, Ordines, Genera, Species, cum Characteribus, Differentiis, Synonymis, Locis. Tomus I. Editio decima, reformata (in Latin). Holmiae: (Laurentii Salvii). p. 92.
- Jobling, James A (2010). The Helm Dictionary of Scientific Bird Names. London: Christopher Helm. pp. 179, 349. ISBN 978-1-4081-2501-4.
- Lönnberg, E. (1931). Olaf Rudbeck Jr. The First Swedish ornithologist. Ibis, 13 (1): 302–307.
- Wink, M. & Heidrich, P. (2000). Molecular systematics of owls (Strigiformes) based on DNA-sequences of the mitochondrial cytochrome b gene. Pp. 819–828 in: Chancellor, R.D. & Meyburg, B.U. eds. (2000). Raptors at Risk. Proceedings of the V World Conference on Birds of Prey and Owls. Midrand, Johannesburg, 4–11 August 1998. WWGBP & Hancock House, Berlin & Blaine, Washington.
- Penhallurick, J. M. (2002). The taxonomy and conservation status of the owls of the world: a review. Ecology and conservation of owls. CSIRO, Collingwood, 343–354.
- Ford, N. L. (1967). A systematic study of the owls based on comparative osteology. PhD diss, Univ. of Michigan, Ann Arbor.
- Yamada, K., Nishida-Umehara, C., & Matsuda, Y. (2004). A new family of satellite DNA sequences as a major component of centromeric heterochromatin in owls (Strigiformes). Chromosoma, 112(6), 277–287.
- Wink, M., El-Sayed, A. A., Sauer-Gürth, H., & Gonzalez, J. (2009). Molecular phylogeny of owls (Strigiformes) inferred from DNA sequences of the mitochondrial cytochrome b and the nuclear RAG-1 gene. Ardea, 97(4), 581–591.
- Cracraft, J. (1981). Toward a phylogenetic classification of the recent birds of the world (Class Aves). Auk 98:681–714.
- Mindell, D. P. (1997). Phylogentic relationships among and within select avian orders based on mitochondrial DNA. Avian molecular evolution and systematics, 211–247.
- Wink, M., A.-A. El-Sayed, H. Sauer-Gürth and J. Gonzalez. (2009). Molecular phylogeny of owls (Strigiformes) inferred from DNA sequences of the mitochondrial cytochrome b and the nuclear RAG-1 gene. Ardea 97 (4):581–591.
- Belterman, R. H. R., & De Boer, L. E. M. (1984). A karyological study of 55 species of birds, including karyotypes of 39 species new to cytology. Genetica, 65(1), 39–82.
- Schmutz, S. M., & Moker, J. S. (1991). A cytogenetic comparison of some North American owl species. Genome, 34(5), 714–717.
- Owls of the World: A Photographic Guide by Mikkola, H. Firefly Books (2012), ISBN 9781770851368
- Olsen, J., Wink, M., Sauer-Gurth, H., & Trost, S. (2002). A new Ninox owl from Sumba, Indonesia. Emu, 102(3), 223–231.
- Brodkorb, P. (1971). Catalogue of fossil birds, Part 4 (Columbiformes through Piciformes). Bulletin of the Florida State Museum, Biological Sciences 15 (4).
- Stewart, J. R. (2007). The fossil and archaeological record of the Eagle Owl in Britain. British Birds, 100(8), 481.
- Boev, Z. (1998). "First fossil record of the Snowy Owl Nyctea scandiaca (Linnaeus, 1758) (Aves: Strigidae) from Bulgaria". Historia Naturalis Bulgarica. 9: 79–86.
- Bedetti, C.; Palombo, M.R.; Sardella, R. (October 2001). "Last occurrences of large mammals and birds in the Late Quaternary of the Italian peninsula". 1st International Congress "The World of Elephants". pp. 701–703. ISBN 978-88-8080-025-5.
- Chauviré, C. (1965). Les oiseaux du gisement magdalénien du Morin (Gironde). 89e Congrés des Sociétés Savantes, Lyon, 1964, 255–266.
- Mourer-Chauviré, C. (1975). Les oiseaux du Pléistocène moyen et supérieur de France. 2ème fascicule (Vol. 64, No. 2). Persée-Portail des revues scientifiques en SHS.
- Andrews, P. (1990). Owls, caves and fossils. Chicago: University of Chicago Press.
- Marthinsen, G., Wennerberg, L., Solheim, R., & Lifjeld, J. T. (2009). Snowy owls (Bubo scandiacus) constitute one panmictic population. University of Oslo.
- "Schnuhu": Überraschende Kreuzung – Ich bin Bayerns süßester Fratz!. tz.de Retrieved on 7 October 2016
- Sutton, G. M. (1971). High Arctic: An Expedition to the Unspoiled North. New York: PS Eriksson, c1971, 1975 printing.
- Gavrilov, E.I., Ivanchev, V.P., Kotov, A.A., Koshelev, A.I. & Nazarov, Y.N. (1993). Ptitsy Rossii i sopredel’nykh regionov: Ryabkoobraznye, Golubeobraznye, Kukushkoobraznye, Sovoobraznye (Birds of Russia and Adjacent Regions: Pterocletiformes, Columbiformes, Cuculiformes, and Strigiformes), Moscow: Nauka.
- Oberholser, H. C. (1974). The Bird Life of Texas. University of Texas Press, Austin, TX, USA.
- Lind, H. (1993). Different ecology in male and female wintering Snowy Owls Nyctea scandiaca L. in Sweden due to colour and size dimorphism. Ornis Svecica 3 (3–4):147–158.
- McMorris, A. (2011). Snowy Owls: Age, Sex and Plumage. Presentation Delaware Valley Ornithological Club.
- Dementiev, G. P., Gladkov, N. A., Ptushenko, E. S., Spangenberg, E. P., & Sudilovskaya, A. M. (1966). Birds of the Soviet Union, vol. 1. Israel Program for Scientific Translations, Jerusalem.
- Solheim, R. (2016). Identifying Individual Great Gray Owls (Strix nebulosa) and Snowy Owls (Bubo scandiacus) Using Wing Feather Bar Patterns. Journal of Raptor Research, 50(4), 370–378.
- Howell, S. N. G. (2010). Peterson Reference Guide to Molt in North American Birds. Houghton Mifflin Harcourt Company, Boston, MA, USA.
- Roulin, A., Richner, H., & Ducrest, A. L. (1998). Genetic, environmental, and condition‐dependent effects on female and male ornamentation in the barn owl Tyto alba. Evolution, 52(5), 1451–1460.
- Seidensticker, M. T., D. W. Holt, J. Detienne, S. Talbot & Gray, K. (2011). Sexing young Snowy Owls. Journal of Raptor Research 45 (4):281–289.
- Pyle, P. (1997). Flight-feather molt patterns and age in North American owls. Colorado Springs, CO: ABA Monogr. Ser. no. 2.
- Ridgway, R., & Friedmann, H. (1914). The Birds of North and Middle America: A Descriptive Catalog of the Higher Groups, Genera, Species, and Subspecies of Birds Known to Occur in North America, from the Arctic Lands to the Isthmus of Panama, the West Indies and Other Islands of the Caribbean Sea, and the Galapagos Archipelago (Vol. 50). US Government Printing Office.
- Cramp, S.; Simmons, K.E.L. (1985). Birds of the Western Palearctic. Vol. 2. Oxford: Oxford University Press.
|volume=has extra text (help)
- Barrows, C. W. (1981). Roost selection by spotted owls: an adaptation to heat stress. The Condor, 83(4), 302–309.
- Averill, C. K. (1923). Black wing tips. The Condor, 25(2), 57–59.
- Averill, C. K. (1927). Emargination of the Long Primaries in Relation to Power of Flight and Migration with One Illustration. The Condor, 29(1), 17–18.
- Wagner, H., Weger, M., Klaas, M., & Schröder, W. (2017). Features of owl wings that promote silent flight. Interface focus, 7(1), 20160078.
- Stabler, R. M., & Hoy, N. D. (1942). Measurements of Tarsal Circumferences from Living Raptorial Birds. Bird-Banding, 9–12.
- Iwaniuk, A. N., Hurd, P. L., & Wylie, D. R. (2006). Comparative morphology of the avian cerebellum: I. Degree of foliation. Brain, behavior and evolution, 68(1), 45–62.
- Gill, F. (2007). Ornithology. 3rd Edn. (W. H. Freeman Co: New York.
- Wills, S., Pinard, C., Nykamp, S., & Beaufrère, H. (2016). Ophthalmic reference values and lesions in two captive populations of northern owls: great grey owls (Strix nebulosa) and snowy owls (Bubo scandiacus). Journal of Zoo and Wildlife Medicine, 47(1), 244–255.
- Murphy, C. J., & Howland, H. C. (1983). Owl eyes: accommodation, corneal curvature and refractive state. Journal of comparative physiology, 151(3), 277–284.
- Bowmaker, J. K., & Martin, G. R. (1978). Nocturnal Bird, Strix Aluco (Tawny Owl) . Vision res, 18, 1125–1130.
- Martin, G. R., & Gordon, I. E. (1974). Visual acuity in the tawny owl (Strix aluco). Vision Research, 14(12), 1393–1397.
- Lind, O., Mitkus, M., Olsson, P., & Kelber, A. (2014). Ultraviolet vision in birds: the importance of transparent eye media. Proceedings of the Royal Society B: Biological Sciences, 281(1774), 20132209.
- Burkhardt, D. (1989). UV vision: A bird's eye view of feathers. Journal of Comparative Physiology a-Sensory Neural and Behavioral Physiology 164 (6):787–796.
- Garamszegi, L. Z., Møller, A. P., & Erritzøe, J. (2002). Coevolving avian eye size and brain size in relation to prey capture and nocturnality. Proceedings of the Royal Society of London. Series B: Biological Sciences, 269(1494), 961–967.
- Weidensaul, S. (2015). Owls of North America and the Caribbean. Houghton Mifflin Harcourt.
- Dunning, John B. Jr., ed. (2008). CRC Handbook of Avian Body Masses (2nd ed.). CRC Press. ISBN 978-1-4200-6444-5.
- Poole, E. L. (1938). Weights and wing areas in North American birds. Auk 55: 511–517.
- Mikkola, H. (1983). Owls of Europe. T. & AD Poyser.
- McGillivray, W. B. (1987). Reversed size dimorphism in 10 species of northern owls. In: Biology and Conservation of Northern Forest Owls: Symposium Proceedings, 3–7 February, Winnipeg, MB., edited by R. W. Nero, R. J. Clark, R. J. Knapton and H. Hamre, 59–66. Fort Collins, CO: U.S. For. Serv. Gen. Tech. Rep. RM-142. U.S.D.A. Forest Service, Rocky Mountain Forest and Range Experiment Station.
- Lundberg, A. (1986). Adaptive advantages of reversed sexual size dimorphism in European owls. Ornis Scandinavica, 133–140.
- Weick, Friedhelm (2007). Owls (Strigiformes): Annotated and Illustrated Checklist. Springer. ISBN 978-3-540-39567-6.
- Korpimäki, E. (1986). Reversed size dimorphism in birds of prey, especially in Tengmalm's Owl Aegolius funereus: a test of the" starvation hypothesis". Ornis Scandinavica, 326-332.
- Eckert, A. W. (1987). The Owls of North America, North of Mexico: All the Species and Subspecies Illustrated in Color and Fully Described. Gramercy.
- Parmelee, D. F. (1972). Canada's incredible arctic owls. Beaver no. summer:30–41.
- Priklonskiy, S.G. (1993). Snowy Owl — Nyctea scandiaca (Linnaeus, 1758). In: Birds of Russia and adjoining regions: Pterocliformes, Columbiformes, Cuculiformes, Strigiformes. Moscow, p. 258–270. (in Russian).
- Keith, L.B. (1960). Observations of Snowy Owls at Delta, Manitoba. Can. Field-Nat. 74:106–112.
- Johnson, D. H. (1997). Wing loading in 15 species of North American owls. In: Duncan, James R.; Johnson, David H.; Nicholls, Thomas H., eds. Biology and conservation of owls of the Northern Hemisphere: 2nd International symposium. Gen. Tech. Rep. NC-190. St. Paul, MN: US Dept. of Agriculture, Forest Service, North Central Forest Experiment Station. 553–561. (Vol. 190).
- Earhart, C. M., & Johnson, N. K. (1970). Size dimorphism and food habits of North American owls. The Condor, 72(3), 251–264.
- Kerlinger, P., & Lein, M. R. (1988). Causes of Mortality, Fat Condition, and Weights of Wintering Snowy Owls. Journal of Field Ornithology, 7–12.
- National Geographic Society. "Snowy Owl".
- Holt, D.W., Gray, K., Maples, M.T. & Korte, M. (2016). Mass growth rates, plumage development, and related behaviors of Snowy Owl (Bubo scandiacus) nestlings. Journal of Raptor Research. 50(2): 131–143.
- Chang, A. M., & Wiebe, K. L. (2016). Body condition in Snowy Owls wintering on the prairies is greater in females and older individuals and may contribute to sex-biased mortality. The Auk: Ornithological Advances, 133(4), 738–746.
- Pitelka, F. A., P. Q. Tomich & Treichel, G. W. (1955). Breeding behavior of jaegers and owls near Barrow, Alaska. Condor 57:3–18.
- Ryabitsev, V.K. (2011). Birds of the Urals, Ural Region, and Western Siberia: Guide and Identification Key (Ural’sk. Univ., Yekterinburg).
- Golovatin, M.G. & Paskhalniy, S.P. (2005). Ptitsy Polyarnogo Urala (Birds of the Polar Urals), Ekaterinburg, Siberia.
- Josephson, B. (1980). Aging and sexing snowy owls. Journal of Field Ornithology . 51: 149- 160.
- Portenko, L. A. (1972). Die Schnee-Eule: Nyctea scandiaca (Vol. 454). A. Ziemsen.
- Pyle, P. (1997). Identification Guide to North American Birds, Part I: Columbidae to Ploceidae. Slate Creek Press, Bolinas, CA, USA.
- Smith, Dwight G. (2002). Great Horned Owl (1st ed.). Mechanicsburg, PA: Stackpole Books. pp. 33, 80–81. ISBN 978-0811726894.
- Beaman, M. & Madge, S. (1998). The Handbook of Bird Identification for Europe and the Western Palearctic. Christopher Helm, London.
- Evans, D. L. (1980). Vocalizations and territorial behavior of wintering Snowy Owls. Am. Birds 34: 748–749.
- Sutton, G. M. (1932). The exploration of Southampton Island. Part II, Zoölogy. Section 2.-The birds of Southampton Island. Memoirs of the Carnegie Museum 12 (2):1–275.
- Taylor, P.S. (1973). Breeding behaviour of the Snowy Owl. Living Bird. 12: 137–154.
- Watson, Adam (1957). "The behaviour, breeding and food-ecology of the Snowy Owl Nycea scandiaca". Ibis. 99 (3): 419–462. doi:10.1111/j.1474-919X.1957.tb01959.x.
- Sutton, G. M. & Parmelee, D. F. (1956). Breeding of the Snowy Owl in southeastern Baffin Island. Condor 58:273–282.
- Thaxter, C. (1875). Among the Isles of Shoals. Atlantic Monthly 25:204–213.
- Witherby, H. F., F. C. R. Jourdain, N. F. Ticehurst and B. W. Tucker. (1952). The handbook of British Birds. rev ed. London: H. F. & G. Witherby.
- Weir, R. D. (1973). Snowy Owl invasion on Wolf Island, winter 72. Ontario Field Biology 27:3–17.
- Parmelee, D. (1992). Snowy Owl (Nyctea scandiaca). No. 10 in: Poole et al. (1992–1993).
- Barve, V. (2014). Discovering and developing primary biodiversity data from social networking sites: A novel approach. Ecological Informatics, 24, 194–199.
- Hosking, Eric (2 August 1967). "Snowy Owl with young—an historic photograph". The Times. UK.
- Marter, Hans J. (28 November 2016). "Reviews / Fond memories of the Bobby the birdman". The Shetland News. UK. Retrieved 23 October 2020.
- "Hope of first owl chicks in years", BBC News. 13 May 2008.
- Marquiss, M., Smith, R. & Galbraith, H. (1989). Diet of Snowy Owls on Cairn Gorm Plateau in 1980 and 1987. Scottish Birds. 15(4): 180–181.
- Saxby, H. L. (1863). Notes on the Snowy Owl. Zoologist 21:8633–8639.
- Saxby, H. L. (1874). The birds of Shetland. Edinburgh.
- Salomonsen, F. (1951). The birds of Greenland, vol. 2. Copenhagen, Denmark: FE Bording.
- Manniche, A. L. V. (1910). The terrestrial mammals and birds of east Greenland; biological observations. Medd. Grønland 45:1–200.
- Barth, E. (1949). Norwegian Animal Life. Volume 2. Birds.
- Jacobsen, K. O. (2005). Snøugle (Bubo Scandiacus) Norge. Hekkeforekomster i perioden 1968–2005. Hekkeforekomster i perioden, 2005.
- Saurola, P. L. (1997). Monitoring Finnish owls 1982–1996: methods and results. United States Department of Agriculture Forest Service General Technical Report NC, 363–380.
- Golovatin, M.G. & Paskhalniy, S.P. (2000). Avifauna of the Lower Ob River floodplain, in Nauchniy vestnik. 18–37.
- Osmolovskaya, V. I. (1948). Geographical distribution of raptors in Kazakhstan plains and their importance for pest control. Acad. Sci. USSR Inst. Geogr, 41, 5–77.
- Egorov, O.V. & Labutin, Y.V., (1959). Materialy poekologii I khozyaistvennomu znacheniu filina v Yakutii. Trudy Instituta Biologii 6: 10–18.
- Vorobiev, K. (1963). Birds of Yakutia. Academy of Sciences of the USSR.
- Morozov, V. V., Sharikov, A. V., & Ivanov, M. N. (2013). Occurrence and catching of Snowy Owls in Yugorskiy Peninsula, Russia, in 2012. Field Report. NOF‐rapport, 1–2013.
- Morozov, V.V. (2005). Snowy Owl in the eastern part of Bolshezemelskaya tundra and Yugorsky Peninsula. In: Owls of the Northern Eurasia (eds. Volkov S.V., Morozov V.V. & Sharikov A.V.). Moscow, p. 10–22. (in Russian with English summary).
- Sabaneev, L.P. (1874). Vertebrates of the Middle Urals and their geographic distribution in the Perm and Orenburg provinces. Moscow.
- Krasnov, Y.V. (1985). On the biology of the Snowy Owl on the Eastern Murman Coast. Diurnal raptors and owls in RSFSR zapovedniks. Moscow: Central Research Institute of RSFSR Game & Hunting Department: 110—116.
- Mineev, O. Y. & Minnev, Y. N. (2005). Distribution of owls in North-East European tundra. In: Owls of the Northern Eurasia (eds. Volkov S.V., Morozov V.V. & Sharikov A.V.).
- Knystautas, A. (1993). Birds of Russia. HarperCollins, London.
- Rogacheva, H. (1992). The Birds of Central Siberia. Husum-Druck und Verlagsgesellschaft, Husum, Germany.
- Armstrong, R.H. (1983). A New, Expanded Guide to the Birds of Alaska. Alaska Northwest Publishing Company, Anchorage, Alaska.
- Parmelee, D. F., & MacDonald, S. D. (1960). The birds of west-central Ellesmere Island and adjacent areas (No. 63). Department of Northern Affairs and National Resources.
- Sinclair, P. H., W. A. Nixon, C. D. Eckert, and N. L. Hughes (2003). Birds of the Yukon Territory. UBC Press, Vancouver, BC, Canada.
- Miller, F.L., Russell, R.H. & Gunn, A. (1975). Distribution and numbers of Snowy Owls on Melville, Eglinton, and Byam Martin Islands, Northwest Territories, Canada. Raptor Research. 9(3–4): 60–64.
- Vazhov, S. V., & Vazhov, V. M. (2016). Ecology of some species of owls in agricultural landscapes of the Altai region. Ecology, Environment and Conservation, 22(3), 1555–1563.
- Etchécopar, R.D. & Hüe, F. (1978). Les Oiseaux de Chine, de Mongolie et de Corée. Non Passereaux. Les Éditions du Pacifique, Papeete, Tahiti.
- Campbell, R.W., Dawe, N.K., McTaggart-Cowan, I., Cooper, J.M., Kaiser, G.W. & McNall, M.C.E. (1990). The Birds of British Columbia. Vol. 2. Nonpasserines: Diurnal Birds of Prey through Woodpeckers. UBC Press, Vancouver.
- Godfrey, W. E. (1986). The Birds of Canada. Revised Edition. National Museums of Canada, Ottawa, ON, Canada.
- Fay, F. H., & Cade, T. J. (1959). An ecological analysis of the avifauna of St. Lawrence Island, Alaska. University of California Publications in Zoology 63:73–150.
- Irving, L., McRoy, C. P. & Burns, J. J. (1970). Birds observed during a cruise in the ice-covered Bering Sea in March 1968. Condor 72:110–112.
- McRoy, C. P., Stoker, S. W. , Hall, G. E. & Muktoyuk, E. (1971). Winter observations of mammals and birds St. Matthew Island. Arctic 24 (1):63–65.
- Therrien, J. F., Gauthier, G., & Bêty, J. (2011). An avian terrestrial predator of the Arctic relies on the marine ecosystem during winter. Journal of Avian Biology, 42(4), 363–369.
- Conlin, Dan (2 October 2013). "An Owl Oddity", Maritime Museum of the Atlantic.
- Gross, A. O. (1947). Cyclic invasions of the snowy owl and the migration of 1945–1946. The Auk, 64(4), 584–601.
- Robillard, A., Gauthier, G., Therrien, J. F., & Bêty, J. (2018). Wintering space use and site fidelity in a nomadic species, the snowy owl. Journal of Avian Biology, 49(5), jav-01707.
- Therrien, Jean-François (March 2017). "Winter Use of a Highly Diverse Suite of Habitats by Irruptive Snowy Owls". Northeastern Naturalist. 24 (Special Issue 7): B81–B89. doi:10.1656/045.024.s712. S2CID 90013886.
- Santonja, P.; Mestre, I.; Weidensaul, S.; Brinker, D.; Huy, S.; Smith, N.; Mcdonald, T.; Blom, M.; Zazelenchuck, D.; Weber, D.; Gauthier, G.; Lecomte, N.; Therrien, J. (2019). "Age composition of winter irruptive Snowy Owls in North America". Ibis. 161 (1): 211–215. doi:10.1111/ibi.12647.
- Root, T. R. (1988). Atlas of Wintering North American Birds: An Analysis of Christmas Bird Count Data. University of Chicago Press, Chicago, IL, USA.
- American Ornithologists' Union (1957). Check-list of North American Birds, 5th edition. American Ornithologists' Union, Washington, DC, USA.
- Such, J., & Such, M. (2012). Winter 2011–2012 (December–February). Colorado Birds, 215.
- Simpson Jr, M. B. Critique of Early Reports of Snowy Owls (Bubo scandiacus) from the Carolinas: 1737 to 1872. Carolina Bird Club.
- Robbins, M. B. & Otte, C. (2013). The irruptive movement of Snowy Owls (Bubo scandiacus) into Kansas and Missouri during the winter of 2011–2012. Kansas Ornithological Society Bulletin 64 (4): 41–44.
- Small, A. (1994). California Birds: their Status and Distribution. Ibis Publishing Company, Vista, California.
- "Snowy Owl Appears in Middle Tennessee." The Styling Owlish. 24 January 2009.[dead link]
- Zuckerman, Laura (28 January 2012). Snowy owls soar south from Arctic in rare mass migration. Reuters
- Leung, Marlene Leung (5 January 2014). "Snowy owl invasion: Birds spotted as far south as Florida". CTV News.
- Schwartz, John (31 January 2014). "A Bird Flies South, and It's News". New York Times. Retrieved 31 January 2014.
- Ali, S. & Ripley, S.D. (1981). Handbook of the Birds of India and Pakistan. Vol. 3. 2nd edition. Oxford University Press, Delhi.
- Brazil, M. A. (1991). The Birds of Japan. Christopher Helm, London.
- Fujimaki, Y. (1987). [Records of Nyctea scandiaca from Hokkaido, Japan]. Japanese Journal of Ornithology. 36(2–3): 101–103.
- Richards, J. M., & Gaston, A. J. (Eds.). (2018). Birds of Nunavut. UBC Press.
- Murie, O. J. (1929). Nesting of the snowy owl. The Condor, 31(1), 3–12.
- Fuller, M., Holt, D. & Schueck, L. (2003). Snowy Owl movements: Variation on the migration theme. Edited by P. Berthold, E. Gwinner and E. Sonnenschein, Avian migration. Berlin: Springer-Verlag.
- Kerlinger, P., Lein, M.R. & Sevick, B.J. (1985). Distribution and population fluctuations of wintering Snowy Owls (Nyctea scandiaca) in North America. Canadian Journal of Zoology. 63(8): 1829–1834.
- Doyle, F. I., Therrien, J. F., Reid, D. G., Gauthier, G., & Krebs, C. J. (2017). Seasonal movements of female Snowy Owls breeding in the western North American Arctic. Journal of Raptor Research, 51(4), 428–438.
- Lein, M.R. & Webber, G.A. (1979). Habitat selection by wintering Snowy Owls (Nyctea scandiaca). Canadian Field-Naturalist. 93(2): 176–178.
- Smith, N. (1997). Observations of wintering Snowy Owls (Nyctea scandiaca) at Logan Airport, East Boston, Massachusetts from 1981 to 1997. In: Biology and conservation of owls of the Northern Hemisphere: 2nd International Symposium, edited by J. R. Duncan, D. H. Johnson and T. H. Nicholls, 591–596. St. Paul: U.S. Dept. of Agriculture, Forest Service, North Central Forest Experiment Station.
- Baker, J. A., & Brooks, R. J. (1981). Raptor and vole populations at an airport. The Journal of Wildlife Management, 390–396.
- Young, C.M. (1973). The Snowy Owl migration of 1971–72 in the Sudbury region of Ontario. American Birds 27(1): 11–12.
- Shields, M. (1969). Activity cycles of Snowy Owls at Barrow, Alaska. Murrelet 50 (2): 14–16.
- Hagen, Y. (1960). The Snowy Owl on Hardangervidda in the Summer of 1959. Papers of The Norwegian State Game Research. 2, No. 7.
- Tulloch, R. J. (1968). Snowy Owls breeding in Shetland in 1967. British Birds 61:119–132.
- Shields, M. (1969). Activity cycles of Snowy Owls at Barrow, Alaska. Murrelet 50 (2):14–16.
- Boxall, P. C. & Lein, M. R. (1989). Time budgets and activity of wintering Snowy Owls. Journal of Field Ornithology 60 (1): 20–29.
- "Snowy Owl (Nyctea Scandia)". animals.nationalgeographic.com. National Geographic. 11 November 2010.
- Gessaman, J. A. (1972). Bioenergetics of the snowy owl (Nyctea scandiaca). Arctic and Alpine Research, 4(3), 223–238.
- Irving, L. (1955). Nocturnal decline in the temperature of birds in cold weather. Condor 57: 362–365.
- Mebs, T. & Scherzinger, W. (2000). Snowy Owl. In: Die eulen Europa: biolgie, kennzeichen, bestande, edited by T. Mebs and W. Scherzinger, 167–183. Stuttgart: Franckh-Kosmos Verlags-Gmbh and Co.
- Therrien, J. F., Pinaud, D., Gauthier, G., Lecomte, N., Bildstein, K. L., & Bety, J. (2015). Pre-breeding prospecting behaviour of snowy owls (data from Therrien et al. 2015)-reference-data.
- Bortolotti, G. R., Stoffel, M. J., & Galvan, I. (2011). Wintering Snowy Owls Bubo scandiacus integrate plumage colour, behaviour and their environment to maximize efficacy of visual displays. Ibis, 153(1), 134–142.
- Wiebe, K. L.; Chang, A. M. (2018). "Seeing sunlit owls in a new light: orienting Snowy Owls may not be displaying.". Ibis. 160 (1): 62–70. doi:10.1111/ibi.12533.
- Boxall, P.C. & Lein, M.R. (1982). Are owls regular? An analysis of pellet regurgitation times of Snowy Owls in the wild. Raptor Research. 16(3): 79–82.
- Kaufman, K. (1996). Lives of North American Birds. Houghton Mifflin Company, Boston & New York.
- Hart, H. C. (1880). Notes on the ornithology of the British Polar Expedition, 1875-6. Zoologist 4:121–129.
- Brandt, H. (1942). Alaska Bird Trails: An Expedition by Dog Sled to the Delta of the Yukon River at Hooper Bay. The Bird Research Foundation, Cleveland, OH, USA.
- Dorogoi, I.V. (1990). [Factors of communal breeding of the Snowy Owls (Nyctea scandiaca) and Anseriformes birds at the Vrangel Island]. Ornitologiya. 24: 26–33. In Russian with English summary.
- Pitelka, F. A., Tomich, P. Q. & Treichel, G.W. (1955). Ecological relations of jeagers and owls as Lemming predators near Barrow, Alaska. Ecological Monographs 25: 85–117.
- Holt, D. W. & Zetterberg, S. A. (2008). The 2005 to 2006 Snowy Owl irruption migration to western Montana. Northwestern Naturalist 89 (3):145–151.
- Shelford, V. E. (1943). The abundance of the Collared Lemming (Dicrostonyx groenlandicus (TR) VAR. Richardsoni Mer.) in the Churchill area, 1929 to 1940. Ecology 24 (4):472–484.
- Parker, G. R. (1974). A population peak and crash of lemmings and Snowy Owls on Southampton Island, Northwest Territories. Canadian Field-Naturalist. 88(2): 151–156.
- Manning, T. H., Höhn, E. O. & MacPherson, A. H. (1956). The birds of Banks Island. National Museum of Canada Bulletin 143, Biological Series 48.
- Vaughn, R. (1992). In search of Arctic birds. London: T & AD Poyser, Ltd.
- Menyushina, I. E. (1997). Snowy Owl (Nyctea scandiaca) reproduction in relation to lemming population cycles on Wrangel Island. In: Biology and conservation of owls of the Northern Hemisphere: 2nd International Symposium, edited by J. R. Duncan, D. H. Johnson and T. H. Nicholls, 572–582. St. Paul: U.S. Dept. of Agriculture, Forest Service, North Central Forest Experiment Station.
- Chang, A. M., & Wiebe, K. L. (2018). Movement patterns and home ranges of male and female Snowy Owls (Bubo scandiacus) wintering on the Canadian prairies. Canadian Journal of Zoology, 96(6), 545–552.
- Øien, I. J., Aarvak, T., Jacobsen, K. O., & Solheim, R. (2018). Satellite Telemetry Uncovers Important Wintering Areas for Snowy Owls On the Kola Peninsula, Northwestern Russia. Орнитология, 42, 42–49.
- Oeming, A. F. (1957). Notes on the Barred Owl and the Snowy Owl in Alberta. Blue Jay 15:153–156.
- Follen, D. & Luepke, K. (1980). Snowy Owl recaptures. Inland Bird Banding 52: 60.
- Therrien, J.-F., Fitzgerald, G., Gauthier G. & Bêty, J. (2011). Diet-tissue discrimination factors of carbon and nitrogen stable isotopes in blood of Snowy Owl (Bubo scandiacus). Canadian Journal of Zoology 89 (4): 343–347.
- Therrien, J.-F. , Gauthier, G., Pinaud, D. & Bêty, J. (2014). Irruptive movements and breeding dispersal of Snowy Owls: a specialized predator exploiting a pulsed resource. Journal of Avian Biology. 45(6): 536–544.
- Jorgensen, J. G., Dinan, L. R., & Walker Jr, T. J. (2012). Snowy Owl Invasion of 2011–12.
- Snyder, L. L. (1943). The Snowy Owl migration of 1941–42. Wilson Bulletin 55 (1):8–10.
- Shelford, V. E. (1945). The relation of Snowy Owl migration to the abundance of the Collared Lemming. Auk 62 (4):592–596.
- Chitty, H. (1950). Canadian Arctic wild life enquiry, 1943–1949: With a summary of results since 1933. Journal of Animal Ecology 19 (2):180–193.
- Gross, A. O. (1944). Food of the snowy owl. The Auk, 61(1), 1–18.
- Pitelka, F. A. & Batzli, G. O. (1993). Distribution, abundance, and habitat use by lemmings on the north slope of Alaska. In: The biology of lemmings, edited by N. C. Stenseth and R. A. Ims, 213–236. London: Academic Press.
- Krebs, C. J. (1993). Are lemmings large Microtus or small reindeer? A review of lemming cycles after 25 years and future recommendations for future work. In: The biology of lemmings, edited by N. C. Stenseth and R. Ims, 247–260. London: Academic Press for the Linnean Society of London.
- Robillard, A., Therrien, J. F., Gauthier, G., Clark, K. M., & Bêty, J. (2016). Pulsed resources at tundra breeding sites affect winter irruptions at temperate latitudes of a top predator, the snowy owl. Oecologia, 181(2), 423–433.
- Snowy Owl — Bubo scandiacus, formerly Nyctea scandiaca. owlpages.com
- Royer, A., Montuire, S., Gilg, O., & Laroulandie, V. (2019). A taphonomic investigation of small vertebrate accumulations produced by the snowy owl (Bubo scandiacus) and its implications for fossil studies. Palaeogeography, Palaeoclimatology, Palaeoecology, 514, 189–205.
- Johnsgard, P. A. (1988). North American owls: biology and natural history. Smithsonian Institute.
- Tyler, H.A. & Phillips, D. (1978). Owls by Day and Night. Naturegraph, Happy Camp, California.
- Hohn, E. O. (1973). Winter hunting of Snowy Owls in farmland. Canadian Field-Naturalist 87 (4): 468–469.
- Audubon, J. J. (1840). The Birds of America. Dover Publications, Inc., New York, NY, USA.
- Dancey, H.E. (1983). Winter foraging habits of a Snowy Owl. Indiana Audubon Quarterly. 61(4): 136–144.
- Duffy, D. C., Beehler B. & Haas, W. (1976). Snowy Owl steals prey from Marsh Hawk. Auk 93 (4): 839–840.
- Boxall, P. C. & Lein, M. R. (1982). Feeding ecology of Snowy Owls (Nyctea scandiaca) wintering in S. Alberta. Arctic 35: 282–290.
- Winter, R. E. (2016). Hunting Behaviors and Foraging Success of Winter Irruptive Snowy Owls in New York. SUNY College of Environmental Science and Forestry, Thesis.
- Wiggins, I. L. (1953). Foraging activities of the Snowy Owl (Nyctea scandiaca) during a period of low lemming population. Auk 70:366–367.
- Brooks, W. S. (1915). Notes on birds from east Siberia and Arctic Alaska. Bulletin of the Museum of Comparative Zoology 59:361–413.
- Nagell, B. & Frycklund, I. (1965). The irruption of the Snowy Owl (Nyctea scandiaca) in Scandinavia in the winters of 1960–1963 and notes on its behavior. Vår Fågelvärld 24 (1): 26–55.
- King, B., Nayler, F. & Wardle, F. (1966). Feeding and resting behavior of a Snowy Owl in Scilly. British Birds 59 (3): 108.
- Robertson, G. J. & Gilchrist, H. G. (2003). Wintering Snowy Owls feed on sea ducks in the Belcher Islands, Nunavut, Canada. Journal of Raptor Research 37 (2): 164–166.
- Allen, M. L., Ward, M. P., Južnič, D., & Krofel, M. (2019). Scavenging by Owls: A Global Review and New Observations from Europe and North America. Journal of Raptor Research, 53(4), 410–418.
- Detienne, J. C., Holt, D., Seidensticker M. T. & Pitz, T. (2008). Diet of Snowy Owls wintering in west-central Montana, with comparisons to other North American studies. Journal of Raptor Research 42 (3): 172–179.
- Patterson, J. M. (2007). An analysis of Snowy Owl (Bubo scandiacus) diet during the 2005 to 2006 irruption along the Oregon and Washington coasts. Northwestern Naturalist, 88(1), 12–15.
- Fisher, A. K. (1893). The hawks and owls of the United States in their relation to agriculture. Washington: U.S. Department of Agriculture, Division of Ornithology and Mammalogy.
- Gabrielson, I. N. & Lincoln, F. C. (1959). The Birds of Alaska. Stackpole Company, Harrisburg, PA, USA.
- Marti, C. D., Korpimäki, E., & Jaksić, F. M. (1993). Trophic structure of raptor communities: a three-continent comparison and synthesis. In Current ornithology (pp. 47–137). Springer, Boston, MA.
- Robinson, M. & Becker, C.D. (1986). Snowy Owls on Fetlar. British Birds. 79(5): 228–242.
- McKendrigk, J. D., Batzli, G. O., Everett, K. R., & Swanson, J. C. (1980). Some effects of mammalian herbivores and fertilization on tundra soils and vegetation. Arctic and Alpine Research, 12(4), 565–578.
- Fitzgerald, B.M. (1981). Predatory birds and mammals. In Tundra ecosystems: a comparative analysis (Eds L.C. Bliss, 0.W. Heal & J.J. Moore), pp. 485–508. Cambridge University Press, Cambridge.
- Batzli, G. O., & Pitelka, F. A. (1983). Nutritional ecology of microtine rodents: food habits of lemmings near Barrow, Alaska. Journal of Mammalogy, 64(4), 648–655.
- Parker, G.R. (1974). A population peak and crash of lemmings and Snowy Owls on Southampton Island, Northwest Territories. Canadian Field-Naturalist. 88(2): 151–156.
- Krechmar, A.V. & Dorogoy, I.V . (1981). Snowy Owl (Nyctea scandiaca). In: Ecology of mammals and birds in Wrangel Island. Vladivostok: DVNZ AN SSSR: 56–81.
- Dufresne, F. (1922). The Snowy Owl-destroyer of game. Bull. Amer. Game Prot. Assoc. 11: 11–12.
- Hakala, A., Huhtala, K., Kaikusalo, A., Pulliainen, E., & Sulkava, S. (2006). Diet of Finnish snowy owls Nyctea scandiaca. Ornis Fennica, 83(2), 59.
- Andersson, N. Å. & Persson, B. (1971). Något om fjällugglans Nyctea scandiaca näringsval i Lappland. Vår Fågelvärld 30: 227–231.
- Osmolovskaya, V.N. 1948. [Ecology of raptors on the Yamal peninsula]. – Proc. Inst. Geography, Academy of Sciences of the USSR 61: 4–77 (in Russian).
- Barker, O. E., & Derocher, A. E. (2010). Habitat selection by arctic ground squirrels (Spermophilus parryii). Journal of Mammalogy, 91(5), 1251–1260.
- Brackney, A. W. & King, R. J. (1991). Population shifts by Snowy Owls on the Arctic coastal plain of Alaska. Abstract. In Alaska Bird Conference and Workshop. Anchorage.
- Hannon, S. J., & Barry, T. W. (1986). Demography, breeding biology and predation of willow ptarmigan at Anderson River delta, Northwest Territories. Arctic, 300–303.
- Potapova, O. (2001). Snowy owl Nyctea scandiaca (Aves: Strigiformes) in the Pleistocene of the Ural Mountains with notes on its ecology and distribution in the Northern Palearctic. Deinsea, 8(1), 103–126.
- Tarasov, V. V. (2011). Summer flocks of the Willow Ptarmigan in the north of the Yamal Peninsula. In: R. T. Watson, T. J. Cade, M. Fuller, G. Hunt, and E. Potapov (Eds.). Gyrfalcons and Ptarmigan in a Changing World. The Peregrine Fund, Boise, Idaho, USA.
- Heggøy, O., & Øien, I. J. (2014). Conservation status of birds of prey and owls in Norway. NOF/BirdLife Norway-Report, 1, 1–129.
- Custer, T.W. (1973). Snowy Owl predation on lapland longspur nestlings recorded on film. Auk. 90(2): 433–435.
- Dorogoy, I.V . (1987). Ecology of small mammal predators in Wrangel Island and their role in the dynamics of lemming numbers. Vladivostok: DVO AN SSSR. 92 p. (In Russian).
- Wiggins, I. L. (1953). Foraging activities of the Snowy Owl (Nyctea scandiaca) during a period of low lemming population. The Auk, 70(3), 366–367.
- Quakenbush, L., Suydam, R., Obritschkewitsch, T., & Deering, M. (2004). Breeding biology of Steller's eiders (Polysticta stelleri) near Barrow, Alaska, 1991–99. Arctic, 166–182.
- Stenkewitz, U., & Nielsen, Ó. K. (2019). The Summer Diet of the Snowy Owl (Bubo scandiacus) in Iceland. Journal of Raptor Research, 53(1), 98–101.
- Williams, P.L. & Frank, L.G. (1979). Diet of the Snowy Owl in the absence of small mammals. Condor. 81(2): 213–214.
- Krasnov, Y. (1985). To the biology of the Snowy Owl in the Eastern Murman. Birds of Prey and Owls in the Nature Reserves of the Russian Federation. TSNIL GLAVOKHOTA, 110–116.
- Stronach, P. & Cooper, J. (2010). Snowy Owl pellet containing Eurasian Teal. British Birds. 103(6): 360–361.
- Valenziano, R. L., & Labedz, T. E. (2014). Stomach Content Analysis of Recent Snowy Owl (Bubo scandiacus) Specimens from Nebraska. Neb. Bird Review, 80 (3): 122–127.
- Dove, C.J. & Coddington, C. P. J. (2015). Forensic techniques identify the first record of Snowy Owl (Bubo scandiacus) feeding on a Razorbill (Alca torda) . Wilson Journal of Ornithology. 127(3): 503–506.
- Robillard, A., Gauthier, G., Therrien, J.-F., Fitzgerald, G., Provencher, J.F. & Bêty, J. (2017). Variability in stable isotopes of Snowy Owl feathers and contribution of marine resources to their winter diet. Journal of Avian Biology. 48(6): 759–769.
- Campbell, R. W. & MacColl, M. D. (1978). Winter foods of Snowy Owls in southwestern British Columbia. Journal of Wildlife Management 42 (1):190–192.
- Campbell, R. W. & Preston, M. I. (2009). Featured Species- Snowy Owl (Bubo scandiacus). Widllife Afield, 6 (2): 173–255.
- Breen-Smith, T.M. & James, P.C. (1995). Snowy Owl predation on a northern pocket gopher: evidence of nocturnal foraging? Blue Jay. 53(1): 58–59.
- Keith, L. B. (1963). A note on Snowy Owl food habits. The Wilson Bulletin, 75(3): 276–277.
- Chamberlin, M. L. (1980). Winter hunting behavior of a snowy owl in Michigan. The Wilson Bulletin, 116–120.
- Mendall, H. L. (1944). Food of hawks and owls in Maine. Journal of Wildlife Management 8:198–208.
- Young, E.A., Blake, C., Graham, R., Otte, C., Beckman, M. & Klem, D. (2014). Prey items from Snowy Owl (Bubo scandiacus) Pellets during the 2011–2012 Irruption in Kansas. Kansas Ornithological Society, 65 (4): 33–40.
- Miles, W. T. S., & Money, S. (2008). Behaviour and diet of non-breeding Snowy Owls on St Kilda. Scottish Birds, 28, 11.
- Murrey, T. & Sleeman, D. (2005). Dietary analysis from the Snowy Owls pellets Nyctea scandiaca Linnaeus 1958, from the Mullet Peninsula, Co, Mayo. Irish Naturalists' Journal, 283: 136.
- Savory, J. (2019). Unpublished information in the SOC Archive on Snowy Owl diet at three locations in Moray in the 1960s. Scottish Birds, 202(204), 202.
- Maleev, V.G. & Popov, V. V. (2007). Birds of forest-steppes of the Upper Angara river basin. Irkutsk, 300 pg.
- Mosalev, A. (1969). About wintering of Snowy Owl in the Kurgaldga Nature Reserve.
- Best, T. L., & Henry, T. H. (1994). Lepus othus. Mammalian Species, (458), 1–5.
- Bergman, G. (1961). "The food of birds of prey and owls in Fenno-Scandia". British Birds, 54(8), 307–320.
- Nagy, S., Petkov, N., Rees, E., Solokha, A., Hilton, G., Beekman, J., & Nolet, B. (2012). International single species action plan for the conservation of the northwest European population of Bewick's Swan (Cygnus columbianus bewickii). Wetlands International and The Wildfowl & Wetlands Trust (WWT), AEWA Technical Series, (44).
- Gilyazov, A. V. (2005). Snowy Owl Nyctea scandiaca Linnaeus, 1758. Red-data Book of the Murmansk Region. Murmansk: Murmansk book publishers: 316—318. [in Russian].
- Conover, M. R., & Roberts, A. J. (2017). Predators, predator removal, and sage‐grouse: A review. The Journal of Wildlife Management, 81(1), 7–15.
- Uher-Koch, B. D., M. R. North, and J. A. Schmutz (2020). Yellow-billed Loon (Gavia adamsii), version 1.0. In Birds of the World (S. M. Billerman, Editor). Cornell Lab of Ornithology, Ithaca, NY, USA.
- Dunning, Jr., J. B. 1993. CRC handbook of avian body masses. CRC Press, Boca Raton, FL.
- Bailey, A. M. (1948). Birds of Arctic Alaska. Colorado Mus. Nat. Hist., Popular Ser., 8. 317 pp.
- Reid, D. G., Krebs, C. J., & Kenney, A. (1995). Limitation of collared lemming population growth at low densities by predation mortality. Oikos, 387–398.
- Maher, W. J. (1970). The Pomarine Jaeger as a Brown Lemming predator in northern Alaska. Wilson Bulletin 82: 130–157.
- Wiklund, C. G., Angerbjörn, A., Isakson, E., Kjellén, N., & Tannerfeldt, M. (1999). Lemming predators on the Siberian tundra. Ambio, 281–286.
- Gilg, O., Sittler, B., Sabard, B., Hurstel, A., Sané, R., Delattre, P., & Hanski, I. (2006). Functional and numerical responses of four lemming predators in high arctic Greenland. Oikos, 113(2), 193–216.
- Ovsyanikov, N.G. & Menushina, I.E. (1986). [Competition for food between the Snowy Owl (Nyctea scandiaca) and the arctic fox (Alopex lagopus)]. Zoologischeskii Zhurnal. 65(6): 901–910. In Russian with English summary.
- Menyushina, I. E. (1994). Interspecies relation of the polar fox (Alopex lagopus L.) and the Snowy Owl (Nyctea scandiaca L.) during the breeding season in the Wrangel Island. 1. Lutreola 3: 15–21.
- Walker, L.W. (1993). The Book of Owls. University of Texas Press, Austin, Texas.
- Chang, A. M. (2017). Habitat use, movement patterns, and body condition of male and female Snowy Owls (Bubo scandiacus) in winter (Doctoral dissertation, University of Saskatchewan).
- Mikkola, H. (1976). Owls killing and killed by other owls and raptors in Europe. British Birds, 69, 144–154.
- Johnson, M.J. (1995). Bald Eagle predation on Snowy Owl. Loon 67(2):107.
- Golovatin, M.G. & Paskhalny, S.P. (2005). Distribution, numbers and ecology of White-tailed Eagle in the north of West Siberia. Berkut, 14 (1): 59–70.
- Utekhina, I., Potapov, E., & McGrady, M. J. (2000). Diet of the Steller's Sea Eagle in the northern Sea of Okhotsk. In: First Symposium on Steller's and White-tailed Sea Eagles in East Asia. Tokyo, Japan: Wild Bird Society of Japan (pp. 71–92).
- Nelson, E. W. (1887). Birds of Alaska, p. 35-222. In H. W. Henshaw [ed.], Report upon natural history collections made in Alaska between the years 1877 and 188 1. No. III Arctic Series, Signal Service, U.S. Army, Government Printing Office,-Washington, DC.
- Meinertzhagen, R. (1959). Pirates and Predators: The piratical and predatory habits of birds. Oliver & Boyd.
- Levin, S. A., J. E. Levin and R. T. Paine. (1977). Snowy Owl predation on Short-eared Owls. Condor 79 (3): 395.
- Audet, A. M., Robbins, C. B., & Larivière, S. (2002). Alopex lagopus. Mammalian species, 2002(713), 1–10.
- Dixon, C. C. (1975). Red Fox Predated by Snowy Owl. Blue Jay, 33(2).
- Korpimäki, E., & Norrdahl, K. (1989). Avian predation on mustelids in Europe 1: occurrence and effects on body size variation and life traits. Oikos, 205–215.
- Brigham, A. (2013). Snowy Owl-Gyrfalcon Scrap, White Butte, SK. Blue Jay, 71(3), 149–152.
- Therrien, J. F., Pinaud, D., Gauthier, G., Lecomte, N., Bildstein, K. L., & Bety, J. (2015). Is pre-breeding prospecting behaviour affected by snow cover in the irruptive snowy owl? A test using state-space modelling and environmental data annotated via Movebank. Movement ecology, 3(1), 1.
- Boxall, P. C. & Lein, M. R. (1982). Possible courtship behavior of Snowy Owls in winter. Wilson Bulletin 94:79–81.
- Holt, D.W., Maples, M.T., Petersen-Parret, J.L., Korti, M., Seidensticker, M. & Gray, K. (2009). Characteristics of nest mounds used by Snowy Owls in Barrow, Alaska, with conservation and management implications. Ardea. 97(4): 555–561.
- Tremblay, J. P., Gauthier, G., Lepage, D., & Desrochers, A. (1997). Factors affecting nesting success in greater snow geese: Effects of habitat and association with snowy owls. The Wilson Bulletin, 449–461.
- Lepage, D., Gauthier, G., & Reed, A. (1996). Breeding-site infidelity in greater snow geese: a consequence of constraints on laying date? Canadian Journal of Zoology, 74(10), 1866–1875.
- Ebbinge, B. S., & Spaans, B. (2002). How do Brent Geese (Branta b. bernicla) cope with evil? Complex relationships between predators and prey. Journal für Ornithologie, 143(1), 33–42.
- Smith, P. A. (2003). Factors affecting nest site selection and reproductive success of tundra nesting shorebirds (Doctoral dissertation, University of British Columbia).
- Litvin, K.Y. & Ovsyanikov, N.G. (1990). [Relationship between the reproduction and numbers of Snowy Owls and arctic foxes and the number of true lemmings of the Wrangel Island]. Zoologischeskii Zhurnal. 69(4): 52–64. In Russian with English summary.
- Baicich, P.J. & Harrison, C.J.O. (1997). A Guide to the Nests, Eggs, and Nestlings of North American Birds. Academic Press, San Diego, California.
- Schönwetter, M. (1960). Handbuch der Oologie (Ed. W. ME~SE). Vol. 1. Berlin.
- Bendire, C. E. (1892). Life histories of North American birds with special reference to their breeding habits and eggs. U.S. National Museum Special Bulletin 1.
- Schaanning, H. T. L. (1907). Østfinmarkens fuglefauna. Bergens Mus. Arb. 8:1–98.
- Pleske, T. (1928). Birds of the Eurasian tundra. Mem. Boston Soc. Nat. Hist. 6:111–485.
- Parmelee, D. F., Stephens, H. A. & Schmidt, R. H. (1967). The birds of Southeastern Victoria Island and adjacent small islands. National Museum of Canada Bulletin 222.
- Couzens, D. (2008). Extreme Birds: The World's Most Extraordinary and Bizarre Birds. Firefly Books.
- Barth, E. K. (1950). Efter fjallugglor pf Hardangervidda. Fauna Flora, 45: 235–242.
- Wiklund, C. G., & Stigh, J. (1983). Nest defence and evolution of reversed sexual size dimorphism in Snowy Owls Nyctea scandiaca. Ornis Scandinavica, 58–62.
- ADW: Nyctea scandiaca: Information. Animaldiversity.ummz.umich.edu. Retrieved on 19 October 2010.
- Romero, L. M., Holt, D. W. Maples M. & Wingfield, J. C.. (2006). Corticosterone is not correlated with nest departure in Snowy Owl chicks (Nyctea scandiaca). General and Comparative Endocrinology 149 (2): 119–123.
- Holt, D. W. & Leasure, S. M. (1993). Short-eared Owl (Asio flammeus). In: The birds of North America, No. 62, edited by A. Poole and F. Gill. Washington, DC: Acad. Nat. Sci., Philadelphia, PA; Am. Ornithol. Union.
- Schrezinger, W. (1974). Zur Ethologie und Jugendentwicklung der Schnee-Eule Nyctea scandiaca nach Beobachtungen in Gefangenschaft. J. Orn, 115: 8–49.
- Solheim, R., Jacobsen, K. O., Øien, I. J., Aarvak, T., & Polojärvi, P. (2013). Snowy Owl nest failures caused by blackfly attacks on incubating females. Ornis Norvegica, 36, 1–5.
- Schenker, A. (1978). Höchsalter europaischer Vögel im Zoologischen Garten Basel. Ornithol. Beob. 75: 96–97.
- Therrien, J. F., Gauthier, G., & Bêty, J. (2012). Survival and reproduction of adult snowy owls tracked by satellite. The Journal of Wildlife Management, 76(8), 1562–1567.
- Holt, D. W. & Zetterberg, S. A. (2008). The 2005 to 2006 Snowy Owl irruption migration to western Montana. Northwestern Naturalist 89 (3): 145–151.
- Kerlinger, P. & Lein, M. R. (1988). Population ecology of Snowy Owls during winter on the Great Plains of North America. Condor 90: 866–874.
- Curk, T., McDonald, T., Zazelenchuk, D., Weidensaul, S., Brinker, D., Huy, S., Smith, N. Miller, T. Robillard, A. Gauthier, G. Lecomte, N. Therrien, J.-F. & Lecomte, N. (2018). Winter irruptive Snowy Owls (Bubo scandiacus) in North America are not starving. Canadian Journal of Zoology, 96(6), 553–558.
- Oeming, A. F. (1957). Notes on the Barred Owl and the Snowy Owl in Alberta. Blue Jay 15: 153–156.
- Follen, D. & Luepke, K. (1980). Snowy Owl recaptures. Inland Bird Banding 52: 60.
- Burdeaux Jr, R. R., & Wade, L. (2018). Successful Management of Open, Contaminated Metacarpal Fractures in an Adult Snowy Owl (Bubo scandiacus) With a Minimal Type II External Skeletal Fixator. Journal of avian medicine and surgery, 32(3), 210–216.
- Baker, K. C., Rettenmund, C. L., Sander, S. J., Rivas, A. E., Green, K. C., Mangus, L., & Bronson, E. (2018). Clinical effect of hemoparasite infections in snowy owls (Bubo scandiacus). Journal of Zoo and Wildlife Medicine, 49(1), 143–152.
- Galloway, T. D., & Lamb, R. J. (2019). Infestation parameters for chewing lice (Phthiraptera: Amblycera, Ischnocera) infesting owls (Aves: Strigidae, Tytonidae) in Manitoba, Canada. The Canadian Entomologist, 151(5), 608–620.
- Väisänen, R. A., Lammi, E., & Koskimies, P. (1998). Distribution, numbers and population changes of Finnish breeding birds. Otava, Helsinki, Finland.
- Saurola, P. (2009). Bad news and good news: population changes of Finnish owls during 1982–2007. Ardea, 97(4), 469–482.
- BirdLife International (2015). European Red List of Birds. Office for Official Publications of the European Communities, Luxembourg
- Dial, C. R., Talbot, S. L. Sage, G. K., Seidensticker M. T. & Holt, D. W. (2012). Cross-species amplification of microsatellite markers in the Great Horned Owl Bubo virginianus, Short-eared Owl Asio flammeus and Snowy Owl B. scandiacus for use in population genetics, individual identification and parentage studies. Journal of the Yamashina Institute for Ornithology 44 (1): 1–12.
- Stepanyan, L.S. (1990). Konspekt ornitologicheskoi fauny SSSR. [Conspectus of the Ornithological Fauna of the USSR]. Nauka, Moscow. (In Russian with English summary.).
- Rich, T. D., Beardmore, C. J. Berlanga, H., Blancher, P. J., Bradstreet, M. S. W., Butcher, G. S., Demarest, D. W., Dunn, E. H., Hunter, W. C., Iñigo-Elias, E. E., Kennedy, J. A., Martell, A. M.. Panjabi, A. O., Pashley, D. N., Rosenberg, K. V., Rustay, C. M., Wendt, J. S. & Will, T. C. (2004). Partners in Flight North American Landbird Conservation Plan. Cornell Lab of Ornithology, Ithaca, NY, USA.
- Millsap, B. A., & Allen, G. T. (2006). Effects of falconry harvest on wild raptor populations in the United States: theoretical considerations and management recommendations. Wildlife Society Bulletin, 34(5), 1392–1400.
- Kirk, D. A., Hussell D. & Dunn, E. (1995). Raptor population status and trends in Canada. Bird Trends, 4: 2–9.
- Reding-License, A. A. (2015). Harfang des neiges (Bubo scandiacus). Government of Canada.
- American Ornithologists' Union (1998). Check-list of North American Birds, 7th edition. American Ornithologists' Union, Washington, DC, USA.
- Miller, F. L. (1987). Snowy Owl numbers on twelve Queen Elizabeth Islands, Canadian High Arctic. Journal of Raptor Research 21 (4): 153–157.
- Alaska Department of Fish and Game. State of Alaska special status species 2011.
- PIFSC. Partners in Flight Science Committee (2013). Population estimates database (Version 2.0) 2013. Available from http://rmbo.org/pifpopestimates.
- BirdLife International. Nyctea Scandiaca. 2006 IUCN Red List of Threatened Species (2004). Available from http://www.iucnredlist.org.
- Burton, J. A. (1973). Owls of the world. New York: E. P. Dutton.
- Chitty, H. (1950). Canadian Arctic wild life enquiry, 1943–1949: With a summary of results since 1933. Journal of Animal Ecology 19 (2): 180–193.
- Berlanga, H., Kennedy, J. A., Rich, T. D., Arizmendi, M. C., Beardmore, C. J., Blancher, P. J., Butcher, G. S., Couturier, A. R., Dayer, A. A., Demarest, D. W., Easton, W. E., Gustafson, M., Iñigo-Elias, E., Krebs, E. A., Panjabi, A. O., Rodriguez Contreras, V., Rosenberg, K. V., Ruth, J. M., Santana Castellón, E., Vidal, R. Ma. & Will, T. (2010). Saving our shared birds: Partners in Flight tri-national vision for landbird conservation. Ithaca: Cornell Laboratory of Ornithology.
- Rosenberg, K. V., Blancher, P. J., Stanton, J. C., & Panjabi, A. O. (2017). Use of North American Breeding Bird Survey data in avian conservation assessments. The Condor: Ornithological Applications, 119(3), 594–606.
- Catling, P. M. (1973). Food of snowy owls wintering in southern Ontario, with particular reference to the snowy owl hazard to aircraft. Ontario field biol, 7, 41–45.
- Baker, J. A., & Brooks, R. J. (1981). Distribution patterns of raptors in relation to density of meadow voles. The Condor, 83(1), 42–47.
- Blokpoel, H. (1976). Bird hazards to aircraft: problems and prevention of bird/aircraft collisions. Clarke Irwin;[Ottawa]: Canadian Wildlife Service, Environment Canada: Pub. Centre, Supply and Services Canada.
- Linnell, K. E., & Washburn, B. E. (2018). Assessing Owl Collisions with US Civil and US Air Force Aircraft. Journal of Raptor Research, 52(3), 282–290.
- Heggøy, O., Aarvak, T., Øien, I. J., Jacobsen, K. O., Solheim, R., Zazelenchuk, D., Stoffel, M. & Kleven, O. (2017). Effects of satellite transmitters on survival in Snowy Owls Bubo scandiacus. Ornis Norvegica, 40: 33–38.
- Bakalar, E. M. (2004). Subsistence Whaling in the Native Village of Barrow: Bringing Autonomy to Native Alaskans Outside the International Whaling Commission. Brook. J. Int'l L., 30, 601.
- Mourer-Chauviré, C. (1979). La chasse aux oiseaux pendant la Préhistoire. La Recherche, 106(10), 1202–1210.
- Laroulandie, V. (2016). Hunting fast-moving, low-turnover small game: The status of the snowy owl (Bubo scandiacus) in the Magdalenian. Quaternary international, 414, 174–197.
- Desmarchelier, M., Santamaria-Bouvier, A., Fitzgérald, G., & Lair, S. (2010). Mortality and morbidity associated with gunshot in raptorial birds from the province of Quebec: 1986 to 2007. The Canadian Veterinary Journal, 51(1), 70.
- Ellis, D. H. & D. G. Smith. (1993). Preliminary report of extensive Gyrfalcon and Snowy Owl mortality in northern Siberia. Raptor-Link 1 (2):3–4.
- Stone, W. B.,Okoniewski, J. C. & Stedelin, J. R. (1999). Poisoning of wildlife with anticoagulant rodenticides in New York. Journal of Wildlife Diseases 35:187–193.
- Kaler, R. S., Kenney, L. A., Bond, A. L., & Eagles-Smith, C. A. (2014). Mercury concentrations in breast feathers of three upper trophic level marine predators from the western Aleutian Islands, Alaska. Marine pollution bulletin, 82(1–2), 189–193.
- ACIA. (2004). Impacts of a warming climate: Arctic climate impact assessment. Cambridge: Cambridge University Press.
- Inouye, D. W. (2019). Climate change in other taxa. Effects of Climate Change on Birds, 257.
- Schmidt, N. M., Ims, R. A., Høye, T. T., Gilg, O., Hansen, L. H., Hansen, J., Lund, M., Fuglei, E., Forchhammer, M. C. & Sittler, B. (2012). Response of an arctic predator guild to collapsing lemming cycles. Proceedings of the Royal Society B: Biological Sciences, 279(1746), 4417–4422.
- Gilg, O., Sittler, B., & Hanski, I. (2012). Will Collared Lemmings and their Predators be the first vertebrates to "Fall over the Cliff" in Greenland due to Global Climate Changes?
- "4 reasons Hedwig was better than everyone else at Hogwarts". Pottermore. Retrieved 13 February 2018.
- Megias, D. A., Anderson, S. C., Smith, R. J., & Veríssimo, D. (2017). Investigating the impact of media on demand for wildlife: A case study of Harry Potter and the UK trade in owls. PLOS ONE, 12(10).
- "The avian emblem of Quebec".
- "The Snowy Owl to Represent Canada". Nature Canada.
|Wikimedia Commons has media related to Bubo scandiacus.|
|Wikispecies has information related to Bubo scandiacus.|
- Free Video About Snowy Owls
- Snowy owl increasingly casting its spell over North American skies (Jan. 2015), The Guardian
- Snowy Owl Species Account—Cornell Lab of Ornithology
- Snowy Owl – Nyctea scandiaca—USGS Patuxent Bird Identification InfoCenter
- "Bubo scandiacus". Avibase.
- "Snowy Owl media". Internet Bird Collection.
- Snowy Owl photo gallery at VIREO (Drexel University) |
What is Inflammation?
The word inflammation comes from the Latin “inflammo”, meaning “I set alight, I ignite”.
It has been a buzzword in health trends recently, and for good reason.
Inflammation is the body’s natural response to protect itself from harm.
Without the inflammatory response, damage could continue to spread from the injured or infected area throughout the body. However, sometimes inflammation can become harmful.
There are two different types of inflammation: acute and chronic.

Acute Inflammation
The first stage of inflammation is called irritation, and it occurs when an injured or infected area on or inside the body becomes inflamed.
This is the immediate healing process.
Acute inflammation is beneficial in situations where a knee injury is sustained (from falling, for example) and the tissues are damaged and need to be cared for.
Acute inflammation starts rapidly and can become severe quite quickly. It is usually localized to a specific site of injury.
It typically lasts only a few days to a few weeks.
Examples of situations, diseases and conditions that result in acute inflammation include but are not limited to:
- Acute sinusitis (3)
- Infected ingrown toenail
- Acute tonsillitis
- Acute bronchitis
- Intense exercise (4)
- Flu or cold
The Five Cardinal Signs of Acute Inflammation
Acute inflammation can be characterized by the following five cardinal signs (5):
- Redness: Increased blood flow to the inflamed area
- Increased heat: Increased blood flow to the inflamed area
- Swelling: Accumulation of fluid
- Pain: Release of chemicals that stimulate the nerve endings
- Loss of function: Combination of factors
These signs occur when acute inflammation happens on the surface of the body.
If acute inflammation occurs internally, in the organs, not all of the signs will be apparent. For instance, pain arises only where there are enough sensory nerve endings in the inflamed site, so inflammation inside the lung, which has few pain receptors, may not cause pain.
Chronic Inflammation

Sometimes inflammation can become self-perpetuating, meaning new inflammation is created in response to the inflammation already present in the body (6). This is chronic inflammation, and it is long-lasting.
It can also result from the body's failure to eliminate whatever caused the acute inflammation, or from a chronic irritant that persists. However, it is not always known what causes the body to become inflamed in the first place.
A chronic inflammatory response often occurs in long-term conditions, including autoimmune diseases, such as:
- Rheumatoid arthritis (7)
- Inflammatory bowel disease (8)
- Chronic peptic ulcer
- Chronic sinusitis (9)
Causes of Chronic Inflammation
Inflammation begins when pro-inflammatory hormones in the body call out for white blood cells to fix damaged tissue or clear out an infection. These are matched by equally powerful anti-inflammatory compounds that move in once the threat is neutralized.
When this healthy mechanism goes wrong, the response fails to shut off; this is chronic inflammation.
Chronic inflammation is a major factor in many of the leading causes of death in the United States (10).
There are several factors that can cause inflammation, such as:
- Advanced glycation end products due to elevated blood sugar levels
- Oxidized lipoproteins (such as low-density lipoprotein)
- Mitochondrial dysfunction
- Uric acid crystals
Chronic inflammation can be triggered by cellular dysfunction and stress. This can be caused by oxidative stress, excessive calorie consumption and elevated blood sugar levels.
Once triggered, stress-induced inflammation can remain undetected for years, propagating cell death throughout the body.
The silent state of chronic inflammation has been coined “inflammaging” (11).
Signs of Chronic Inflammation
Chronic inflammation can reveal itself in a variety of ways.
The symptoms listed below are not, on their own, grounds for self-diagnosis. It's always important to address any health problems that you may have with a health practitioner.
However, being aware of them is invaluable. The signs of chronic inflammation are:
- Depression: Inflammation is believed to contribute to depression. This link has been investigated by Andrew Miller, MD, a professor of psychiatry and behavioral sciences at Emory School of Medicine (12).
- Digestive issues: Diarrhea, pains, cramps and bloating are thought to be symptoms of ongoing inflammation inside your body.
- Fatigue: If you’re exhausted on days when you’ve gotten enough sleep, inflammation could be the culprit. Inflamed cells are sick cells and they can’t produce the energy that you need to keep going throughout the day (13).
- Skin problems: Itching and redness on the skin are classic signs of internal inflammation. They can be caused by autoimmune diseases, allergies or liver issues. Psoriasis, a chronic skin disease, is also a sign of inflammation (14).
- Allergies: The symptoms of allergies (redness, itching and pain) are your immune system's response to usually harmless substances. Watery eyes and a runny nose can be signs that you are chronically inflamed.
Read on to find out how you can reduce chronic inflammation and the risk of future health problems occurring.
Top 15 Ways to Reduce Inflammation Naturally
Making changes to your diet can be a powerful way to stave off inflammation (15).
Since emerging research is focusing on the link between inflammation and the list of chronic diseases we previously mentioned, it’s important that you influence your health in positive ways.
Below is a list of 15 ways that can help you take control of inflammation.
1. Omega-3 Fatty Acids
Omega-3 fatty acids are considered essential fatty acids and they are necessary for optimal human health. The body cannot make them and therefore they must come from the food we eat or through supplementation (16).
Most people are aware that omega-3 fatty acids come from fish and that they have remarkable health-protecting benefits.
They are reported to have anti-inflammatory effects in humans and are also thought to be useful in the management of autoimmune diseases (17).
2. Coconut Oil
Coconut oil can boost metabolism, which may help you lose weight over a long period of time.

It contains medium-chain fats, which can lead to weight loss and a reduced waist circumference (18). Visceral fat, also known as abdominal fat, is the fat that tends to lodge around your organs and can cause inflammation (19).

Consuming coconut oil may therefore reduce your belly fat and, in turn, lower the levels of inflammation in your body.
3. Eat a Diet Low in Omega-6 Rich Foods
Omega-6 fatty acids are a class of polyunsaturated fats. An imbalance of omega-6 and omega-3 fatty acids in the diet can cause chronic inflammation (20).

Omega-6 fats are found in vegetable oils such as sunflower, soybean and safflower oil. Unfortunately, these oils are found in almost every food we eat and are prolific in the modern Western diet.

An easy way to cut omega-6 out of your diet is to eat fewer store-bought fries and restaurant-cooked meals.
4. Identify and Address Food Sensitivities
In recent decades, food allergies and sensitivities have increased dramatically, with as many as 15 million Americans suffering from food allergies (21).

An allergy is an overreaction of the immune system: it releases antibodies and triggers inflammation. The symptoms can be both dramatic and acute.
Removing the food that you are allergic or sensitive to will help you identify the cause of your inflammation. Once the food is removed, your symptoms should subside, and you will then know how to change your eating habits in order to remain healthy.
The key is to know your body well enough so that you are able to tell the signs and then respond appropriately.
5. Sleep

7–9 hours of sleep a night is considered the normal sleep duration for adults (24). This varies based on age, activity level and overall health.
A loss of sleep causes physical changes in our bodies and brains, with the levels of inflammatory markers in the blood like C-reactive protein (CRP) and interleukin-6 (IL-6) increasing.
Getting this amount of sleep per night is crucial in avoiding long term inflammation (25).
6. Tomatoes

Tomatoes are a nightshade vegetable and they contain a lot of nutrients. They are an excellent source of vitamin C, vitamin A, vitamin K, copper, folate, biotin and much more (26).
7. Turmeric

Turmeric is a delicious yellow spice. It is common in Indian cuisine and you can find it in almost every grocery store.

It has received a lot of attention for containing the powerful anti-inflammatory nutrient curcumin. Because it is able to reduce inflammation, it also shows anti-diabetic activity (29).
Eating it with black pepper enhances its effects. Black pepper boosts the absorption of curcumin by 2000%, as it contains piperine (30).
8. Peppers

Peppers, including chili peppers and bell peppers, are low in calories and fat.

Peppers are rich in antioxidants and vitamin C, which has anti-inflammatory effects (31). Chili peppers also contain capsaicin, which reduces Substance P, a specific pain transmitter in your nerves. This may relieve arthritis pain.

Cayenne pepper is derived from chili peppers and can be made at home in a few steps (32).
9. Berries

There are over a dozen varieties of berries.

Berries contain anthocyanins, antioxidants with anti-inflammatory activity. They also have the potential to protect against disease (35).
10. Garlic

Garlic is used in many cuisines around the world to add flavor. It has also been used as a natural remedy for colds and other illnesses for years.
For the most benefits, consume garlic raw. You can simply eat it on its own, but it’s best not to chew it while doing so.
11. Avocados

A study showed that when an avocado was eaten with a hamburger, the inflammatory response was limited in comparison to eating the hamburger without the avocado (38).
Avocados are a great source of phytosterols, alpha-linolenic acid, healthy monounsaturated fats and fiber. A compound found in the avocado has also been shown to reduce inflammation in young cells (39).
Avocados can also be consumed in supplement form: avocado/soybean unsaponifiables (ASU) are natural vegetable extracts made from avocado and soybean oils. ASUs may prove to be an effective treatment option for symptomatic osteoarthritis (40).
12. Onions

Onions have anti-inflammatory and antioxidant effects (41). They are also loaded with healthy compounds that help fight inflammation in arthritis.

Onions are a source of flavonoids, in particular the flavonol quercetin, which exerts anti-inflammatory effects (42).
Onions exhibit antimicrobial activity against a range of fungi and bacteria.
13. Green Tea
While white and black tea also contain polyphenols, green tea has the highest polyphenol content (45).

Improvements in diseases such as colitis and arthritis in Asia have been attributed to the consumption of green tea.
14. Dark Chocolate
So many of us love chocolate. The good news is, a certain type of chocolate is anti-inflammatory – dark chocolate. It contains flavanols, which are responsible for its anti-inflammatory effects (46).
The amount of antioxidants and anti-inflammatory properties in dark chocolate depends on the processing. Always choose unprocessed cacao, which is unsweetened and natural.
15. Exercise

Healthy dieting alone may not be enough to avoid inflammation. Regular exercise reduces markers of generalized, systemic inflammation such as C-reactive protein (47), a blood-test marker for inflammation in the body.

However, you do not need to take part in intense exercise to feel its anti-inflammatory effects. Studies have shown that individuals who walk more have a lower inflammatory status (48).
The 7 Most Effective Anti-Inflammatory Spices
Spice up your meals and bring down your inflammation with these proven anti-inflammatory spices. Not only will they enhance the flavor of your food, but they also contain potent plant-based compounds for a thriving life.
1. Turmeric

Turmeric (Curcuma longa) is a member of the ginger family grown in Southeast Asia. While you can enjoy this root fresh, most people use it in powder form. It adds a deep orange hue to meals and has a warm, pepper-like flavor that makes it popular in curries and similar dishes.
Approximately 3% of turmeric is made up of a medicinal compound known as curcumin. (49) Curcumin is incredibly anti-inflammatory.
In fact, studies have found that it can be just as effective as anti-inflammatory drugs. Just as importantly, researchers say curcumin has none of the side effects of common anti-inflammatory medications. (50)
An additional benefit of the curcumin in turmeric is its ability to neutralize free radicals as a powerful antioxidant. (51) This may help your body to heal the damage caused by chronic inflammation.
2. Ginger

Ginger (Zingiber officinale) adds zest to any meal, and is commonly used in Asian-inspired dishes. This root has been used for more than 2,000 years in Chinese medicine to treat everything from nausea to upset stomachs to arthritis. (52) The latter, which is a common inflammation-related disease, highlights ginger's anti-inflammatory benefits.
Without getting into the nitty gritty details, ginger suppresses the various compounds in your body that trigger inflammation. (53) Scientists report that people often find the soothing effects of ginger to be just as effective as medications from their doctor.
3. Cinnamon

Cinnamon (Cinnamomum spp.) adds festive excitement to both sweet and savoury dishes. From baked goods to smoothies and coffee, it's a spice that's easy to incorporate into your daily life.
Cinnamon is high in polyphenols, a powerful type of antioxidant. Researchers reviewed 26 healthy spices and found that cinnamon’s antioxidant activity outperformed all other spices in the report. (54)
This high level of polyphenol content may help reduce the causes and symptoms of inflammation. (55)
For the best results, use Ceylon cinnamon (so-called “true” cinnamon) instead of Cassia cinnamon. (56) The Cassia variety is cheaper but carries potential side effects if you eat too much.
4. Garlic

Garlic (Allium sativum) is related to onions and chives, and was used in traditional medicine as far back as the time of the ancient Egyptians. Archaeologists have found evidence of garlic buried deep within the pyramids, and today it's used to treat everything from heart disease to the common cold. (57)
Garlic is rich in a compound known as thiacremonone. According to researchers, this compound is effective at reducing inflammation, and can even help with inflammation-related diseases like arthritis. (58)
5. Cayenne Pepper

Heat up your tastebuds with cayenne pepper (Capsicum annuum), a Brazilian chili pepper with a spiciness of 30,000–50,000 Scoville units.

The active components that give the pepper its burning flavor are capsaicinoids. They are technically irritants, but they can also reduce inflammation. (59)
6. Black Pepper
Need to tone down the heat from cayenne? Try black pepper (Piper nigrum) instead.
Pepper’s piperine compounds, which give it its bold flavor, have also been shown in studies to help reduce your body’s response to inflammation-provoking situations. (60) Consider pairing pepper with other anti-inflammatory spices. For example, eating pepper and turmeric together can increase the effectiveness of turmeric. (61)
7. Cloves

Cloves come from the sweet-smelling flower buds of the Syzygium aromaticum tree. They're commonly used in Indonesian cooking, which is where the trees commonly grow.
A growing body of research is pinpointing the benefits of cloves. For example, scientists say it may help reduce topical pain. (62) And another study, this one done on mice, found that cloves reduce inflammation. (63)
10 Most Common Foods That Spark Inflammation
You are what you eat, and research has highlighted how the common North American diet is rich in foods and ingredients that cause inflammation. If you want to minimize existing inflammation, or prevent inflammation in the first place, consider going on an inflammation detox diet that eliminates the following foods.
1. Sugar

There are different sugar limits depending on who you ask, but one thing is for sure: Americans eat far too much sugar. For example, the American Heart Association recommends capping your sugar intake at 10 teaspoons a day, but the average adult eats more than twice that daily. (64)
That adds up to a whopping 130 pounds of sugar a year.
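To put those teaspoon figures in perspective, here is a minimal back-of-the-envelope conversion. It assumes roughly 4.2 grams of granulated sugar per teaspoon and 453.6 grams per pound (standard reference values, not numbers taken from this article's sources); note that a 130-pound yearly total works out to roughly 160 grams, or about 38 teaspoons, per day.

```python
# Back-of-the-envelope sugar arithmetic.
# Assumed constants: ~4.2 g of granulated sugar per teaspoon,
# 453.6 g per pound, 365 days per year.
GRAMS_PER_TEASPOON = 4.2
GRAMS_PER_POUND = 453.6
DAYS_PER_YEAR = 365

def yearly_sugar_pounds(teaspoons_per_day: float) -> float:
    """Convert a daily sugar intake in teaspoons to pounds per year."""
    grams_per_year = teaspoons_per_day * GRAMS_PER_TEASPOON * DAYS_PER_YEAR
    return grams_per_year / GRAMS_PER_POUND

print(f"10 tsp/day (the AHA cap) ~ {yearly_sugar_pounds(10):.0f} lb/year")  # ~34
print(f"38 tsp/day               ~ {yearly_sugar_pounds(38):.0f} lb/year")  # ~128
```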
Sugar causes your body to release cytokines, which can increase the level of inflammation in your body. (65) Common sneaky sources of sugar include salad dressing, pasta sauce and soups.
2. Vegetable Oil
Ideally, your diet should have roughly a 1-to-1 ratio of omega-6 to omega-3 fatty acids. Unfortunately, scientists warn that most people in North America have a ratio of 16 to 1. (66) This means most people are getting far too much omega-6 fat, and this imbalance can prompt inflammation.
One of the biggest sources of omega-6 fats is vegetable oil, whether it’s used in dressings or to fry food. Avoid corn, soy, sunflower, palm or safflower oil, and try using cold-pressed virgin olive oil instead.
3. Fried Foods
Fried foods carry a double risk. First, they’re often high in vegetable oil. Second, high heat can cause the creation of advanced glycation end products (AGEs), which are very inflammatory. (67)
4. Refined Flour
Whole grains contain anti-inflammatory compounds, but the same can't be said about refined flour. (68) Refined flour has been stripped of much of its health benefits and is high in compounds, such as lectins, that provoke inflammation throughout your body.
5. Dairy

Most adults lack the genetic mutation that allows them to digest milk. (69) By some estimates, only 30–40% of humans are able to process the lactose in milk.
While the research is conflicting, numerous studies have found that dairy increases low-grade inflammation and might even increase your risks of needing arthritis-related hip replacement surgery. (70)
6. Artificial Additives and Sweeteners
From fake fruit flavors to non-sugar sweeteners to food dyes, a lot of processed foods carry sneaky additives. For some people, these additives can trigger an immune response. The body might sense their presence and send attack cells, which in turn provokes inflammation. (71)
Read the labels of any processed foods you buy, and always aim to eat food as close to the way nature intended it to be.
7. Saturated Fats
No more than 10% of your daily calories should come from saturated fat; going overboard is linked to heart disease and an elevated risk of many other serious health conditions. (72)
Numerous studies have demonstrated how eating saturated fat triggers inflammation in your fat tissue, which is linked to increased arthritis inflammation. (73)
The Arthritis Foundation reports that some of the most common sources of saturated fats in the average person’s diet include cheese, pizza, pasta, dessert and red meat.
8. Conventionally Raised Meat

When is a steak not a steak? When the cows were raised differently.
What a cow ate throughout its life impacts the health and quality of its meat. Take beef as an example. A single 3.5-ounce serving of conventional grain-fed beef contains 38.5 mg of inflammation-reducing omega-3 fatty acids.

In contrast, the amount in grass-fed beef jumps up to more than 93 mg. The same is true for eggs, salmon and other farmed animal products. (76)
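Treating the two per-serving figures above as given, a quick sanity check shows the size of the gap:

```python
# Relative omega-3 content implied by the per-serving figures quoted above.
grain_fed_mg = 38.5  # mg omega-3 per 3.5 oz serving, grain-fed beef
grass_fed_mg = 93.0  # mg omega-3 per 3.5 oz serving, grass-fed beef

ratio = grass_fed_mg / grain_fed_mg
print(f"Grass-fed beef carries about {ratio:.1f}x the omega-3 of grain-fed "
      f"(roughly {(ratio - 1) * 100:.0f}% more per serving).")
```

That works out to about 2.4 times, or roughly 140% more, omega-3 per serving.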
If you can, limit your intake of animal-based protein. And when you do choose to eat meat, aim for grass-fed meat.
9. Trans Fat Foods
Companies are slowly phasing out trans fats, but you'll still find them lurking in packaged baked goods, frozen pizza, non-dairy shakes and frostings, etc. Check the labels, and avoid any foods that contain partially hydrogenated oils.
10. Fast Foods
Eating fast foods is a fast way to experience inflammation. They tend to contain many of the above ingredients: fried foods, cooking oil, trans fats, saturated fats, and processed conventionally fed meat.
If there is one meal that manages to pack most of these proinflammatory foods together, it’s fast food. If you absolutely must have a meal at a fast food restaurant, try and check the calories and ingredients in advance so you know what your healthiest options are when you arrive.
13 Easy-to-Follow Food Rules To Avoid Inflammation
Not sure how to implement the wide array of research that’s out there? Break it down into small daily habits to make your new anti-inflammation diet easy to follow.
1. Eat More Fiber
The average male needs 30-38 grams of fiber a day, while the average female needs 21 grams.
Unfortunately, the typical American only eats 16 grams of daily fiber. (79) A high-fiber, whole foods diet naturally contains more inflammation-fighting phytonutrients due to your increased intake of whole grains, fruits and vegetables.
Fiber can also boost digestion and help your body eliminate unhealthy substances that may provoke inflammation.
2. Eat 9 Servings of Veggies and Fruits Daily
One cup of raw leafy greens, or 1/2 cup of cooked vegetables or fruits, constitutes a serving.
Increasing the amount of plant-based foods you eat raises your intake of fiber and antioxidants. It also means you’re likely eating fewer inflammation-causing, unhealthy foods.
3. Eat More Crucifers and Alliums
Alliums include herbs like garlic (which is a powerful anti-inflammatory), and crucifers include high-fiber veggies like kale and broccoli.
Try to eat at least four servings of these veggies/herbs a week. And if you’re cooking them, don’t forget to add a pinch of cayenne, black pepper or another of the anti-inflammatory herbs discussed in this article.
4. Limit Saturated Fat
To avoid inflammation, and reduce your risks of heart disease and other serious health maladies, limit your saturated fat intake to no more than 10% of your daily calories.
For a 2,000-calorie diet, that is approximately 22 grams of saturated fat a day (fat supplies about 9 calories per gram, so 10% of 2,000 calories is 200 calories, or roughly 22 grams). (80) This adds up quickly. A piece of bacon gets you 9 grams of saturated fat. A tablespoon of butter has 7 grams. And a 12-ounce steak has 20 grams.
5. Eat More Fish
Fish is high in inflammation-fighting omega-3 fatty acids. Aim to eat fish, such as salmon, three times a week.
And just like with your veggies, use this as an opportunity to incorporate anti-inflammatory herbs and spices into your diet.
6. Cook With Healthy Oils
Avoid vegetable oils, which throw your ratio of omega-6 to omega-3 fatty acids out of balance. Instead, opt for oils like olive oil or coconut oil. Keep in mind that “healthy” oils still contain a lot of fat, so use them sparingly.
7. Eat Healthy Snacks
Snacks can be a way for excess calories to sneak into your diet, but if you’re a snacker, shift towards healthier snacks.
Ideas include unsweetened fruit, nuts (they’re high in inflammation-reducing healthy fats) and veggie sticks.
8. Avoid Overly Processed or Sweetened Foods
When you eat whole foods that are as close as possible to the way they appear in nature, you dramatically increase the levels of antioxidants, fiber and other beneficial compounds in your diet.
When you’re buying food, always pick the option that has the least amount of processing. Also, check the ingredients label for sugar, keeping in mind that it might be masquerading under another name (e.g. evaporated cane juice).
9. Eliminate Trans Fats
Trans fats are the worst type of fats. From margarine to cookies, you’ll find trans fats in anything that contains “partially hydrogenated” or “hydrogenated” oil. If you see this on a food or snack you like, switch it out for a healthier whole foods alternative.
10. Get Creative With Flavors
Think outside the box. Many of the foods discussed in this article bring with them anti-inflammation benefits and can be used in all of your meals to add texture, flavor or even sweetness.
For example, instead of sweetening your food with honey or sugar, try increasing the sweetness of your overall meal with fruit, yams or carrots.
Likewise, try to add at least one anti-inflammatory spice to each meal, whether it’s cinnamon on your morning oatmeal or ginger in your evening stir-fry.
The Links Between Your Immune System, Lifestyle and Inflammation
Inflammation never pops up as an isolated event. It’s a sign that something in your life is out of balance, and it’s an invitation to heed your body’s messages and take a look at your overall lifestyle.
The Science of Your Immune System
Your immune system is a complex two-part system that acts as a powerful defender of your body and your health.
Your innate immunity covers physical aspects of your body designed to defend you from viruses, germs and other invaders. For example, your mucus lining in your nose helps screen out germs, and the stomach acid in your digestive tract helps neutralize dangers in your food.
The second part of your immune system is your adaptive immunity. This is how your body learns from past situations and strengthens its defenses.
For example, when you’re exposed to the flu, your body produces the proper antibodies and remembers this specific strain of the flu so that you’re better equipped the next time you encounter it.
This adaptive immunity is a very complex system of biological pathways, hormones, cells, and other compounds in your system. Inflammation is part of this.
It’s one way that your body responds to danger, whether it’s physical danger (such as tissue damage) or biological danger (such as a bacterial infection).
Thus, inflammation isn’t inherently bad. In fact, it is part of your body healing itself.
But what is bad is when diet and lifestyle force your immune system to constantly “heal” itself, and inflammation becomes chronic. If you’re experiencing chronic inflammation, take a look at your lifestyle and diet.
Is Your Lifestyle Causing Inflammation?
Daily habits build up to life-changing outcomes. Diet is obviously the most important lifestyle factor, but there’s more.
For example, sitting for extended periods of time is linked to increased levels of inflammation. (81, 82) If you work in a sedentary job, consider getting up once every hour and going for a short walk.
Alcohol intake also creates inflammation. (83) Consider going on an alcohol break. If you’re worried about social appearances when you’re out with friends or coworkers, ask the bartender to mix you a no-sugar alternative, such as soda water with mint leaves and a wedge of lemon.
It’ll satisfy your drink cravings and no one around you will notice a difference.
Finally, chronic stress has been linked to chronic inflammation. (84) It’s your body’s way of coping with “danger,” even if that danger is simply in your head. Take a moment every morning and evening to relax and de-stress. Options to try include yoga, deep breathing exercises and meditation.
Sample Anti-Inflammatory Diet Menu
Following specific diet plans may reduce your inflammation. Some of the top anti-inflammation diets backed by studies include a low-carb diet, (85) a vegetarian diet (86) and a Mediterranean diet. (87)
A low-carb diet naturally cuts out the sugars and refined grains that are problematic for managing your body’s inflammatory responses.
A vegetarian diet eliminates dairy, processed meats and most saturated fats, which are major components of a pro-inflammatory diet. And a Mediterranean diet emphasizes whole foods, healthy fats and omega-3-rich fish, which reduce inflammation.
A quick way to get inspired and see how easy it is to follow an anti-inflammation diet is by seeing a sample menu.
– A 3-egg omelet cooked in coconut oil and seasoned with garlic and black pepper.
– A green smoothie made with leafy greens and an apple.
– Salad topped with grilled salmon and dressed with balsamic vinegar.
– A handful of nuts
– Green tea or black coffee (no dairy or sugar)
– Asian-inspired veggie stir fry with tofu and ginger
– Unsweetened dried fruit for dessert.
These are just a few of the many science-backed ways that can help you reduce inflammation naturally. It’s important that you try a variety of these, and see what works best for your body and health.
Remember, there is nothing wrong with a little inflammation; it’s your body fighting off things that don’t belong in it. However, when it starts to affect you chronically, it’s time to take action and seek medical help. |
Written by: Peter Williamson, Ph.D. | Issue # 45 | 2015
- A bone found in an English cave contained DNA from an ancient wild ox known as the aurochs.
- The DNA was sequenced from over 85% of the aurochs genome.
- Ninety percent of the genetic variants identified in aurochs DNA are found in modern cattle.
- Cattle from Britain and Ireland have retained a relatively high level of aurochs DNA sequence.
- Following the distribution of genetic variants from aurochs to modern cattle provides a trail of domestication and specialized breed trait selection.
A preserved specimen of aurochs bone was discovered deep beneath the Derbyshire Dales in the UK in the 1990s (1). The aurochs is an ancient wild ox, the ancestor of domestic cattle, which was domesticated around 10,000 years ago somewhere around modern-day Iran. In Europe, the last of these animals survived on a Polish royal reserve as recently as the 17th century. Park et al. (2) have now extracted enough DNA from the ancient bone specimen to sequence the aurochs genome. When they compared the aurochs sequence to the DNA of cattle breeds we know and use in domestic agriculture today, they found a surprisingly high level of sequence in common with British and Irish cattle.
Scientists from Britain have recovered a bone specimen from a cave deep beneath the Derbyshire countryside. The cave was recognized as an ancient burial ground and contained numerous preserved animal bones. The aurochs bone was dated as being over 6,700 years old. This was prior to the New Stone Age in Britain, during a time when Britain was connected to the European mainland. The peoples of Britain were then hunters and gatherers and, no doubt, aurochs were prized game. The first farmers apparently migrated into Britain with their livestock. However, these incomers probably represented less than 20% of the population, and they contributed to a wider movement that saw the gradual shift from managed quarry (for hunting) to herding (4).
Park et al. were able to get enough DNA from the bone specimen to use modern sequencing methods, and they reconstructed over 86% of the aurochs genome sequence. This corresponded to approximately 90% of the reference genome derived from a Hereford cow. With these data in hand, they compared the aurochs sequence to over 80 other individual genomes from different cattle. They identified the places where there was a difference, or variant, between the aurochs sequence and the reference genome. They could then focus on these 2.1 million differences and compare across a wider range of sequences from modern cattle breeds.
The goal for Park et al. was to use these variants as markers to track how the aurochs DNA was distributed in the DNA of these modern breeds. They first checked how many of these variants were already known to exist in cattle. They found that over 90% of these variants were already recorded in cattle genomic databases. This meant two things. First, it confirmed that these sequence variants would be useful for tracking the extent to which various parts of the aurochs DNA were retained in different cattle breeds, and second, they could infer which variants may have influenced selected traits.
They found that aurochs likely interbred with domesticated cattle at some time in the past. Most likely, farmers looked to wild cattle to boost their stocks, but there may also have been some random matings. The scientists also found that the divergence of European breeds, relative to North African and Asian breeds, may be more recent than previously thought. This was evident from an apparent recent loss of genetic variants from the European breeds rather than an ancient split. The evidence for a significant contribution of aurochs to the modern breeds found in Scotland, Wales, England and Ireland was particularly compelling. Thus, the interbreeding of aurochs and domestic European cattle continued long after North African and Asian breeds had already migrated.
When examining the aurochs genome, the scientists extracted a selection of variants that had functional properties. That is, they focused on a relatively small number of variants (SNPs) that were identified as having a role in influencing the structure or expression of a gene. There were 166 of these, selected from across the entire genome. An analysis of these genes found that they fell into three interesting categories associated with brain function and behavior, immune response, and growth and metabolism. The scientists speculated that this reflected the selection of cattle suited to domestication. This would result in animals that behaved appropriately, or survived infections that may have become more common as the animals began living in more crowded conditions. They were also probably selected for their ability to pack on more muscle to provide more meat.
The scientists then looked at gene variants that were present in the aurochs genome and found to be present with very high frequency in the modern cattle breeds, that is, these particular variants were positively selected during the development of breeds. The traits these genes affected were favorable for meat production, or disease resistance, or behavior in a farm system. One of the selected gene variants that stood out was within the DGAT1 gene, which we know has a major impact on milk production traits (5).
They also identified variants that occur in microRNAs. MicroRNAs are a relatively recent discovery in mammals and are encoded by very small pieces of DNA. Their main role is to influence the amount of protein produced from one or more specific genes. They can be quite influential in determining important traits, such as muscle growth (6). In the case of the variant identified in the aurochs DNA sequence, referred to as miR-2893, it has effects on molecules involved in neurological function, fatty acid metabolism, and immune function (2).
The study reveals a lot about aurochs, and the prehistory of modern cattle breed development. It also cements the aurochs as the origin of the genetic variation that exists in modern breeds; variation that has fueled selection over many centuries to give modern breeds the specialized roles to suit modern farm systems and food production.
- Edwards CJ, Magee DA, Park SD, McGettigan PA, Lohan AJ, et al. (2010) A complete mitochondrial genome sequence from a mesolithic wild aurochs (Bos primigenius). PLoS One 5: e9255.
- Park SD, Magee DA, McGettigan PA, Teasdale MD, Edwards CJ, et al. (2015) Genome sequencing of the extinct Eurasian wild aurochs, Bos primigenius, illuminates the phylogeography and evolution of cattle. Genome Biol 16: 234.
- van Vuure T (2002) History, Morphology and Ecology of the Aurochs (Bos primigenius).
- Pryor F (2006) Farmers in prehistoric Britain: The History Press Ltd.
- Grisart B, Coppieters W, Farnir F, Karim L, Ford C, et al. (2002) Positional candidate cloning of a QTL in dairy cattle: identification of a missense mutation in the bovine DGAT1 gene with major effect on milk yield and composition. Genome Res 12: 222-231.
- Clop A, Marcq F, Takeda H, Pirottin D, Tordoir X, et al. (2006) A mutation creating a potential illegitimate microRNA target site in the myostatin gene affects muscularity in sheep. Nat Genet 38: 813-818. |
Before we begin graphing systems of equations, a good starting point is to review our knowledge of 2-D graphs. These graphs are known as 2-D because they have two axes. Find an online image of a graph to use as the foundation of your discussion. (This is easily accomplished by searching within Google Images.)
Using your graph as the example:
Select any two points on the graph and apply the slope formula, interpreting the result as a rate of change (units of measurement required); and
Use rate of change (slope) to explain why your graph is linear (constant slope) or not linear (changing slopes).
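A worked sketch of the slope computation, using two hypothetical points rather than points from any particular graph: suppose a distance-time graph passes through (2 s, 10 m) and (6 s, 30 m). Then

$$m = \frac{y_2 - y_1}{x_2 - x_1} = \frac{30\ \text{m} - 10\ \text{m}}{6\ \text{s} - 2\ \text{s}} = 5\ \text{m/s},$$

a rate of change of 5 meters per second. If every pair of points on the graph yields this same slope, the graph is linear; if different pairs yield different slopes, it is not.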
Embed the graph into the post by copying and pasting into the discussion. You must cite the source of the image. Also be sure to show the computations used to determine slope. |
Chapter 3: Applications of Differentiation
Section 3.4: Differentials and the Linear Approximation
The definition of a differential is based on Figure 3.4.1, the fundamental diagram of differential calculus. Points A and C are on the line tangent to the red curve at point A. In the right triangle ΔABC, the angle the hypotenuse (the tangent line) makes with the horizontal is θ, and by the definition of the derivative at A, f′ = tan θ. Consequently,
tan θ = opposite/adjacent = opposite/dx
so that opposite = f′ dx ≡ df.
Figure 3.4.1 Defining the differential df
Thus df, the differential of f(x), is defined as the derivative f′(x) times dx, an increment (large or small) in x, the independent variable.
Figure 3.4.1 then suggests that df is an approximation to Δf, the exact change in f as the independent variable changes from x to x + dx.
This idea is captured in the notation
Δf = f(x + dx) − f(x) ≐ df
Isolating f(x + dx) leads to the linear approximation
f(x + dx) ≐ f(x) + df
The linear approximation is nothing more than the tangent-line approximation, that is, the use of the tangent line to approximate values of a nonlinear function.
The Mean-Value theorem (Theorem 3.4.1), whose proof is independent of the relationships in Figure 3.4.1, then states that there is a point c for which the linear approximation is actually an exact equality.
Theorem 3.4.1: Mean-Value Theorem
- f(x) is continuous on [a, b]
- f(x) is differentiable on (a, b)
⇒ At least one c exists in (a, b) for which f′(c) = (f(b) − f(a))/(b − a)
This form of the Mean-Value theorem has a geometric interpretation, namely, that over the interval [a, b] there is a point c at which the tangent line is parallel to the secant line connecting (a, f(a)) and (b, f(b)).
If the conclusion of Theorem 3.4.1 is rewritten as
f(b) − f(a) = f′(c) (b − a)
and if a is identified with x, and b with x + dx, then the conclusion of Theorem 3.4.1 becomes
f(x + dx) = f(x) + f′(c) dx
In other words, the analytic content of the Mean-Value theorem is that the linear approximation is exact if the differential is evaluated at the special point c. However, the point c depends on x, so no recipe can be given for finding the value of c that makes the linear approximation exact.
Approximate √17 by using the differential of the function f(x) = √x.
How accurate is this approximation?
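A sketch of one standard way to work this example, assuming a = 16 is taken as the nearby point with a known square root (that choice is ours, not stated above): with f(x) = √x and dx = 1,

$$\sqrt{17} = f(16 + 1) \doteq f(16) + f'(16)\,dx = 4 + \frac{1}{2\sqrt{16}}(1) = 4.125.$$

Since √17 = 4.1231… to four decimal places, the differential overestimates the true value by roughly 0.002.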
Apply the Mean-Value theorem to the function f(x) = x e^(−x), 0 ≤ x ≤ 3.
Determine c from first principles.
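A sketch of the computation (the final root is obtained by a numeric solve, since the equation has no closed-form solution): here f′(x) = (1 − x) e^(−x), f(0) = 0, and f(3) = 3e^(−3), so the Mean-Value theorem requires

$$f'(c) = (1 - c)e^{-c} = \frac{f(3) - f(0)}{3 - 0} = e^{-3} \approx 0.0498.$$

Solving numerically gives c ≈ 0.88, which indeed lies in (0, 3).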
Over what interval would the tangent line at x = 3 approximate f(x) = x e^(−x) with an error no greater than 0.1?
|
In 1916, Albert Einstein published his theory of general relativity, which established the modern view of gravity as a warping of the fabric of spacetime. The theory predicted that objects that interact with gravity could disturb that fabric, sending ripples across it.
Any object that interacts with gravity can create gravitational waves. But only the most catastrophic cosmic events make gravitational waves powerful enough for us to detect. Now that observatories have begun to record gravitational waves on a regular basis, scientists are discussing how dark matter—known so far to interact with other matter only through gravity—might create gravitational waves strong enough to be found.
The spacetime blanket
In the universe, space and time are invariably linked as four-dimensional spacetime. For simplicity, you can think of spacetime as a blanket suspended above the ground. Jupiter might be a single Cheerio on top of that blanket. The sun could be a tennis ball. R136a1—the most massive known star—might be a 40-pound medicine ball.
Each of these objects weighs down the blanket where it sits: the heavier the object, the bigger the dip in the blanket. Like objects of different weights on a blanket, objects of different masses have different effects on the fabric of spacetime. A dip in spacetime is a gravitational field.
The gravitational field of one object can affect another object. The other object might fall into the first object’s gravitational field and orbit around it, like the moon around Earth, or Earth around the sun.
Alternatively, two bodies with gravitational fields might spiral toward each other, getting closer and closer until they collide. As this happens, they create ripples in spacetime—gravitational waves.
On September 14, 2015, scientists used the Laser Interferometer Gravitational-Wave Observatory, or LIGO, to make the first direct observation of gravitational waves, emitted in the buildup to a crash between two massive black holes.
Since that first detection, the LIGO collaboration—together with the collaboration that runs a partner gravitational-wave observatory called Virgo—has detected gravitational waves from at least 10 more mergers of black holes and, in 2017, the first merger between two neutron stars.
Dark matter is believed to be five times as prevalent as visible matter. Its gravitational effects are seen throughout the universe. Scientists have yet to definitively see gravitational waves caused by dark matter, but they can think of numerous ways this might happen.
Primordial black holes
Scientists have seen the gravitational effects of dark matter, so they know it must be there—or at least, something must be going on to cause those effects. But so far, they’ve never directly detected a dark matter particle, so they’re not sure exactly what dark matter is like.
One idea is that some of the dark matter could actually be primordial black holes.
Imagine the universe as an infinitely large petri dish. In this scenario, the Big Bang is the point where matter-bacteria begins to grow. That point quickly expands, moving outward to encompass more and more of the petri dish. If that growth is slightly uneven, certain areas will become more densely inhabited by matter than others.
These pockets of dense matter—mostly photons at this point in the universe—might have collapsed under their own gravity and formed early black holes.
“I think it’s an interesting theory, as interesting as a new kind of particle,” says Yacine Ali-Haimoud, an assistant professor of physics at New York University. “If primordial black holes do exist, it would have profound implications on the conditions in the very early universe.”
By using gravitational waves to learn about the properties of black holes, LIGO might be able to prove or constrain this dark matter theory.
Unlike normal black holes, primordial black holes don’t have a minimum mass threshold they need to reach in order to form. If LIGO were to see a black hole less massive than the sun, for example, it might be a primordial black hole.
Even if primordial black holes do exist, it’s doubtful that they account for all of the dark matter in the universe. Still, finding proof of primordial black holes would expand our fundamental understanding of dark matter and how the universe began.
Neutron star rattles
Dark matter seems to interact with normal matter only through gravity, but, based on the way known particles interact, theorists think it’s possible that dark matter might also interact with itself.
If that is the case, dark matter particles might bind together to form dark objects that are as compact as a neutron star.
We know that stars drastically “weigh down” the fabric of spacetime around them. If the universe were populated with compact dark objects, there would be a chance that at least some of them would end up trapped inside of ordinary matter stars.
A normal star and a dark object would interact only through gravity, allowing the two to co-exist without much of a fuss. But any disruption to the star—for example, a supernova explosion—could create a rattle-like disturbance between the resulting neutron star and the trapped dark object. If such an event occurred in our galaxy, it would create detectable gravitational waves.
“We understand neutron stars quite well,” says Sanjay Reddy, University of Washington physics professor and senior fellow with the Institute for Nuclear Theory. “If something ‘odd’ happens with gravitational waves, we would know there was potentially something new going on that might involve dark matter.”
The likelihood that any exist in our solar system is limited. Chuck Horowitz, Maria Alessandra Papa and Reddy recently analyzed LIGO’s data and found no indication of compact dark objects of a specific mass range within Earth, Jupiter or the sun.
Further gravitational-wave studies could place further constraints on compact dark objects. “Constraints are important,” says Ann Nelson, a physics professor at the University of Washington. “They allow us to improve existing theories and even formulate new ones.”
Axion stars
One light dark matter candidate is the axion, named by physicist Frank Wilczek after a brand of detergent, in reference to its ability to tidy up a problem in the theory of quantum chromodynamics.
Scientists think it could be possible for axions to bind together into axion stars, similar to neutron stars but made up of extremely compact axion matter.
“If axions exist, there are scenarios where they can cluster together and form stellar objects, like ordinary matter,” says Tim Dietrich, a LIGO-Virgo member and physicist. “We don’t know if axion stars exist, and we won’t know for sure until we find constraints for our models.”
If an axion star merged with a neutron star, scientists might not be able to tell the difference between the two with their current instruments. Instead, scientists would need to rely on electromagnetic signals accompanying the gravitational wave to identify the anomaly.
It’s also possible that axions could bunch around a binary black hole or neutron star system. If those stars then merged, the changes in the axion “cloud” would be visible in the gravitational wave signal. A third possibility is that axions could be created by the merger, an action that would be reflected in the signal.
This month, the LIGO-Virgo collaborations began their third observing run and, with new upgrades, expect to detect a merger event every week.
Gravitational-wave detectors have already proven their worth in confirming Einstein’s century-old prediction. But there is still plenty that studying gravitational waves can teach us. “Gravitational waves are like a completely new sense for science,” Ali-Haimoud says. “A new sense means new ways to look at all the big questions in physics.” |
Dr. Michael C. Labossiere, the author of a Macintosh tutorial named Fallacy Tutorial Pro 3.0, has kindly agreed to allow the text of his work to appear on the Nizkor site, as a Nizkor Feature. It remains © Copyright 1995 Michael C. Labossiere, with distribution restrictions -- please see our copyright notice. If you have questions or comments about this work, please direct them both to the Nizkor webmasters (email@example.com) and to Dr. Labossiere (firstname.lastname@example.org).
Other sites that list and explain fallacies include:
- Constructing a Logical Argument
In order to understand what a fallacy is, one must understand what an argument is. Very briefly, an argument consists of one or more premises and one conclusion. A premise is a statement (a sentence that is either true or false) that is offered in support of the claim being made, which is the conclusion (which is also a sentence that is either true or false).
There are two main types of arguments: deductive and inductive. A deductive argument is an argument such that the premises provide (or appear to provide) complete support for the conclusion. An inductive argument is an argument such that the premises provide (or appear to provide) some degree of support (but less than complete support) for the conclusion. If the premises actually provide the required degree of support for the conclusion, then the argument is a good one. A good deductive argument is known as a valid argument and is such that if all its premises are true, then its conclusion must be true. If the argument is valid and actually has all true premises, then it is known as a sound argument. If it is invalid or has one or more false premises, it will be unsound. A good inductive argument is known as a strong (or "cogent") inductive argument. It is such that if the premises are true, the conclusion is likely to be true.
A fallacy is, very generally, an error in reasoning. This differs from a factual error, which is simply being wrong about the facts. To be more specific, a fallacy is an "argument" in which the premises given for the conclusion do not provide the needed degree of support. A deductive fallacy is a deductive argument that is invalid (it is such that it could have all true premises and still have a false conclusion). An inductive fallacy is less formal than a deductive fallacy. Inductive fallacies are simply "arguments" that appear to be inductive arguments, but the premises do not provide enough support for the conclusion. In such cases, even if the premises were true, the conclusion would not be more likely to be true. |
Calculus Graphs Study Guide
A function can be fully described by showing what happens at each number in its domain (for example, 4 → 2) or by giving its formula (for example, f(x) = √x). However, neither of these provides a clear overall picture of the function.
Luckily for us, René Descartes came up with the idea of a graph, a visual picture of a function. Rather than say 4 → 2 or f(4) = 2, we plot (4,2) on the Cartesian plane, which would look like Figure 2.1.
If we plotted all the points in the domain of f(x) = √x (not just the whole numbers, but all the fractions and decimals, too), then the points would be so close together that they would form a continuous curve as in Figure 2.2.
The graph shows us several interesting characteristics of the function f(x) = √x. Because the graph starts at x = 0 and runs to the right, this means that the domain is x ≥ 0.
We can see that the function f(x) = √x is increasing (going up from left to right) and not decreasing (going down from left to right).
The function f(x) = √x is concave down because it curves downward (see Figure 2.3) like a frown and not concave up like a smile (see Figure 2.4).
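These graphical observations can be confirmed with derivatives (a look ahead, since this lesson only reads them off the graph): for f(x) = √x,

$$f'(x) = \frac{1}{2\sqrt{x}} > 0 \qquad\text{and}\qquad f''(x) = -\frac{1}{4x^{3/2}} < 0 \qquad\text{for all } x > 0,$$

so the positive first derivative confirms that f is increasing, and the negative second derivative confirms that it is concave down.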
Note on Finding Coordinates
We put the y into the formula y = f(x) = √x to imply that the y-coordinates of our points are the numbers we get by plugging the x-coordinates into the function f.
Use the graph of the following function (see Figure 2.5) to determine the domain, where the function is increasing and decreasing, and where the function is concave up and concave down.
The domain of g consists of all real numbers because there is a point above or below every number on the x-axis.
The function g is increasing up to the point at x = 2, where it then decreases down to x = 8, and then increases ever afterward. To save space, we say that g increases on (–∞,2) and on (8,∞), and that g decreases on (2,8).
The point at (2,6) where g stops increasing and begins to decrease is the highest point in its immediate area and is called a local maximum. The point at (8,3) is similarly a local minimum, the lowest point in its neighborhood. These points tend to be the most interesting points on a graph.
The concavity of g is trickier to estimate. Clearly g is concave down in the vicinity of x = 2 and concave up around x = 7 and x = 8. The exact point where the concavity changes is called a point of inflection. On this graph, it seems to be at the point (5,4), though some people might imagine it a bit earlier or later. Thus, we say that g is concave down on (–∞,5) and concave up on (5,∞).
To be completely honest, any information obtained by looking at a graph is going to be a rough estimate. Is the local maximum at (2,6), or is it at (2.0003,5.9998)? There is no way to tell the difference. Graphs made up by people, like the ones in this lesson, tend to have everything interesting happen at whole numbers. Graphs formed using real-world data tend to be much less kind.
|
Researchers got their first up-close look at dust from the surface of a small, stony asteroid after the Hayabusa spacecraft scooped some up and brought it back to Earth. Analysis of these dust particles, detailed in a special issue of the journal Science this week, confirms a long-standing suspicion: that the most common meteorites found here on Earth, known as ordinary chondrites, are born from these stony, or S-type, asteroids. And since chondrites are among the most primitive objects in the solar system, the discovery also means that these asteroids have been recording a long and rich history of early solar system events.
The 26 August issue of Science includes six reports and a Perspective article that highlight the initial studies of this asteroid dust.
The Hayabusa spacecraft was launched by the Japan Aerospace Exploration Agency (JAXA) in 2003 to sample the surface of the near-Earth asteroid known as 25143 Itokawa. The unmanned vessel reached its destination a little more than two years later -- and in November 2005, it made two separate touchdowns on the surface of Itokawa. Although its primary sampler malfunctioned, the spacecraft was able to strike the asteroid's surface with an elastic sampling horn and catch the small amount of dust particles that were kicked up. After reentering Earth's atmosphere and landing in South Australia in June 2010, Hayabusa's delicate samples were analyzed extensively by various teams of researchers.
"Science is very excited and pleased to be presenting these important scientific analyses," said Brooks Hanson, Deputy Editor of the Physical Sciences. "The first samples that researchers collected beyond Earth were from the moon, and the first analyses of those samples were also published in Science. Those samples, along with the more recent sampling of a comet and the solar wind, have changed our understanding of the solar system and Earth. They are still yielding important results. These Hayabusa samples are the first samples of an asteroid. Not only do they provide important information about the history of the asteroid Itokawa, but by providing the needed ground truth that is only possible through direct sampling, they also help make other important samples -- like meteorite collections and the lunar samples -- even more useful."
The asteroid sampled by Hayabusa is a rocky, S-type asteroid with the appearance of a rubble pile. Based on observations from the ground, researchers have believed that similar S-type asteroids, generally located in our solar system's inner and middle asteroid belt, are responsible for most of the small meteorites that regularly strike Earth. But the visible spectra of these asteroids had never precisely matched those of ordinary chondrites -- a fact that left the connection in doubt. The only way to confirm a direct relationship between meteorites and these S-type asteroids was to physically sample the regolith from an asteroid's surface.
Tomoki Nakamura from Tohoku University in Sendai, Japan and colleagues from across the country and in the United States were among the first to analyze this regolith brought back by Hayabusa. The team of researchers used a combination of powerful electron microscopes and X-ray diffraction techniques to study the mineral chemistry of Itokawa's dust particles.
"Our study demonstrates that the rocky particles recovered from the S-type asteroid are identical to ordinary chondrites, which proves that asteroids are indeed very primitive solar system bodies," said Nakamura.
The researchers also noticed that Itokawa's regolith has gone through significant heating and impact shocks. Based on its size, they conclude that the asteroid is actually made up of small fragments of a much bigger asteroid.
"The particles recovered from the asteroid have experienced long-term heating at about 800 degrees Celsius," said Nakamura. "But, to reach 800 degrees, an asteroid would need to be about 12.4 miles (20 kilometers) in diameter. The current size of Itokawa is much smaller than that so it must have first formed as a larger body, then been broken by an impact event and reassembled in its current form."
Separate teams of researchers, including Mitsuru Ebihara from Tokyo Metropolitan University and colleagues from the United States and Australia, cut open the tiny regolith grains returned by Hayabusa to get a look at the minerals inside them. Their composition shows that the dust grains have preserved a record of primitive elements from the early solar system. Now, those mineral compositions can be compared to tens of thousands of meteorites that have fallen to Earth, and then correlated to the visible spectra of other asteroids in space.
Akira Tsuchiyama from Osaka University in Toyonaka, Japan and colleagues from around the world also analyzed the three-dimensional structures of the dust particles. Since dust from the surface of the moon is the only other type of extraterrestrial regolith that researchers have been able to sample directly (from the Apollo and Luna missions), these researchers closely compared the two types.
"The cool thing about this Itokawa analysis is the tremendous amount of data we can get from such a small sample," said Michael Zolensky from the NASA Johnson Space Center in Houston, Texas, a co-author of the research. "When researchers analyzed regolith from the moon, they needed kilogram-sized samples. But, for the past 40 years, experts have been developing technologies to analyze extremely small samples. Now, we've gained all this information about Itokawa with only a few nano-grams of dust from the asteroid."
According to the researchers, Itokawa's regolith has been shaped by erosion and surface impacts on the asteroid, whereas lunar regolith, which has spent more time exposed to solar winds and space weathering, has been more chemically altered.
Takaaki Noguchi from Ibaraki University in Mito, Japan, and colleagues cite this chemical difference between the lunar dust and the Itokawa samples as one of the reasons astronomers have never been able to definitively tie ordinary chondrites to S-type asteroids in the past.
"Space weathering is the interaction between the surface of airless bodies, like asteroids and the moon, and the energetic particles in space," said Noguchi. "When these energetic particles -- like solar wind, plasma ejected from the Sun and fast-traveling micrometeoroids -- strike an object, pieces of them condense on the surface of that object. In the vacuum of space, such deposits can create small iron particles that greatly affect the visible spectra of these celestial bodies when they are viewed from Earth."
But now, instead of using lunar samples to estimate the space weathering on an asteroid in the future, researchers can turn to the asteroid regolith for direct insight into such processes.
Two more international studies led by Keisuke Nagao from the University of Tokyo and Hisayoshi Yurimoto from Hokkaido University in Sapporo, Japan, respectively, have determined how long the regolith material has been on the surface of Itokawa and established a direct link between the oxygen isotopes in ordinary chondrites and their parent, S-type asteroids.
According to the researchers, the dust from Itokawa has been on the surface of the asteroid for less than eight million years. They suggest that regolith material from such small asteroids might escape easily into space to become meteorites, traveling toward Earth.
"This dust from the surface of the Itokawa asteroid will become a sort of Rosetta Stone for astronomers to use," according to Zolensky. "Now that we understand the bulk mineral and chemical composition of the Hayabusa sample, we can compare them to meteorites that have struck the Earth and try to determine which asteroids the chondrites came from."
The report by Nakamura et al. received additional support from NASA grants.
The report by Yurimoto et al. received additional support from a Monka-sho grant and the NASA Muses-CN/Hayabusa Program.
The report by Ebihara et al. received additional support from a grant-in-aid defrayed by the Ministry of Education, Culture, Science and Technology of Japan and a grant from NASA.
The report by Noguchi et al. received additional support from the NASA Muses-CN/Hayabusa Program.
The report by Tsuchiyama et al. received additional support from a grant-in-aid of the Japan Ministry of Education, Culture, Sports, Science and Technology and the NASA Muses-CN/Hayabusa Program.
|
Chapter 6: Validity
In everyday language we say that something is valid if it is sound, meaningful, or well grounded on principles or evidence. For example, we speak of a valid theory, a valid argument, or a valid reason. In legal terminology, lawyers say that something is valid if it is “executed with the proper formalities” (Black, 1979), such as a valid contract and a valid will. In each of these instances, people make judgments based on evidence of the meaningfulness or the veracity of something. Similarly, in the language of psychological assessment, validity is a term used in conjunction with the meaningfulness of a test score—what the test score truly means.
The Concept of Validity
Validity, as applied to a test, is a judgment or estimate of how well a test measures what it purports to measure in a particular context. More specifically, it is a judgment based on evidence about the appropriateness of inferences drawn from test scores.1 An inference is a logical result or deduction. Characterizations of the validity of tests and test scores are frequently phrased in terms such as “acceptable” or “weak.” These terms reflect a judgment about how adequately the test measures what it purports to measure.
Inherent in a judgment of an instrument’s validity is a judgment of how useful the instrument is for a particular purpose with a particular population of people. As a shorthand, assessors may refer to a particular test as a “valid test.” However, what is really meant is that the test has been shown to be valid for a particular use with a particular population of testtakers at a particular time. No test or measurement technique is “universally valid” for all time, for all uses, with all types of testtaker populations. Rather, tests may be shown to be valid within what we would characterize as reasonable boundaries of a contemplated usage. If those boundaries are exceeded, the validity of the test may be called into question. Further, to the extent that the validity of a test may diminish as the culture or the times change, the validity of a test may have to be re-established with the same as well as other testtaker populations.
JUST THINK . . .
Why is the phrase valid test sometimes misleading?
Validation is the process of gathering and evaluating evidence about validity. Both the test developer and the test user may play a role in the validation of a test for a specific purpose. It is the test developer’s responsibility to supply validity evidence in the test manual. It may sometimes be appropriate for test users to conduct their own validation studies with their own groups of testtakers. Such local validation studies may yield insights regarding a particular population of testtakers as compared to the norming sample described in a test manual. Local validation studies are absolutely necessary when the test user plans to alter in some way the format, instructions, language, or content of the test. For example, a local validation study would be necessary if the test user sought to transform a nationally standardized test into Braille for administration to blind and visually impaired testtakers. Local validation studies would also be necessary if a test user sought to use a test with a population of testtakers that differed in some significant way from the population on which the test was standardized.
JUST THINK . . .
Local validation studies require professional time and know-how, and they may be costly. For these reasons, they might not be done even if they are desirable or necessary. What would you recommend to a test user who is in no position to conduct such a local validation study but who nonetheless is contemplating the use of a test that requires one?
One way measurement specialists have traditionally conceptualized validity is according to three categories:
1. Content validity. This is a measure of validity based on an evaluation of the subjects, topics, or content covered by the items in the test.
2. Criterion-related validity. This is a measure of validity obtained by evaluating the relationship of scores obtained on the test to scores on other tests or measures (see the computational sketch after this list).
3. Construct validity. This is a measure of validity that is arrived at by executing a comprehensive analysis of
a. how scores on the test relate to other test scores and measures, and
b. how scores on the test can be understood within some theoretical framework for understanding the construct that the test was designed to measure.
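In practice, criterion-related validity evidence is often summarized as a validity coefficient: the correlation between scores on the test and scores on the criterion measure. The following minimal sketch computes Pearson’s r, one common choice of coefficient; the data and the helper name validity_coefficient are hypothetical, not drawn from the text.

```python
from math import sqrt

def validity_coefficient(test_scores, criterion_scores):
    """Pearson correlation between test scores and a criterion measure."""
    n = len(test_scores)
    mean_x = sum(test_scores) / n
    mean_y = sum(criterion_scores) / n
    # Sum of cross-products of deviations, and deviation sums of squares
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(test_scores, criterion_scores))
    sx = sqrt(sum((x - mean_x) ** 2 for x in test_scores))
    sy = sqrt(sum((y - mean_y) ** 2 for y in criterion_scores))
    return sxy / (sx * sy)

# Hypothetical data: screening-test scores and later job-performance ratings
test = [52, 61, 70, 75, 80, 88, 93]
performance = [2.9, 3.1, 3.6, 3.4, 4.0, 4.2, 4.5]
print(f"validity coefficient r = {validity_coefficient(test, performance):.2f}")
```

The closer r is to 1.0, the stronger the evidence that test scores track the criterion; whether the coefficient is interpreted as predictive or concurrent depends on when the criterion data were gathered.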
In this classic conception of validity, referred to as the trinitarian view (Guion, 1980), it might be useful to visualize construct validity as being “umbrella validity” because every other variety of validity falls under it. Why construct validity is the overriding variety of validity will become clear as we discuss what makes a test valid and the methods and procedures used in validation. Indeed, there are many ways of approaching the process of test validation, and these different plans of attack are often referred to as strategies. We speak, for example, of content validation strategies, criterion-related validation strategies, and construct validation strategies.
Trinitarian approaches to validity assessment are not mutually exclusive. That is, each of the three conceptions of validity provides evidence that, with other evidence, contributes to a judgment concerning the validity of a test. Stated another way, all three types of validity evidence contribute to a unified picture of a test’s validity. A test user may not need to know about all three. Depending on the use to which a test is being put, one type of validity evidence may be more relevant than another.
The trinitarian model of validity is not without its critics (Landy, 1986). Messick (1995), for example, condemned this approach as fragmented and incomplete. He called for a unitary view of validity, one that takes into account everything from the implications of test scores in terms of societal values to the consequences of test use. However, even in the so-called unitary view, different elements of validity may come to the fore for scrutiny, and so an understanding of those elements in isolation is necessary.
In this chapter we discuss content validity, criterion-related validity, and construct validity; three now-classic approaches to judging whether a test measures what it purports to measure. Let’s note at the outset that, although the trinitarian model focuses on three types of validity, you are likely to come across other varieties of validity in your readings. For example, you are likely to come across the term ecological validity. You may recall from Chapter 1 that the term ecological momentary assessment (EMA) refers to the in-the-moment and in-the-place evaluation of targeted variables (such as behaviors, cognitions, and emotions) in a natural, naturalistic, or real-life context. In a somewhat similar vein, the term ecological validity refers to a judgment regarding how well a test measures what it purports to measure at the time and place that the variable being measured (typically a behavior, cognition, or emotion) is actually emitted. In essence, the greater the ecological validity of a test or other measurement procedure, the greater the generalizability of the measurement results to particular real-life circumstances.
Part of the appeal of EMA is that it does not have the limitations of retrospective self-report. Studies of the ecological validity of many tests or other assessment procedures are conducted in a natural (or naturalistic) environment, which is identical or similar to the environment in which a targeted behavior or other variable might naturally occur (see, for example, Courvoisier et al., 2012; Lewinski et al., 2014; Lo et al., 2015). However, in some cases, owing to the nature of the particular variable under study, such research may be retrospective in nature (see, for example, the 2014 Weems et al. study of memory for traumatic events).
Other validity-related terms that you will come across in the psychology literature are predictive validity and concurrent validity. We discuss these terms later in this chapter in the context of criterion-related validity. Yet another term you may come across is face validity (see Figure 6–1). In fact, you will come across that term right now . . .
Figure 6–1 Face Validity and Comedian Rodney Dangerfield Rodney Dangerfield (1921–2004) was famous for complaining, “I don’t get no respect.” Somewhat analogously, the concept of face validity has been described as the “Rodney Dangerfield of psychometric variables” because it has “received little attention—and even less respect—from researchers examining the construct validity of psychological tests and measures” (Bornstein et al., 1994, p. 363). By the way, the tombstone of this beloved stand-up comic and film actor reads: “Rodney Dangerfield . . . There goes the neighborhood.”© Arthur Schatz/The Life Images Collection/Getty Images
Face validity relates more to what a test appears to measure to the person being tested than to what the test actually measures. Face validity is a judgment concerning how relevant the test items appear to be. Stated another way, if a test definitely appears to measure what it purports to measure “on the face of it,” then it could be said to be high in face validity. A paper-and-pencil personality test labeled The Introversion/Extraversion Test, with items that ask respondents whether they have acted in an introverted or an extraverted way in particular situations, may be perceived by respondents as a highly face-valid test. On the other hand, a personality test in which respondents are asked to report what they see in inkblots may be perceived as a test with low face validity. Many respondents would be left wondering how what they said they saw in the inkblots really had anything at all to do with personality.
In contrast to judgments about the reliability of a test and judgments about the content, construct, or criterion-related validity of a test, judgments about face validity are frequently thought of from the perspective of the testtaker, not the test user. A test’s lack of face validity could contribute to a lack of confidence in the perceived effectiveness of the test—with a consequential decrease in the testtaker’s cooperation or motivation to do his or her best. In a corporate environment, lack of face validity may lead to unwillingness of administrators or managers to “buy-in” to the use of a particular test (see this chapter’s Meet an Assessment Professional). In a similar vein, parents may object to having their children tested with instruments that lack ostensible validity. Such concern might stem from a belief that the use of such tests will result in invalid conclusions.
MEET AN ASSESSMENT PROFESSIONAL
Meet Dr. Adam Shoemaker
In the “real world,” tests require buy-in from test administrators and candidates. While the reliability and validity of the test are always of primary importance, the test process can be short-circuited by administrators who don’t know how to use the test or who don’t have a good understanding of test theory. So at least half the battle of implementing a new testing tool is to make sure administrators know how to use it, accept the way that it works, and feel comfortable that it is tapping the skills and abilities necessary for the candidate to do the job.
Here’s an example: Early in my company’s history of using online assessments, we piloted a test that had acceptable reliability and criterion validity. We saw some strongly significant correlations between scores on the test and objective performance numbers, suggesting that this test did a good job of distinguishing between high and low performers on the job. The test proved to be unbiased and showed no demonstrable adverse impact against minority groups. However, very few test administrators felt comfortable using the assessment because most people felt that the skills that it tapped were not closely related to the skills needed for the job. Legally, ethically, and statistically, we were on firm ground, but we could never fully achieve “buy-in” from the people who had to administer the test.
On the other hand, we also piloted a test that showed very little criterion validity at all. There were no significant correlations between scores on the test and performance outcomes; the test was unable to distinguish between a high and a low performer. Still . . . the test administrators loved this test because it “looked” so much like the job. That is, it had high face validity and tapped skills that seemed to be precisely the kinds of skills that were needed on the job. From a legal, ethical, and statistical perspective, we knew we could not use this test to select employees, but we continued to use it to provide a “realistic job preview” to candidates. That way, the test continued to work for us in really showing candidates that this was the kind of thing they would be doing all day at work. More than a few times, candidates voluntarily withdrew from the process because they had a better understanding of what the job involved long before they even sat down at a desk.
Adam Shoemaker, Ph.D., Human Resources Consultant for Talent Acquisition, Tampa, Florida © Adam Shoemaker
The moral of this story is that as scientists, we have to remember that reliability and validity are super important in the development and implementation of a test . . . but as human beings, we have to remember that the test we end up using must also be easy to use and appear face valid for both the candidate and the administrator.
Read more of what Dr. Shoemaker had to say—his complete essay—through the Instructor Resources within Connect.
Used with permission of Adam Shoemaker.
JUST THINK . . .
What is the value of face validity from the perspective of the test user?
In reality, a test that lacks face validity may still be relevant and useful. However, if the test is not perceived as relevant and useful by testtakers, parents, legislators, and others, then negative consequences may result. These consequences may range from poor testtaker attitude to lawsuits filed by disgruntled parties against a test user and test publisher. Ultimately, face validity may be more a matter of public relations than psychometric soundness. Still, it is important nonetheless, and (much like Rodney Dangerfield) deserving of respect.
Content validity describes a judgment of how adequately a test samples behavior representative of the universe of behavior that the test was designed to sample. For example, the universe of behavior referred to as assertive is very wide-ranging. A content-valid, paper-and-pencil test of assertiveness would be one that is adequately representative of this wide range. We might expect that such a test would contain items sampling from hypothetical situations at home (such as whether the respondent has difficulty in making her or his views known to fellow family members), on the job (such as whether the respondent has difficulty in asking subordinates to do what is required of them), and in social situations (such as whether the respondent would send back a steak not done to order in a fancy restaurant). Ideally, test developers have a clear (as opposed to “fuzzy”) vision of the construct being measured, and the clarity of this vision can be reflected in the content validity of the test (Haynes et al., 1995). In the interest of ensuring content validity, test developers strive to include key components of the construct targeted for measurement, and exclude content irrelevant to the construct targeted for measurement.
With respect to educational achievement tests, it is customary to consider a test a content-valid measure when the proportion of material covered by the test approximates the proportion of material covered in the course. A cumulative final exam in introductory statistics would be considered content-valid if the proportion and type of introductory statistics problems on the test approximates the proportion and type of introductory statistics problems presented in the course.
The early stages of a test being developed for use in the classroom—be it one classroom or those throughout the state or the nation—typically entail research exploring the universe of possible instructional objectives for the course. Included among the many possible sources of information on such objectives are course syllabi, course textbooks, teachers of the course, specialists who develop curricula, and professors and supervisors who train teachers in the particular subject area. From the pooled information (along with the judgment of the test developer), there emerges a test blueprint for the “structure” of the evaluation—that is, a plan regarding the types of information to be covered by the items, the number of items tapping each area of coverage, the organization of the items in the test, and so forth (see Figure 6–2). In many instances the test blueprint represents the culmination of efforts to adequately sample the universe of content areas that conceivably could be sampled in such a test.2
Figure 6–2 Building a Test from a Test Blueprint An architect’s blueprint usually takes the form of a technical drawing or diagram of a structure, sometimes written in white lines on a blue background. The blueprint may be thought of as a plan of a structure, typically detailed enough so that the structure could actually be constructed from it. Somewhat comparable to the architect’s blueprint is the test blueprint of a test developer. Seldom, if ever, on a blue background and written in white, it is nonetheless a detailed plan of the content, organization, and quantity of the items that a test will contain—sometimes complete with “weightings” of the content to be covered (He, 2011; Spray & Huang, 2000; Sykes & Hou, 2003). A test administered on a regular basis may require “item-pool management” to manage the creation of new items and the output of old items in a manner that is consistent with the test’s blueprint (Ariel et al., 2006; van der Linden et al., 2000). © John Rowley/Getty Images RF
JUST THINK . . .
A test developer is working on a brief screening instrument designed to predict student success in a psychological testing and assessment course. You are the consultant called upon to blueprint the content areas covered. Your recommendations?
For an employment test to be content-valid, its content must be a representative sample of the job-related skills required for employment. Behavioral observation is one technique frequently used in blueprinting the content areas to be covered in certain types of employment tests. The test developer will observe successful veterans on that job, note the behaviors necessary for success on the job, and design the test to include a representative sample of those behaviors. Those same workers (as well as their supervisors and others) may subsequently be called on to act as experts or judges in rating the degree to which the content of the test is a representative sample of the required job-related skills. At that point, the test developer will want to know about the extent to which the experts or judges agree. A description of one such method for quantifying the degree of agreement between such raters can be found “online only” through the Instructor Resources within Connect (refer to OOBAL-6-B2).
Culture and the relativity of content validity
Tests are often thought of as either valid or not valid. A history test, for example, either does or does not accurately measure one’s knowledge of historical fact. However, it is also true that what constitutes historical fact depends to some extent on who is writing the history. Consider, for example, a momentous event in the history of the world, one that served as a catalyst for World War I. Archduke Franz Ferdinand was assassinated on June 28, 1914, by a Serb named Gavrilo Princip (Figure 6–3). Now think about how you would answer the following multiple-choice item on a history test:
Figure 6–3 Cultural Relativity, History, and Test Validity Austro-Hungarian Archduke Franz Ferdinand and his wife, Sophia, are pictured (left) as they left Sarajevo’s City Hall on June 28, 1914. Moments later, Ferdinand would be assassinated by Gavrilo Princip, shown in custody at right. The killing served as a catalyst for World War I and is discussed and analyzed in history textbooks in every language around the world. Yet descriptions of the assassin Princip in those textbooks—and ability test items based on those descriptions—vary as a function of culture. © Ingram Publishing RF
Gavrilo Princip was
a. a poet
b. a hero
c. a terrorist
d. a nationalist
e. all of the above
For various textbooks in the Bosnian region of the world, choice “e”—that’s right, “all of the above”—is the “correct” answer. Hedges (1997) observed that textbooks in areas of Bosnia and Herzegovina that were controlled by different ethnic groups imparted widely varying characterizations of the assassin. In the Serb-controlled region of the country, history textbooks—and presumably the tests constructed to measure students’ learning—regarded Princip as a “hero and poet.” By contrast, Croatian students might read that Princip was an assassin trained to commit a terrorist act. Muslims in the region were taught that Princip was a nationalist whose deed sparked anti-Serbian rioting.
JUST THINK . . .
The passage of time sometimes serves to place historical figures in a different light. How might the textbook descriptions of Gavrilo Princip have changed in these regions?
A history test considered valid in one classroom, at one time, and in one place will not necessarily be considered so in another classroom, at another time, and in another place. Consider a test containing the true-false item, “Colonel Claus von Stauffenberg is a hero.” Such an item is useful in illustrating the cultural relativity affecting item scoring. In 1944, von Stauffenberg, a German officer, was an active participant in a bomb plot to assassinate Germany’s leader, Adolf Hitler. When the plot (popularized in the film Operation Valkyrie) failed, von Stauffenberg was executed and promptly vilified in Germany as a despicable traitor. Today, the light of history shines favorably on von Stauffenberg, and he is perceived as a hero in Germany. A German postage stamp with his face on it was issued to honor von Stauffenberg’s 100th birthday.
Politics is another factor that may well play a part in perceptions and judgments concerning the validity of tests and test items. In many countries throughout the world, a response that is keyed incorrect to a particular test item can lead to consequences far more dire than a deduction in points towards the total test score. Sometimes, even constructing a test with a reference to a taboo topic can have dire consequences for the test developer. For example, one Palestinian professor who included items pertaining to governmental corruption on an examination was tortured by authorities as a result (“Brother Against Brother,” 1997). Such scenarios bring new meaning to the term politically correct as it applies to tests, test items, and testtaker responses.
JUST THINK . . .
Commercial test developers who publish widely used history tests must maintain the content validity of their tests. What challenges do they face in doing so?
Criterion-related validity is a judgment of how adequately a test score can be used to infer an individual’s most probable standing on some measure of interest—the measure of interest being the criterion. Two types of validity evidence are subsumed under the heading criterion-related validity. Concurrent validity is an index of the degree to which a test score is related to some criterion measure obtained at the same time (concurrently). Predictive validity is an index of the degree to which a test score predicts some criterion measure. Before we discuss each of these types of validity evidence in detail, it seems appropriate to raise (and answer) an important question.
What Is a Criterion?
We were first introduced to the concept of a criterion in Chapter 4, where, in the context of defining criterion-referenced assessment, we defined a criterion broadly as a standard on which a judgment or decision may be based. Here, in the context of our discussion of criterion-related validity, we will define a criterion just a bit more narrowly as the standard against which a test or a test score is evaluated. So, for example, if a test purports to measure the trait of athleticism, we might expect to employ “membership in a health club” or any generally accepted measure of physical fitness as a criterion in evaluating whether the athleticism test truly measures athleticism. Operationally, a criterion can be almost anything: pilot performance in flying a Boeing 767, grade on an examination in Advanced Hairweaving, number of days spent in psychiatric hospitalization; the list is endless. There are no hard-and-fast rules for what constitutes a criterion. It can be a test score, a specific behavior or group of behaviors, an amount of time, a rating, a psychiatric diagnosis, a training cost, an index of absenteeism, an index of alcohol intoxication, and so on. Whatever the criterion, ideally it is relevant, valid, and uncontaminated. Let’s explain.
Characteristics of a criterion
An adequate criterion is relevant. By this we mean that it is pertinent or applicable to the matter at hand. We would expect, for example, a test purporting to advise testtakers whether they share the same interests as successful actors to have been validated using the interests of successful actors as a criterion.
An adequate criterion measure must also be valid for the purpose for which it is being used. If one test (X) is being used as the criterion to validate a second test (Y), then evidence should exist that test X is valid. If the criterion used is a rating made by a judge or a panel, then evidence should exist that the rating is valid. Suppose, for example, that a test purporting to measure depression is said to have been validated using as a criterion the diagnoses made by a blue-ribbon panel of psychodiagnosticians. A test user might wish to probe further regarding variables such as the credentials of the “blue-ribbon panel” (that is, their educational background, training, and experience) and the actual procedures used to validate a diagnosis of depression. Answers to such questions would help address the issue of whether the criterion (in this case, the diagnoses made by panel members) was indeed valid.
Ideally, a criterion is also uncontaminated. Criterion contamination is the term applied to a criterion measure that has been based, at least in part, on predictor measures. As an example, consider a hypothetical “Inmate Violence Potential Test” (IVPT) designed to predict a prisoner’s potential for violence in the cell block. In part, this evaluation entails ratings from fellow inmates, guards, and other staff in order to come up with a number that represents each inmate’s violence potential. After all of the inmates in the study have been given scores on this test, the study authors then attempt to validate the test by asking guards to rate each inmate on their violence potential. Because the guards’ opinions were used to formulate the inmate’s test score in the first place (the predictor variable), the guards’ opinions cannot be used as a criterion against which to judge the soundness of the test. If the guards’ opinions were used both as a predictor and as a criterion, then we would say that criterion contamination had occurred.
Here is another example of criterion contamination. Suppose that a team of researchers from a company called Ventura International Psychiatric Research (VIPR) just completed a study of how accurately a test called the MMPI-2-RF predicted psychiatric diagnosis in the psychiatric population of the Minnesota state hospital system. As we will see in Chapter 12, the MMPI-2-RF is, in fact, a widely used test. In this study, the predictor is the MMPI-2-RF, and the criterion is the psychiatric diagnosis that exists in the patient’s record. Further, let’s suppose that while all the data are being analyzed at VIPR headquarters, someone informs these researchers that the diagnosis for every patient in the Minnesota state hospital system was determined, at least in part, by an MMPI-2-RF test score. Should they still proceed with their analysis? The answer is no. Because the predictor measure has contaminated the criterion measure, it would be of little value to find, in essence, that the predictor can indeed predict itself.
When criterion contamination does occur, the results of the validation study cannot be taken seriously. There are no methods or statistics to gauge the extent to which criterion contamination has taken place, and there are no methods or statistics to correct for such contamination.
Now, let’s take a closer look at concurrent validity and predictive validity.
If test scores are obtained at about the same time as the criterion measures are obtained, measures of the relationship between the test scores and the criterion provide evidence of concurrent validity. Statements of concurrent validity indicate the extent to which test scores may be used to estimate an individual’s present standing on a criterion. If, for example, scores (or classifications) made on the basis of a psychodiagnostic test were to be validated against a criterion of already diagnosed psychiatric patients, then the process would be one of concurrent validation. In general, once the validity of the inference from the test scores is established, the test may provide a faster, less expensive way to offer a diagnosis or a classification decision. A test with satisfactorily demonstrated concurrent validity may therefore be appealing to prospective users because it holds out the potential of savings of money and professional time.
Sometimes the concurrent validity of a particular test (let’s call it Test A) is explored with respect to another test (we’ll call it Test B). In such studies, prior research has satisfactorily demonstrated the validity of Test B, so the question becomes: “How well does Test A compare with Test B?” Here, Test B is used as the validating criterion. In some studies, Test A is either a brand-new test or a test being used for some new purpose, perhaps with a new population.
Here is a real-life example of a concurrent validity study in which a group of researchers explored whether a test validated for use with adults could be used with adolescents. The Beck Depression Inventory (BDI; Beck et al., 1961, 1979; Beck & Steer, 1993) and its revision, the Beck Depression Inventory-II (BDI-II; Beck et al., 1996) are self-report measures used to identify symptoms of depression and quantify their severity. Although the BDI had been widely used with adults, questions were raised regarding its appropriateness for use with adolescents. Ambrosini et al. (1991) conducted a concurrent validity study to explore the utility of the BDI with adolescents. They also sought to determine if the test could successfully differentiate patients with depression from those without depression in a population of adolescent outpatients. Diagnoses generated from the concurrent administration of an instrument previously validated for use with adolescents were used as the criterion validators. The findings suggested that the BDI is valid for use with adolescents.
JUST THINK . . .
What else might these researchers have done to explore the utility of the BDI with adolescents?
We now turn our attention to another form of criterion validity, one in which the criterion measure is obtained not concurrently but at some future time.
Test scores may be obtained at one time and the criterion measures obtained at a future time, usually after some intervening event has taken place. The intervening event may take varied forms, such as training, experience, therapy, medication, or simply the passage of time. Measures of the relationship between the test scores and a criterion measure obtained at a future time provide an indication of the predictive validity of the test; that is, how accurately scores on the test predict some criterion measure. Measures of the relationship between college admissions tests and freshman grade point averages, for example, provide evidence of the predictive validity of the admissions tests.
In settings where tests might be employed—such as a personnel agency, a college admissions office, or a warden’s office—a test’s high predictive validity can be a useful aid to decision-makers who must select successful students, productive workers, or good parole risks. Whether a test result is valuable in decision making depends on how well the test results improve selection decisions over decisions made without knowledge of test results. In an industrial setting where volume turnout is important, if the use of a personnel selection test can enhance productivity to even a small degree, then that enhancement will pay off year after year and may translate into millions of dollars of increased revenue. And in a clinical context, no price could be placed on a test that could save lives from suicide by providing predictive accuracy over and above that of existing tests with respect to such acts. Unfortunately, the difficulties inherent in developing such tests are numerous and multifaceted (Mulvey & Lidz, 1984; Murphy, 1984; Petrie & Chamberlain, 1985). When evaluating the predictive validity of a test, researchers must take into consideration the base rate of the occurrence of the variable in question, both as that variable exists in the general population and as it exists in the sample being studied. Generally, a base rate is the extent to which a particular trait, behavior, characteristic, or attribute exists in the population (expressed as a proportion). In psychometric parlance, a hit rate may be defined as the proportion of people a test accurately identifies as possessing or exhibiting a particular trait, behavior, characteristic, or attribute. For example, hit rate could refer to the proportion of people accurately predicted to be able to perform work at the graduate school level or to the proportion of neurological patients accurately identified as having a brain tumor. In like fashion, a miss rate may be defined as the proportion of people the test fails to identify as having, or not having, a particular characteristic or attribute. Here, a miss amounts to an inaccurate prediction. The category of misses may be further subdivided. A false positive is a miss wherein the test predicted that the testtaker did possess the particular characteristic or attribute being measured when in fact the testtaker did not. A false negative is a miss wherein the test predicted that the testtaker did not possess the particular characteristic or attribute being measured when the testtaker actually did.
One approach to evaluating the predictive validity of a test is to administer the test to a sample of research subjects in which approximately half of the subjects possess or exhibit the targeted attribute and the other half do not. Evaluating the predictive validity of a test is essentially a matter of evaluating the extent to which use of the test results in an acceptable hit rate.
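To make these terms concrete, here is a minimal Python sketch that tallies hits and misses from hypothetical test classifications. The data and the 1/0 coding convention are invented for this illustration and do not come from any actual study:

```python
# Hypothetical illustration: tallying hits and misses for a screening test.
# The data and coding convention (1 = attribute present) are invented.

actual    = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]  # 1 = actually possesses the attribute
predicted = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0]  # 1 = test predicts the attribute

true_pos  = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
true_neg  = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
false_pos = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)  # predicted "yes," actually "no"
false_neg = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # predicted "no," actually "yes"

n = len(actual)
base_rate = sum(actual) / n            # proportion of the sample possessing the attribute
hit_rate  = (true_pos + true_neg) / n  # proportion of accurate identifications
miss_rate = (false_pos + false_neg) / n

print(f"base rate = {base_rate:.2f}, hit rate = {hit_rate:.2f}, miss rate = {miss_rate:.2f}")
print(f"false positives = {false_pos}, false negatives = {false_neg}")
```

With these invented numbers, the sketch reports a hit rate of .80 and splits the two misses into one false positive and one false negative, mirroring the definitions above.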
Judgments of criterion-related validity, whether concurrent or predictive, are based on two types of statistical evidence: the validity coefficient and expectancy data.
The validity coefficient
The validity coefficient is a correlation coefficient that provides a measure of the relationship between test scores and scores on the criterion measure. The correlation coefficient computed from a score (or classification) on a psychodiagnostic test and the criterion score (or classification) assigned by psychodiagnosticians is one example of a validity coefficient. Typically, the Pearson correlation coefficient is used to gauge the relationship between the two measures. However, depending on variables such as the type of data, the sample size, and the shape of the distribution, other correlation coefficients could be used. For example, in correlating self-rankings of performance on some job with rankings made by job supervisors, the formula for the Spearman rho rank-order correlation would be employed.
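As a concrete illustration, the following brief Python sketch computes a validity coefficient from invented test scores and invented criterion ratings; the scipy library is assumed to be available, and nothing here is specific to any published test:

```python
# A minimal sketch: correlating hypothetical test scores with hypothetical
# criterion ratings. All numbers below are invented for illustration.
from scipy import stats

test_scores      = [12, 15, 9, 20, 17, 11, 14, 18]            # predictor
criterion_scores = [3.1, 3.4, 2.2, 4.0, 3.6, 2.8, 3.0, 3.9]   # e.g., supervisor ratings

# Pearson r for interval-level data
r, p = stats.pearsonr(test_scores, criterion_scores)
print(f"Pearson validity coefficient r = {r:.2f} (p = {p:.3f})")

# Spearman rho for ranked data (e.g., self-rankings vs. supervisor rankings)
rho, p_rho = stats.spearmanr(test_scores, criterion_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
```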
Like the reliability coefficient and other correlational measures, the validity coefficient is affected by restriction or inflation of range. And as in other correlational studies, a key issue is whether the range of scores employed is appropriate to the objective of the correlational analysis. In situations where, for example, attrition in the number of subjects has occurred over the course of the study, the validity coefficient may be adversely affected.
The problem of restricted range can also occur through a self-selection process in the sample employed for the validation study. Thus, for example, if the test purports to measure something as technical or as dangerous as oil-barge firefighting skills, it may well be that the only people who reply to an ad for the position of oil-barge firefighter are those who are actually highly qualified for the position. Accordingly, the range of the distribution of scores on this test of oil-barge firefighting skills would be restricted. For less technical or dangerous positions, a self-selection factor might be operative if the test developer selects a group of newly hired employees to test (with the expectation that criterion measures will be available for this group at some subsequent date). However, because the newly hired employees have probably already passed some formal or informal evaluation in the process of being hired, there is a good chance that ability to do the job will be higher among this group than among a random sample of ordinary job applicants. Consequently, scores on the criterion measure that is later administered will tend to be higher than scores on the criterion measure obtained from a random sample of ordinary job applicants. Stated another way, the scores will be restricted in range.
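The effect of restriction of range can be demonstrated with a short simulation. In the Python sketch below (all parameters are invented, and numpy is assumed to be available), a fixed predictor-criterion relationship is built into the data; the correlation is then computed once in the full applicant pool and once among only the top scorers, as if only they had been hired:

```python
# A small simulation of restriction of range: correlate predictor and
# criterion in a full applicant pool, then only among "hired" applicants.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
predictor = rng.normal(0, 1, n)
criterion = 0.6 * predictor + rng.normal(0, 0.8, n)   # a true relationship is built in

r_full = np.corrcoef(predictor, criterion)[0, 1]

hired = predictor > np.percentile(predictor, 70)      # only the top 30% are "hired"
r_restricted = np.corrcoef(predictor[hired], criterion[hired])[0, 1]

print(f"r in full pool = {r_full:.2f}; r among hires only = {r_restricted:.2f}")
# The second coefficient is noticeably smaller, even though the underlying
# predictor-criterion relationship has not changed.
```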
Whereas it is the responsibility of the test developer to report validation data in the test manual, it is the responsibility of test users to read carefully the description of the validation study and then to evaluate the suitability of the test for their specific purposes. What were the characteristics of the sample used in the validation study? How matched are those characteristics to the people for whom an administration of the test is contemplated? For a specific test purpose, are some subtests of a test more appropriate than the entire test?
How high should a validity coefficient be for a user or a test developer to infer that the test is valid? There are no rules for determining the minimum acceptable size of a validity coefficient. In fact, Cronbach and Gleser (1965) cautioned against the establishment of such rules. They argued that validity coefficients need to be large enough to enable the test user to make accurate decisions within the unique context in which a test is being used. Essentially, the validity coefficient should be high enough to result in the identification and differentiation of testtakers with respect to target attribute(s), such as employees who are likely to be more productive, police officers who are less likely to misuse their weapons, and students who are more likely to be successful in a particular course of study.
Test users involved in predicting some criterion from test scores are often interested in the utility of multiple predictors. The value of including more than one predictor depends on a couple of factors. First, of course, each measure used as a predictor should have criterion-related predictive validity. Second, additional predictors should possess incremental validity, defined here as the degree to which an additional predictor explains something about the criterion measure that is not explained by predictors already in use.
Incremental validity may be used when predicting something like academic success in college. Grade point average (GPA) at the end of the first year may be used as a measure of academic success. A study of potential predictors of GPA may reveal that time spent in the library and time spent studying are highly correlated with GPA. How much sleep a student’s roommate allows the student to have during exam periods correlates with GPA to a smaller extent. What is the most accurate but most efficient way to predict GPA? One approach, employing the principles of incremental validity, is to start with the best predictor: the predictor that is most highly correlated with GPA. This may be time spent studying. Then, using multiple regression techniques, one would examine the usefulness of the other predictors.
Even though time in the library is highly correlated with GPA, it may not possess incremental validity if it overlaps too much with the first predictor, time spent studying. Said another way, if time spent studying and time in the library are so highly correlated with each other that they reflect essentially the same thing, then only one of them needs to be included as a predictor. Including both predictors will provide little new information. By contrast, the variable of how much sleep a student’s roommate allows the student to have during exams may have good incremental validity. This is so because it reflects a different aspect of preparing for exams (resting) from the first predictor (studying). Incremental validity has been used to improve the prediction of job performance for Marine Corps mechanics (Carey, 1994) and the prediction of child abuse (Murphy-Berman, 1994). In both instances, predictor measures were included only if they demonstrated that they could explain something about the criterion measure that was not already known from the other predictors.
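The logic of incremental validity can be made concrete with a small hierarchical-regression sketch in Python. The data below are simulated to mirror the GPA example; the variable names and effect sizes are our own invention, not findings from any study:

```python
# A sketch of gauging incremental validity: does "roommate-permitted sleep"
# add to the prediction of GPA beyond "time spent studying"? Simulated data.
import numpy as np

rng = np.random.default_rng(1)
n = 200
studying = rng.normal(20, 5, n)             # hours per week
library  = studying + rng.normal(0, 1, n)   # nearly redundant with studying
sleep    = rng.normal(7, 1, n)              # a distinct aspect of exam preparation
gpa      = 0.05 * studying + 0.15 * sleep + rng.normal(0, 0.3, n)

def r_squared(predictors, y):
    """Proportion of criterion variance explained by a least-squares fit."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1 - resid.var() / y.var()

print(f"studying alone:        R^2 = {r_squared([studying], gpa):.3f}")
print(f"+ library (redundant): R^2 = {r_squared([studying, library], gpa):.3f}")
print(f"+ sleep (incremental): R^2 = {r_squared([studying, sleep], gpa):.3f}")
# Adding the redundant predictor barely moves R^2; adding the distinct
# predictor raises it, which is the essence of incremental validity.
```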
Construct validity is a judgment about the appropriateness of inferences drawn from test scores regarding individual standings on a variable called a construct. A construct is an informed, scientific idea developed or hypothesized to describe or explain behavior. Intelligence is a construct that may be invoked to describe why a student performs well in school. Anxiety is a construct that may be invoked to describe why a psychiatric patient paces the floor. Other examples of constructs are job satisfaction, personality, bigotry, clerical aptitude, depression, motivation, self-esteem, emotional adjustment, potential dangerousness, executive potential, creativity, and mechanical comprehension, to name but a few.
Constructs are unobservable, presupposed (underlying) traits that a test developer may invoke to describe test behavior or criterion performance. The researcher investigating a test’s construct validity must formulate hypotheses about the expected behavior of high scorers and low scorers on the test. These hypotheses give rise to a tentative theory about the nature of the construct the test was designed to measure. If the test is a valid measure of the construct, then high scorers and low scorers will behave as predicted by the theory. If high scorers and low scorers on the test do not behave as predicted, the investigator will need to reexamine the nature of the construct itself or hypotheses made about it. One possible reason for obtaining results contrary to those predicted by the theory is that the test simply does not measure the construct. An alternative explanation could lie in the theory that generated hypotheses about the construct. The theory may need to be reexamined.
In some instances, the reason for obtaining contrary findings can be traced to the statistical procedures used or to the way the procedures were executed. One procedure may have been more appropriate than another, given the particular assumptions. Thus, although confirming evidence contributes to a judgment that a test is a valid measure of a construct, evidence to the contrary can also be useful. Contrary evidence can provide a stimulus for the discovery of new facets of the construct as well as alternative methods of measurement.
Traditionally, construct validity has been viewed as the unifying concept for all validity evidence (American Educational Research Association et al., 1999). As we noted at the outset, all types of validity evidence, including evidence from the content- and criterion-related varieties of validity, come under the umbrella of construct validity. Let’s look at the types of evidence that might be gathered.
Evidence of Construct Validity
A number of procedures may be used to provide different kinds of evidence that a test has construct validity. The various techniques of construct validation may provide evidence, for example, that
· the test is homogeneous, measuring a single construct;
· test scores increase or decrease as a function of age, the passage of time, or an experimental manipulation as theoretically predicted;
· test scores obtained after some event or the mere passage of time (or, posttest scores) differ from pretest scores as theoretically predicted;
· test scores obtained by people from distinct groups vary as predicted by the theory;
· test scores correlate with scores on other tests in accordance with what would be predicted from a theory that covers the manifestation of the construct in question.
A brief discussion of each type of construct validity evidence and the procedures used to obtain it follows.
Evidence of homogeneity
When describing a test and its items, homogeneity refers to how uniform a test is in measuring a single concept. A test developer can increase test homogeneity in several ways. Consider, for example, a test of academic achievement that contains subtests in areas such as mathematics, spelling, and reading comprehension. The Pearson r could be used to correlate average subtest scores with the average total test score. Subtests that in the test developer’s judgment do not correlate very well with the test as a whole might have to be reconstructed (or eliminated) lest the test not measure the construct academic achievement. Correlations between subtest scores and total test score are generally reported in the test manual as evidence of homogeneity.
One way a test developer can improve the homogeneity of a test containing items that are scored dichotomously (such as a true-false test) is by eliminating items that do not show significant correlation coefficients with total test scores. If all test items show significant, positive correlations with total test scores and if high scorers on the test tend to pass each item more than low scorers do, then each item is probably measuring the same construct as the total test. Each item is contributing to test homogeneity.
The homogeneity of a test in which items are scored on a multipoint scale can also be improved. For example, some attitude and opinion questionnaires require respondents to indicate level of agreement with specific statements by responding, for example, strongly agree, agree, disagree, or strongly disagree. Each response is assigned a numerical score, and items that do not show significant Spearman rank-order correlation coefficients are eliminated. If all test items show significant, positive correlations with total test scores, then each item is most likely measuring the same construct that the test as a whole is measuring (and is thereby contributing to the test’s homogeneity). Coefficient alpha may also be used in estimating the homogeneity of a test composed of multiple-choice items (Novick & Lewis, 1967).
As a case study illustrating how a test’s homogeneity can be improved, consider the Marital Satisfaction Scale (MSS; Roach et al., 1981). Designed to assess various aspects of married people’s attitudes toward their marital relationship, the MSS contains an approximately equal number of items expressing positive and negative sentiments with respect to marriage. For example, “My life would seem empty without my marriage” and “My marriage has ‘smothered’ my personality.” In one stage of the development of this test, subjects indicated how much they agreed or disagreed with the various sentiments in each of 73 items by marking a 5-point scale that ranged from strongly agree to strongly disagree. Based on the correlations between item scores and total score, the test developers elected to retain 48 items with correlation coefficients greater than .50, thus creating a more homogeneous instrument.
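A rough Python sketch of this kind of item-total analysis follows. The five-item response matrix is invented, and the .50 retention threshold simply echoes the MSS example above; coefficient alpha is computed as a companion index of internal consistency:

```python
# A minimal item-total homogeneity check with invented data:
# rows = respondents, columns = items scored on a 1-5 agreement scale.
import numpy as np

responses = np.array([
    [5, 4, 5, 2, 4],
    [4, 4, 5, 3, 5],
    [2, 1, 2, 4, 1],
    [3, 2, 3, 3, 2],
    [5, 5, 4, 1, 5],
    [1, 2, 1, 5, 2],
])

total = responses.sum(axis=1)
for i in range(responses.shape[1]):
    r = np.corrcoef(responses[:, i], total)[0, 1]
    verdict = "retain" if r > 0.50 else "drop/reexamine"
    print(f"item {i + 1}: item-total r = {r:+.2f} -> {verdict}")
# Item 4 in these invented data correlates negatively with the total,
# the kind of item a developer would eliminate or reexamine.

# Coefficient alpha as an overall index of internal consistency:
k = responses.shape[1]
alpha = (k / (k - 1)) * (1 - responses.var(axis=0, ddof=1).sum() / total.var(ddof=1))
print(f"coefficient alpha = {alpha:.2f}")
```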
Item-analysis procedures have also been employed in the quest for test homogeneity. One item-analysis procedure focuses on the relationship between testtakers’ scores on individual items and their score on the entire test. Each item is analyzed with respect to how high scorers versus low scorers responded to it. If it is an academic test and if high scorers on the entire test for some reason tended to get that particular item wrong while low scorers on the test as a whole tended to get the item right, the item is obviously not a good one. The item should be eliminated in the interest of test homogeneity, among other considerations. If the test is one of marital satisfaction, and if individuals who score high on the test as a whole respond to a particular item in a way that indicates dissatisfaction whereas individuals who score low on the test as a whole respond to that item in a way that indicates satisfaction, then again the item should probably be eliminated or at least reexamined for clarity.
JUST THINK . . .
Is it possible for a test to be too homogeneous in item content?
Although test homogeneity is desirable because it assures us that all the items on the test tend to be measuring the same thing, it is not the be-all and end-all of construct validity. Knowing that a test is homogeneous contributes no information about how the construct being measured relates to other constructs. It is therefore important to report evidence of a test’s homogeneity along with other evidence of construct validity.
Evidence of changes with age
Some constructs are expected to change over time. Reading rate, for example, tends to increase dramatically year by year from age 6 to the early teens. If a test score purports to be a measure of a construct that could be expected to change over time, then the test score, too, should show the same progressive changes with age to be considered a valid measure of the construct. For example, if children in grades 6, 7, 8, and 9 took a test of eighth-grade vocabulary, then we would expect that the total number of items scored as correct from all the test protocols would increase as a function of the higher grade level of the testtakers.
Some constructs lend themselves more readily than others to predictions of change over time. Thus, although we may be able to predict that a gifted child’s scores on a test of reading skills will increase over the course of the testtaker’s years of elementary and secondary education, we may not be able to predict with such confidence how a newlywed couple will score through the years on a test of marital satisfaction. This fact does not relegate a construct such as marital satisfaction to a lower stature than reading ability. Rather, it simply means that measures of marital satisfaction may be less stable over time or more vulnerable to situational events (such as in-laws coming to visit and refusing to leave for three months) than is reading ability. Evidence of change over time, like evidence of test homogeneity, does not in itself provide information about how the construct relates to other constructs.
Evidence of pretest–posttest changes
Evidence that test scores change as a result of some experience between a pretest and a posttest can be evidence of construct validity. Some of the more typical intervening experiences responsible for changes in test scores are formal education, a course of therapy or medication, and on-the-job experience. Of course, depending on the construct being measured, almost any intervening life experience could be predicted to yield changes in score from pretest to posttest. Reading an inspirational book, watching a TV talk show, undergoing surgery, serving a prison sentence, or the mere passage of time may each prove to be a potent intervening variable.
Returning to our example of the Marital Satisfaction Scale, one investigator cited in Roach et al. (1981) compared scores on that instrument before and after a sex therapy treatment program. Scores showed a significant change between pretest and posttest. A second posttest given eight weeks later showed that scores remained stable (suggesting the instrument was reliable), whereas the pretest–posttest measures were still significantly different. Such changes in scores in the predicted direction after the treatment program contribute to evidence of the construct validity for this test.
JUST THINK . . .
Might it have been advisable to have simultaneous testing of a matched group of couples who did not participate in sex therapy and simultaneous testing of a matched group of couples who did not consult divorce attorneys? In both instances, would there have been any reason to expect any significant changes in the test scores of these two control groups?
We would expect a decline in marital satisfaction scores if a pretest were administered to a sample of couples shortly after they took their nuptial vows and a posttest were administered shortly after members of the couples consulted their respective divorce attorneys sometime within the first five years of marriage. The experimental group in this study would consist of couples who consulted a divorce attorney within the first five years of marriage. The design of such pretest–posttest research ideally should include a control group to rule out alternative explanations of the findings.
Evidence from distinct groups
Also referred to as the method of contrasted groups, one way of providing evidence for the validity of a test is to demonstrate that scores on the test vary in a predictable way as a function of membership in some group. The rationale here is that if a test is a valid measure of a particular construct, then groups of people presumed to differ with respect to that construct should have correspondingly different test scores. Consider in this context a test of depression wherein the higher the test score, the more depressed the testtaker is presumed to be. We would expect individuals psychiatrically hospitalized for depression to score higher on this measure than a random sample of Walmart shoppers.
Now, suppose it was your intention to provide construct validity evidence for the Marital Satisfaction Scale by showing differences in scores between distinct groups. How might you go about doing that?
Roach and colleagues (1981) proceeded by identifying two groups of married couples, one relatively satisfied in their marriage, the other not so satisfied. The groups were identified by ratings by peers and professional marriage counselors. A t test on the difference between mean scores on the test was significant (p < .01)—evidence to support the notion that the Marital Satisfaction Scale is indeed a valid measure of the construct marital satisfaction.
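In code, the core of such a contrasted-groups analysis is a simple independent-samples t test. The following Python sketch uses invented scores for hypothetical “satisfied” and “dissatisfied” groups; it does not reproduce the Roach et al. data:

```python
# Method of contrasted groups, in miniature: do two groups presumed to
# differ on the construct also differ on the test? Scores are invented.
from scipy import stats

satisfied    = [46, 44, 48, 41, 45, 47, 43, 49]   # hypothetical scale scores
dissatisfied = [30, 35, 28, 33, 31, 29, 34, 32]

t, p = stats.ttest_ind(satisfied, dissatisfied)
print(f"t = {t:.2f}, p = {p:.4f}")
# A significant difference in the predicted direction supports the claim
# that the test separates groups known to differ on the construct.
```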
In a bygone era, the method many test developers used to create distinct groups was deception. For example, if it had been predicted that more of the construct would be exhibited on the test in question if the subject felt highly anxious, an experimental situation might be designed to make the subject feel highly anxious. Virtually any feeling state the theory called for could be induced by an experimental scenario that typically involved giving the research subject some misinformation. However, given the ethical constraints of contemporary psychologists and the reluctance of academic institutions and other sponsors of research to condone deception in human research, the method of obtaining distinct groups by creating them through the dissemination of deceptive information is frowned upon (if not prohibited) today.
Evidence for the construct validity of a particular test may converge from a number of sources, such as other tests or measures designed to assess the same (or a similar) construct. Thus, if scores on the test undergoing construct validation tend to correlate highly in the predicted direction with scores on older, more established, and already validated tests designed to measure the same (or a similar) construct, this would be an example of convergent evidence.3
Convergent evidence for validity may come not only from correlations with tests purporting to measure an identical construct but also from correlations with measures purporting to measure related constructs. Consider, for example, a new test designed to measure the construct test anxiety. Generally speaking, we might expect high positive correlations between this new test and older, more established measures of test anxiety. However, we might also expect more moderate correlations between this new test and measures of general anxiety.
Roach et al. (1981) provided convergent evidence of the construct validity of the Marital Satisfaction Scale by computing a validity coefficient between scores on it and scores on the Marital Adjustment Test (Locke & Wallace, 1959). The validity coefficient of .79 provided additional evidence of their instrument’s construct validity.
A validity coefficient showing little (that is, a statistically nonsignificant) relationship between test scores and variables with which scores on the test being construct-validated should not theoretically be correlated provides discriminant evidence of construct validity (also known as discriminant validity). In the course of developing the Marital Satisfaction Scale (MSS), its authors correlated scores on that instrument with scores on the Marlowe-Crowne Social Desirability Scale (Crowne & Marlowe, 1964). Roach et al. (1981) hypothesized that high correlations between these two instruments would suggest that respondents were probably not answering items on the MSS entirely honestly but instead were responding in socially desirable ways. But the correlation between the MSS and the social desirability measure did not prove to be significant, so the test developers concluded that social desirability could be ruled out as a primary factor in explaining the meaning of MSS test scores.
In 1959 an experimental technique useful for examining both convergent and discriminant validity evidence was presented in Psychological Bulletin. This rather technical procedure was called the multitrait-multimethod matrix. A detailed description of it, along with an illustration, can be found in OOBAL-6-B1. Here, let’s simply point out that multitrait means “two or more traits” and multimethod means “two or more methods.” The multitrait-multimethod matrix (Campbell & Fiske, 1959) is the matrix or table that results from correlating variables (traits) within and between methods. Values for any number of traits (such as aggressiveness or extraversion) as obtained by various methods (such as behavioral observation or a personality test) are inserted into the table, and the resulting matrix of correlations provides insight with respect to both the convergent and the discriminant validity of the methods used.4
Both convergent and discriminant evidence of construct validity can be obtained by the use of factor analysis. Factor analysis is a shorthand term for a class of mathematical procedures designed to identify factors or specific variables that are typically attributes, characteristics, or dimensions on which people may differ. In psychometric research, factor analysis is frequently employed as a data reduction method in which several sets of scores and the correlations between them are analyzed. In such studies, the purpose of the factor analysis may be to identify the factor or factors in common between test scores on subscales within a particular test, or the factors in common between scores on a series of tests. In general, factor analysis is conducted on either an exploratory or a confirmatory basis. Exploratory factor analysis typically entails “estimating, or extracting factors; deciding how many factors to retain; and rotating factors to an interpretable orientation” (Floyd & Widaman, 1995, p. 287). By contrast, in confirmatory factor analysis, researchers test the degree to which a hypothetical model (which includes factors) fits the actual data.
A term commonly employed in factor analysis is factor loading, which is “a sort of metaphor. Each test is thought of as a vehicle carrying a certain amount of one or more abilities” (Tyler, 1965, p. 44). Factor loading in a test conveys information about the extent to which the factor determines the test score or scores. A new test purporting to measure bulimia, for example, can be factor-analyzed with other known measures of bulimia, as well as with other kinds of measures (such as measures of intelligence, self-esteem, general anxiety, anorexia, or perfectionism). High factor loadings by the new test on a “bulimia factor” would provide convergent evidence of construct validity. Moderate to low factor loadings by the new test with respect to measures of other eating disorders such as anorexia would provide discriminant evidence of construct validity.
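To make the idea of factor loadings concrete, the following Python sketch simulates six items built from two latent factors and inspects the loading matrix from an exploratory factor analysis. The data, the item structure, and the use of scikit-learn are all assumptions of this illustration:

```python
# A rough sketch of inspecting factor loadings. Data are simulated so that
# items 1-3 share one latent factor and items 4-6 share another.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n = 500
f1 = rng.normal(size=n)                  # latent factor 1
f2 = rng.normal(size=n)                  # latent factor 2
items = np.column_stack([
    0.8 * f1 + rng.normal(0, 0.4, n),    # items expected to load on factor 1
    0.7 * f1 + rng.normal(0, 0.4, n),
    0.9 * f1 + rng.normal(0, 0.4, n),
    0.8 * f2 + rng.normal(0, 0.4, n),    # items expected to load on factor 2
    0.7 * f2 + rng.normal(0, 0.4, n),
    0.9 * f2 + rng.normal(0, 0.4, n),
])

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
print(np.round(fa.components_.T, 2))     # rows = items, columns = factor loadings
# High loadings on the expected factor (convergent evidence) and low
# loadings on the other factor (discriminant evidence) support validity.
# Naming the two factors, as the chapter notes, remains a human judgment.
```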
Factor analysis frequently involves technical procedures so complex that few contemporary researchers would attempt to conduct one without the aid of sophisticated software. But although the actual data analysis has become work for computers, humans still tend to be very much involved in the naming of factors once the computer has identified them. Thus, for example, suppose a factor analysis identified a common factor being measured by two hypothetical instruments, a “Bulimia Test” and an “Anorexia Test.” This common factor would have to be named. One factor analyst looking at the data and the items of each test might christen the common factor an eating disorder factor. Another factor analyst examining exactly the same materials might label the common factor a body weight preoccupation factor. A third analyst might name the factor a self-perception disorder factor. Which of these is correct?
From a statistical perspective, it is simply impossible to say what the common factor should be named. Naming factors that emerge from a factor analysis has more to do with knowledge, judgment, and verbal abstraction ability than with mathematical expertise. There are no hard-and-fast rules. Factor analysts exercise their own judgment about what factor name best communicates the meaning of the factor. Further, even the criteria used to identify a common factor, as well as related technical matters, can be a matter of debate, if not heated controversy.
Factor analysis is a subject rich in technical complexity. Its uses and applications can vary as a function of the research objectives as well as the nature of the tests and the constructs under study. Factor analysis is the subject of our Close-Up in Chapter 9. More immediately, our Close-Up here brings together much of the information imparted so far in this chapter to provide a “real life” example of the test validation process.
The Preliminary Validation of a Measure of Individual Differences in Constructive Versus Unconstructive Worry*
Establishing validity is an important step in the development of new psychological measures. The development of a questionnaire that measures individual differences in worry called the Constructive and Unconstructive Worry Questionnaire (CUWQ; McNeill & Dunlop, 2016) provides an illustration of some of the steps in the test validation process.
Prior to the development of this questionnaire, research on worry had shown that the act of worrying can lead to both positive outcomes (such as increased work performance; Perkins & Corr, 2005) and negative outcomes (such as insomnia; Carney & Waters, 2006). Importantly, findings suggested that the types of worrying thoughts that lead to positive outcomes (which are referred to by the test authors as constructive worry) may differ from the types of worrying thoughts that lead to negative outcomes (referred to as unconstructive worry). However, a review of existing measures of individual differences in worry suggested that none of the measures were made to distinguish people’s tendency to worry constructively from their tendency to worry unconstructively. Since the ability to determine whether individuals are predominantly worrying constructively or unconstructively holds diagnostic and therapeutic benefits, the test authors set out to fill this gap and develop a new questionnaire that would be able to capture both these dimensions of the worry construct.
During the first step of questionnaire development, the creation of an item pool, it was important to ensure the questionnaire would have good content validity. That is, the items would need to adequately sample the variety of characteristics of constructive and unconstructive worry. Based on the test authors’ definition of these two constructs, a literature review was conducted and a list of potential characteristics of constructive versus unconstructive worry was created. This list of characteristics was used to develop a pool of 40 items. These 40 items were cross-checked by each author, as well as one independent expert, to ensure that each item was unique and concise. A review of the list as a whole was conducted to ensure that it covered the full range of characteristics identified by the literature review. This process resulted in the elimination of 11 of the initial items, leaving a pool of 29 items. Of the 29 items in total, 13 items were expected to measure the tendency to worry constructively, and the remaining 16 items were expected to measure the tendency to worry unconstructively.
Next, drawing from the theoretical background behind the test authors’ definition of constructive and unconstructive worry, a range of criteria that should be differentially related to one’s tendency to worry constructively versus unconstructively were selected. More specifically, it was hypothesized that the tendency to worry unconstructively would be positively related to trait-anxiety (State Trait Anxiety Inventory (STAI-T); Spielberger et al., 1970) and the amount of worry one experiences (e.g., Worry Domains Questionnaire (WDQ); Stöber & Joormann, 2001). In addition, this tendency to worry unconstructively was hypothesized to be negatively related to one’s tendency to be punctual and one’s actual performance of risk-mitigating behaviors. The tendency to worry constructively, on the other hand, was hypothesized to be negatively related to trait-anxiety and amount of worry, and positively related to one’s tendency to be punctual and one’s performance of risk-mitigating behaviors. Identification of these criteria prior to data collection would pave the way for the test authors to conduct an evaluation of the questionnaire’s criterion-based construct validity in the future.
Upon completion of item pool construction and criterion identification, two studies were conducted. In Study 1, data from 295 participants from the United States were collected on the 29 newly developed worry items, plus two criterion-based measures, namely trait-anxiety and punctuality. An exploratory factor analysis was conducted, and the majority of the 29 items grouped together into a two-factor solution (as expected). The items predicted to capture a tendency to worry constructively loaded strongly on one factor, and the items predicted to capture a tendency to worry unconstructively loaded strongly on the other factor. However, 11 out of the original 29 items either did not load strongly on either factor, or they cross-loaded onto the other factor to a moderate extent. To increase construct validity through increased homogeneity of the two scales, these 11 items were removed from the final version of the questionnaire. The 18 items that remained included eight that primarily loaded on the factor labeled as constructive worry and ten that primarily loaded on the factor labeled as unconstructive worry.
A confirmatory factor analysis on these 18 items showed a good model fit. However, this analysis does not prove that these two factors actually captured the tendencies to worry constructively and unconstructively. To test the construct validity of these factor scores, the relations of the unconstructive and constructive worry factors with both trait-anxiety (Spielberger et al., 1970) and the tendency to be punctual were examined. Results supported the hypotheses and supported an assumption of criterion-based construct validity. That is, as hypothesized, scores on the constructive worry factor were negatively associated with trait-anxiety and positively associated with the tendency to be punctual. Scores on the unconstructive worry factor were positively associated with trait-anxiety and negatively associated with the tendency to be punctual.
To further test the construct validity of this newly developed measure, a second study was conducted. In Study 2, 998 Australian residents of wildfire-prone areas responded to the 18 (final) worry items from Study 1, plus measures capturing two additional criteria. These two additional criteria were (1) the amount of worry one tends to experience, as captured by two existing worry questionnaires, namely the Worry Domains Questionnaire (Stöber & Joormann, 2001) and the Penn State Worry Questionnaire (Meyer et al., 1990), and (2) the performance of risk-mitigating behaviors that reduce the risk of harm or property damage resulting from a potential wildfire threat. A confirmatory factor analysis on this second data set supported the notion that constructive worry versus unconstructive worry items were indeed capturing separate constructs in a homogenous manner. Furthermore, as hypothesized, the constructive worry factor was positively associated with the performance of wildfire risk-mitigating behaviors, and negatively associated with the amount of worry one experiences. The unconstructive worry factor, on the other hand, was negatively associated with the performance of wildfire risk-mitigating behaviors, and positively associated with the amount of worry one experiences. This provided further evidence of criterion-based construct validity.
There are several ways in which future studies could provide additional evidence of construct validity of the CUWQ. For one, both studies reported above looked at the two scales’ concurrent criterion-based validity, but not at their predictive criterion-based validity. Future studies could focus on filling this gap. For example, since both constructs are hypothesized to predict the experience of anxiety (which was confirmed by the scales’ relationships with trait-anxiety in Study 1), they should predict the likelihood of an individual being diagnosed with an anxiety disorder in the future, with unconstructive worry being a positive predictor and constructive worry being a negative predictor. Furthermore, future studies could provide additional evidence of construct validity by testing whether interventions, such as therapy aimed at reducing unconstructive worry, can lead to a reduction in scores on the unconstructive worry scale over time. Finally, it is important to note that all validity testing to date has been conducted in samples from the general population, so the test should be further tested in samples from a clinical population of pathological worriers before test validity in this population can be assumed. The same applies to the use of the questionnaire in samples from non-US/Australian populations.
*This Close-Up was guest-authored by Ilona M. McNeill of The University of Melbourne, and Patrick D. Dunlop of The University of Western Australia.
JUST THINK . . .
What might be an example of a valid test used in an unfair manner?
Validity, Bias, and Fairness
In the eyes of many laypeople, questions concerning the validity of a test are intimately tied to questions concerning the fair use of tests and the issues of bias and fairness. Let us hasten to point out that validity, fairness in test use, and test bias are three separate issues. It is possible, for example, for a valid test to be used fairly or unfairly.
For the general public, the term bias as applied to psychological and educational tests may conjure up many meanings having to do with prejudice and preferential treatment (Brown et al., 1999). For federal judges, the term bias as it relates to items on children’s intelligence tests is synonymous with “too difficult for one group as compared to another” (Sattler, 1991). For psychometricians, bias is a factor inherent in a test that systematically prevents accurate, impartial measurement.
Psychometricians have developed the technical means to identify and remedy bias, at least in the mathematical sense. As a simple illustration, consider a test we will call the “flip-coin test” (FCT). The “equipment” needed to conduct this test is a two-sided coin. One side (“heads”) has the image of a profile and the other side (“tails”) does not. The FCT would be considered biased if the instrument (the coin) were weighted so that either heads or tails appears more frequently than by chance alone. If the test in question were an intelligence test, the test would be considered biased if it were constructed so that people who had brown eyes consistently and systematically obtained higher scores than people with green eyes—assuming, of course, that in reality people with brown eyes are not generally more intelligent than people with green eyes. Systematic is a key word in our definition of test bias. We have previously looked at sources of random or chance variation in test scores. Bias implies systematic variation.
Another illustration: Let’s suppose we need to hire 50 secretaries and so we place an ad in the newspaper. In response to the ad, 200 people reply, including 100 people who happen to have brown eyes and 100 people who happen to have green eyes. Each of the 200 applicants is individually administered a hypothetical test we will call the “Test of Secretarial Skills” (TSS). Logic tells us that eye color is probably not a relevant variable with respect to performing the duties of a secretary. We would therefore have no reason to believe that green-eyed people are better secretaries than brown-eyed people or vice versa. We might reasonably expect that, after the tests have been scored and the selection process has been completed, an approximately equivalent number of brown-eyed and green-eyed people would have been hired (or, approximately 25 brown-eyed people and 25 green-eyed people). But what if it turned out that 48 green-eyed people were hired and only 2 brown-eyed people were hired? Is this evidence that the TSS is a biased test?
Although the answer to this question seems simple on the face of it—“Yes, the test is biased because they should have hired 25 and 25!”—a truly responsible answer to this question would entail statistically troubleshooting the test and the entire selection procedure (see Berk, 1982). One reason some tests have been found to be biased has more to do with the design of the research study than the design of the test. For example, if there are too few testtakers in one of the groups (such as the minority group—literally), this methodological problem will make it appear as if the test is biased when in fact it may not be. A test may justifiably be deemed biased if some portion of its variance stems from some factor(s) that are irrelevant to performance on the criterion measure; as a consequence, one group of testtakers will systematically perform differently from another. Prevention during test development is the best cure for test bias, though a procedure called estimated true score transformations represents one of many available post hoc remedies (Mueller, 1949; see also Reynolds & Brown, 1984).5
A rating is a numerical or verbal judgment (or both) that places a person or an attribute along a continuum identified by a scale of numerical or word descriptors known as a rating scale. Simply stated, a rating error is a judgment resulting from the intentional or unintentional misuse of a rating scale. Thus, for example, a leniency error (also known as a generosity error) is, as its name implies, an error in rating that arises from the tendency on the part of the rater to be lenient in scoring, marking, and/or grading. From your own experience during course registration, you might be aware that a section of a particular course will quickly be filled if it is being taught by a professor with a reputation for leniency errors in end-of-term grading. As another possible example of a leniency or generosity error, consider comments in the “Twittersphere” after a high-profile performance by a popular performer. Intuitively, one would expect more favorable (and forgiving) ratings of the performance from die-hard fans of the performer, regardless of the actual quality of the performance as rated by more objective reviewers. The phenomenon of leniency and severity in ratings can be found in almost any setting in which ratings are rendered. In psychotherapy settings, for example, it is not unheard of for supervisors to be a bit too generous or too lenient in their ratings of their supervisees.
Reviewing the literature on psychotherapy supervision and supervision in other disciplines, Gonsalvez and Crowe (2014) concluded that raters’ judgments of psychotherapy supervisees’ competency are compromised by leniency errors. In an effort to remedy the state of affairs, they offered a series of concrete suggestions including a list of specific competencies to be evaluated, as well as when and how such evaluations for competency should be conducted.
JUST THINK . . .
What factor do you think might account for the phenomenon of raters whose ratings always seem to fall victim to the central tendency error?
At the other extreme is a severity error. Movie critics who pan just about everything they review may be guilty of severity errors. Of course, that is only true if they review a wide range of movies that might consensually be viewed as good and bad.
Another type of error might be termed a central tendency error. Here the rater, for whatever reason, exhibits a general and systematic reluctance to give ratings at either the positive or the negative extreme. Consequently, all of this rater’s ratings would tend to cluster in the middle of the rating continuum.
One way to overcome what might be termed restriction-of-range rating errors (central tendency, leniency, and severity errors) is to use rankings, a procedure that requires the rater to measure individuals against one another instead of against an absolute scale. By using rankings instead of ratings, the rater (now the “ranker”) is forced to select first, second, third choices, and so forth.
Halo effect describes the fact that, for some raters, some ratees can do no wrong. More specifically, a halo effect may also be defined as a tendency to give a particular ratee a higher rating than he or she objectively deserves because of the rater’s failure to discriminate among conceptually distinct and potentially independent aspects of a ratee’s behavior. Just for the sake of example—and not for a moment because we believe it is even in the realm of possibility—let’s suppose Lady Gaga consented to write and deliver a speech on multivariate analysis. Her speech probably would earn much higher all-around ratings if given before the founding chapter of the Lady Gaga Fan Club than if delivered before and rated by the membership of, say, the Royal Statistical Society. This would be true even in the highly improbable case that the members of each group were equally savvy with respect to multivariate analysis. We would expect the halo effect to be operative at full power as Lady Gaga spoke before her diehard fans.
Criterion data may also be influenced by the rater’s knowledge of the ratee’s race or sex (Landy & Farr, 1980). Males have been shown to receive more favorable evaluations than females in traditionally masculine occupations. Except in highly integrated situations, ratees tend to receive higher ratings from raters of the same race (Landy & Farr, 1980). Returning to our hypothetical Test of Secretarial Skills (TSS) example, a particular rater may have had particularly great—or particularly distressing—prior experiences with green-eyed (or brown-eyed) people and so may be making extraordinarily high (or low) ratings on that irrational basis.
Training programs to familiarize raters with common rating errors and sources of rater bias have shown promise in reducing rating errors and increasing measures of reliability and validity. Lecture, role playing, discussion, watching oneself on videotape, and computer simulation of different situations are some of the many techniques that could be brought to bear in such training programs. We revisit the subject of rating and rating error in our discussion of personality assessment later. For now, let’s take up the issue of test fairness.
In contrast to questions of test bias, which may be thought of as technically complex statistical problems, issues of test fairness tend to be rooted more in thorny issues involving values (Halpern, 2000). Thus, although questions of test bias can sometimes be answered with mathematical precision and finality, questions of fairness can be grappled with endlessly by well-meaning people who hold opposing points of view. With that caveat in mind, and with exceptions most certainly in the offing, we will define fairness in a psychometric context as the extent to which a test is used in an impartial, just, and equitable way.6
Some uses of tests are patently unfair in the judgment of any reasonable person. During the cold war, the government of what was then called the Soviet Union used psychiatric tests to suppress political dissidents. People were imprisoned or institutionalized for verbalizing opposition to the government. Apart from such blatantly unfair uses of tests, what constitutes a fair and an unfair use of tests is a matter left to various parties in the assessment enterprise. Ideally, the test developer strives for fairness in the test development process and in the test’s manual and usage guidelines. The test user strives for fairness in the way the test is actually used. Society strives for fairness in test use by means of legislation, judicial decisions, and administrative regulations.
Fairness as applied to tests is a difficult and complicated subject. However, it is possible to discuss some rather common misunderstandings regarding what are sometimes perceived as unfair or even biased tests. Some tests, for example, have been labeled “unfair” because they discriminate among groups of people.7 The reasoning here goes something like this: “Although individual differences exist, it is a truism that all people are created equal. Accordingly, any differences found among groups of people on any psychological trait must be an artifact of an unfair or biased test.” Because this belief is rooted in faith as opposed to scientific evidence—in fact, it flies in the face of scientific evidence—it is virtually impossible to refute. One either accepts it on faith or does not.
We would all like to believe that people are equal in every way and that all people are capable of rising to the same heights given equal opportunity. A more realistic view would appear to be that each person is capable of fulfilling a personal potential. Because people differ so obviously with respect to physical traits, one would be hard put to believe that psychological differences found to exist between individuals—and groups of individuals—are purely a function of inadequate tests. Again, although a test is not inherently unfair or biased simply because it is a tool by which group differences are found, the use of the test data, like the use of any data, can be unfair.
Another misunderstanding of what constitutes an unfair or biased test is that it is unfair to administer to a particular population a standardized test that did not include members of that population in the standardization sample. In fact, the test may well be biased, but that must be determined by statistical or other means. The sheer fact that no members of a particular group were included in the standardization sample does not in itself invalidate the test for use with that group.
A final source of misunderstanding is the complex problem of remedying situations where bias or unfair test usage has been found to occur. In the area of selection for jobs, positions in universities and professional schools, and the like, a number of different preventive measures and remedies have been attempted. As you read about the tools used in these attempts in this chapter’s Everyday Psychometrics, form your own opinions regarding what constitutes a fair use of employment and other tests in a selection process.
Adjustment of Test Scores by Group Membership: Fairness in Testing or Foul Play?
Any test, regardless of its psychometric soundness, may be knowingly or unwittingly used in a way that has an adverse impact on one or another group. If such adverse impact is found to exist and if social policy demands some remedy or an affirmative action program, then psychometricians have a number of techniques at their disposal to create change. Table 1 lists some of these techniques.
Psychometric Techniques for Preventing or Remedying Adverse Impact and/or Instituting an Affirmative Action Program
Some of these techniques may be preventive if employed in the test development process, and others may be employed with already established tests. Some of these techniques entail direct score manipulation; others, such as banding, do not. Preparation of this table benefited from Sackett and Wilk (1994), and their work should be consulted for more detailed consideration of the complex issues involved.
|Technique||Description|
|Addition of Points||A constant number of points is added to the test score of members of a particular group. The purpose of the point addition is to reduce or eliminate observed differences between groups.|
|Differential Scoring of Items||This technique incorporates group membership information, not in adjusting a raw score on a test but in deriving the score in the first place. The application of the technique may involve the scoring of some test items for members of one group but not scoring the same test items for members of another group. This technique is also known as empirical keying by group.|
|Elimination of Items Based on Differential Item Functioning||This procedure entails removing from a test any items found to inappropriately favor one group’s test performance over another’s. Ideally, the intent of the elimination of certain test items is not to make the test easier for any group but simply to make the test fairer. Sackett and Wilk (1994) put it this way: “Conceptually, rather than asking ‘Is this item harder for members of Group X than it is for Group Y?’ these approaches ask ‘Is this item harder for members of Group X with true score Z than it is for members of Group Y with true score Z?’”|
|Differential Cutoffs||Different cutoffs are set for members of different groups. For example, a passing score for members of one group is 65, whereas a passing score for members of another group is 70. As with the addition of points, the purpose of differential cutoffs is to reduce or eliminate observed differences between groups.|
|Separate Lists||Different lists of testtaker scores are established by group membership. For each list, test performance of testtakers is ranked in top-down fashion. Users of the test scores for selection purposes may alternate selections from the different lists. Depending on factors such as the allocation rules in effect and the equivalency of the standard deviation within the groups, the separate-lists technique may yield effects similar to those of other techniques, such as the addition of points and differential cutoffs. In practice, the separate list is popular in affirmative action programs where the intent is to overselect from previously excluded groups.|
|Within-Group Norming||Used as a remedy for adverse impact if members of different groups tend to perform differentially on a particular test, within-group norming entails the conversion of all raw scores into percentile scores or standard scores based on the test performance of one’s own group. In essence, an individual testtaker is being compared only with other members of his or her own group. When race is the primary criterion of group membership and separate norms are established by race, this technique is known as race-norming (illustrated in the sketch following this table).|
|Banding||The effect of banding of test scores is to make equivalent all scores that fall within a particular range or band. For example, thousands of raw scores on a test may be transformed to a stanine having a value of 1 to 9. All scores that fall within each of the stanine boundaries will be treated by the test user as either equivalent or subject to some additional selection criteria. A sliding band (Cascio et al., 1991) is a modified banding procedure wherein a band is adjusted (“slid”) to permit the selection of more members of some group than would otherwise be selected.|
|Preference Policies||In the interest of affirmative action, reverse discrimination, or some other policy deemed to be in the interest of society at large, a test user might establish a policy of preference based on group membership. For example, if a municipal fire department sought to increase the representation of female personnel in its ranks, it might institute a test-related policy designed to do just that. A key provision in this policy might be that when a male and a female earn equal scores on the test used for hiring, the female will be hired.|
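As one concrete illustration of the mechanics involved, the sketch below (a hypothetical Python example of our own; the function name and scores are invented, not drawn from the sources cited above) shows within-group norming: each raw score is converted to a percentile computed only against the score distribution of the testtaker’s own group.

```python
from bisect import bisect_right

def within_group_percentile(score: float, group_scores: list[float]) -> float:
    """Percentile of a raw score relative to one group's score distribution."""
    ranked = sorted(group_scores)
    return 100.0 * bisect_right(ranked, score) / len(ranked)

# Hypothetical raw scores for two groups on the same test.
group_a = [52, 55, 58, 60, 63, 66, 70, 72, 75, 80]
group_b = [40, 44, 47, 50, 52, 55, 58, 60, 62, 65]

# The same raw score of 60 maps to different within-group percentiles,
# because each testtaker is compared only with members of his or her own group.
print(within_group_percentile(60, group_a))  # 40.0
print(within_group_percentile(60, group_b))  # 80.0
```

Note how the technique leaves raw scores untouched; the adjustment happens entirely in the norms against which each score is interpreted.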
Although psychometricians have the tools to institute special policies through manipulations in test development, scoring, and interpretation, there are few clear guidelines in this controversial area (Brown, 1994; Gottfredson, 1994, 2000; Sackett & Wilk, 1994). The waters are further muddied by the fact that some of the guidelines seem to have contradictory implications. For example, although racial preferment in employee selection (disparate treatment) is unlawful, the use of valid and unbiased selection procedures virtually guarantees disparate impact. This state of affairs will change only when racial disparities in job-related skills and abilities are minimized (Gottfredson, 1994).
In 1991, Congress enacted legislation effectively barring employers from adjusting testtakers’ scores for the purpose of making hiring or promotion decisions. Section 106 of the Civil Rights Act of 1991 made it illegal for employers “in connection with the selection or referral of applicants or candidates for employment or promotion to adjust the scores of, use different cutoffs for, or otherwise alter the results of employment-related tests on the basis of race, color, religion, sex, or national origin.”
The law prompted concern on the part of many psychologists who believed it would adversely affect various societal groups and might reverse social gains. Brown (1994, p. 927) forecast that “the ramifications of the Act are more far-reaching than Congress envisioned when it considered the amendment and could mean that many personality tests and physical ability tests that rely on separate scoring for men and women are outlawed in employment selection.” Arguments in favor of group-related test-score adjustment have been made on philosophical as well as technical grounds. From a philosophical perspective, increased minority representation is socially valued to the point that minority preference in test scoring is warranted. In the same vein, minority preference is viewed both as a remedy for past societal wrongs and as a contemporary guarantee of proportional workplace representation. From a more technical perspective, it is argued that some tests require adjustment in scores because (1) the tests are biased, and a given score on them does not necessarily carry the same meaning for all testtakers; and/or (2) “a particular way of using a test is at odds with an espoused position as to what constitutes fair use” (Sackett & Wilk, 1994, p. 931).
In contrast to advocates of test-score adjustment are those who view such adjustments as part of a social agenda for preferential treatment of certain groups. These opponents of test-score adjustment reject the subordination of individual effort and ability to group membership as criteria in the assignment of test scores (Gottfredson, 1988, 2000). Hunter and Schmidt (1976, p. 1069) described the unfortunate consequences for all parties involved in a college selection situation wherein poor-risk applicants were accepted on the basis of score adjustments or quotas. With reference to the employment setting, Hunter and Schmidt (1976) described one case in which entrance standards were lowered so more members of a particular group could be hired. However, many of these new hires did not pass promotion tests—with the result that the company was sued for discriminatory promotion practice. Yet another consideration concerns the feelings of “minority applicants who are selected under a quota system but who also would have been selected under unqualified individualism and must therefore pay the price, in lowered prestige and self-esteem” (Jensen, 1980, p. 398).
A number of psychometric models of fairness in testing have been presented and debated in the scholarly literature (Hunter & Schmidt, 1976; Petersen & Novick, 1976; Schmidt & Hunter, 1974; Thorndike, 1971). Despite a wealth of research and debate, a long-standing question in the field of personnel psychology remains: “How can group differences on cognitive ability tests be reduced while retaining existing high levels of reliability and criterion-related validity?”
According to Gottfredson (1994), the answer probably will not come from measurement-related research because differences in scores on many of the tests in question arise principally from differences in job-related abilities. For Gottfredson (1994, p. 963), “the biggest contribution personnel psychologists can make in the long run may be to insist collectively and candidly that their measurement tools are neither the cause of nor the cure for racial differences in job skills and consequent inequalities in employment.”
Beyond the workplace and personnel psychology, what role, if any, should measurement play in promoting diversity? As Haidt et al. (2003) reflected, there are several varieties of diversity, some perceived as more valuable than others. Do we need to develop more specific measures designed, for example, to discourage “moral diversity” while encouraging “demographic diversity”? These types of questions have implications in a number of areas from academic admission policies to immigration.
JUST THINK . . .
How do you feel about the use of various procedures to adjust test scores on the basis of group membership? Are these types of issues best left to measurement experts?
If performance differences are found between identified groups of people on a valid and reliable test used for selection purposes, some hard questions may have to be dealt with if the test is to continue to be used. Is the problem due to some technical deficiency in the test, or is the test in reality too good at identifying people of different levels of ability? Regardless, is the test being used fairly? If so, what might society do to remedy the skill disparity between different groups as reflected on the test?
Our discussion of issues of test fairness and test bias may seem to have brought us far afield of the seemingly cut-and-dried, relatively nonemotional subject of test validity. However, the complex issues accompanying discussions of test validity, including issues of fairness and bias, must be wrestled with by us all. For further consideration of the philosophical issues involved, we refer you to the solitude of your own thoughts and the reading of your own conscience.
Test your understanding of elements of this chapter by seeing if you can explain each of the following terms, expressions, and abbreviations:
· hit rate
· slope bias
University of Phoenix Material
Dr. Zak Case Study
Read the following case study. Use the information in the case study to answer the accompanying follow-up questions. Although questions 1 & 2 have short answers, you should prepare a 150- to 200-word response for each of the remaining questions.
Dr. Zak developed a test to measure depression. He sampled 100 university students to take his five-item test. The group of students comprised 30 men and 70 women. In this group, four persons were African American, six persons were Hispanic, and one person was Asian. Zak’s Miraculous Test of Depression is printed below:
1. I feel depressed: Yes No
2. I have been sad for the last two weeks: Yes No
3. I have seen changes in my eating and sleeping: Yes No
4. I don’t feel that life is going to get better: Yes No
5. I feel happy most of the day: Yes No
Yes = 1; No = 0
The mean on this test is 3.5 with a standard deviation of .5.
1. Sally scores 1.5 on this test. How many standard deviations is Sally from the mean? (Show your calculations)
Because the mean = 3.5 with SD = .5, and Sally’s score = 1.5, the calculation is first 1.5 − 3.5 = −2, then −2 / .5 = −4. This means that Sally is 4 standard deviations below the mean.
(1.5 − 3.5) / (.5) = −4
2. Billy scores 5. What is his standard score?
To determine the standard score here, we calculate the z-score: the raw score (5) minus the mean (3.5), divided by the standard deviation (.5). First, 5 − 3.5 = 1.5; then 1.5 / .5 = 3. As a result, we can determine that Billy has a standard score of 3.
(5 − 3.5) / (.5) = 3
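A minimal sketch of these two calculations in Python (the function name is our own):

```python
def z_score(raw: float, mean: float, sd: float) -> float:
    """Standard (z) score: distance from the mean in standard-deviation units."""
    return (raw - mean) / sd

# Dr. Zak's test: mean = 3.5, standard deviation = .5
print(z_score(1.5, 3.5, 0.5))  # Sally: -4.0
print(z_score(5.0, 3.5, 0.5))  # Billy:  3.0
```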
3. What scale of measurement is Dr. Zak using? Do you think Dr. Zak’s choice of scaling is appropriate? Why or why not? What are your suggestions?
4. Do you think Dr. Zak has a good sample on which to norm his test? Why or why not? What are your suggestions?
5. What other items do you think need to be included in Dr. Zak’s domain sampling?
Depression is a serious illness, and if it is not examined fully it can evolve into a severe case with serious consequences. This study could benefit from additional questions, because more questions can provide further insight into how serious a respondent’s depression is and help gauge its level. Other possible items include: “It is difficult for me to concentrate,” “I have trouble making decisions,” “My sleep patterns are poor and I have difficulty relaxing,” “I often feel nervous or anxious,” “Thoughts of suicide have come into my mind,” “I am no longer sexually driven,” “Depression is present in my family history,” and “I often feel sad or worthless.”
The issue with this test is that people experience different forms of depression, and some cases may not be as severe as others. Many different factors affect the development of depression, from hormonal changes and genetic predispositions to relationships and internal or external stressors. The more questions included in the test, the more information can be gathered to help produce meaningful results.
6. Suggest changes to this test to make it better. Justify your reason for each suggestion supporting each reason with psychometric principles from the text book or other materials used in your course.
7. Dr. Zak also gave his students the Beck Depression Inventory (BDI). The correlation between his test and the BDI was r =.14. Evaluate this correlation. What does this correlation tell you about the relationship between these two instruments?
Cohen, R. J., Swerdlik, M. E., & Sturman, E. D. (2013). Psychological testing and assessment: An introduction to tests and measurement (8th ed.). New York, NY: McGraw-Hill.
CHAPTER 5
In everyday conversation, reliability is a synonym for dependability or consistency. We speak of the train that is so reliable you can set your watch by it. If we’re lucky, we have a reliable friend who is always there for us in a time of need.
Broadly speaking, in the language of psychometrics reliability refers to consistency in measurement. And whereas in everyday conversation reliability always connotes something positive, in the psychometric sense it really only refers to something that is consistent—not necessarily consistently good or bad, but simply consistent.
It is important for us, as users of tests and consumers of information about tests, to know how reliable tests and other measurement procedures are. But reliability is not an all-or-none matter. A test may be reliable in one context and unreliable in another. There are different types and degrees of reliability. A reliability coefficient is an index of reliability, a proportion that indicates the ratio between the true score variance on a test and the total variance. In this chapter, we explore different kinds of reliability coefficients, including those for measuring test-retest reliability, alternate-forms reliability, split-half reliability, and inter-scorer reliability.
The Concept of Reliability
Recall from our discussion of classical test theory that a score on an ability test is presumed to reflect not only the testtaker’s true score on the ability being measured but also error.1 In its broadest sense, error refers to the component of the observed test score that does not have to do with the testtaker’s ability. If we use X to represent an observed score, T to represent a true score, and E to represent error, then the fact that an observed score equals the true score plus error may be expressed as follows:

X = T + E
A statistic useful in describing sources of test score variability is the variance (σ²)—the standard deviation squared. This statistic is useful because it can be broken into components. Variance from true differences is true variance, and variance from irrelevant, random sources is error variance. If σ² represents the total variance, σ²_tr the true variance, and σ²_e the error variance, then the relationship of the variances can be expressed as

σ² = σ²_tr + σ²_e

In this equation, the total variance in an observed distribution of test scores (σ²) equals the sum of the true variance (σ²_tr) plus the error variance (σ²_e). The term reliability refers to the proportion of the total variance attributed to true variance. The greater the proportion of the total variance attributed to true variance, the more reliable the test. Because true differences are assumed to be stable, they are presumed to yield consistent scores on repeated administrations of the same test as well as on equivalent forms of tests. Because error variance may increase or decrease a test score by varying amounts, consistency of the test score—and thus the reliability—can be affected.
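To see this decomposition concretely, the following sketch (a hypothetical Python simulation of our own, with arbitrary variance values) generates observed scores as true score plus independent random error and checks that reliability, computed as σ²_tr / σ², matches the proportion just described.

```python
import random
import statistics

random.seed(0)

N = 10_000
# Simulate true scores and independent random error (arbitrary parameters).
true_scores = [random.gauss(100, 15) for _ in range(N)]  # var_tr = 225
errors      = [random.gauss(0, 5)    for _ in range(N)]  # var_e  = 25
observed    = [t + e for t, e in zip(true_scores, errors)]  # X = T + E

var_true  = statistics.pvariance(true_scores)
var_total = statistics.pvariance(observed)

# Reliability: proportion of total variance attributable to true variance.
print(round(var_true / var_total, 3))  # approx. 225 / (225 + 25) = .90
```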
In general, the term measurement error refers, collectively, to all of the factors associated with the process of measuring some variable, other than the variable being measured. To illustrate, consider an English-language test on the subject of 12th-grade algebra being administered, in English, to a sample of 12th-grade students newly arrived in the United States from China. The students in the sample are all known to be “whiz kids” in algebra. Yet for some reason, all of the students receive failing grades on the test. Do these failures indicate that these students really are not “whiz kids” at all? Possibly. But a researcher looking for answers regarding this outcome would do well to evaluate the English-language skills of the students. Perhaps this group of students did not do well on the algebra test because they could neither read nor understand what was required of them. In such an instance, the fact that the test was written and administered in English could have contributed in large part to the measurement error in this evaluation. Stated another way, although the test was designed to evaluate one variable (knowledge of algebra), scores on it may have been more reflective of another variable (knowledge of and proficiency in the English language). This source of measurement error (the fact that the test was written and administered in English) could have been eliminated by translating the test and administering it in the language of the testtakers.
Measurement error, much like error in general, can be categorized as being either systematic or random. Random error is a source of error in measuring a targeted variable caused by unpredictable fluctuations and inconsistencies of other variables in the measurement process. Sometimes referred to as “noise,” this source of error fluctuates from one testing situation to another with no discernible pattern that would systematically raise or lower scores. Examples of random error that could conceivably affect test scores range from unanticipated events happening in the immediate vicinity of the test environment (such as a lightning strike or a spontaneous “occupy the university” rally), to unanticipated physical events happening within the testtaker (such as a sudden and unexpected surge in the testtaker’s blood sugar or blood pressure).
JUST THINK . . .
What might be a source of random error inherent in all the tests an assessor administers in his or her private office?
In contrast to random error, systematic error refers to a source of error in measuring a variable that is typically constant or proportionate to what is presumed to be the true value of the variable being measured. For example, a 12-inch ruler may be found to be, in actuality, a tenth of one inch longer than 12 inches. All of the 12-inch measurements previously taken with that ruler were systematically off by one-tenth of an inch; that is, anything measured to be exactly 12 inches with that ruler was, in reality, 12 and one-tenth inches. In this example, it is the measuring instrument itself that has been found to be a source of systematic error. Once a systematic error becomes known, it becomes predictable—as well as fixable. Note also that a systematic source of error does not affect score consistency. So, for example, suppose a measuring instrument such as the official weight scale used on The Biggest Loser television program consistently underweighed by 5 pounds everyone who stepped on it. Regardless of this (systematic) error, the relative standings of all of the contestants weighed on that scale would remain unchanged. A scale underweighing all contestants by 5 pounds simply amounts to a constant being subtracted from every “score.” Although weighing contestants on such a scale would not yield a true (or valid) weight, such a systematic error source would not change the variability of the distribution or affect the measured reliability of the instrument. In the end, the individual crowned “the biggest loser” would indeed be the contestant who lost the most weight—it’s just that he or she would actually weigh 5 pounds more than the weight measured by the show’s official scale. Now moving from the realm of reality television back to the realm of psychological testing and assessment, let’s take a closer look at the source of some error variance commonly encountered during testing and assessment.
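A quick sketch of the point about systematic error (hypothetical Python with made-up weights): subtracting a constant 5 pounds from every weight changes no one’s relative standing and leaves the variability of the scores untouched.

```python
import statistics

def ranks(xs):
    """Indices of the scores ordered from lowest to highest."""
    return sorted(range(len(xs)), key=lambda i: xs[i])

true_weights     = [180.0, 225.0, 199.0, 250.0, 210.0]
measured_weights = [w - 5.0 for w in true_weights]  # scale underweighs by 5 lb

# Relative standings are unchanged by the constant (systematic) error...
print(ranks(true_weights) == ranks(measured_weights))   # True

# ...and the variability of the distribution is unchanged as well.
print(round(statistics.pvariance(true_weights), 2))     # 561.36
print(round(statistics.pvariance(measured_weights), 2)) # 561.36
```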
JUST THINK . . .
What might be a source of systematic error inherent in all the tests an assessor administers in his or her private office?
Sources of Error Variance
Sources of error variance include test construction, administration, scoring, and/or interpretation.
One source of variance during test construction is item sampling or content sampling, terms that refer to variation among items within a test as well as to variation among items between tests. Consider two or more tests designed to measure a specific skill, personality attribute, or body of knowledge. Differences are sure to be found in the way the items are worded and in the exact content sampled. Each of us has probably walked into an achievement test setting thinking “I hope they ask this question” or “I hope they don’t ask that question.” If the only questions on the examination were the ones we hoped would be asked, we might achieve a higher score on that test than on another test purporting to measure the same thing. The higher score would be due to the specific content sampled, the way the items were worded, and so on. The extent to which a testtaker’s score is affected by the content sampled on a test and by the way the content is sampled (that is, the way in which the item is constructed) is a source of error variance. From the perspective of a test creator, a challenge in test development is to maximize the proportion of the total variance that is true variance and to minimize the proportion of the total variance that is error variance.
Sources of error variance that occur during test administration may influence the testtaker’s attention or motivation. The testtaker’s reactions to those influences are the source of one kind of error variance. Examples of untoward influences during administration of a test include factors related to the test environment: room temperature, level of lighting, and amount of ventilation and noise, for instance. A relentless fly may develop a tenacious attraction to an examinee’s face. A wad of gum on the seat of the chair may make itself known only after the testtaker sits down on it. Other environment-related variables include the instrument used to enter responses and even the writing surface on which responses are entered. A pencil with a dull or broken point can make it difficult to blacken the little grids. The writing surface on a school desk may be riddled with heart carvings, the legacy of past years’ students who felt compelled to express their eternal devotion to someone now long forgotten. External to the test environment in a global sense, the events of the day may also serve as a source of error. So, for example, test results may vary depending upon whether the testtaker’s country is at war or at peace (Gil et al., 2016). A variable of interest when evaluating a patient’s general level of suspiciousness or fear is the patient’s home neighborhood and lifestyle. Especially in patients who live in and must cope daily with an unsafe neighborhood, what is actually adaptive fear and suspiciousness can be misinterpreted by an interviewer as psychotic paranoia (Wilson et al., 2016).
Other potential sources of error variance during test administration are testtaker variables. Pressing emotional problems, physical discomfort, lack of sleep, and the effects of drugs or medication can all be sources of error variance. Formal learning experiences, casual life experiences, therapy, illness, and changes in mood or mental state are other potential sources of testtaker-related error variance. It is even conceivable that significant changes in the testtaker’s body weight could be a source of error variance. Weight gain and obesity are associated with a rise in fasting glucose level—which in turn is associated with cognitive impairment. In one study that measured performance on a cognitive task, subjects with high fasting glucose levels made nearly twice as many errors as subjects whose fasting glucose level was in the normal range (Hawkins et al., 2016).
Examiner-related variables are potential sources of error variance. The examiner’s physical appearance and demeanor—even the presence or absence of an examiner—are some factors for consideration here. Some examiners in some testing situations might knowingly or unwittingly depart from the procedure prescribed for a particular test. On an oral examination, some examiners may unwittingly provide clues by emphasizing key words as they pose questions. They might convey information about the correctness of a response through head nodding, eye movements, or other nonverbal gestures. In the course of an interview to evaluate a patient’s suicidal risk, highly religious clinicians may be more inclined than their moderately religious counterparts to conclude that such risk exists (Berman et al., 2015). Clearly, the level of professionalism exhibited by examiners is a source of error variance.
Test scoring and interpretation
In many tests, the advent of computer scoring and a growing reliance on objective, computer-scorable items have virtually eliminated error variance caused by scorer differences. However, not all tests can be scored from grids blackened by no. 2 pencils. Individually administered intelligence tests, some tests of personality, tests of creativity, various behavioral measures, essay tests, portfolio assessment, situational behavior tests, and countless other tools of assessment still require scoring by trained personnel.
Manuals for individual intelligence tests tend to be very explicit about scoring criteria, lest examinees’ measured intelligence vary as a function of who is doing the testing and scoring. In some tests of personality, examinees are asked to supply open-ended responses to stimuli such as pictures, words, sentences, and inkblots, and it is the examiner who must then quantify or qualitatively evaluate responses. In one test of creativity, examinees might be given the task of creating as many things as they can out of a set of blocks. Here, it is the examiner’s task to determine which block constructions will be awarded credit and which will not. For a behavioral measure of social skills in an inpatient psychiatric service, the scorers or raters might be asked to rate patients with respect to the variable “social relatedness.” Such a behavioral measure might require the rater to check yes or no to items like Patient says “Good morning” to at least two staff members.
JUST THINK . . .
Can you conceive of a test item on a rating scale requiring human judgment that all raters will score the same 100% of the time?
Scorers and scoring systems are potential sources of error variance. A test may employ objective-type items amenable to computer scoring of well-documented reliability. Yet even then, a technical glitch might contaminate the data. If subjectivity is involved in scoring, then the scorer (or rater) can be a source of error variance. Indeed, despite rigorous scoring criteria set forth in many of the better-known tests of intelligence, examiner/scorers occasionally still are confronted by situations where an examinee’s response lies in a gray area. The element of subjectivity in scoring may be much greater in the administration of certain nonobjective-type personality tests, tests of creativity (such as the block test just described), and certain academic tests (such as essay examinations). Subjectivity in scoring can even enter into behavioral assessment. Consider the case of two behavior observers given the task of rating one psychiatric inpatient on the variable of “social relatedness.” On an item that asks simply whether two staff members were greeted in the morning, one rater might judge the patient’s eye contact and mumbling of something to two staff members to qualify as a yes response. The other observer might feel strongly that a no response to the item is appropriate. Such problems in scoring agreement can be addressed through rigorous training designed to make the consistency—or reliability—of various scorers as nearly perfect as can be.
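One common way to quantify the scorer agreement problem just described is Cohen’s kappa, which corrects raw percent agreement for the agreement expected by chance. Here is a minimal sketch for two raters making yes/no judgments (hypothetical Python; the ratings are invented for illustration):

```python
def cohens_kappa(rater1: list[str], rater2: list[str]) -> float:
    """Chance-corrected agreement between two raters on categorical judgments."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    categories = set(rater1) | set(rater2)
    expected = sum(
        (rater1.count(c) / n) * (rater2.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Two observers rating ten patients ("y" = greeted two staff members).
obs1 = ["y", "y", "n", "y", "n", "y", "y", "n", "y", "n"]
obs2 = ["y", "n", "n", "y", "n", "y", "y", "y", "y", "n"]
print(round(cohens_kappa(obs1, obs2), 2))  # 0.58
```

Here the raters agree on 8 of 10 cases (80%), but kappa is only .58 because a good deal of that agreement would be expected by chance alone.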
Other sources of error
Surveys and polls are two tools of assessment commonly used by researchers who study public opinion. In the political arena, for example, researchers trying to predict who will win an election may sample opinions from representative voters and then draw conclusions based on their data. However, in the “fine print” of those conclusions is usually a disclaimer that the conclusions may be off by plus or minus a certain percent. This fine print is a reference to the margin of error the researchers estimate to exist in their study. The error in such research may be a result of sampling error—the extent to which the population of voters in the study actually was representative of voters in the election. The researchers may not have gotten it right with respect to demographics, political party affiliation, or other factors related to the population of voters. Alternatively, the researchers may have gotten such factors right but simply did not include enough people in their sample to draw the conclusions that they did. This brings us to another type of error, called methodological error. So, for example, the interviewers may not have been trained properly, the wording in the questionnaire may have been ambiguous, or the items may have somehow been biased to favor one or another of the candidates.
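The margin of error mentioned in such fine print can be approximated, for a simple random sample at a 95% confidence level, as 1.96 times the standard error of a proportion. A quick sketch (Python; the sample sizes and support level are arbitrary):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# A candidate polling at 52% support: the margin shrinks as n grows.
for n in (100, 500, 1000):
    print(n, round(100 * margin_of_error(0.52, n), 1), "percentage points")
# 100  -> 9.8 percentage points
# 500  -> 4.4 percentage points
# 1000 -> 3.1 percentage points
```

Note that this formula captures sampling error only; the methodological errors described above are not reflected in a reported margin of error.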
Certain types of assessment situations lend themselves to particular varieties of systematic and nonsystematic error. For example, consider assessing the extent of agreement between partners regarding the quality and quantity of physical and psychological abuse in their relationship. As Moffitt et al. (1997) observed, “Because partner abuse usually occurs in private, there are only two persons who ‘really’ know what goes on behind closed doors: the two members of the couple” (p. 47). Potential sources of nonsystematic error in such an assessment situation include forgetting, failing to notice abusive behavior, and misunderstanding instructions regarding reporting. A number of studies (O’Leary & Arias, 1988; Riggs et al., 1989; Straus, 1979) have suggested that underreporting or overreporting of perpetration of abuse also may contribute to systematic error. Females, for example, may underreport abuse because of fear, shame, or social desirability factors and overreport abuse if they are seeking help. Males may underreport abuse because of embarrassment and social desirability factors and overreport abuse if they are attempting to justify the report.
Just as the amount of abuse one partner suffers at the hands of the other may never be known, so the amount of test variance that is true relative to error may never be known. A so-called true score, as Stanley (1971, p. 361) put it, is “not the ultimate fact in the book of the recording angel.” Further, the utility of the methods used for estimating true versus error variance is a hotly debated matter (see Collins, 1996; Humphreys, 1996; Williams & Zimmerman, 1996a, 1996b). Let’s take a closer look at such estimates and how they are derived.
Test-Retest Reliability Estimates
A ruler made from the highest-quality steel can be a very reliable instrument of measurement. Every time you measure something that is exactly 12 inches long, for example, your ruler will tell you that what you are measuring is exactly 12 inches long. The reliability of this instrument of measurement may also be said to be stable over time. Whether you measure the 12 inches today, tomorrow, or next year, the ruler is still going to measure 12 inches as 12 inches. By contrast, a ruler constructed of putty might be a very unreliable instrument of measurement. One minute it could measure some known 12-inch standard as 12 inches, the next minute it could measure it as 14 inches, and a week later it could measure it as 18 inches. One way of estimating the reliability of a measuring instrument is by using the same instrument to measure the same thing at two points in time. In psychometric parlance, this approach to reliability evaluation is called the test-retest method, and the result of such an evaluation is an estimate of test-retest reliability.
Test-retest reliability is an estimate of reliability obtained by correlating pairs of scores from the same people on two different administrations of the same test. The test-retest measure is appropriate when evaluating the reliability of a test that purports to measure something that is relatively stable over time, such as a personality trait. If the characteristic being measured is assumed to fluctuate over time, then there would be little sense in assessing the reliability of the test using the test-retest method.
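In practice, the test-retest estimate is simply the Pearson correlation between the two sets of scores. A minimal sketch (hypothetical Python with made-up scores):

```python
import statistics

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson product-moment correlation between paired scores."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (statistics.pstdev(x) * statistics.pstdev(y) * len(x))

# Scores of six testtakers on the same test, administered twice.
time1 = [12.0, 15.0, 9.0, 20.0, 17.0, 11.0]
time2 = [13.0, 14.0, 10.0, 19.0, 18.0, 10.0]
print(round(pearson_r(time1, time2), 2))  # 0.96, the test-retest estimate
```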
As time passes, people change. For example, people may learn new things, forget some things, and acquire new skills. It is generally the case (although there are exceptions) that, as the time interval between administrations of the same test increases, the correlation between the scores obtained on each testing decreases. The passage of time can be a source of error variance. The longer the time that passes, the greater the likelihood that the reliability coefficient will be lower. When the interval between testings is greater than six months, the estimate of test-retest reliability is often referred to as the coefficient of stability.
An estimate of test-retest reliability from a math test might be low if the testtakers took a math tutorial before the second test was administered. An estimate of test-retest reliability from a personality profile might be low if the testtaker suffered some emotional trauma or received counseling during the intervening period. A low estimate of test-retest reliability might be found even when the interval between testings is relatively brief. This may well be the case when the testings occur during a time of great developmental change with respect to the variables they are designed to assess. An evaluation of a test-retest reliability coefficient must therefore extend beyond the magnitude of the obtained coefficient. If we are to come to proper conclusions about the reliability of the measuring instrument, evaluation of a test-retest reliability estimate must extend to a consideration of possible intervening factors between test administrations.
An estimate of test-retest reliability may be most appropriate in gauging the reliability of tests that employ outcome measures such as reaction time or perceptual judgments (including discriminations of brightness, loudness, or taste). However, even in measuring variables such as these, and even when the time period between the two administrations of the test is relatively small, various factors (such as experience, practice, memory, fatigue, and motivation) may intervene and confound an obtained measure of reliability.2
Taking a broader perspective, psychological science, and science in general, demands that the measurements obtained by one experimenter be replicable by other experimenters using the same instruments of measurement and following the same procedures. However, as observed in this chapter’s Close-Up, a replicability problem of epic proportions appears to be brewing.
Psychology’s Replicability Crisis*
In the mid-2000s, academic scientists became concerned that science was not being performed rigorously enough to prevent spurious results from reaching consensus within the scientific community. In other words, they worried that scientific findings, although peer-reviewed and published, were not replicable by independent parties. Since that time, hundreds of researchers have endeavored to determine if there is really a problem, and if there is, how to curb it. In 2015, a group of researchers called the Open Science Collaboration attempted to redo 100 psychology studies that had already been peer-reviewed and published in leading journals (Open Science Collaboration, 2015). Their results, published in the journal Science, indicated that, depending on the criteria used, only 40–60% of replications found the same results as the original studies. This low replication rate helped confirm that science indeed had a problem with replicability, the seriousness of which is reflected in the term replicability crisis.
Why and how did this crisis of replicability emerge? Here it will be argued that the major causal factors are (1) a general lack of published replication attempts in the professional literature, (2) editorial preferences for positive over negative findings, and (3) questionable research practices on the part of authors of published studies. Let’s consider each of these factors.
Lack of Published Replication Attempts
Journals have long preferred to publish novel results instead of replications of previous work. In fact, a recent study found that only 1.07% of the published psychological scientific literature sought to directly replicate previous work (Makel et al., 2012). Academic scientists, who depend on publication in order to progress in their careers, respond to this bias by focusing their research on unexplored phenomena instead of replications. The implications for science are dire. Replication by independent parties provides for confidence in a finding, reducing the likelihood of experimenter bias and statistical anomaly. Indeed, had scientists been as focused on replication as they were on hunting down novel results, the field would likely not be in crisis now.
Editorial Preference for Positive over Negative Findings
Journals prefer positive over negative findings. “Positive” in this context does not refer to how upbeat, beneficial, or heart-warming the study is. Rather, positive refers to whether the study concluded that an experimental effect existed. Stated another way, and drawing on your recall from that class you took in experimental methods, positive findings typically entail a rejection of the null hypothesis. In essence, from the perspective of most journals, rejecting the null hypothesis as a result of a research study is a newsworthy event. By contrast, accepting the null hypothesis might just amount to “old news.”
The fact that journals are more apt to publish positive rather than negative studies has consequences in terms of the types of studies that even get submitted for publication. Studies submitted for publication typically report the existence of an effect rather than the absence of one. The vast majority of studies that actually get published also report the existence of an effect. Those studies designed to disconfirm reports of published effects are few and far between to begin with, and may not be deemed publishable even when they are conducted and submitted to a journal for review. The net result is that scientists, policy-makers, judges, and anyone else who has occasion to rely on published research may have a difficult time determining the actual strength and robustness of a reported finding.
Questionable Research Practices (QRPs)
In this admittedly nonexhaustive review of factors contributing to the replicability crisis, the third factor is QRPs. Included here are questionable scientific practices that do not rise to the level of fraud but still introduce error into bodies of scientific evidence. For example, a recent survey of psychological scientists found that nearly 60% of the respondents reported that they decided to collect more data after peeking to see if their already-collected data had reached statistical significance (John et al., 2012). While this procedure may seem relatively benign, it is not. Imagine you are trying to determine if a nickel is fair, or weighted toward heads. Rather than establishing the number of flips you plan on performing prior to your “test,” you just start flipping and from time to time check how many times the coin has come up heads. After a run of five heads, you notice that your weighted-coin hypothesis is looking strong and decide to stop flipping. The nonindependence between the decision to collect data and the data themselves introduces bias. Over the course of many studies, such practices can seriously undermine a body of research.
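The bias this optional stopping introduces is easy to demonstrate by simulation. In the hypothetical Python sketch below (parameters our own), a perfectly fair coin is flipped while the “researcher” checks for significance along the way and stops as soon as a check happens to look significant; this inflates the rate of spurious positive findings well above the nominal 5%.

```python
import random

random.seed(1)

def peeking_study(max_flips: int = 100, peek_every: int = 10,
                  z_crit: float = 1.96) -> bool:
    """Flip a FAIR coin, checking for 'significance' at each peek.

    Returns True if the researcher ever stops early and declares the coin
    weighted -- a false positive, since the coin is actually fair.
    """
    heads = 0
    for flip in range(1, max_flips + 1):
        heads += random.random() < 0.5
        if flip % peek_every == 0:
            # z test for a proportion against p = .5 (normal approximation)
            z = (heads - flip * 0.5) / (0.25 * flip) ** 0.5
            if abs(z) > z_crit:
                return True  # stop flipping and "reject the null"
    return False

trials = 10_000
false_positives = sum(peeking_study() for _ in range(trials)) / trials
print(false_positives)  # well above the nominal .05 (roughly .15 to .20)
```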
There are many other sorts of QRPs. For example, one variety entails the researcher failing to report all of the research undertaken in a research program, and then selectively reporting only the studies that confirm a particular hypothesis. With only the published study in hand, and without access to the researchers’ records, it would be difficult if not impossible for the research consumer to discern important milestones in the chronology of the research (such as what studies were conducted in what sequence, and what measurements were taken).
One proposed remedy for such QRPs is preregistration (Eich, 2014). Preregistration involves publicly committing to a set of procedures prior to carrying out a study. Using such a procedure, there can be no doubt as to the number of observations planned, and the number of measures anticipated. In fact, there are now several websites that allow researchers to preregister their research plans. It is also increasingly common for academic journals to demand preregistration (or at least a good explanation for why the study wasn’t preregistered). Alternatively, some journals award special recognition to studies that were preregistered so that readers can have more confidence in the replicability of the reported findings.
Lessons Learned from the Replicability Crisis
The replicability crisis represents an important learning opportunity for scientists and students. Prior to such replicability issues coming to light, it was typically assumed that science would simply self-correct over the long run. This means that at some point in time, the nonreplicable study would be exposed as such, and the scientific record would somehow be straightened out. Of course, while some self-correction does occur, it occurs neither fast enough nor often enough, nor in sufficient magnitude. The stark reality is that unreliable findings that reach general acceptance can stay in place for decades before they are eventually disconfirmed. And even when such long-standing findings are proven incorrect, there is no mechanism in place to alert other scientists and the public of this fact.
Traditionally, science has only been admitted into courtrooms if an expert attests that the science has reached “general acceptance” in the scientific community from which it comes. However, in the wake of science’s replicability crisis, it is not at all uncommon for findings to meet this general acceptance standard. Sadly, the standard may be met even if the findings from the subject study are questionable at best, or downright inaccurate at worst. Fortunately, another legal test has been created in recent years (Chin, 2014). In this test, judges are asked to play a gatekeeper role and only admit scientific evidence if it has been properly tested, has a sufficiently low error rate, and has been peer-reviewed and published. In this latter test, judges can ask more sensible questions, such as whether the study has been replicated and if the testing was done using a safeguard like preregistration.
Spurred by the recognition of a crisis of replicability, science is moving to right both past and potential wrongs. As previously noted, there are now mechanisms in place for preregistration of experimental designs and growing acceptance of the importance of doing so. Further, organizations that provide for open science (e.g., easy and efficient preregistration) are receiving millions of dollars in funding to provide support for researchers seeking to perform more rigorous research. Moreover, replication efforts—beyond even that of the Open Science Collaboration—are becoming more common (Klein et al., 2013). Overall, it appears that most scientists now recognize replicability as a concern that needs to be addressed with meaningful changes to what has constituted “business-as-usual” for so many years.
Effectively addressing the replicability crisis is important for any profession that relies on scientific evidence. Within the field of law, for example, science is used every day in courtrooms throughout the world to prosecute criminal cases and adjudicate civil disputes. Everyone from a criminal defendant facing capital punishment to a major corporation arguing that its violent video games did not promote real-life violence may rely at some point in a trial on a study published in a psychology journal. Appeals are sometimes limited. Costs associated with legal proceedings are often prohibitive. With a momentous verdict in the offing, none of the litigants has the luxury of time—which might amount to decades, if it happens at all—for the scholarly research system to self-correct.
When it comes to psychology’s replicability crisis, there is good and bad news. The bad news is that it is real and that it has existed, perhaps, since scientific studies were first published. The good news is that the problem has finally been recognized, and constructive steps are being taken to address it.
Used with permission of Jason Chin.
*This Close-Up was guest-authored by Jason Chin of the University of Toronto.
Parallel-Forms and Alternate-Forms Reliability Estimates
If you have ever taken a makeup exam in which the questions were not all the same as on the test initially given, you have had experience with different forms of a test. And if you have ever wondered whether the two forms of the test were really equivalent, you have wondered about the alternate-forms or parallel-forms reliability of the test. The degree of the relationship between various forms of a test can be evaluated by means of an alternate-forms or parallel-forms coefficient of reliability, which is often termed the coefficient of equivalence.
Although frequently used interchangeably, there is a difference between the terms alternate forms and parallel forms. Parallel forms of a test exist when, for each form of the test, the means and the variances of observed test scores are equal. In theory, the means of scores obtained on parallel forms correlate equally with the true score. More practically, scores obtained on parallel tests correlate equally with other measures. The term parallel forms reliability refers to an estimate of the extent to which item sampling and other errors have affected test scores on versions of the same test when, for each form of the test, the means and variances of observed test scores are equal.
Alternate forms are simply different versions of a test that have been constructed so as to be parallel. Although they do not meet the requirements for the legitimate designation “parallel,” alternate forms of a test are typically designed to be equivalent with respect to variables such as content and level of difficulty. The term alternate forms reliability refers to an estimate of the extent to which these different forms of the same test have been affected by item sampling error, or other error.
JUST THINK . . .
You missed the midterm examination and have to take a makeup exam. Your classmates tell you that they found the midterm impossibly difficult. Your instructor tells you that you will be taking an alternate form, not a parallel form, of the original test. How do you feel about that?
Obtaining estimates of alternate-forms reliability and parallel-forms reliability is similar in two ways to obtaining an estimate of test-retest reliability: (1) Two test administrations with the same group are required, and (2) test scores may be affected by factors such as motivation, fatigue, or intervening events such as practice, learning, or therapy (although not as much as when the same test is administered twice). An additional source of error variance, item sampling, is inherent in the computation of an alternate- or parallel-forms reliability coefficient. Testtakers may do better or worse on a specific form of the test not as a function of their true ability but simply because of the particular items that were selected for inclusion in the test.3
Developing alternate forms of tests can be time-consuming and expensive. Imagine what might be involved in trying to create sets of equivalent items and then getting the same people to sit for repeated administrations of an experimental test! On the other hand, once an alternate or parallel form of a test has been developed, it is advantageous to the test user in several ways. For example, it minimizes the effect of memory for the content of a previously administered form of the test.
JUST THINK . . .
From the perspective of the test user, what are other possible advantages of having alternate or parallel forms of the same test?
Certain traits are presumed to be relatively stable in people over time, and we would expect tests measuring those traits—alternate forms, parallel forms, or otherwise—to reflect that stability. As an example, we expect that there will be, and in fact there is, a reasonable degree of stability in scores on intelligence tests. Conversely, we might expect relatively little stability in scores obtained on a measure of state anxiety (anxiety felt at the moment).
An estimate of the reliability of a test can be obtained without developing an alternate form of the test and without having to administer the test twice to the same people. Deriving this type of estimate entails an evaluation of the internal consistency of the test items. Logically enough, it is referred to as an internal consistency estimate of reliability or as an estimate of inter-item consistency. There are different methods of obtaining internal consistency estimates of reliability. One such method is the split-half estimate.
Split-Half Reliability Estimates
An estimate of split-half reliability is obtained by correlating two pairs of scores obtained from equivalent halves of a single test administered once. It is a useful measure of reliability when it is impractical or undesirable to assess reliability with two tests or to administer a test twice (because of factors such as time or expense). The computation of a coefficient of split-half reliability generally entails three steps:
· Step 1. Divide the test into equivalent halves.
· Step 2. Calculate a Pearson r between scores on the two halves of the test.
· Step 3. Adjust the half-test reliability using the Spearman–Brown formula (discussed shortly).
When it comes to calculating split-half reliability coefficients, there’s more than one way to split a test—but there are some ways you should never split a test. Simply dividing the test in the middle is not recommended because it’s likely that this procedure would spuriously raise or lower the reliability coefficient. Different amounts of fatigue for the first as opposed to the second part of the test, different amounts of test anxiety, and differences in item difficulty as a function of placement in the test are all factors to consider.
One acceptable way to split a test is to randomly assign items to one or the other half of the test. Another acceptable way to split a test is to assign odd-numbered items to one half of the test and even-numbered items to the other half. This method yields an estimate of split-half reliability that is also referred to as odd-even reliability.4 Yet another way to split a test is to divide the test by content so that each half contains items equivalent with respect to content and difficulty. In general, a primary objective in splitting a test in half for the purpose of obtaining a split-half reliability estimate is to create what might be called “mini-parallel-forms,” with each half equal to the other—or as nearly equal as humanly possible—in format, style, statistical properties, and related aspects.
Step 2 in the procedure entails the computation of a Pearson r, which requires little explanation at this point. However, the third step requires the use of the Spearman–Brown formula.
The Spearman–Brown formula
The Spearman–Brown formula allows a test developer or user to estimate internal consistency reliability from a correlation of two halves of a test. It is a specific application of a more general formula to estimate the reliability of a test that is lengthened or shortened by any number of items. Because the reliability of a test is affected by its length, a formula is necessary for estimating the reliability of a test that has been shortened or lengthened. The general Spearman–Brown ($r_{SB}$) formula is

$$r_{SB} = \frac{n\,r_{xy}}{1 + (n - 1)\,r_{xy}}$$

where $r_{SB}$ is equal to the reliability adjusted by the Spearman–Brown formula, $r_{xy}$ is equal to the Pearson r in the original-length test, and n is equal to the number of items in the revised version divided by the number of items in the original version.
By determining the reliability of one half of a test, a test developer can use the Spearman–Brown formula to estimate the reliability of a whole test. Because a whole test is two times longer than half a test, n becomes 2 in the Spearman–Brown formula for the adjustment of split-half reliability. With $r_{hh}$ standing for the Pearson r of scores in the two half tests, the split-half adjustment is

$$r_{SB} = \frac{2\,r_{hh}}{1 + r_{hh}}$$
Usually, but not always, reliability increases as test length increases. Ideally, the additional test items are equivalent with respect to the content and the range of difficulty of the original items. Estimates of reliability based on consideration of the entire test therefore tend to be higher than those based on half of a test. Table 5–1 shows half-test correlations presented alongside adjusted reliability estimates for the whole test. You can see that all the adjusted correlations are higher than the unadjusted correlations. This is so because Spearman–Brown estimates are based on a test that is twice as long as the original half test. For the data from the kindergarten pupils, for example, a half-test reliability of .718 is estimated to be equivalent to a whole-test reliability of .836.
|Table 5–1 Odd-Even Reliability Coefficients before and after the Spearman–Brown Adjustment*|
|Grade||Half-Test Correlation (unadjusted r)||Whole-Test Estimate (rSB)|
*For scores on a test of mental ability
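As a minimal sketch of the whole three-step procedure, the following Python snippet splits a set of hypothetical right/wrong item scores into odd and even halves, correlates the halves, and applies the Spearman–Brown adjustment. The data, sample size, and item count are invented for illustration, and the spearman_brown helper is a hypothetical function written for this example.

```python
import numpy as np

def spearman_brown(r, n=2.0):
    """General Spearman-Brown adjustment; n is the ratio of new length
    to original length (n = 2 for the split-half case)."""
    return (n * r) / (1 + (n - 1) * r)

# Hypothetical data: 50 testtakers, 10 right/wrong items driven by a
# common ability, so the halves should correlate positively.
rng = np.random.default_rng(0)
ability = rng.normal(size=(50, 1))
scores = (ability + rng.normal(size=(50, 10)) > 0).astype(int)

odd = scores[:, 0::2].sum(axis=1)    # "odd" items (1st, 3rd, 5th, ...)
even = scores[:, 1::2].sum(axis=1)   # "even" items (2nd, 4th, 6th, ...)

r_hh = np.corrcoef(odd, even)[0, 1]        # Step 2: Pearson r between halves
print(round(spearman_brown(r_hh), 3))      # Step 3: whole-test estimate

# The kindergarten value from Table 5-1: a half-test r of .718 adjusts to
print(round(spearman_brown(0.718), 3))     # -> 0.836
```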
If test developers or users wish to shorten a test, the Spearman–Brown formula may be used to estimate the effect of the shortening on the test’s reliability. Reduction in test size for the purpose of reducing test administration time is a common practice in certain situations. For example, the test administrator may have only limited time with a particular testtaker or group of testtakers. Reduction in test size may be indicated in situations where boredom or fatigue could produce responses of questionable meaningfulness.
JUST THINK . . .
What are other situations in which a reduction in test size or the time it takes to administer a test might be desirable? What are the arguments against reducing test size?
A Spearman–Brown formula could also be used to determine the number of items needed to attain a desired level of reliability. In adding items to increase test reliability to a desired level, the rule is that the new items must be equivalent in content and difficulty so that the longer test still measures what the original test measured. If the reliability of the original test is relatively low, then it may be impractical to increase the number of items to reach an acceptable level of reliability. One alternative would be to abandon this relatively unreliable instrument and locate—or develop—a suitable replacement. Another would be to try to raise the reliability of the instrument in some way: for example, by creating new items, clarifying the test’s instructions, or simplifying the scoring rules.
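Solving the general Spearman–Brown formula for n gives the factor by which a test must be lengthened to reach a target reliability. Here is a minimal sketch of that calculation; the 10-item test with a reliability of .60 and the .80 target are hypothetical values chosen for illustration.

```python
from math import ceil

def length_factor(r_original, r_desired):
    """Solve the general Spearman-Brown formula for n: the factor by
    which test length must change to reach the desired reliability."""
    return (r_desired * (1 - r_original)) / (r_original * (1 - r_desired))

# Hypothetical: a 10-item test with reliability .60 and a target of .80.
n = length_factor(0.60, 0.80)
print(round(n, 2))   # 2.67: the test must grow to about 2.67 times its length
print(ceil(10 * n))  # -> 27 equivalent items in the lengthened test
```

The factor grows quickly as the target rises, which is one reason a very unreliable test often cannot be rescued by lengthening alone.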
Internal consistency estimates of reliability, such as that obtained by use of the Spearman–Brown formula, are inappropriate for measuring the reliability of heterogeneous tests and speed tests. The impact of test characteristics on reliability is discussed in detail later in this chapter.
Other Methods of Estimating Internal Consistency
In addition to the Spearman–Brown formula, other methods used to obtain estimates of internal consistency reliability include formulas developed by Kuder and Richardson (1937) and Cronbach (1951). Inter-item consistency refers to the degree of correlation among all the items on a scale. A measure of inter-item consistency is calculated from a single administration of a single form of a test. An index of inter-item consistency, in turn, is useful in assessing the homogeneity of the test. Tests are said to be homogeneous if they contain items that measure a single trait. As applied to tests, homogeneity (derived from the Greek words homos, meaning “same,” and genos, meaning “kind”) is the degree to which a test measures a single factor. In other words, homogeneity is the extent to which items in a scale are unifactorial.
In contrast to test homogeneity, heterogeneity describes the degree to which a test measures different factors. A heterogeneous (or nonhomogeneous) test is composed of items that measure more than one trait. A test that assesses knowledge only of ultra high definition (UHD) television repair skills could be expected to be more homogeneous in content than a general electronics repair test. The former test assesses only one area whereas the latter assesses several, such as knowledge not only of UHD televisions but also of digital video recorders, Blu-Ray players, MP3 players, satellite radio receivers, and so forth.
The more homogeneous a test is, the more inter-item consistency it can be expected to have. Because a homogeneous test samples a relatively narrow content area, it can be expected to contain more inter-item consistency than a heterogeneous test. Test homogeneity is desirable because it allows relatively straightforward test-score interpretation. Testtakers with the same score on a homogeneous test probably have similar abilities in the area tested. Testtakers with the same score on a more heterogeneous test may have quite different abilities.
Although a homogeneous test is desirable because it so readily lends itself to clear interpretation, it is often an insufficient tool for measuring multifaceted psychological variables such as intelligence or personality. One way to circumvent this potential source of difficulty has been to administer a series of homogeneous tests, each designed to measure some component of a heterogeneous variable.5
The Kuder–Richardson formulas
Dissatisfaction with existing split-half methods of estimating reliability compelled G. Frederic Kuder and M. W. Richardson (1937; Richardson & Kuder, 1939) to develop their own measures for estimating reliability. The most widely known of the many formulas they collaborated on is their Kuder–Richardson formula 20 , or KR-20, so named because it was the 20th formula developed in a series. Where test items are highly homogeneous, KR-20 and split-half reliability estimates will be similar. However, KR-20 is the statistic of choice for determining the inter-item consistency of dichotomous items, primarily those items that can be scored right or wrong (such as multiple-choice items). If test items are more heterogeneous, KR-20 will yield lower reliability estimates than the split-half method. Table 5–2 summarizes items on a sample heterogeneous test (the HERT), and Table 5–3 summarizes HERT performance for 20 testtakers. Assuming the difficulty level of all the items on the test to be about the same, would you expect a split-half (odd-even) estimate of reliability to be fairly high or low? How would the KR-20 reliability estimate compare with the odd-even estimate of reliability—would it be higher or lower?
|Table 5–2 Content Areas Sampled for 18 Items of the Hypothetical Electronics Repair Test (HERT)|
|Item Number||Content Area|
|3||Digital video recorder (DVR)|
|4||Digital video recorder (DVR)|
|11||Compact disc player|
|12||Compact disc player|
|13||Satellite radio receiver|
|14||Satellite radio receiver|

|Table 5–3 Performance on the 18-Item HERT by Item for 20 Testtakers|
|Item Number||Number of Testtakers Correct|
We might guess that, because the content areas sampled for the 18 items from this “Hypothetical Electronics Repair Test” are ordered in a manner whereby odd and even items tap the same content area, the odd-even reliability estimate will probably be quite high. Because of the great heterogeneity of content areas when taken as a whole, it could reasonably be predicted that the KR-20 estimate of reliability will be lower than the odd-even one. How is KR-20 computed? The following formula may be used:

$$r_{KR20} = \frac{k}{k-1}\left(1 - \frac{\sum pq}{\sigma^2}\right)$$
where $r_{KR20}$ stands for the Kuder–Richardson formula 20 reliability coefficient, k is the number of test items, σ² is the variance of total test scores, p is the proportion of testtakers who pass the item, q is the proportion of people who fail the item, and Σpq is the sum of the pq products over all items. For this particular example, k equals 18. Based on the data in Table 5–3, Σpq can be computed to be 3.975. The variance of total test scores is 5.26. Thus, $r_{KR20}$ = (18/17)(1 − 3.975/5.26) = .259.
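The computation is easy to script. The following is a minimal sketch assuming item responses coded 0/1 in a rows-by-columns array; the kr20 function and the small demonstration data set are hypothetical, and the final lines simply replay the HERT numbers reported above.

```python
import numpy as np

def kr20(scores):
    """KR-20 for dichotomously scored (0/1) items.
    scores: 2-D array, rows = testtakers, columns = items."""
    scores = np.asarray(scores)
    k = scores.shape[1]                    # number of items
    p = scores.mean(axis=0)                # proportion passing each item
    q = 1 - p                              # proportion failing each item
    var_total = scores.sum(axis=1).var()   # variance of total test scores
    return (k / (k - 1)) * (1 - (p * q).sum() / var_total)

# Hypothetical 0/1 scores for five testtakers on four items:
demo = [[1, 1, 0, 1], [0, 1, 0, 0], [1, 1, 1, 1], [0, 0, 0, 1], [1, 0, 1, 1]]
print(round(kr20(demo), 2))  # -> 0.52

# Replaying the HERT values from the text:
k, sum_pq, var_total = 18, 3.975, 5.26
print(round((k / (k - 1)) * (1 - sum_pq / var_total), 3))  # -> 0.259
```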
An approximation of KR-20 can be obtained by the use of the 21st formula in the series developed by Kuder and Richardson, a formula known as—you guessed it—KR-21. The KR-21 formula may be used if there is reason to assume that all the test items have approximately the same degree of difficulty. Let’s add that this assumption is seldom justified. Formula KR-21 has become outdated in an era of calculators and computers. Way back when, KR-21 was sometimes used to estimate KR-20 only because it required many fewer calculations.
Numerous modifications of Kuder–Richardson formulas have been proposed through the years. The one variant of the KR-20 formula that has received the most acceptance and is in widest use today is a statistic called coefficient alpha. You may even hear it referred to as coefficient α−20. This expression incorporates both the Greek letter alpha (α) and the number 20, the latter a reference to KR-20.
Developed by Cronbach (1951) and subsequently elaborated on by others (such as Kaiser & Michael, 1975; Novick & Lewis, 1967), coefficient alpha may be thought of as the mean of all possible split-half correlations, corrected by the Spearman–Brown formula. In contrast to KR-20, which is appropriately used only on tests with dichotomous items, coefficient alpha is appropriate for use on tests containing nondichotomous items. The formula for coefficient alpha is

$$r_{\alpha} = \frac{k}{k-1}\left(1 - \frac{\sum \sigma_i^2}{\sigma^2}\right)$$

where $r_{\alpha}$ is coefficient alpha, k is the number of items, $\sigma_i^2$ is the variance of one item, $\sum \sigma_i^2$ is the sum of the variances of each of the items, and σ² is the variance of the total test scores.
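A minimal sketch of the calculation follows; the coefficient_alpha function and the five-testtaker, three-item rating matrix are hypothetical and exist only to illustrate the formula just given.

```python
import numpy as np

def coefficient_alpha(scores):
    """Cronbach's alpha. scores: 2-D array, rows = testtakers,
    columns = items (items may be multipoint, e.g., 1-7 ratings)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                        # number of items
    item_variances = scores.var(axis=0)        # variance of each item
    total_variance = scores.sum(axis=1).var()  # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 7-point ratings for five testtakers on three items:
ratings = [[4, 5, 6], [2, 3, 2], [6, 6, 7], [3, 4, 4], [5, 5, 6]]
print(round(coefficient_alpha(ratings), 2))  # -> 0.96: items track one another
```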
Coefficient alpha is the preferred statistic for obtaining an estimate of internal consistency reliability. A variation of the formula has been developed for use in obtaining an estimate of test-retest reliability (Green, 2003). Essentially, this formula yields an estimate of the mean of all possible test-retest, split-half coefficients. Coefficient alpha is widely used as a measure of reliability, in part because it requires only one administration of the test.
Unlike a Pearson r, which may range in value from −1 to +1, coefficient alpha typically ranges in value from 0 to 1. The reason for this is that, conceptually, coefficient alpha (much like other coefficients of reliability) is calculated to help answer questions about how similar sets of data are. Here, similarity is gauged, in essence, on a scale from 0 (absolutely no similarity) to 1 (perfectly identical). It is possible, however, to conceive of data sets that would yield a negative value of alpha (Streiner, 2003b). Still, because negative values of alpha are theoretically impossible, it is recommended under such rare circumstances that the alpha coefficient be reported as zero (Henson, 2001). Also, a myth about alpha is that “bigger is always better.” As Streiner (2003b) pointed out, a value of alpha above .90 may be “too high” and indicate redundancy in the items.
In contrast to coefficient alpha, a Pearson r may be thought of as dealing conceptually with both dissimilarity and similarity. Accordingly, an r value of −1 may be thought of as indicating “perfect dissimilarity.” In practice, most reliability coefficients—regardless of the specific type of reliability they are measuring—range in value from 0 to 1. This is generally true, although it is possible to conceive of exceptional cases in which data sets yield an r with a negative value.
Average proportional distance (APD)
A relatively new measure for evaluating the internal consistency of a test is the average proportional distance (APD) method (Sturman et al., 2009). Rather than focusing on similarity between scores on items of a test (as do split-half methods and Cronbach’s alpha), the APD is a measure that focuses on the degree of difference that exists between item scores. Accordingly, we define the average proportional distance method as a measure used to evaluate the internal consistency of a test that focuses on the degree of difference that exists between item scores.
To illustrate how the APD is calculated, consider the (hypothetical) “3-Item Test of Extraversion” (3-ITE). As conveyed by the title of the 3-ITE, it is a test that has only three items. Each of the items is a sentence that somehow relates to extraversion. Testtakers are instructed to respond to each of the three items with reference to the following 7-point scale: 1 = Very strongly disagree, 2 = Strongly disagree, 3 = Disagree, 4 = Neither Agree nor Disagree, 5 = Agree, 6 = Strongly agree, and 7 = Very strongly agree.
Typically, in order to evaluate the inter-item consistency of a scale, the APD would be calculated for a group of testtakers. However, for the purpose of illustrating the calculations of this measure, let’s look at how the APD would be calculated for one testtaker. Yolanda scores 4 on Item 1, 5 on Item 2, and 6 on Item 3. Based on Yolanda’s scores, the APD would be calculated as follows:
· Step 1: Calculate the absolute difference between scores for all of the items.
· Step 2: Average the difference between scores.
· Step 3: Obtain the APD by dividing the average difference between scores by the number of response options on the test, minus one.
So, for the 3-ITE, here is how the calculations would look using Yolanda’s test scores:
· Step 1: Absolute difference between Items 1 and 2 = 1
· Absolute difference between Items 1 and 3 = 2
· Absolute difference between Items 2 and 3 = 1
· Step 2: In order to obtain the average difference (AD), add up the absolute differences in Step 1 and divide by the number of items: AD = (1 + 2 + 1)/3 ≈ 1.33.
· Step 3: To obtain the average proportional distance (APD), divide the average difference by 6 (the 7 response options in our ITE scale minus 1). Using Yolanda’s data, we would divide 1.33 by 6 to get .22. Thus, the APD for the ITE is .22. But what does this mean?
The general “rule of thumb” for interpreting an APD is that an obtained value of .2 or lower is indicative of excellent internal consistency, and that a value between .2 and .25 is in the acceptable range. A calculated APD of more than .25 is suggestive of problems with the internal consistency of the test. These guidelines are based on the assumption that items measuring a single construct such as extraversion should ideally be correlated with one another in the .6 to .7 range. Let’s add that the expected inter-item correlation may vary depending on the variables being measured, so the ideal correlation values are not set in stone. In the case of the 3-ITE, the data for our one subject suggest that the scale has acceptable internal consistency. Of course, in order to make any meaningful conclusions about the internal consistency of the 3-ITE, the instrument would have to be tested with a large sample of testtakers.
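For readers who want to check the arithmetic, here is a minimal sketch of the three steps for a single testtaker; the apd function is a hypothetical helper written for this illustration.

```python
from itertools import combinations

def apd(item_scores, n_response_options):
    """Average proportional distance for one testtaker's item scores."""
    # Step 1: absolute differences between every pair of item scores.
    diffs = [abs(a - b) for a, b in combinations(item_scores, 2)]
    # Step 2: average those differences.
    avg_diff = sum(diffs) / len(diffs)
    # Step 3: divide by the number of response options minus one.
    return avg_diff / (n_response_options - 1)

print(round(apd([4, 5, 6], 7), 2))  # Yolanda's 3-ITE scores -> 0.22
```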
One potential advantage of the APD method over using Cronbach’s alpha is that the APD index is not connected to the number of items on a measure. Cronbach’s alpha will be higher when a measure has more than 25 items (Cortina, 1993). Perhaps the best course of action when evaluating the internal consistency of a given measure is to analyze and integrate the information using several indices, including Cronbach’s alpha, mean inter-item correlations, and the APD.
Before proceeding, let’s emphasize that all indices of reliability provide an index that is a characteristic of a particular group of test scores, not of the test itself (Caruso, 2000; Yin & Fan, 2000). Measures of reliability are estimates, and estimates are subject to error. The precise amount of error inherent in a reliability estimate will vary with various factors, such as the sample of testtakers from which the data were drawn. A reliability index published in a test manual might be very impressive. However, keep in mind that the reported reliability was achieved with a particular group of testtakers. If a new group of testtakers is sufficiently different from the group of testtakers on whom the reliability studies were done, the reliability coefficient may not be as impressive—and may even be unacceptable.
Measures of Inter-Scorer Reliability
When being evaluated, we usually would like to believe that the results would be the same no matter who is doing the evaluating.6 For example, if you take a road test for a driver’s license, you would like to believe that whether you pass or fail is solely a matter of your performance behind the wheel and not a function of who is sitting in the passenger’s seat. Unfortunately, in some types of tests under some conditions, the score may be more a function of the scorer than of anything else. This was demonstrated back in 1912, when researchers presented one pupil’s English composition to a convention of teachers, where volunteer graders scored the paper. The grades ranged from a low of 50% to a high of 98% (Starch & Elliott, 1912). Concerns about inter-scorer reliability are as relevant today as they were back then (Chmielewski et al., 2015; Edens et al., 2015; Penney et al., 2016). With this as background, it can be appreciated that certain tests lend themselves to scoring in a way that is more consistent than with other tests. It is meaningful, therefore, to raise questions about the degree of consistency, or reliability, that exists between scorers of a particular test.
Variously referred to as scorer reliability, judge reliability, observer reliability, and inter-rater reliability, inter-scorer reliability is the degree of agreement or consistency between two or more scorers (or judges or raters) with regard to a particular measure. Reference to levels of inter-scorer reliability for a particular test may be published in the test’s manual or elsewhere. If the reliability coefficient is high, the prospective test user knows that test scores can be derived in a systematic, consistent way by various scorers with sufficient training. A responsible test developer who is unable to create a test that can be scored with a reasonable degree of consistency by trained scorers will go back to the drawing board to discover the reason for this problem. If, for example, the problem is a lack of clarity in scoring criteria, then the remedy might be to rewrite the scoring criteria section of the manual to include clearly written scoring rules. Inter-rater consistency may be promoted by providing raters with the opportunity for group discussion along with practice exercises and information on rater accuracy (Smith, 1986).
Inter-scorer reliability is often used when coding nonverbal behavior. For example, a researcher who wishes to quantify some aspect of nonverbal behavior, such as depressed mood, would start by composing a checklist of behaviors that constitute depressed mood (such as looking downward and moving slowly). Accordingly, each subject would be given a depressed mood score by a rater. Researchers try to guard against such ratings being products of the rater’s individual biases or idiosyncrasies in judgment. This can be accomplished by having at least one other individual observe and rate the same behaviors. If consensus can be demonstrated in the ratings, the researchers can be more confident regarding the accuracy of the ratings and their conformity with the established rating system.
JUST THINK . . .
Can you think of a measure in which it might be desirable for different judges, scorers, or raters to have different views on what is being judged, scored, or rated?
Perhaps the simplest way of determining the degree of consistency among scorers in the scoring of a test is to calculate a coefficient of correlation. This correlation coefficient is referred to as a coefficient of inter-scorer reliability. In this chapter’s Everyday Psychometrics section, the nature of the relationship between the specific method used and the resulting estimate of diagnostic reliability is considered in greater detail.
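Before turning to that section, here is a minimal sketch of one common agreement statistic, Cohen’s kappa, which corrects the raw proportion of rater agreement for the agreement expected by chance. The cohens_kappa function and the two raters’ codes are hypothetical and exist only to illustrate the idea.

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    p_observed = (a == b).mean()
    # Chance agreement: product of the raters' marginal proportions,
    # summed over every category either rater used.
    p_chance = sum((a == c).mean() * (b == c).mean()
                   for c in np.union1d(a, b))
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical codes (1 = behavior present, 0 = absent) from two raters:
rater_1 = [1, 1, 0, 0, 1, 0, 1, 0]
rater_2 = [1, 0, 0, 0, 1, 0, 1, 1]
print(cohens_kappa(rater_1, rater_2))  # -> 0.5
```

Here the raters agree on six of eight cases (.75 observed agreement), but because each rater used the two codes equally often, half that agreement is expected by chance, leaving a kappa of .5.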
The Importance of the Method Used for Estimating Reliability*
As noted throughout this text, reliability is extremely important in its own right and is also a necessary, but not sufficient, condition for validity. However, researchers often fail to understand that the specific method used to obtain reliability estimates can lead to large differences in those estimates, even when other factors (such as subject sample, raters, and specific reliability statistic used) are held constant. A published study by Chmielewski et al. (2015) highlighted the substantial influence that differences in method can have on estimates of inter-rater reliability.
As one might expect, high levels of diagnostic (inter-rater) reliability are vital for the accurate diagnosis of psychiatric/psychological disorders. Diagnostic reliability must be acceptably high in order to accurately identify risk factors for a disorder that are common to subjects in a research study. Without satisfactory levels of diagnostic reliability, it becomes nearly impossible to accurately determine the effectiveness of treatments in clinical trials. Low diagnostic reliability can also lead to improper information regarding how a disorder changes over time. In applied clinical settings, unreliable diagnoses can result in ineffective patient care—or worse. The utility and validity of a particular diagnosis itself can be called into question if expert diagnosticians cannot, for whatever reason, consistently agree on who should and should not be so diagnosed. In sum, high levels of diagnostic reliability are essential for establishing diagnostic validity (Freedman, 2013; Nelson-Gray, 1991).
The official nomenclature of psychological/psychiatric diagnoses in the United States is the Diagnostic and Statistical Manual of Mental Disorders (DSM-5; American Psychiatric Association, 2013), which provides explicit diagnostic criteria for all mental disorders. A perceived strength of recent versions of the DSM is that disorders listed in the manual can be diagnosed with a high level of inter-rater reliability (Hyman, 2010; Nathan & Langenbucher, 1999), especially when trained professionals use semistructured interviews to assign those diagnoses. However, the field trials for the newest version of the manual, the DSM-5, demonstrated a mean kappa of only .44 (Regier et al., 2013), which is considered a “fair” level of agreement that is only moderately greater than chance (Cicchetti, 1994; Fleiss, 1981). Moreover, DSM-5 kappas were much lower than those from previous versions of the manual, which had been in the “excellent” range. As one might expect, given the assumption that psychiatric diagnoses are reliable, the results of the DSM-5 field trials caused considerable controversy and led to numerous criticisms of the new manual (Frances, 2012; Jones, 2012). Interestingly, several diagnoses, which were unchanged from previous versions of the manual, also demonstrated low diagnostic reliability, suggesting that the manual itself was not responsible for the apparent reduction in reliability. Instead, differences in the methods used to obtain estimates of inter-rater reliability in the DSM-5 Field Trials, compared to estimates for previous versions of the manual, may have led to the lower observed diagnostic reliability.
Prior to DSM-5, estimates of DSM inter-rater reliability were largely derived using the audio-recording method. In the audio-recording method, one clinician interviews a patient and assigns diagnoses. Then a second clinician, who does not know what diagnoses were assigned, listens to an audio-recording (or watches a video-recording) of the interview and independently assigns diagnoses. These two sets of ratings are then used to calculate inter-rater reliability coefficients (such as kappa). However, in recent years, several researchers have made the case that the audio-recording method might inflate estimates of diagnostic reliability for a variety of reasons (Chmielewski et al., 2015; Kraemer et al., 2012). First, if the interviewing clinician decides the patient they are interviewing does not meet diagnostic criteria for a disorder, they typically do not ask about any remaining symptoms of the disorder (this is a feature of semistructured interviews designed to reduce administration times). However, it also means that the clinician listening to the audio-tape, even if they believe the patient might meet diagnostic criteria for a disorder, does not have all the information necessary to assign a diagnosis and therefore is forced to agree that no diagnosis is present. Second, only the interviewing clinician can follow up patient responses with further questions or obtain clarification regarding symptoms to help them make a decision. Third, even when semistructured interviews are used it is possible that two highly trained clinicians might obtain different responses from a patient if they had each conducted their own interview. In other words, the patient may volunteer more or perhaps even different information to one of the clinicians for any number of reasons. All of the above result in the audio- or video-recording method artificially constraining the information provided to the clinicians to be identical, which is unlikely to occur in actual research or clinical settings. As such, this method does not allow for truly independent ratings and therefore likely results in overestimates of what would be obtained if separate interviews were conducted.
In the test-retest method, separate independent interviews are conducted by two different clinicians, with neither clinician knowing what occurred during the other interview. These interviews are conducted over a time frame short enough that true change in diagnostic status is highly unlikely, making this method similar to the dependability method of assessing reliability (Chmielewski & Watson, 2009). Because diagnostic reliability is intended to assess the extent to which a patient would receive the same diagnosis at different hospitals or clinics—or, alternatively, the extent to which different studies are recruiting similar patients—the test-retest method provides a more meaningful, realistic, and ecologically valid estimate of diagnostic reliability.
Chmielewski et al. (2015) examined the influence of method on estimates of reliability by using both the audio-recording and test-retest methods in a large sample of psychiatric patients. The authors analyzed DSM-IV diagnoses because of the long-standing claims in the literature that they were reliable and the fact that structured interviews had not yet been created for the DSM-5. They carefully selected a one-week test-retest interval, based on theory and research, to minimize the likelihood that true diagnostic change would occur while substantially reducing memory effects and patient fatigue that might exist if the interviews were conducted immediately after each other. Clinicians in the study were at least master’s level and underwent extensive training that far exceeded the training of clinicians in the vast majority of research studies. The same pool of clinicians and patients was used for the audio-recording and test-retest methods. Diagnoses were assigned using the Structured Clinical Interview for DSM-IV (SCID-I/P; First et al., 2002), which is widely considered the gold-standard diagnostic interview in the field. Finally, patients completed self-report measures which were examined to ensure patients’ symptoms did not change over the one-week interval.
Diagnostic (inter-rater) reliability using the audio-recording method was very high (mean kappa = .80) and would be considered “excellent” by traditional standards (Cicchetti, 1994; Fleiss, 1981). Moreover, estimates of diagnostic reliability were equivalent or superior to previously published values for the DSM-IV. However, estimates of diagnostic reliability obtained from the test-retest method were substantially lower (mean kappa = .47) and would be considered only “fair” by traditional standards. Moreover, approximately 25% of the disorders demonstrated “poor” diagnostic reliability. Interestingly, this level of diagnostic reliability was very similar to that observed in the DSM-5 Field Trials (mean kappa = .44), which also used the test-retest method (Regier et al., 2013). It is important to note that these large differences in estimates of diagnostic reliability emerged despite the fact that (1) the same highly trained master’s-level clinicians were used for both methods; (2) the SCID-I/P, which is considered the “gold standard” in diagnostic interviews, was used; (3) the same patient sample was used; and (4) patients’ self-report of their symptoms was very stable (or, patients were experiencing their symptoms the same way during both interviews) and any changes in self-report were unrelated to diagnostic disagreements between clinicians. These results suggest that the reliability of diagnoses is far lower than commonly believed. Moreover, the results demonstrate the substantial influence that method has on estimates of diagnostic reliability even when other factors are held constant.
Used with permission of Michael Chmielewski.
*This Everyday Psychometrics was guest-authored by Michael Chmielewski of Southern Methodist University and was based on an article by Chmielewski et al. (2015), published in the Journal of Abnormal Psychology (copyright © 2015 by the American Psychological Association). The use of this information does not imply endorsement by the publisher.
Using and Interpreting a Coefficient of Reliability
We have seen that, with respect to the test itself, there are basically three approaches to the estimation of reliability: (1) test-retest, (2) alternate or parallel forms, and (3) internal or inter-item consistency. The method or methods employed will depend on a number of factors, such as the purpose of obtaining a measure of reliability.
Another question that is linked in no trivial way to the purpose of the test is, “How high should the coefficient of reliability be?” Perhaps the best “short answer” to this question is: “On a continuum relative to the purpose and importance of the decisions to be made on the basis of scores on the test.” Reliability is a mandatory attribute in all tests we use. However, we need more of it in some tests, and we will admittedly allow for less of it in others. If a test score carries with it life-or-death implications, then we need to hold that test to some high standards—including relatively high standards with regard to coefficients of reliability. If a test score is routinely used in combination with many other test scores and typically accounts for only a small part of the decision process, that test will not be held to the highest standards of reliability. As a rule of thumb, it may be useful to think of reliability coefficients in a way that parallels many grading systems: In the .90s rates a grade of A (with a value of .95 or higher for the most important types of decisions), in the .80s rates a B (with below .85 being a clear B−), and anywhere from .65 through the .70s rates a weak, “barely passing” grade that borders on failing (and unacceptable). Now, let’s get a bit more technical with regard to the purpose of the reliability coefficient.
The Purpose of the Reliability Coefficient
If a specific test of employee performance is designed for use at various times over the course of the employment period, it would be reasonable to expect the test to demonstrate reliability across time. It would thus be desirable to have an estimate of the instrument’s test-retest reliability. For a test designed for a single administration only, an estimate of internal consistency would be the reliability measure of choice. If the purpose of determining reliability is to break down the error variance into its parts, as shown in Figure 5–1, then a number of reliability coefficients would have to be calculated.
Figure 5–1 Sources of Variance in a Hypothetical Test. In this hypothetical situation, 5% of the variance has not been identified by the test user. It is possible, for example, that this portion of the variance could be accounted for by transient error, a source of error attributable to variations in the testtaker’s feelings, moods, or mental state over time. Then again, this 5% of the error may be due to other factors that are yet to be identified.
Note that the various reliability coefficients do not all reflect the same sources of error variance. Thus, an individual reliability coefficient may provide an index of error from test construction, test administration, or test scoring and interpretation. A coefficient of inter-rater reliability, for example, provides information about error as a result of test scoring. Specifically, it can be used to answer questions about how consistently two scorers score the same test items. Table 5–4 summarizes the different kinds of error variance that are reflected in different reliability coefficients.
|Table 5–4 Summary of Reliability Types|
|Type of Reliability||Purpose||Typical Uses||Number of Testing Sessions||Sources of Error Variance||Statistical Procedures|
|Test-retest||To evaluate the stability of a measure||When assessing the stability of various personality traits||2||Administration||Pearson r or Spearman rho|
|Alternate-forms||To evaluate the relationship between different forms of a measure||When there is a need for different forms of a test (e.g., makeup tests)||1 or 2||Test construction or administration||Pearson r or Spearman rho|
|Internal consistency||To evaluate the extent to which items on a scale relate to one another||When evaluating the homogeneity of a measure (or, all items are tapping a single construct)||1||Test construction||Pearson r between equivalent test halves with Spearman–Brown correction, Kuder–Richardson for dichotomous items, coefficient alpha for multipoint items, or APD|
|Inter-scorer||To evaluate the level of agreement between raters on a measure||Interviews or coding of behavior; used when researchers need to show that there is consensus in the way that different raters view a particular behavior pattern (and hence no observer bias)||1||Scoring and interpretation||Cohen’s kappa, Pearson r, or Spearman rho|
The Nature of the Test
Closely related to considerations concerning the purpose and use of a reliability coefficient are those concerning the nature of the test itself. Included here are considerations such as whether (1) the test items are homogeneous or heterogeneous in nature; (2) the characteristic, ability, or trait being measured is presumed to be dynamic or static; (3) the range of test scores is or is not restricted; (4) the test is a speed or a power test; and (5) the test is or is not criterion-referenced.
Some tests present special problems regarding the measurement of their reliability. For example, a number of psychological tests have been developed for use with infants to help identify children who are developing slowly or who may profit from early intervention of some sort. Measuring the internal consistency reliability or the inter-scorer reliability of such tests is accomplished in much the same way as it is with other tests. However, measuring test-retest reliability presents a unique problem. The abilities of the very young children being tested are fast-changing. It is common knowledge that cognitive development during the first months and years of life is both rapid and uneven. Children often grow in spurts, sometimes changing dramatically in as little as days (Hetherington & Parke, 1993). The child tested just before and again just after a developmental advance may perform very differently on the two testings. In such cases, a marked change in test score might be attributed to error when in reality it reflects a genuine change in the testtaker’s skills. The challenge in gauging the test-retest reliability of such tests is to do so in such a way that it is not spuriously lowered by the testtaker’s actual developmental changes between testings. In attempting to accomplish this, developers of such tests may design test-retest reliability studies with very short intervals between testings, sometimes as little as four days.
Homogeneity versus heterogeneity of test items
Recall that a test is said to be homogeneous in items if it is functionally uniform throughout. Tests designed to measure one factor, such as one ability or one trait, are expected to be homogeneous in items. For such tests, it is reasonable to expect a high degree of internal consistency. By contrast, if the test is heterogeneous in items, an estimate of internal consistency might be low relative to a more appropriate estimate of test-retest reliability.
Dynamic versus static characteristics
Whether what is being measured by the test is dynamic or static is also a consideration in obtaining an estimate of reliability. A dynamic characteristic is a trait, state, or ability presumed to be ever-changing as a function of situational and cognitive experiences. If, for example, one were to take hourly measurements of the dynamic characteristic of anxiety as manifested by a stockbroker throughout a business day, one might find the measured level of this characteristic to change from hour to hour. Such changes might even be related to the magnitude of the Dow Jones average. Because the true amount of anxiety presumed to exist would vary with each assessment, a test-retest measure would be of little help in gauging the reliability of the measuring instrument. Therefore, the best estimate of reliability would be obtained from a measure of internal consistency. Contrast this situation to one in which hourly assessments of this same stockbroker are made on a trait, state, or ability presumed to be relatively unchanging (a static characteristic ), such as intelligence. In this instance, obtained measurement would not be expected to vary significantly as a function of time, and either the test-retest or the alternate-forms method would be appropriate.
JUST THINK . . .
Provide another example of both a dynamic characteristic and a static characteristic that a psychological test could measure.
Restriction or inflation of range
In using and interpreting a coefficient of reliability, the issue variously referred to as restriction of range or restriction of variance (or, conversely, inflation of range or inflation of variance) is important. If the variance of either variable in a correlational analysis is restricted by the sampling procedure used, then the resulting correlation coefficient tends to be lower. If the variance of either variable in a correlational analysis is inflated by the sampling procedure, then the resulting correlation coefficient tends to be higher. Refer back to Figure 3–17 on page 111 (Two Scatterplots Illustrating Unrestricted and Restricted Ranges) for a graphic illustration.
Also of critical importance is whether the range of variances employed is appropriate to the objective of the correlational analysis. Consider, for example, a published educational test designed for use with children in grades 1 through 6. Ideally, the manual for this test should contain not one reliability value covering all the testtakers in grades 1 through 6 but instead reliability values for testtakers at each grade level. Here’s another example: A corporate personnel officer employs a certain screening test in the hiring process. For future testing and hiring purposes, this personnel officer maintains reliability data with respect to scores achieved by job applicants—as opposed to hired employees—in order to avoid restriction of range effects in the data. This is so because the people who were hired typically scored higher on the test than any comparable group of applicants.
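The effect is easy to demonstrate by simulation. The following is a minimal sketch under assumed values (a true applicant-pool correlation of about .70 and a hiring cutoff half a standard deviation above the mean); the numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
x = rng.normal(size=n)                          # screening-test scores
y = 0.7 * x + rng.normal(scale=0.714, size=n)   # criterion, r ~ .70 with x

r_full = np.corrcoef(x, y)[0, 1]
hired = x > 0.5                                  # only high scorers are hired
r_restricted = np.corrcoef(x[hired], y[hired])[0, 1]
print(f"applicants: r = {r_full:.2f}; hired only: r = {r_restricted:.2f}")
# The hired-only correlation comes out substantially lower than the
# applicant-pool correlation, purely because the range of x was restricted.
```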
Speed tests versus power tests
When a time limit is long enough to allow testtakers to attempt all items, and if some items are so difficult that no testtaker is able to obtain a perfect score, then the test is a power test . By contrast, a speed test generally contains items of uniform level of difficulty (typically uniformly low) so that, when given generous time limits, all testtakers should be able to complete all the test items correctly. In practice, however, the time limit on a speed test is established so that few if any of the testtakers will be able to complete the entire test. Score differences on a speed test are therefore based on performance speed because items attempted tend to be correct.
A reliability estimate of a speed test should be based on performance from two independent testing periods using one of the following: (1) test-retest reliability, (2) alternate-forms reliability, or (3) split-half reliability from two separately timed half tests. If a split-half procedure is used, then the obtained reliability coefficient is for a half test and should be adjusted using the Spearman–Brown formula.
Because a measure of the reliability of a speed test should reflect the consistency of response speed, the reliability of a speed test should not be calculated from a single administration of the test with a single time limit. If a speed test is administered once and some measure of internal consistency, such as the Kuder–Richardson or a split-half correlation, is calculated, the result will be a spuriously high reliability coefficient. To understand why the KR-20 or split-half reliability coefficient will be spuriously high, consider the following example.
When a group of testtakers completes a speed test, almost all the items completed will be correct. If reliability is examined using an odd-even split, and if the testtakers completed the items in order, then testtakers will get close to the same number of odd as even items correct. A testtaker completing 82 items can be expected to get approximately 41 odd and 41 even items correct. A testtaker completing 61 items may get 31 odd and 30 even items correct. When the numbers of odd and even items correct are correlated across a group of testtakers, the correlation will be close to 1.00. Yet this impressive correlation coefficient actually tells us nothing about response consistency.
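A small simulation shows how misleading the resulting coefficient is. This is a minimal sketch under assumed conditions (every attempted item answered correctly, with testtakers differing only in how many items they reach); the sample sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_takers, n_items = 30, 100
# On a pure speed test, each testtaker answers the first `attempted` items
# correctly and never reaches the rest before time expires.
attempted = rng.integers(40, 90, size=n_takers)
scores = np.zeros((n_takers, n_items), dtype=int)
for row, a in enumerate(attempted):
    scores[row, :a] = 1

odd = scores[:, 0::2].sum(axis=1)
even = scores[:, 1::2].sum(axis=1)
print(np.corrcoef(odd, even)[0, 1])  # ~1.0, yet says nothing about consistency
```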
Under the same scenario, a Kuder–Richardson reliability coefficient would yield a similar coefficient that would also be, well, equally useless. Recall that KR-20 reliability is based on the proportion of testtakers correct (p) and the proportion of testtakers incorrect (q) on each item. In the case of a speed test, it is conceivable that p would equal 1.0 and q would equal 0 for many of the items. Toward the end of the test—when many items would not even be attempted because of the time limit—p might equal 0 and q might equal 1.0. For many, if not a majority, of the items, then, the product pq would equal or approximate 0. When 0 is substituted in the KR-20 formula for Σ pq, the reliability coefficient is 1.0 (a meaningless coefficient in this instance).
Criterion-referenced tests

A criterion-referenced test is designed to provide an indication of where a testtaker stands with respect to some variable or criterion, such as an educational or a vocational objective. Unlike norm-referenced tests, criterion-referenced tests tend to contain material that has been mastered in hierarchical fashion. For example, the would-be pilot masters on-ground skills before attempting to master in-flight skills. Scores on criterion-referenced tests tend to be interpreted in pass–fail (or, perhaps more accurately, “master-failed-to-master”) terms, and any scrutiny of performance on individual items tends to be for diagnostic and remedial purposes.
Traditional techniques of estimating reliability employ measures that take into account scores on the entire test. Recall that a test-retest reliability estimate is based on the correlation between the total scores on two administrations of the same test. In alternate-forms reliability, a reliability estimate is based on the correlation between the two total scores on the two forms. In split-half reliability, a reliability estimate is based on the correlation between scores on two halves of the test and is then adjusted using the Spearman–Brown formula to obtain a reliability estimate of the whole test. Although there are exceptions, such traditional procedures of estimating reliability are usually not appropriate for use with criterion-referenced tests. To understand why, recall that reliability is defined as the proportion of total variance (σ²) attributable to true variance ($\sigma^2_{tr}$). Total variance in a test score distribution equals the sum of the true variance plus the error variance ($\sigma^2_{e}$):

$$\sigma^2 = \sigma^2_{tr} + \sigma^2_{e}$$
A measure of reliability, therefore, depends on the variability of the test scores: how different the scores are from one another. In criterion-referenced testing, and particularly in mastery testing, how different the scores are from one another is seldom a focus of interest. In fact, individual differences between examinees on total test scores may be minimal. The critical issue for the user of a mastery test is whether or not a certain criterion score has been achieved.
As individual differences (and the variability) decrease, a traditional measure of reliability would also decrease, regardless of the stability of individual performance. Therefore, traditional ways of estimating reliability are not always appropriate for criterion-referenced tests, though there may be instances in which traditional estimates can be adopted. An example might be a situation in which the same test is being used at different stages in some program—training, therapy, or the like—and so variability in scores could reasonably be expected. Statistical techniques useful in determining the reliability of criterion-referenced tests are discussed in great detail in many sources devoted to that subject (e.g., Hambleton & Jurgensen, 1990).
The True Score Model of Measurement and Alternatives to It
Thus far—and throughout this book, unless specifically stated otherwise—the model we have assumed to be operative is classical test theory (CTT) , also referred to as the true score (or classical) model of measurement. CTT is the most widely used and accepted model in the psychometric literature today—rumors of its demise have been greatly exaggerated (Zickar & Broadfoot, 2009). One of the reasons it has remained the most widely used model has to do with its simplicity, especially when one considers the complexity of other proposed models of measurement. Comparing CTT to IRT, for example, Streiner (2010) mused, “CTT is much simpler to understand than IRT; there aren’t formidable-looking equations with exponentiations, Greek letters, and other arcane symbols” (p. 185). Additionally, the CTT notion that everyone has a “true score” on a test has had, and continues to have, great intuitive appeal. Of course, exactly how to define this elusive true score has been a matter of sometimes contentious debate. For our purposes, we will define true score as a value that according to classical test theory genuinely reflects an individual’s ability (or trait) level as measured by a particular test. Let’s emphasize here that this value is indeed very test dependent. A person’s “true score” on one intelligence test, for example, can vary greatly from that same person’s “true score” on another intelligence test. Similarly, if “Form D” of an ability test contains items that the testtaker finds to be much more difficult than those on “Form E” of that test, then there is a good chance that the testtaker’s true score on Form D will be lower than that on Form E. The same holds for true scores obtained on different tests of personality. One’s true score on one test of extraversion, for example, may not bear much resemblance to one’s true score on another test of extraversion. Comparing a testtaker’s scores on two different tests purporting to measure the same thing requires a sophisticated knowledge of the properties of each of the two tests, as well as some rather complicated statistical procedures designed to equate the scores.
Another aspect of the appeal of CTT is that its assumptions allow for its application in most situations (Hambleton & Swaminathan, 1985). The fact that CTT assumptions are rather easily met and therefore applicable to so many measurement situations can be advantageous, especially for the test developer in search of an appropriate model of measurement for a particular application. Still, in psychometric parlance, CTT assumptions are characterized as “weak”—precisely because its assumptions are so readily met. By contrast, the assumptions in another model of measurement, item response theory (IRT), are more difficult to meet. As a consequence, you may read of IRT assumptions being characterized in terms such as “strong,” “hard,” “rigorous,” and “robust.” A final advantage of CTT over any other model of measurement has to do with its compatibility and ease of use with widely used statistical techniques (as well as most currently available data analysis software). Factor analytic techniques, whether exploratory or confirmatory, are all “based on the CTT measurement foundation” (Zickar & Broadfoot, 2009, p. 52).
For all of its appeal, measurement experts have also listed many problems with CTT. For starters, one problem with CTT has to do with its assumption concerning the equivalence of all items on a test; that is, all items are presumed to be contributing equally to the score total. This assumption is questionable in many cases, and particularly questionable when doubt exists as to whether the scaling of the instrument in question is genuinely interval level in nature. Another problem has to do with the length of tests that are developed using a CTT model. Whereas test developers favor shorter rather than longer tests (as do most testtakers), the assumptions inherent in CTT favor the development of longer rather than shorter tests. For these reasons, as well as others, alternative measurement models have been developed. Below we briefly describe domain sampling theory and generalizability theory. We will then describe in greater detail item response theory (IRT), a measurement model that some believe is a worthy successor to CTT (Borsboom, 2005; Harvey & Hammer, 1999).
Domain sampling theory and generalizability theory
The 1950s saw the development of a viable alternative to CTT. It was originally referred to as domain sampling theory and is better known today in one of its many modified forms as generalizability theory. As set forth by Tryon (1957), the theory of domain sampling rebels against the concept of a true score existing with respect to the measurement of psychological constructs. Whereas those who subscribe to CTT seek to estimate the portion of a test score that is attributable to error, proponents of domain sampling theory seek to estimate the extent to which specific sources of variation under defined conditions are contributing to the test score. In domain sampling theory, a test’s reliability is conceived of as an objective measure of how precisely the test score assesses the domain from which the test draws a sample (Thorndike, 1985). A domain of behavior, or the universe of items that could conceivably measure that behavior, can be thought of as a hypothetical construct: one that shares certain characteristics with (and is measured by) the sample of items that make up the test. In theory, the items in the domain are thought to have the same means and variances as those in the test that samples from the domain. Of the three types of estimates of reliability, measures of internal consistency are perhaps the most compatible with domain sampling theory.
In one modification of domain sampling theory called generalizability theory, a “universe score” replaces that of a “true score” (Shavelson et al., 1989). Developed by Lee J. Cronbach (1970) and his colleagues (Cronbach et al., 1972), generalizability theory is based on the idea that a person’s test scores vary from testing to testing because of variables in the testing situation. Instead of conceiving of all variability in a person’s scores as error, Cronbach encouraged test developers and researchers to describe the details of the particular test situation or universe leading to a specific test score. This universe is described in terms of its facets, which include things like the number of items in the test, the amount of training the test scorers have had, and the purpose of the test administration. According to generalizability theory, given the exact same conditions of all the facets in the universe, the exact same test score should be obtained. This test score is the universe score, and it is, as Cronbach noted, analogous to a true score in the true score model. Cronbach (1970) explained as follows:
“What is Mary’s typing ability?” This must be interpreted as “What would Mary’s word-processing score on this test be if a large number of measurements on the test were collected and averaged?” The particular test score Mary earned is just one out of a universe of possible observations. If one of these scores is as acceptable as the next, then the mean, called the universe score and symbolized here by Mp (mean for person p), would be the most appropriate statement of Mary’s performance in the type of situation the test represents.
The universe is a collection of possible measures “of the same kind,” but the limits of the collection are determined by the investigator’s purpose. If he needs to know Mary’s typing ability on May 5 (for example, so that he can plot a learning curve that includes one point for that day), the universe would include observations on that day and on that day only. He probably does want to generalize over passages, testers, and scorers—that is to say, he would like to know Mary’s ability on May 5 without reference to any particular passage, tester, or scorer… .
The person will ordinarily have a different universe score for each universe. Mary’s universe score covering tests on May 5 will not agree perfectly with her universe score for the whole month of May… . Some testers call the average over a large number of comparable observations a “true score”; e.g., “Mary’s true typing rate on 3-minute tests.” Instead, we speak of a “universe score” to emphasize that what score is desired depends on the universe being considered. For any measure there are many “true scores,” each corresponding to a different universe.
When we use a single observation as if it represented the universe, we are generalizing. We generalize over scorers, over selections typed, perhaps over days. If the observed scores from a procedure agree closely with the universe score, we can say that the observation is “accurate,” or “reliable,” or “generalizable.” And since the observations then also agree with each other, we say that they are “consistent” and “have little error variance.” To have so many terms is confusing, but not seriously so. The term most often used in the literature is “reliability.” The author prefers “generalizability” because that term immediately implies “generalization to what?” … There is a different degree of generalizability for each universe. The older methods of analysis do not separate the sources of variation. They deal with a single source of variance, or leave two or more sources entangled. (Cronbach, 1970, pp. 153–154)
How can these ideas be applied? Cronbach and his colleagues suggested that tests be developed with the aid of a generalizability study followed by a decision study. A generalizability study examines how generalizable scores from a particular test are if the test is administered in different situations. Stated in the language of generalizability theory, a generalizability study examines how much of an impact different facets of the universe have on the test score. Is the test score affected by group as opposed to individual administration? Is the test score affected by the time of day in which the test is administered? The influence of particular facets on the test score is represented by coefficients of generalizability. These coefficients are similar to reliability coefficients in the true score model.
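To give a rough computational flavor of a generalizability study, the following Python sketch treats a persons-by-items score matrix as a one-facet design, estimates the person (universe-score) and residual variance components from mean squares, and forms a generalizability coefficient for a test of that length. The function name and the synthetic data are assumptions made for illustration; real G-studies typically model several facets at once:

```python
import numpy as np

def g_study(scores):
    """One-facet (persons x items) G-study via expected mean squares.

    scores: 2-D array, rows = persons, columns = items.
    Returns (person variance, residual variance, generalizability coefficient).
    """
    n_p, n_i = scores.shape
    grand = scores.mean()
    p_means = scores.mean(axis=1)
    i_means = scores.mean(axis=0)

    ms_p = n_i * ((p_means - grand) ** 2).sum() / (n_p - 1)
    resid = scores - p_means[:, None] - i_means[None, :] + grand
    ms_res = (resid ** 2).sum() / ((n_p - 1) * (n_i - 1))

    var_p = max((ms_p - ms_res) / n_i, 0.0)   # universe-score variance
    var_res = ms_res                          # person-by-item (error) variance
    g_coef = var_p / (var_p + var_res / n_i)  # coefficient for an n_i-item test
    return var_p, var_res, g_coef

rng = np.random.default_rng(1)
data = rng.normal(5, 1, (50, 10)) + rng.normal(0, 1, (50, 1))  # persons differ
print(g_study(data))
```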
After the generalizability study is done, Cronbach et al. (1972) recommended that test developers do a decision study, which involves the application of information from the generalizability study. In the decision study, developers examine the usefulness of test scores in helping the test user make decisions. In practice, test scores are used to guide a variety of decisions, from placing a child in special education to hiring new employees to discharging mental patients from the hospital. The decision study is designed to tell the test user how test scores should be used and how dependable those scores are as a basis for decisions, depending on the context of their use. Why is this so important? Cronbach (1970) noted:
The decision that a student has completed a course or that a patient is ready for termination of therapy must not be seriously influenced by chance errors, temporary variations in performance, or the tester’s choice of questions. An erroneous favorable decision may be irreversible and may harm the person or the community. Even when reversible, an erroneous unfavorable decision is unjust, disrupts the person’s morale, and perhaps retards his development. Research, too, requires dependable measurement. An experiment is not very informative if an observed difference could be accounted for by chance variation. Large error variance is likely to mask a scientifically important outcome. Taking a better measure improves the sensitivity of an experiment in the same way that increasing the number of subjects does. (p. 152)
Generalizability has not replaced CTT. Perhaps one of its chief contributions has been its emphasis on the fact that a test’s reliability does not reside within the test itself. From the perspective of generalizability theory, a test’s reliability is very much a function of the circumstances under which the test is developed, administered, and interpreted.
Item response theory (IRT)
Another alternative to the true score model is item response theory (IRT; Lord & Novick, 1968; Lord, 1980). The procedures of item response theory provide a way to model the probability that a person with X ability will be able to perform at a level of Y. Stated in terms of personality assessment, it models the probability that a person with X amount of a particular personality trait will exhibit Y amount of that trait on a personality test designed to measure it. Because so often the psychological or educational construct being measured is physically unobservable (stated another way, is latent) and because the construct being measured may be a trait (it could also be something else, such as an ability), a synonym for IRT in the academic literature is latent-trait theory. Let’s note at the outset, however, that IRT is not a term used to refer to a single theory or method. Rather, it refers to a family of theories and methods—and quite a large family at that—with many other names used to distinguish specific approaches. There are well over a hundred varieties of IRT models. Each model is designed to handle data with certain assumptions and data characteristics.
Examples of two characteristics of items within an IRT framework are the difficulty level of an item and the item’s level of discrimination; items may be viewed as varying in terms of these, as well as other, characteristics. “Difficulty” in this sense refers to the attribute of not being easily accomplished, solved, or comprehended. In a mathematics test, for example, a test item tapping basic addition ability will have a lower difficulty level than a test item tapping basic algebra skills. The characteristic of difficulty as applied to a test item may also refer to physical difficulty—that is, how hard or easy it is for a person to engage in a particular activity. Consider in this context three items on a hypothetical “Activities of Daily Living Questionnaire” (ADLQ), a true–false questionnaire designed to tap the extent to which respondents are physically able to participate in activities of daily living. Item 1 of this test is I am able to walk from room to room in my home. Item 2 is I require assistance to sit, stand, and walk. Item 3 is I am able to jog one mile a day, seven days a week. With regard to difficulty related to mobility, the respondent who answers true to item 1 and false to item 2 may be presumed to have more mobility than the respondent who answers false to item 1 and true to item 2. In classical test theory, each of these items might be scored with 1 point awarded to responses indicative of mobility and 0 points for responses indicative of a lack of mobility. Within IRT, however, responses indicative of mobility (as opposed to a lack of mobility or impaired mobility) may be assigned different weights. A true response to item 1 may therefore earn more points than a false response to item 2, and a true response to item 3 may earn more points than a true response to item 1.
In the context of IRT, discrimination signifies the degree to which an item differentiates among people with higher or lower levels of the trait, ability, or whatever it is that is being measured. Consider two more ADLQ items: item 4, My mood is generally good; and item 5, I am able to walk one block on flat ground. Which of these two items do you think would be more discriminating in terms of the respondent’s physical abilities? If you answered “item 5” then you are correct. And if you were developing this questionnaire within an IRT framework, you would probably assign differential weight to the value of these two items. Item 5 would be given more weight for the purpose of estimating a person’s level of physical activity than item 4. Again, within the context of classical test theory, all items of the test might be given equal weight and scored, for example, 1 if indicative of the ability being measured and 0 if not indicative of that ability.
A number of different IRT models exist to handle data resulting from the administration of tests with various characteristics and in various formats. For example, there are IRT models designed to handle data resulting from the administration of tests with dichotomous test items (test items or questions that can be answered with only one of two alternative responses, such as true–false, yes–no, or correct–incorrect questions). There are IRT models designed to handle data resulting from the administration of tests with polytomous test items (test items or questions with three or more alternative responses, where only one is scored correct or scored as being consistent with a targeted trait or other construct). Other IRT models exist to handle other types of data.
In general, latent-trait models differ in some important ways from CTT. For example, in CTT, no assumptions are made about the frequency distribution of test scores. By contrast, such assumptions are inherent in latent-trait models. As Allen and Yen (1979, p. 240) have pointed out, “Latent-trait theories propose models that describe how the latent trait influences performance on each test item. Unlike test scores or true scores, latent traits theoretically can take on values from −∞ to +∞ [negative infinity to positive infinity].” Some IRT models have very specific and stringent assumptions about the underlying distribution. In one group of IRT models developed by the Danish mathematician Georg Rasch, each item on the test is assumed to have an equivalent relationship with the construct being measured by the test. A shorthand reference to these types of models is “Rasch,” so reference to the Rasch model is a reference to an IRT model with very specific assumptions about the underlying distribution.
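To give these ideas computational form, here is a small Python sketch of a two-parameter logistic (2PL) item response function, one common member of the IRT family. The ADLQ-style parameter values below are purely illustrative assumptions; note that fixing every item's discrimination a at 1 reduces the model to the Rasch case just described:

```python
import math

def irt_prob(theta, b, a=1.0):
    """2PL item response function: probability of a keyed response for a
    person at trait level theta, given item difficulty b and
    discrimination a. With a = 1 for every item, this is the Rasch model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical parameters: item 5 ("able to walk one block") discriminates
# mobility sharply (a = 2.0); item 4 ("my mood is generally good") barely
# discriminates it at all (a = 0.3).
for theta in (-1.0, 0.0, 1.0):
    p5 = irt_prob(theta, b=0.0, a=2.0)
    p4 = irt_prob(theta, b=0.0, a=0.3)
    print(f"theta={theta:+.1f}  item5={p5:.2f}  item4={p4:.2f}")
```

Notice how item 5's probabilities spread out quickly as theta changes while item 4's barely move; that spread is what discrimination captures.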
The psychometric advantages of IRT have made this model appealing, especially to commercial and academic test developers and to large-scale test publishers. It is a model that in recent years has found increasing application in standardized tests, professional licensing examinations, and questionnaires used in behavioral and social sciences (De Champlain, 2010). However, the mathematical sophistication of the approach has made it out of reach for many everyday users of tests such as classroom teachers or “mom and pop” employers (Reise & Henson, 2003). To learn more about the approach that Roid (2006) once characterized as having fostered “new rules of measurement” for ability testing, ask your instructor to access the Instructor Resources within Connect and check out OOBAL-5-B2, “Item Response Theory (IRT).” More immediately, you can meet a “real-life” user of IRT in this chapter’s Meet an Assessment Professional feature.
MEET AN ASSESSMENT PROFESSIONAL
Meet Dr. Bryce B. Reeve
I use my skills and training as a psychometrician to design questionnaires and studies to capture the burden of cancer and its treatment on patients and their families… . The types of questionnaires I help to create measure a person’s health-related quality of life (HRQOL). HRQOL is a multidimensional construct capturing such domains as physical functioning, mental well-being, and social well-being. Different cancer types and treatments for those cancers may have different impact on the magnitude and which HRQOL domain is affected. All cancers can impact a person’s mental health with documented increases in depressive symptoms and anxiety… . There may also be positive impacts of cancer as some cancer survivors experience greater social well-being and appreciation of life. Thus, our challenge is to develop valid and precise measurement tools that capture these changes in patients’ lives. Psychometrically strong measures also allow us to evaluate the impact of new behavioral or pharmacological interventions developed to improve quality of life. Because many patients in our research studies are ill, it is important to have very brief questionnaires to minimize their burden responding to a battery of questionnaires.
… we … use both qualitative and quantitative methodologies to design … HRQOL instruments. We use qualitative methods like focus groups and cognitive interviewing to make sure we have captured the experiences and perspectives of cancer patients and to write questions that are comprehendible to people with low literacy skills or people of different cultures. We use quantitative methods to examine how well individual questions and scales perform for measuring the HRQOL domains. Specifically, we use classical test theory, factor analysis, and item response theory (IRT) to: (1) develop and refine questionnaires; (2) identify the performance of instruments across different age groups, males and females, and cultural/racial groups; and (3) to develop item banks which allow for creating standardized questionnaires or administering computerized adaptive testing (CAT).
Bryce B. Reeve, Ph.D., U.S. National Cancer Institute
I use IRT models to get an in-depth look as to how questions and scales perform in our cancer research studies. [Using IRT], we were able to reduce a burdensome 21-item scale down to a brief 10-item scale… .
Differential item function (DIF) is a key methodology to identify … biased items in questionnaires. I have used IRT modeling to examine DIF in item responses on many HRQOL questionnaires. It is especially important to evaluate DIF in questionnaires that have been translated to multiple languages for the purpose of conducting international research studies. An instrument may be translated to have the same words in multiple languages, but the words themselves may have entirely different meaning to people of different cultures. For example, researchers at the University of Massachusetts found Chinese respondents gave lower satisfaction ratings of their medical doctors than non-Chinese. In a review of the translation, the “Excellent” response category translated into Chinese as “God-like.” IRT modeling gives me the ability to not only detect DIF items, but the flexibility to correct for bias as well. I can use IRT to look at unadjusted and adjusted IRT scores to see the effect of the DIF item without removing the item from the scale if the item is deemed relevant… .
The greatest challenges I found to greater application or acceptance of IRT methods in health care research are the complexities of the models themselves and lack of easy-to-understand resources and tools to train researchers. Many researchers have been trained in classical test theory statistics, are comfortable interpreting these statistics, and can use readily available software to generate easily familiar summary statistics, such as Cronbach’s coefficient α or item-total correlations. In contrast, IRT modeling requires an advanced knowledge of measurement theory to understand the mathematical complexities of the models, to determine whether the assumptions of the IRT models are met, and to choose the model from within the large family of IRT models that best fits the data and the measurement task at hand. In addition, the supporting software and literature are not well adapted for researchers outside the field of educational testing.
Read more of what Dr. Reeve had to say—his complete essay—through the Instructor Resources within Connect.
Used with permission of Bryce B. Reeve.
Reliability and Individual Scores
The reliability coefficient helps the test developer build an adequate measuring instrument, and it helps the test user select a suitable test. However, the usefulness of the reliability coefficient does not end with test construction and selection. By employing the reliability coefficient in the formula for the standard error of measurement, the test user now has another descriptive statistic relevant to test interpretation, this one useful in estimating the precision of a particular test score.
The Standard Error of Measurement
The standard error of measurement, often abbreviated as SEM, provides a measure of the precision of an observed test score. Stated another way, it provides an estimate of the amount of error inherent in an observed score or measurement. In general, the relationship between the SEM and the reliability of a test is inverse; the higher the reliability of a test (or individual subtest within a test), the lower the SEM.
To illustrate the utility of the SEM, let’s revisit The Rochester Wrenchworks (TRW) and reintroduce Mary (from Cronbach’s excerpt earlier in this chapter), who is now applying for a job as a word processor. To be hired at TRW as a word processor, a candidate must be able to word-process accurately at the rate of 50 words per minute. The personnel office administers a total of seven brief word-processing tests to Mary over the course of seven business days. In words per minute, Mary’s scores on each of the seven tests are as follows:
52 55 39 56 35 50 54
If you were in charge of hiring at TRW and you looked at these seven scores, you might logically ask, “Which of these scores is the best measure of Mary’s ‘true’ word-processing ability?” And more to the point, “Which is her ‘true’ score?”
The “true” answer to this question is that we cannot conclude with absolute certainty from the data we have exactly what Mary’s true word-processing ability is. We can, however, make an educated guess. Our educated guess would be that her true word-processing ability is equal to the mean of the distribution of her word-processing scores plus or minus a number of points accounted for by error in the measurement process. We do not know how many points are accounted for by error in the measurement process. The best we can do is estimate how much error entered into a particular test score.
The standard error of measurement is the tool used to estimate or infer the extent to which an observed score deviates from a true score. We may define the standard error of measurement as the standard deviation of a theoretically normal distribution of test scores obtained by one person on equivalent tests. Also known as the standard error of a score and denoted by the symbol σmeas, the standard error of measurement is an index of the extent to which one individual’s scores vary over tests presumed to be parallel. In accordance with the true score model, an obtained test score represents one point in the theoretical distribution of scores the testtaker could have obtained. But where on the continuum of possible scores is this obtained score? If the standard deviation for the distribution of test scores is known (or can be calculated) and if an estimate of the reliability of the test is known (or can be calculated), then an estimate of the standard error of a particular score (or, the standard error of measurement) can be determined by the following formula:

σmeas = σ√(1 − rxx)
where σmeas is equal to the standard error of measurement, σ is equal to the standard deviation of test scores by the group of testtakers, and rxx is equal to the reliability coefficient of the test. The standard error of measurement allows us to estimate, with a specific level of confidence, the range in which the true score is likely to exist.
If, for example, a spelling test has a reliability coefficient of .84 and a standard deviation of 10, then

σmeas = 10√(1 − .84) = 10√.16 = 10(.4) = 4
In order to use the standard error of measurement to estimate the range of the true score, we make an assumption: If the individual were to take a large number of equivalent tests, scores on those tests would tend to be normally distributed, with the individual’s true score as the mean. Because the standard error of measurement functions like a standard deviation in this context, we can use it to predict what would happen if an individual took additional equivalent tests:
· approximately 68% (actually, 68.26%) of the scores would be expected to occur within ±1σmeas of the true score;
· approximately 95% (actually, 95.44%) of the scores would be expected to occur within ±2σmeas of the true score;
· approximately 99% (actually, 99.74%) of the scores would be expected to occur within ±3σmeas of the true score.
Of course, we don’t know the true score for any individual testtaker, so we must estimate it. The best estimate available of the individual’s true score on the test is the test score already obtained. Thus, if a student achieved a score of 50 on one spelling test and if the test had a standard error of measurement of 4, then—using 50 as the point estimate—we can be:
· 68% (actually, 68.26%) confident that the true score falls within 50 ± 1σmeas (or between 46 and 54, including 46 and 54);
· 95% (actually, 95.44%) confident that the true score falls within 50 ± 2σmeas (or between 42 and 58, including 42 and 58);
· 99% (actually, 99.74%) confident that the true score falls within 50 ± 3σmeas (or between 38 and 62, including 38 and 62).
The standard error of measurement, like the reliability coefficient, is one way of expressing test reliability. If the standard deviation of a test is held constant, then the smaller the σmeas, the more reliable the test will be; as rxx increases, the σmeas decreases. For example, when a reliability coefficient equals .64 and σ equals 15, the standard error of measurement equals 9:

σmeas = 15√(1 − .64) = 15√.36 = 15(.6) = 9
With a reliability coefficient equal to .96 and σ still equal to 15, the standard error of measurement decreases to 3:

σmeas = 15√(1 − .96) = 15√.04 = 15(.2) = 3
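The arithmetic is easy to verify. A minimal Python sketch, using only the values given above (the helper name is our own):

```python
import math

def sem(sd, rxx):
    """Standard error of measurement: sigma * sqrt(1 - r_xx)."""
    return sd * math.sqrt(1.0 - rxx)

print(sem(10, .84))  # spelling test above: 4.0
print(sem(15, .64))  # lower reliability, larger SEM: 9.0
print(sem(15, .96))  # higher reliability, smaller SEM: 3.0
```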
In practice, the standard error of measurement is most frequently used in the interpretation of individual test scores. For example, intelligence tests are given as part of the assessment of individuals for intellectual disability. One of the criteria for mental retardation is an IQ score of 70 or below (when the mean is 100 and the standard deviation is 15) on an individually administered intelligence test (American Psychiatric Association, 1994). One question that could be asked about these tests is how scores that are close to the cutoff value of 70 should be treated. Specifically, how high above 70 must a score be for us to conclude confidently that the individual is unlikely to be retarded? Is 72 clearly above the retarded range, so that if the person were to take a parallel form of the test, we could be confident that the second score would be above 70? What about a score of 75? A score of 79?
Useful in answering such questions is an estimate of the amount of error in an observed test score. The standard error of measurement provides such an estimate. Further, the standard error of measurement is useful in establishing what is called a confidence interval: a range or band of test scores that is likely to contain the true score.
Consider an application of a confidence interval with one hypothetical measure of adult intelligence. The manual for the test provides a great deal of information relevant to the reliability of the test as a whole as well as more specific reliability-related information for each of its subtests. As reported in the manual, the standard deviation is 3 for the subtest scaled scores and 15 for IQ scores. Across all of the age groups in the normative sample, the average reliability coefficient for the Full Scale IQ (FSIQ) is .98, and the average standard error of measurement for the FSIQ is 2.3.
Knowing an individual testtaker’s FSIQ score and his or her age, we can calculate a confidence interval. For example, suppose a 22-year-old testtaker obtained a FSIQ of 75. The test user can be 95% confident that this testtaker’s true FSIQ falls in the range of 70 to 80. This is so because the 95% confidence interval is set by taking the observed score of 75, plus or minus 1.96, multiplied by the standard error of measurement. In the test manual we find that the standard error of measurement of the FSIQ for a 22-year-old testtaker is 2.37. With this information in hand, the 95% confidence interval is calculated as follows:

75 ± 1.96(2.37) = 75 ± 4.645
The calculated interval of 4.645 is rounded to the nearest whole number, 5. We can therefore be 95% confident that this testtaker’s true FSIQ on this particular test of intelligence lies somewhere in the range of the observed score of 75 plus or minus 5, or somewhere in the range of 70 to 80.
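The same confidence-interval arithmetic, sketched in Python with the numbers from this example (the function name is our own assumption):

```python
def confidence_interval(observed, sem, z=1.96):
    """Band around an observed score: observed plus or minus z SEMs."""
    half_width = z * sem
    return observed - half_width, observed + half_width

low, high = confidence_interval(observed=75, sem=2.37)
print(round(low), round(high))  # 70 80, i.e., 75 +/- 4.645 rounded to whole points
```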
In the interest of increasing your SEM “comfort level,” consider the data presented in Table 5–5. These are SEMs for selected age ranges and selected types of IQ measurements as reported in the Technical Manual for the Stanford-Binet Intelligence Scales, fifth edition (SB5). When presenting these and related data, Roid (2003c, p. 65) noted: “Scores that are more precise and consistent have smaller differences between true and observed scores, resulting in lower SEMs.” Given this, just think: What hypotheses come to mind regarding SB5 IQ scores at ages 5, 10, 15, and 80+?
|Table 5–5 Standard Errors of Measurement of SB5 IQ Scores at Ages 5, 10, 15, and 80+|
|Age (in years)||5||10||15||80+|
|Full Scale IQ||2.12||2.60||2.12||2.12|
|Abbreviated Battery IQ||4.24||5.20||4.50||3.00|
The standard error of measurement can be used to set the confidence interval for a particular score or to determine whether a score is significantly different from a criterion (such as the cutoff score of 70 described previously). But the standard error of measurement cannot be used to compare scores. So, how do test users compare scores?
The Standard Error of the Difference Between Two Scores
Error related to any of the number of possible variables operative in a testing situation can contribute to a change in a score achieved on the same test, or a parallel test, from one administration of the test to the next. The amount of error in a specific test score is embodied in the standard error of measurement. But scores can change from one testing to the next for reasons other than error.
True differences in the characteristic being measured can also affect test scores. These differences may be of great interest, as in the case of a personnel officer who must decide which of many applicants to hire. Indeed, such differences may be hoped for, as in the case of a psychotherapy researcher who hopes to prove the effectiveness of a particular approach to therapy. Comparisons between scores are made using the standard error of the difference, a statistical measure that can aid a test user in determining how large a difference should be before it is considered statistically significant. As you are probably aware from your course in statistics, custom in the field of psychology dictates that if the probability is more than 5% that the difference occurred by chance, then, for all intents and purposes, it is presumed that there was no difference. A more rigorous standard is the 1% standard. Applying the 1% standard, no statistically significant difference would be deemed to exist unless the observed difference could have occurred by chance alone less than one time in a hundred.
The standard error of the difference between two scores can be the appropriate statistical tool to address three types of questions:
1. How did this individual’s performance on test 1 compare with his or her performance on test 2?
2. How did this individual’s performance on test 1 compare with someone else’s performance on test 1?
3. How did this individual’s performance on test 1 compare with someone else’s performance on test 2?
As you might have expected, when comparing scores achieved on the different tests, it is essential that the scores be converted to the same scale. The formula for the standard error of the difference between two scores is

σdiff = √((σmeas1)² + (σmeas2)²)

where σdiff is the standard error of the difference between two scores, (σmeas1)² is the squared standard error of measurement for test 1, and (σmeas2)² is the squared standard error of measurement for test 2. If we substitute reliability coefficients for the standard errors of measurement of the separate scores, the formula becomes

σdiff = σ√(2 − r1 − r2)

where r1 is the reliability coefficient of test 1, r2 is the reliability coefficient of test 2, and σ is the standard deviation. Note that both tests would have the same standard deviation because they must be on the same scale (or be converted to the same scale) before a comparison can be made.
where r1 is the reliability coefficient of test 1, r2 is the reliability coefficient of test 2, and σ is the standard deviation. Note that both tests would have the same standard deviation because they must be on the same scale (or be converted to the same scale) before a comparison can be made.
The standard error of the difference between two scores will be larger than the standard error of measurement for either score alone because the former is affected by measurement error in both scores. This also makes good sense: If two scores each contain error such that in each case the true score could be higher or lower, then we would want the two scores to be further apart before we conclude that there is a significant difference between them.
The value obtained by calculating the standard error of the difference is used in much the same way as the standard error of the mean. If we wish to be 95% confident that the two scores are different, we would want them to be separated by 2 standard errors of the difference. A separation of only 1 standard error of the difference would give us 68% confidence that the two true scores are different.
As an illustration of the use of the standard error of the difference between two scores, consider the situation of a corporate personnel manager who is seeking a highly responsible person for the position of vice president of safety. The personnel officer in this hypothetical situation decides to use a new published test we will call the Safety-Mindedness Test (SMT) to screen applicants for the position. After placing an ad in the employment section of the local newspaper, the personnel officer tests 100 applicants for the position using the SMT. The personnel officer narrows the search for the vice president to the two highest scorers on the SMT: Moe, who scored 125, and Larry, who scored 134. Assuming the measured reliability of this test to be .92 and its standard deviation to be 14, should the personnel officer conclude that Larry performed significantly better than Moe? To answer this question, first calculate the standard error of the difference:

σdiff = 14√(2 − .92 − .92) = 14√.16 = 14(.4) = 5.6
Note that in this application of the formula, the two test reliability coefficients are the same because the two scores being compared are derived from the same test.
What does this standard error of the difference mean? For any standard error of the difference, we can be:
· 68% confident that two scores differing by 1σdiff represent true score differences;
· 95% confident that two scores differing by 2σdiff represent true score differences;
· 99.7% confident that two scores differing by 3σdiff represent true score differences.
Applying this information to the standard error of the difference just computed for the SMT, we see that the personnel officer can be:
· 68% confident that two scores differing by 5.6 represent true score differences;
· 95% confident that two scores differing by 11.2 represent true score differences;
· 99.7% confident that two scores differing by 16.8 represent true score differences.
The difference between Larry’s and Moe’s scores is only 9 points, not a large enough difference for the personnel officer to conclude with 95% confidence that the two individuals have true scores that differ on this test. Stated another way: If Larry and Moe were to take a parallel form of the SMT, then the personnel officer could not be 95% confident that, at the next testing, Larry would again outperform Moe. The personnel officer in this example would have to resort to other means to decide whether Moe, Larry, or someone else would be the best candidate for the position (Curly has been patiently waiting in the wings).
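The Moe-and-Larry decision can be reproduced in a few lines of Python; this sketch simply encodes the formula and comparison just described:

```python
import math

def se_diff(sd, r1, r2):
    """Standard error of the difference: sigma * sqrt(2 - r1 - r2).
    Both scores must already be on (or converted to) the same scale."""
    return sd * math.sqrt(2.0 - r1 - r2)

sigma_diff = se_diff(sd=14, r1=.92, r2=.92)  # same test, so r1 equals r2
print(sigma_diff)                            # 5.6
print(abs(134 - 125) >= 2 * sigma_diff)      # False: 9 < 11.2, not significant at 95%
```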
JUST THINK . . .
With all of this talk about Moe, Larry, and Curly, please tell us that you have not forgotten about Mary. You know, Mary from the Cronbach quote earlier in this chapter—yes, that Mary. Should she get the job at TRW? If your instructor thinks it would be useful to do so, do the math before responding.
As a postscript to the preceding example, suppose Larry got the job primarily on the basis of data from our hypothetical SMT. And let’s further suppose that it soon became all too clear that Larry was the hands-down absolute worst vice president of safety that the company had ever seen. Larry spent much of his time playing practical jokes on fellow corporate officers, and he spent many of his off-hours engaged in his favorite pastime, flagpole sitting. The personnel officer might then have very good reason to question how well the instrument called the Safety-Mindedness Test truly measured safety-mindedness. Or, to put it another way, the personnel officer might question the validity of the test. Not coincidentally, the subject of test validity is taken up in the next chapter.
Test your understanding of elements of this chapter by seeing if you can explain each of the key terms, expressions, and abbreviations it introduced.
Chapter 3
A Statistics Refresher
From the red-pencil number circled at the top of your first spelling test to the computer printout of your college entrance examination scores, tests and test scores touch your life. They seem to reach out from the paper and shake your hand when you do well and punch you in the face when you do poorly. They can point you toward or away from a particular school or curriculum. They can help you to identify strengths and weaknesses in your physical and mental abilities. They can accompany you on job interviews and influence a job or career choice.
JUST THINK . . .
For most people, test scores are an important fact of life. But what makes those numbers so meaningful? In general terms, what information, ideally, should be conveyed by a test score?
In your role as a student, you have probably found that your relationship to tests has been primarily that of a testtaker. But as a psychologist, teacher, researcher, or employer, you may find that your relationship with tests is primarily that of a test user—the person who breathes life and meaning into test scores by applying the knowledge and skill to interpret them appropriately. You may one day create a test, whether in an academic or a business setting, and then have the responsibility for scoring and interpreting it. In that situation, or even from the perspective of one who would take that test, it’s essential to understand the theory underlying test use and the principles of test-score interpretation.
Test scores are frequently expressed as numbers, and statistical tools are used to describe, make inferences from, and draw conclusions about numbers. In this statistics refresher, we cover scales of measurement, tabular and graphic presentations of data, measures of central tendency, measures of variability, aspects of the normal curve, and standard scores. If these statistics-related terms look painfully familiar to you, we ask your indulgence and ask you to remember that overlearning is the key to retention. Of course, if any of these terms appear unfamiliar, we urge you to learn more about them. Feel free to supplement the discussion here with a review of these and related terms in any good elementary statistics text. The brief review of statistical concepts that follows can in no way replace a sound grounding in basic statistics gained through an introductory course in that subject.
Scales of Measurement
We may formally define measurement as the act of assigning numbers or symbols to characteristics of things (people, events, whatever) according to rules. The rules used in assigning numbers are guidelines for representing the magnitude (or some other characteristic) of the object being measured. Here is an example of a measurement rule: Assign the number 12 to all lengths that are exactly the same length as a 12-inch ruler. A scale is a set of numbers (or other symbols) whose properties model empirical properties of the objects to which the numbers are assigned.
JUST THINK . . .
What is another example of a measurement rule?
There are various ways in which a scale can be categorized. One way of categorizing a scale is according to the type of variable being measured. Thus, a scale used to measure a continuous variable might be referred to as a continuous scale, whereas a scale used to measure a discrete variable might be referred to as a discrete scale. A continuous scale exists when it is theoretically possible to divide any of the values of the scale. A distinction must be made, however, between what is theoretically possible and what is practically desirable. The units into which a continuous scale will actually be divided may depend on such factors as the purpose of the measurement and practicality. When measuring a window for venetian blinds, for example, it is theoretically possible to measure by the millimeter or even by the micrometer. But is such precision necessary? Most installers do just fine with measurement by the inch.
As an example of measurement using a discrete scale, consider mental health research that presorted subjects into one of two discrete groups: (1) previously hospitalized and (2) never hospitalized. Such a categorization scale would be characterized as discrete because it would not be accurate or meaningful to categorize any of the subjects in the study as anything other than “previously hospitalized” or “not previously hospitalized.”
JUST THINK . . .
The scale with which we are all perhaps most familiar is the common bathroom scale. How are a psychological test and a bathroom scale alike? How are they different? Your answer may change as you read on.
JUST THINK . . .
Assume the role of a test creator. Now write some instructions to users of your test that are designed to reduce to the absolute minimum any error associated with test scores. Be sure to include instructions regarding the preparation of the site where the test will be administered.
Measurement always involves error. In the language of assessment, error refers to the collective influence of all of the factors on a test score or measurement beyond those specifically measured by the test or measurement. As we will see, there are many different sources of error in measurement. Consider, for example, the score someone received on a test in American history. We might conceive of part of the score as reflecting the testtaker’s knowledge of American history and part of the score as reflecting error. The error part of the test score may be due to many different factors. One source of error might have been a distracting thunderstorm going on outside at the time the test was administered. Another source of error was the particular selection of test items the instructor chose to use for the test. Had a different item or two been used in the test, the testtaker’s score on the test might have been higher or lower. Error is very much an element of all measurement, and it is an element for which any theory of measurement must surely account.
Measurement using continuous scales always involves error. To illustrate why, let’s go back to the scenario involving venetian blinds. The length of the window measured to be 35.5 inches could, in reality, be 35.7 inches. The measuring scale is conveniently marked off in grosser gradations of measurement. Most scales used in psychological and educational assessment are continuous and therefore can be expected to contain this sort of error. The number or score used to characterize the trait being measured on a continuous scale should be thought of as an approximation of the “real” number. Thus, for example, a score of 25 on some test of anxiety should not be thought of as a precise measure of anxiety. Rather, it should be thought of as an approximation of the real anxiety score had the measuring instrument been calibrated to yield such a score. In such a case, perhaps the score of 25 is an approximation of a real score of, say, 24.7 or 25.44.
It is generally agreed that there are four different levels or scales of measurement. Within these levels or scales of measurement, assigned numbers convey different kinds of information. Accordingly, certain statistical manipulations may or may not be appropriate, depending upon the level or scale of measurement.
JUST THINK . . .
Acronyms like noir are useful memory aids. As you continue in your study of psychological testing and assessment, create your own acronyms to help remember related groups of information. Hey, you may even learn some French in the process.
The French word for black is noir (pronounced “nwăre”). We bring this up here only to call attention to the fact that this word is a useful acronym for remembering the four levels or scales of measurement. Each letter in noir is the first letter of the succeedingly more rigorous levels: n stands for nominal, o for ordinal, i for interval, and r for ratio scales.
Nominal scales are the simplest form of measurement. These scales involve classification or categorization based on one or more distinguishing characteristics, where all things measured must be placed into mutually exclusive and exhaustive categories. For example, in the specialty area of clinical psychology, a nominal scale in use for many years is the Diagnostic and Statistical Manual of Mental Disorders. Each disorder listed in that manual is assigned its own number. In a past version of that manual (the version really does not matter for the purposes of this example), the number 303.00 identified alcohol intoxication, and the number 307.00 identified stuttering. But these numbers were used exclusively for classification purposes and could not be meaningfully added, subtracted, ranked, or averaged. Hence, the middle number between these two diagnostic codes, 305.00, did not identify an intoxicated stutterer.
Individual test items may also employ nominal scaling, including yes/no responses. For example, consider the following test items:
Instructions: Answer either yes or no.
Are you actively contemplating suicide? __________
Are you currently under professional care for a psychiatric disorder? _______
Have you ever been convicted of a felony? _______
JUST THINK . . .
What are some other examples of nominal scales?
In each case, a yes or no response results in the placement into one of a set of mutually exclusive groups: suicidal or not, under care for psychiatric disorder or not, and felon or not. Arithmetic operations that can legitimately be performed with nominal data include counting for the purpose of determining how many cases fall into each category and a resulting determination of proportion or percentages.
JUST THINK . . .
What are some other examples of interval scales?
Like nominal scales, ordinal scales permit classification. However, in addition to classification, rank ordering on some characteristic is also permissible with ordinal scales. In business and organizational settings, job applicants may be rank-ordered according to their desirability for a position. In clinical settings, people on a waiting list for psychotherapy may be rank-ordered according to their need for treatment. In these examples, individuals are compared with others and assigned a rank (perhaps 1 to the best applicant or the most needy wait-listed client, 2 to the next, and so forth).
Although he may have never used the term ordinal scale, Alfred Binet, a developer of the intelligence test that today bears his name, believed strongly that the data derived from an intelligence test are ordinal in nature. He emphasized that what he tried to do with his test was not to measure people (as one might measure a person’s height), but merely to classify (and rank) people on the basis of their performance on the tasks. He wrote:
I have not sought . . . to sketch a method of measuring, in the physical sense of the word, but only a method of classification of individuals. The procedures which I have indicated will, if perfected, come to classify a person before or after such another person, or such another series of persons; but I do not believe that one may measure one of the intellectual aptitudes in the sense that one measures a length or a capacity. Thus, when a person studied can retain seven figures after a single audition, one can class him, from the point of his memory for figures, after the individual who retains eight figures under the same conditions, and before those who retain six. It is a classification, not a measurement . . . we do not measure, we classify. (Binet, cited in Varon, 1936, p. 41)
Assessment instruments applied to the individual subject may also use an ordinal form of measurement. The Rokeach Value Survey uses such an approach. In that test, a list of personal values—such as freedom, happiness, and wisdom—is put in order according to their perceived importance to the testtaker (Rokeach, 1973). If a set of 10 values is rank ordered, then the testtaker would assign a value of “1” to the most important and “10” to the least important.
Ordinal scales imply nothing about how much greater one ranking is than another. Even though ordinal scales may employ numbers or “scores” to represent the rank ordering, the numbers do not indicate units of measurement. So, for example, the performance difference between the first-ranked job applicant and the second-ranked applicant may be small while the difference between the second- and third-ranked applicants may be large. On the Rokeach Value Survey, the value ranked “1” may be handily the most important in the mind of the testtaker. However, ordering the values that follow may be difficult to the point of being almost arbitrary.
JUST THINK . . .
What are some other examples of ordinal scales?
Ordinal scales have no absolute zero point. In the case of a test of job performance ability, every testtaker, regardless of standing on the test, is presumed to have some ability. No testtaker is presumed to have zero ability. Zero is without meaning in such a test because the number of units that separate one testtaker’s score from another’s is simply not known. The scores are ranked, but the actual number of units separating one score from the next may be many, just a few, or practically none. Because there is no zero point on an ordinal scale, the ways in which data from such scales can be analyzed statistically are limited. One cannot average the qualifications of the first- and third-ranked job applicants, for example, and expect to come out with the qualifications of the second-ranked applicant.
In addition to the features of nominal and ordinal scales, interval scales contain equal intervals between numbers. Each unit on the scale is exactly equal to any other unit on the scale. But like ordinal scales, interval scales contain no absolute zero point. With interval scales, we have reached a level of measurement at which it is possible to average a set of measurements and obtain a meaningful result.
Scores on many tests, such as tests of intelligence, are analyzed statistically in ways appropriate for data at the interval level of measurement. The difference in intellectual ability represented by IQs of 80 and 100, for example, is thought to be similar to that existing between IQs of 100 and 120. However, if an individual were to achieve an IQ of 0 (something that is not even possible, given the way most intelligence tests are structured), that would not be an indication of zero (the total absence of) intelligence. Because interval scales contain no absolute zero point, a presumption inherent in their use is that no testtaker possesses none of the ability or trait (or whatever) being measured.
In addition to all the properties of nominal, ordinal, and interval measurement, a ratio scale has a true zero point. All mathematical operations can meaningfully be performed because there exist equal intervals between the numbers on the scale as well as a true or absolute zero point.
In psychology, ratio-level measurement is employed in some types of tests and test items, perhaps most notably those involving assessment of neurological functioning. One example is a test of hand grip, where the variable measured is the amount of pressure a person can exert with one hand (see Figure 3–1 ). Another example is a timed test of perceptual-motor ability that requires the testtaker to assemble a jigsaw-like puzzle. In such an instance, the time taken to successfully complete the puzzle is the measure that is recorded. Because there is a true zero point on this scale (or, 0 seconds), it is meaningful to say that a testtaker who completes the assembly in 30 seconds has taken half the time of a testtaker who completed it in 60 seconds. In this example, it is meaningful to speak of a true zero point on the scale—but in theory only. Why? Just think . . .
Figure 3–1 Ratio-Level Measurement in the Palm of One’s Hand Pictured above is a dynamometer, an instrument used to measure strength of hand grip. The examinee is instructed to squeeze the grips as hard as possible. The squeezing of the grips causes the gauge needle to move and reflect the number of pounds of pressure exerted. The highest point reached by the needle is the score. This is an example of ratio-level measurement. Someone who can exert 10 pounds of pressure (and earns a score of 10) exerts twice as much pressure as a person who exerts 5 pounds of pressure (and earns a score of 5). On this test it is possible to achieve a score of 0, indicating a complete lack of exerted pressure. Although it is meaningful to speak of a score of 0 on this test, we have to wonder about its significance. How might a score of 0 result? One way would be if the testtaker genuinely had paralysis of the hand. Another way would be if the testtaker was uncooperative and unwilling to comply with the demands of the task. Yet another way would be if the testtaker was attempting to malinger or “fake bad” on the test. Ratio scales may provide us “solid” numbers to work with, but some interpretation of the test data yielded may still be required before drawing any “solid” conclusions.
JUST THINK . . .
What are some other examples of ratio scales?
No testtaker could ever obtain a score of zero on this assembly task. Stated another way, no testtaker, not even The Flash (a comic-book superhero whose power is the ability to move at superhuman speed), could assemble the puzzle in zero seconds.
Measurement Scales in Psychology
The ordinal level of measurement is most frequently used in psychology. As Kerlinger (1973, p. 439) put it: “Intelligence, aptitude, and personality test scores are, basically and strictly speaking, ordinal. These tests indicate with more or less accuracy not the amount of intelligence, aptitude, and personality traits of individuals, but rather the rank-order positions of the individuals.” Kerlinger allowed that “most psychological and educational scales approximate interval equality fairly well,” though he cautioned that if ordinal measurements are treated as if they were interval measurements, then the test user must “be constantly alert to the possibility of gross inequality of intervals” (pp. 440–441).
Why would psychologists want to treat their assessment data as interval when those data would be better described as ordinal? Why not just say that they are ordinal? The attraction of interval measurement for users of psychological tests is the flexibility with which such data can be manipulated statistically. “What kinds of statistical manipulation?” you may ask.
In this chapter we discuss the various ways in which test data can be described or converted to make those data more manageable and understandable. Some of the techniques we’ll describe, such as the computation of an average, can be used if data are assumed to be interval- or ratio-level in nature but not if they are ordinal- or nominal-level. Other techniques, such as those involving the creation of graphs or tables, may be used with ordinal- or even nominal-level data.
Describing Data
Suppose you have magically changed places with the professor teaching this course and that you have just administered an examination that consists of 100 multiple-choice items (where 1 point is awarded for each correct answer). The distribution of scores for the 25 students enrolled in your class could theoretically range from 0 (none correct) to 100 (all correct). A distribution may be defined as a set of test scores arrayed for recording or study. The 25 scores in this distribution are referred to as raw scores. As its name implies, a raw score is a straightforward, unmodified accounting of performance that is usually numerical. A raw score may reflect a simple tally, as in number of items responded to correctly on an achievement test. As we will see later in this chapter, raw scores can be converted into other types of scores. For now, let’s assume it’s the day after the examination and that you are sitting in your office looking at the raw scores listed in Table 3–1 . What do you do next?
Table 3–1 Data from Your Measurement Course Test (columns: Student; Score, number correct)
JUST THINK . . .
In what way do most of your instructors convey test-related feedback to students? Is there a better way they could do this?
One task at hand is to communicate the test results to your class. You want to do that in a way that will help students understand how their performance on the test compared to the performance of other students. Perhaps the first step is to organize the data by transforming them from a random listing of raw scores into something that immediately conveys a bit more information. Later, as we will see, you may wish to transform the data in other ways.
The data from the test could be organized into a distribution of the raw scores. One way the scores could be distributed is by the frequency with which they occur. In a frequency distribution, all scores are listed alongside the number of times each score occurred. The scores might be listed in tabular or graphic form. Table 3–2 lists the frequency of occurrence of each score in one column and the score itself in the other column.
Table 3–2 Frequency Distribution of Scores from Your Test
Often, a frequency distribution is referred to as a simple frequency distribution to indicate that individual scores have been used and the data have not been grouped. Another kind of frequency distribution used to summarize data is a grouped frequency distribution. In a grouped frequency distribution, test-score intervals, also called class intervals, replace the actual test scores. The number of class intervals used and the size or width of each class interval (or, the range of test scores contained in each class interval) are for the test user to decide. But how?
In most instances, a decision about the size of a class interval in a grouped frequency distribution is made on the basis of convenience. Of course, virtually any decision will represent a trade-off of sorts. A convenient, easy-to-read summary of the data is the trade-off for the loss of detail. To what extent must the data be summarized? How important is detail? These types of questions must be considered. In the grouped frequency distribution in Table 3–3, the test scores have been grouped into 12 class intervals, where each class interval is equal to 5 points. The highest class interval (95–99) and the lowest class interval (40–44) are referred to, respectively, as the upper and lower limits of the distribution. Here, the need for convenience in reading the data outweighs the need for great detail, so such groupings of data seem logical.
Table 3–3 A Grouped Frequency Distribution (columns: Class Interval; f [frequency])
Frequency distributions of test scores can also be illustrated graphically. A graph is a diagram or chart composed of lines, points, bars, or other symbols that describe and illustrate data. With a good graph, the place of a single score in relation to a distribution of test scores can be understood easily. Three kinds of graphs used to illustrate frequency distributions are the histogram, the bar graph, and the frequency polygon (Figure 3–2). A histogram is a graph with vertical lines drawn at the true limits of each test score (or class interval), forming a series of contiguous rectangles. It is customary for the test scores (either the single scores or the midpoints of the class intervals) to be placed along the graph’s horizontal axis (also referred to as the abscissa or X-axis) and for numbers indicative of the frequency of occurrence to be placed along the graph’s vertical axis (also referred to as the ordinate or Y-axis). In a bar graph, numbers indicative of frequency also appear on the Y-axis, and reference to some categorization (e.g., yes/no/maybe, male/female) appears on the X-axis. Here the rectangular bars typically are not contiguous. Data illustrated in a frequency polygon are expressed by a continuous line connecting the points where test scores or class intervals (as indicated on the X-axis) meet frequencies (as indicated on the Y-axis).
Figure 3–2 Graphic Illustrations of Data from Table 3–3 A histogram (a), a bar graph (b), and a frequency polygon (c) all may be used to graphically convey information about test performance. Of course, the labeling of the bar graph and the specific nature of the data conveyed by it depend on the variables of interest. In (b), the variable of interest is the number of students who passed the test (assuming, for the purpose of this illustration, that a raw score of 65 or higher had been arbitrarily designated in advance as a passing grade). Returning to the question posed earlier—the one in which you play the role of instructor and must communicate the test results to your students—which type of graph would best serve your purpose? Why? As we continue our review of descriptive statistics, you may wish to return to your role of professor and formulate your response to challenging related questions, such as “Which measure(s) of central tendency shall I use to convey this information?” and “Which measure(s) of variability would convey the information best?”
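For readers who want to reproduce such graphs, a minimal matplotlib sketch follows. The class-interval frequencies are illustrative placeholders chosen to be consistent with the totals reported later in Table 3–4 (Σf = 25, Σ(fX) = 1,795), not the book’s actual counts; the pass/fail split assumes the arbitrary 65-point cutoff mentioned above.

```python
import matplotlib.pyplot as plt

# Placeholder frequencies for the 5-point class intervals 40-44 ... 95-99,
# represented by their midpoints. Chosen so that sum(f) = 25 and
# sum(f * X) = 1,795, matching the totals in Table 3-4.
midpoints = [42, 47, 52, 57, 62, 67, 72, 77, 82, 87, 92, 97]
freqs     = [1,  1,  1,  1,  2,  4,  4,  4,  3,  2,  1,  1]

fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 3))

# (a) Histogram: contiguous rectangles spanning each class interval.
ax1.bar(midpoints, freqs, width=5, edgecolor="black")
ax1.set(title="Histogram", xlabel="Score", ylabel="f")

# (b) Bar graph: non-contiguous bars over categories (pass/fail at 65).
ax2.bar(["Fail (<65)", "Pass (>=65)"], [6, 19], width=0.5)
ax2.set(title="Bar graph", ylabel="f")

# (c) Frequency polygon: a continuous line through (midpoint, f) points.
ax3.plot(midpoints, freqs, marker="o")
ax3.set(title="Frequency polygon", xlabel="Score", ylabel="f")

plt.tight_layout()
plt.show()
```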
Graphic representations of frequency distributions may assume any of a number of different shapes (Figure 3–3). Regardless of the shape of graphed data, it is a good idea for the consumer of the information contained in the graph to examine it carefully—and, if need be, critically. Consider, in this context, this chapter’s Everyday Psychometrics.
Figure 3–3 Shapes That Frequency Distributions Can Take
As we discuss in detail later in this chapter, one graphic representation of data of particular interest to measurement professionals is the normal or bell-shaped curve. Before getting to that, however, let’s return to the subject of distributions and how we can describe and characterize them. One way to describe a distribution of test scores is by a measure of central tendency.
Measures of Central Tendency
A measure of central tendency is a statistic that indicates the average or midmost score between the extreme scores in a distribution. The center of a distribution can be defined in different ways. Perhaps the most commonly used measure of central tendency is the arithmetic mean (or, more simply, mean), which is referred to in everyday language as the “average.” The mean takes into account the actual numerical value of every score. In special instances, such as when there are only a few scores and one or two of the scores are extreme in relation to the remaining ones, a measure of central tendency other than the mean may be desirable. Other measures of central tendency we review include the median and the mode. Note that, in the formulas to follow, the standard statistical shorthand called “summation notation” (summation meaning “the sum of”) is used. The Greek uppercase letter sigma, Σ, is the symbol used to signify “sum”; if X represents a test score, then the expression ΣX means “add all the test scores.”
The arithmetic mean
The arithmetic mean, denoted by the symbol X̄ (and pronounced “X bar”), is equal to the sum of the observations (or test scores, in this case) divided by the number of observations. Symbolically written, the formula for the arithmetic mean is X̄ = ΣX/n, where n equals the number of observations or test scores. The arithmetic mean is typically the most appropriate measure of central tendency for interval or ratio data when the distributions are believed to be approximately normal. An arithmetic mean can also be computed from a frequency distribution. The formula for doing this is X̄ = Σ(fX)/n, where Σ(fX) means “multiply the frequency of each score by its corresponding score and then sum.” An estimate of the arithmetic mean may also be obtained from a grouped frequency distribution using the same formula, where X is equal to the midpoint of the class interval. Table 3–4 illustrates a calculation of the mean from a grouped frequency distribution. After doing the math you will find that, using the grouped data, a mean of 71.8 (which may be rounded to 72) is calculated. Using the raw scores, a mean of 72.12 (which also may be rounded to 72) is calculated. Frequently, the choice of statistic will depend on the required degree of precision in measurement.
Table 3–4 Calculating the Arithmetic Mean from a Grouped Frequency Distribution (columns: Class Interval; f; X [midpoint of class interval]; fX — totals: Σf = 25, Σ(fX) = 1,795)
To estimate the arithmetic mean of this grouped frequency distribution, X̄ = Σ(fX)/n = 1,795/25 = 71.8.
To calculate the mean of this distribution using raw scores, X̄ = ΣX/n = 1,803/25 = 72.12.
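As a quick check of the grouped-data formula, the sketch below reuses the illustrative interval frequencies from the earlier examples (invented, but chosen to match the Σf and Σ(fX) totals of Table 3–4):

```python
# Grouped-mean check: mean = sum(f * X) / n, with X the interval midpoint.
midpoints = [42, 47, 52, 57, 62, 67, 72, 77, 82, 87, 92, 97]
freqs     = [1,  1,  1,  1,  2,  4,  4,  4,  3,  2,  1,  1]

n = sum(freqs)                                          # 25
sum_fx = sum(f * x for f, x in zip(freqs, midpoints))   # 1795
print(sum_fx / n)                                       # 71.8
```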
JUST THINK . . .
Imagine that a thousand or so engineers took an extremely difficult pre-employment test. A handful of the engineers earned very high scores but the vast majority did poorly, earning extremely low scores. Given this scenario, what are the pros and cons of using the mean as a measure of central tendency for this test?
EVERYDAY PSYCHOMETRICS
Consumer (of Graphed Data), Beware!
One picture is worth a thousand words, and one purpose of representing data in graphic form is to convey information at a glance. However, although two graphs may be accurate with respect to the data they represent, their pictures—and the impression drawn from a glance at them—may be vastly different. As an example, consider the following hypothetical scenario involving a hamburger restaurant chain we’ll call “The Charred House.”
The Charred House chain serves very charbroiled, microscopically thin hamburgers formed in the shape of little triangular houses. In the 10-year period since its founding in 1993, the company has sold, on average, 100 million burgers per year. On the chain’s tenth anniversary, The Charred House distributes a press release proudly announcing “Over a Billion Served.”
Reporters from two business publications set out to research and write a feature article on this hamburger restaurant chain. Working solely from sales figures as compiled from annual reports to the shareholders, Reporter 1 focuses her story on the differences in yearly sales. Her article is entitled “A Billion Served—But Charred House Sales Fluctuate from Year to Year,” and its graphic illustration is reprinted here.
Quite a different picture of the company emerges from Reporter 2’s story, entitled “A Billion Served—And Charred House Sales Are as Steady as Ever,” and its accompanying graph. The latter story is based on a diligent analysis of comparable data for the same number of hamburger chains in the same areas of the country over the same time period. While researching the story, Reporter 2 learned that yearly fluctuations in sales are common to the entire industry and that the annual fluctuations observed in the Charred House figures were—relative to other chains—insignificant.
Compare the graphs that accompanied each story. Although both are accurate insofar as they are based on the correct numbers, the impressions they are likely to leave are quite different.
Incidentally, custom dictates that the intersection of the two axes of a graph be at 0 and that all the points on the Y-axis be in equal and proportional intervals from 0. This custom is followed in Reporter 2’s story, where the first point on the ordinate is 10 units more than 0, and each succeeding point is also 10 more units away from 0. However, the custom is violated in Reporter 1’s story, where the first point on the ordinate is 95 units more than 0, and each succeeding point increases only by 1. The fact that the custom is violated in Reporter 1’s story should serve as a warning to evaluate pictorial representations of data all the more critically.
The median
The median, defined as the middle score in a distribution, is another commonly used measure of central tendency. We determine the median of a distribution of scores by ordering the scores in a list by magnitude, in either ascending or descending order. If the total number of scores ordered is an odd number, then the median will be the score that is exactly in the middle, with one-half of the remaining scores lying above it and the other half of the remaining scores lying below it. When the total number of scores ordered is an even number, then the median can be calculated by determining the arithmetic mean of the two middle scores. For example, suppose that 10 people took a preemployment word-processing test at The Rochester Wrenchworks (TRW) Corporation. They obtained the following scores, presented here in descending order:
66 65 61 59 53 52 41 36 35 32
The median of these data would be calculated by obtaining the average (or, the arithmetic mean) of the two middle scores, 53 and 52 (which would be equal to 52.5). The median is an appropriate measure of central tendency for ordinal, interval, and ratio data. The median may be a particularly useful measure of central tendency in cases where relatively few scores fall at the high end of the distribution or relatively few scores fall at the low end of the distribution.
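A quick check with the standard library reproduces this result (a minimal sketch, using the scores exactly as given):

```python
import statistics

# The 10 TRW word-processing scores from the text.
scores = [66, 65, 61, 59, 53, 52, 41, 36, 35, 32]

# With an even number of scores, statistics.median averages the two
# middle scores: (53 + 52) / 2.
print(statistics.median(scores))  # 52.5
```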
Suppose not 10 but rather tens of thousands of people had applied for jobs at The Rochester Wrenchworks. It would be impractical to find the median by simply ordering the data and finding the midmost scores, so how would the median score be identified? For our purposes, the answer is simply that there are advanced methods for doing so. There are also techniques for identifying the median in other sorts of distributions, such as a grouped frequency distribution and a distribution wherein various scores are identical. However, instead of delving into such new and complex territory, let’s resume our discussion of central tendency and consider another such measure.
The mode
The most frequently occurring score in a distribution of scores is the mode. As an example, determine the mode for the following scores obtained by another TRW job applicant, Bruce. The scores reflect the number of words Bruce word-processed in seven 1-minute trials:
43 34 45 51 42 31 51
It is TRW policy that new hires must be able to word-process at least 50 words per minute. Now, place yourself in the role of the corporate personnel officer. Would you hire Bruce? The most frequently occurring score in this distribution of scores is 51. If hiring guidelines gave you the freedom to use any measure of central tendency in your personnel decision making, then it would be your choice as to whether or not Bruce is hired. You could hire him and justify this decision on the basis of his modal score (51). You also could not hire him and justify this decision on the basis of his mean score (below the required 50 words per minute). Ultimately, whether Rochester Wrenchworks will be Bruce’s new home away from home will depend on other job-related factors, such as the nature of the job market in Rochester and the qualifications of competing applicants. Of course, if company guidelines dictate that only the mean score be used in hiring decisions, then a career at TRW is not in Bruce’s immediate future.
Distributions that contain a tie for the designation “most frequently occurring score” can have more than one mode. Consider the following scores—arranged in no particular order—obtained by 20 students on the final exam of a new trade school called the Home Study School of Elvis Presley Impersonators:
51 49 51 50 66 52 53 38 17 66 33 44 73 13 21 91 87 92 47 3
These scores are said to have a bimodal distribution because there are two scores (51 and 66) that occur with the highest frequency (of two). Except with nominal data, the mode tends not to be a very commonly used measure of central tendency. Unlike the arithmetic mean, which has to be calculated, the value of the modal score is not calculated; one simply counts and determines which score occurs most frequently. Because the mode is arrived at in this manner, the modal score may be totally atypical—for instance, one at an extreme end of the distribution—which nonetheless occurs with the greatest frequency. In fact, it is theoretically possible for a bimodal distribution to have two modes, each of which falls at the high or the low end of the distribution—thus violating the expectation that a measure of central tendency should be . . . well, central (or indicative of a point at the middle of the distribution).
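Both results can be verified in a few lines of Python. The helper below simply counts frequencies and returns every score tied for the top count (statistics.multimode in Python 3.8+ does much the same thing):

```python
from collections import Counter

def modes(scores):
    # Count frequencies and return every score tied for the top count.
    counts = Counter(scores)
    top = max(counts.values())
    return sorted(s for s, c in counts.items() if c == top)

# Bruce's seven 1-minute trials: a single mode of 51.
print(modes([43, 34, 45, 51, 42, 31, 51]))  # [51]

# The 20 Elvis-impersonator exam scores: bimodal at 51 and 66.
exam = [51, 49, 51, 50, 66, 52, 53, 38, 17, 66,
        33, 44, 73, 13, 21, 91, 87, 92, 47, 3]
print(modes(exam))  # [51, 66]
```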
Even though the mode is not calculated in the sense that the mean is calculated, and even though the mode is not necessarily a unique point in a distribution (a distribution can have two, three, or even more modes), the mode can still be useful in conveying certain types of information. The mode is useful in analyses of a qualitative or verbal nature. For example, when assessing consumers’ recall of a commercial by means of interviews, a researcher might be interested in which word or words were mentioned most by interviewees.
The mode can convey a wealth of information in addition to the mean. As an example, suppose you wanted an estimate of the number of journal articles published by clinical psychologists in the United States in the past year. To arrive at this figure, you might total the number of journal articles accepted for publication written by each clinical psychologist in the United States, divide by the number of psychologists, and arrive at the arithmetic mean. This calculation would yield an indication of the average number of journal articles published. Whatever that number would be, we can say with certainty that it would be more than the mode. It is well known that most clinical psychologists do not write journal articles. The mode for publications by clinical psychologists in any given year is zero. In this example, the arithmetic mean would provide us with a precise measure of the average number of articles published by clinicians. However, what might be lost in that measure of central tendency is that, proportionately, very few of all clinicians do most of the publishing. The mode (in this case, a mode of zero) would provide us with a great deal of information at a glance. It would tell us that, regardless of the mean, most clinicians do not publish.
Because the mode is not calculated in a true sense, it is a nominal statistic and cannot legitimately be used in further calculations. The median is a statistic that takes into account the order of scores and is itself ordinal in nature. The mean, an interval-level statistic, is generally the most stable and useful measure of central tendency.
JUST THINK . . .
Devise your own example to illustrate how the mode, and not the mean, can be the most useful measure of central tendency.
Measures of Variability
Variability is an indication of how scores in a distribution are scattered or dispersed. As Figure 3–4 illustrates, two or more distributions of test scores can have the same mean even though differences in the dispersion of scores around the mean can be wide. In both distributions A and B, test scores could range from 0 to 100. In distribution A, we see that the mean score was 50 and the remaining scores were widely distributed around the mean. In distribution B, the mean was also 50 but few people scored higher than 60 or lower than 40.
Figure 3–4 Two Distributions with Differences in Variability
Statistics that describe the amount of variation in a distribution are referred to as measures of variability. Some measures of variability include the range, the interquartile range, the semi-interquartile range, the average deviation, the standard deviation, and the variance.
The range
The range of a distribution is equal to the difference between the highest and the lowest scores. We could describe distribution B of Figure 3–4, for example, as having a range of 20 if we knew that the highest score in this distribution was 60 and the lowest score was 40 (60 − 40 = 20). With respect to distribution A, if we knew that the lowest score was 0 and the highest score was 100, the range would be equal to 100 − 0, or 100. The range is the simplest measure of variability to calculate, but its potential use is limited. Because the range is based entirely on the values of the lowest and highest scores, one extreme score (if it happens to be the lowest or the highest) can radically alter the value of the range. For example, suppose distribution B included a score of 90. The range of this distribution would now be equal to 90 − 40, or 50. Yet, in looking at the data in the graph for distribution B, it is clear that the vast majority of scores tend to be between 40 and 60.
JUST THINK . . .
Devise two distributions of test scores to illustrate how the range can overstate or understate the degree of variability in the scores.
As a descriptive statistic of variation, the range provides a quick but gross description of the spread of scores. When its value is based on extreme scores in a distribution, the resulting description of variation may be understated or overstated. Better measures of variation include the interquartile range and the semi-interquartile range.
The interquartile and semi-interquartile ranges
A distribution of test scores (or any other data, for that matter) can be divided into four parts such that 25% of the test scores occur in each quarter. As illustrated in Figure 3–5, the dividing points between the four quarters in the distribution are the quartiles. There are three of them, respectively labeled Q1, Q2, and Q3. Note that quartile refers to a specific point whereas quarter refers to an interval. An individual score may, for example, fall at the third quartile or in the third quarter (but not “in” the third quartile or “at” the third quarter). It should come as no surprise to you that Q2 and the median are exactly the same. And just as the median is the midpoint in a distribution of scores, so are quartiles Q1 and Q3 the quarter-points in a distribution of scores. Formulas may be employed to determine the exact value of these points.
Figure 3–5 A Quartered Distribution
The interquartile range is a measure of variability equal to the difference between Q3 and Q1. Like the median, it is an ordinal statistic. A related measure of variability is the semi-interquartile range, which is equal to the interquartile range divided by 2. Knowledge of the relative distances of Q1 and Q3 from Q2 (the median) provides the seasoned test interpreter with immediate information as to the shape of the distribution of scores. In a perfectly symmetrical distribution, Q1 and Q3 will be exactly the same distance from the median. If these distances are unequal then there is a lack of symmetry. This lack of symmetry is referred to as skewness, and we will have more to say about that shortly.
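A short sketch using statistics.quantiles (Python 3.8+) shows the mechanics. The 15 scores are invented for illustration, and note that different quantile-estimation methods can return slightly different quartile values:

```python
import statistics

# Fifteen invented scores, for illustration only.
scores = [41, 44, 46, 50, 53, 55, 57, 60, 61, 63, 66, 70, 74, 79, 85]

# statistics.quantiles with n=4 returns the three quartiles Q1, Q2, Q3.
q1, q2, q3 = statistics.quantiles(scores, n=4)

iqr = q3 - q1               # interquartile range
semi_iqr = iqr / 2          # semi-interquartile range
print(q1, q2, q3, iqr, semi_iqr)

# A rough symmetry check: (Q3 - Q2) > (Q2 - Q1) suggests positive skew,
# (Q3 - Q2) < (Q2 - Q1) suggests negative skew.
print((q3 - q2) - (q2 - q1))
```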
The average deviation
Another tool that could be used to describe the amount of variability in a distribution is the average deviation, or AD for short. Its formula is AD = Σ|x| / n.
The lowercase italic x in the formula signifies a score’s deviation from the mean. The value of x is obtained by subtracting the mean from the score (X − mean = x). The bars on each side of x indicate that it is the absolute value of the deviation score (ignoring the positive or negative sign and treating all deviation scores as positive). All the deviation scores are then summed and divided by the total number of scores (n) to arrive at the average deviation. As an exercise, calculate the average deviation for the following distribution of test scores:
85 100 90 95 80
Begin by calculating the arithmetic mean. Next, obtain the absolute value of each of the five deviation scores and sum them. As you sum them, note what would happen if you did not ignore the plus or minus signs: All the deviation scores would then sum to 0. Divide the sum of the deviation scores by the number of measurements (5). Did you obtain an AD of 6? The AD tells us that the five scores in this distribution varied, on average, 6 points from the mean.
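The same arithmetic, in a few lines of Python:

```python
# The five scores from the exercise above.
scores = [85, 100, 90, 95, 80]
mean = sum(scores) / len(scores)  # 90.0

# AD = sum of absolute deviations from the mean, divided by n.
ad = sum(abs(x - mean) for x in scores) / len(scores)
print(ad)  # 6.0
```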
JUST THINK . . .
After reading about the standard deviation, explain in your own words how an understanding of the average deviation can provide a “stepping-stone” to better understanding the concept of a standard deviation.
The average deviation is rarely used. Perhaps this is so because the deletion of algebraic signs renders it a useless measure for purposes of any further operations. Why, then, discuss it here? The reason is that a clear understanding of what an average deviation measures provides a solid foundation for understanding the conceptual basis of another, more widely used measure: the standard deviation. Keeping in mind what an average deviation is, what it tells us, and how it is derived, let’s consider its more frequently used “cousin,” the standard deviation.
The standard deviation
Recall that, when we calculated the average deviation, the problem of the sum of all deviation scores around the mean equaling zero was solved by employing only the absolute value of the deviation scores. In calculating the standard deviation, the same problem must be dealt with, but we do so in a different way. Instead of using the absolute value of each deviation score, we use the square of each deviation score. With each deviation score squared, the sign of any negative deviation becomes positive. Because all the deviation scores are squared, we know that our calculations won’t be complete until we go back and obtain the square root of whatever value we reach.
We may define the standard deviation as a measure of variability equal to the square root of the average squared deviations about the mean. More succinctly, it is equal to the square root of the variance. The variance is equal to the arithmetic mean of the squares of the differences between the scores in a distribution and their mean. The formula used to calculate the variance (s²) using deviation scores is s² = Σx² / n.
Simply stated, the variance is calculated by squaring and summing all the deviation scores and then dividing by the total number of scores. The variance can also be calculated in other ways. For example: From raw scores, first calculate the summation of the raw scores squared, divide by the number of scores, and then subtract the mean squared. The result is s² = (ΣX² / n) − X̄².
The variance is a widely used measure in psychological research. To make meaningful interpretations, the test-score distribution should be approximately normal. We’ll have more to say about “normal” distributions later in the chapter. At this point, think of a normal distribution as a distribution with the greatest frequency of scores occurring near the arithmetic mean. Correspondingly fewer and fewer scores relative to the mean occur on both sides of it.
For some hands-on experience with—and to develop a sense of mastery of—the concepts of variance and standard deviation, why not allot the next 10 or 15 minutes to calculating the standard deviation for the test scores shown in Table 3–1? Use both formulas to verify that they produce the same results. Using deviation scores, your calculations should take the form s² = Σx²/n; using the raw-scores formula, they should take the form s² = (ΣX²/n) − X̄². Either way, you should arrive at a variance of approximately 198.8 for these data.
In both cases, the standard deviation is the square root of the variance (s²). According to our calculations, the standard deviation of the test scores is 14.10. If s = 14.10, then 1 standard deviation unit is approximately equal to 14 units of measurement or (with reference to our example and rounded to a whole number) to 14 test-score points. The test data did not provide a good normal curve approximation. Test professionals would describe these data as “positively skewed.” Skewness, as well as related terms such as negatively skewed and positively skewed, are covered in the next section. Once you are “positively familiar” with terms like positively skewed, you’ll appreciate all the more the section later in this chapter entitled “The Area Under the Normal Curve.” There you will find a wealth of information about test-score interpretation in the case when the scores are not skewed—that is, when the test scores are approximately normal in distribution.
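To see that the deviation-score and raw-score routes agree, here is a sketch reusing the invented stand-in scores from earlier. Being invented, they will not reproduce the s = 14.10 reported for the actual Table 3–1 data:

```python
import math

# The same invented stand-in scores used earlier (mean = 72.12).
scores = [67, 81, 75, 44, 88, 62, 97, 70, 73, 57, 77, 84, 66,
          53, 93, 79, 61, 71, 86, 69, 78, 47, 74, 68, 83]
n = len(scores)
mean = sum(scores) / n

# Deviation-score route: s^2 = sum(x^2) / n, where x = X - mean.
var_dev = sum((x - mean) ** 2 for x in scores) / n

# Raw-score route: s^2 = sum(X^2) / n - mean^2.
var_raw = sum(x * x for x in scores) / n - mean ** 2

print(math.isclose(var_dev, var_raw))  # True: the two routes agree
print(round(math.sqrt(var_dev), 2))    # s, the standard deviation
```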
The symbol for standard deviation has variously been represented as s, S, SD, and the lowercase Greek letter sigma (σ). One custom (the one we adhere to) has it that s refers to the sample standard deviation and σ refers to the population standard deviation. The number of observations in the sample is n, and the denominator n − 1 is sometimes used to calculate what is referred to as an “unbiased estimate” of the population value (though it’s actually only less biased; see Hopkins & Glass, 1978). Unless n is 10 or less, the use of n or n − 1 tends not to make a meaningful difference.
Whether the denominator is more properly n or n − 1 has been a matter of debate. Lindgren (1983) has argued for the use of n − 1, in part because this denominator tends to make correlation formulas simpler. By contrast, most texts recommend the use of n − 1 only when the data constitute a sample; when the data constitute a population, n is preferable. For Lindgren (1983), it doesn’t matter whether the data are from a sample or a population. Perhaps the most reasonable convention is to use n either when the entire population has been assessed or when no inferences to the population are intended. So, when considering the examination scores of one class of students—including all the people about whom we’re going to make inferences—it seems appropriate to use n.
Having stated our position on the n versus n − 1 controversy, our formula for the population standard deviation follows. In this formula, X̄ represents a sample mean and M a population mean: σ = √(Σ(X − M)² / n).
The standard deviation is a very useful measure of variation because each individual score’s distance from the mean of the distribution is factored into its computation. You will come across this measure of variation frequently in the study and practice of measurement in psychology.
Skewness
Distributions can be characterized by their skewness, or the nature and extent to which symmetry is absent. Skewness is an indication of how the measurements in a distribution are distributed. A distribution has a positive skew when relatively few of the scores fall at the high end of the distribution. Positively skewed examination results may indicate that the test was too difficult. More items that were easier would have been desirable in order to better discriminate at the lower end of the distribution of test scores. A distribution has a negative skew when relatively few of the scores fall at the low end of the distribution. Negatively skewed examination results may indicate that the test was too easy. In this case, more items of a higher level of difficulty would make it possible to better discriminate between scores at the upper end of the distribution. (Refer to Figure 3–3 for graphic examples of skewed distributions.)
The term skewed carries with it negative implications for many students. We suspect that skewed is associated with abnormal, perhaps because the skewed distribution deviates from the symmetrical or so-called normal distribution. However, the presence or absence of symmetry in a distribution (skewness) is simply one characteristic by which a distribution can be described. Consider in this context a hypothetical Marine Corps Ability and Endurance Screening Test administered to all civilians seeking to enlist in the U.S. Marines. Now look again at the graphs in Figure 3–3 . Which graph do you think would best describe the resulting distribution of test scores? (No peeking at the next paragraph before you respond.)
No one can say with certainty, but if we had to guess, then we would say that the Marine Corps Ability and Endurance Screening Test data would look like graph C, the positively skewed distribution in Figure 3–3. We say this assuming that a level of difficulty would have been built into the test to ensure that relatively few assessees would score at the high end of the distribution. Most of the applicants would probably score at the low end of the distribution. All of this is quite consistent with the advertised objective of the Marines, who are only looking for a few good men. You know: the few, the proud. Now, a question regarding this positively skewed distribution: Is the skewness a good thing? A bad thing? An abnormal thing? In truth, it is probably none of these things—it just is. By the way, although they may not advertise it as much, the Marines are also looking for (an unknown quantity of) good women. But here we are straying a bit too far from skewness.
Various formulas exist for measuring skewness. One way of gauging the skewness of a distribution is through examination of the relative distances of quartiles from the median. In a positively skewed distribution, Q3 − Q2 will be greater than the distance of Q2 − Q1. In a negatively skewed distribution, Q3 − Q2 will be less than the distance of Q2 − Q1. In a distribution that is symmetrical, the distances from Q1 and Q3 to the median are the same.
Kurtosis
The term testing professionals use to refer to the steepness of a distribution in its center is kurtosis. To the root kurtic is added one of the prefixes platy-, lepto-, or meso- to describe the peakedness/flatness of three general types of curves (Figure 3–6). Distributions are generally described as platykurtic (relatively flat), leptokurtic (relatively peaked), or—somewhere in the middle—mesokurtic. Distributions that have high kurtosis are characterized by a high peak and “fatter” tails compared to a normal distribution. In contrast, lower kurtosis values indicate a distribution with a rounded peak and thinner tails. Many methods exist for measuring kurtosis. According to the original definition, the normal bell-shaped curve (see graph A from Figure 3–3) would have a kurtosis value of 3. In other methods of computing kurtosis, a normal distribution would have kurtosis of 0, with positive values indicating higher kurtosis and negative values indicating lower kurtosis. It is important to keep the different methods of calculating kurtosis in mind when examining the values reported by researchers or computer programs. So, given that this can quickly become an advanced-level topic and that this book is of a more introductory nature, let’s move on. It’s time to focus on a type of distribution that happens to be the standard against which all other distributions (including all of the kurtic ones) are compared: the normal distribution.
Figure 3–6 The Kurtosis of Curves
JUST THINK . . .
Like skewness, reference to the kurtosis of a distribution can provide a kind of “shorthand” description of a distribution of test scores. Imagine and describe the kind of test that might yield a distribution of scores that form a platykurtic curve.
The Normal Curve
Before delving into the statistical, a little bit of the historical is in order. Development of the concept of a normal curve began in the middle of the eighteenth century with the work of Abraham de Moivre and, later, the Marquis de Laplace. At the beginning of the nineteenth century, Carl Friedrich Gauss made some substantial contributions. Through the early nineteenth century, scientists referred to it as the “Laplace-Gaussian curve.” Karl Pearson is credited with being the first to refer to the curve as the normal curve, perhaps in an effort to be diplomatic to all of the people who helped develop it. Somehow the term normal curve stuck—but don’t be surprised if you’re sitting at some scientific meeting one day and you hear this distribution or curve referred to as Gaussian.
Theoretically, the normal curve is a bell-shaped, smooth, mathematically defined curve that is highest at its center. From the center it tapers on both sides approaching the X-axis asymptotically (meaning that it approaches, but never touches, the axis). In theory, the distribution of the normal curve ranges from negative infinity to positive infinity. The curve is perfectly symmetrical, with no skewness. If you folded it in half at the mean, one side would lie exactly on top of the other. Because it is symmetrical, the mean, the median, and the mode all have the same exact value.
Why is the normal curve important in understanding the characteristics of psychological tests? Our Close-Up provides some answers.
The Area Under the Normal Curve
The normal curve can be conveniently divided into areas defined in units of standard deviation. A hypothetical distribution of National Spelling Test scores with a mean of 50 and a standard deviation of 15 is illustrated in Figure 3–7. In this example, a score equal to 1 standard deviation above the mean would be equal to 65 (+1s = 50 + 15 = 65).
Figure 3–7 The Area Under the Normal Curve
CLOSE-UP
The Normal Curve and Psychological Tests
Scores on many psychological tests are often approximately normally distributed, particularly when the tests are administered to large numbers of subjects. Few, if any, psychological tests yield precisely normal distributions of test scores (Micceri, 1989). As a general rule (with ample exceptions), the larger the sample size and the wider the range of abilities measured by a particular test, the more the graph of the test scores will approximate the normal curve. A classic illustration of this was provided by E. L. Thorndike and his colleagues (1927). They compiled intelligence test scores from several large samples of students. As you can see in Figure 1 , the distribution of scores closely approximated the normal curve.
Figure 1 Graphic Representation of Thorndike et al. Data The solid line outlines the distribution of intelligence test scores of sixth-grade students (N = 15,138). The dotted line is the theoretical normal curve (Thorndike et al., 1927).
Following is a sample of more varied examples of the wide range of characteristics that psychologists have found to be approximately normal in distribution.
· The strength of handedness in right-handed individuals, as measured by the Waterloo Handedness Questionnaire (Tan, 1993).
· Scores on the Women’s Health Questionnaire, a scale measuring a variety of health problems in women across a wide age range (Hunter, 1992).
· Responses of both college students and working adults to a measure of intrinsic and extrinsic work motivation (Amabile et al., 1994).
· The intelligence-scale scores of girls and women with eating disorders, as measured by the Wechsler Adult Intelligence Scale–Revised and the Wechsler Intelligence Scale for Children–Revised (Ranseen & Humphries, 1992).
· The intellectual functioning of children and adolescents with cystic fibrosis (Thompson et al., 1992).
· Decline in cognitive abilities over a one-year period in people with Alzheimer’s disease (Burns et al., 1991).
· The rate of motor-skill development in developmentally delayed preschoolers, as measured by the Vineland Adaptive Behavior Scale (Davies & Gavin, 1994).
· Scores on the Swedish translation of the Positive and Negative Syndrome Scale, which assesses the presence of positive and negative symptoms in people with schizophrenia (von Knorring & Lindstrom, 1992).
· Scores of psychiatrists on the Scale for Treatment Integration of the Dually Diagnosed (people with both a drug problem and another mental disorder); the scale examines opinions about drug treatment for this group of patients (Adelman et al., 1991).
· Responses to the Tridimensional Personality Questionnaire, a measure of three distinct personality features (Cloninger et al., 1991).
· Scores on a self-esteem measure among undergraduates (Addeo et al., 1994).
In each case, the researchers made a special point of stating that the scale under investigation yielded something close to a normal distribution of scores. Why? One benefit of a normal distribution of scores is that it simplifies the interpretation of individual scores on the test. In a normal distribution, the mean, the median, and the mode take on the same value. For example, if we know that the average score for intellectual ability of children with cystic fibrosis is a particular value and that the scores are normally distributed, then we know quite a bit more. We know that the average is the most common score and the score below and above which half of all the scores fall. Knowing the mean and the standard deviation of a scale and that it is approximately normally distributed tells us that (1) approximately two-thirds of all testtakers’ scores are within a standard deviation of the mean and (2) approximately 95% of the scores fall within 2 standard deviations of the mean.
The characteristics of the normal curve provide a ready model for score interpretation that can be applied to a wide range of test results.
Before reading on, take a minute or two to calculate what a score exactly at 3 standard deviations below the mean would be equal to. How about a score exactly at 3 standard deviations above the mean? Were your answers 5 and 95, respectively? The graph tells us that 99.74% of all scores in these normally distributed spelling-test data lie between ±3 standard deviations. Stated another way, 99.74% of all spelling test scores lie between 5 and 95. This graph also illustrates the following characteristics of all normal distributions.
· 50% of the scores occur above the mean and 50% of the scores occur below the mean.
· Approximately 34% of all scores occur between the mean and 1 standard deviation above the mean.
· Approximately 34% of all scores occur between the mean and 1 standard deviation below the mean.
· Approximately 68% of all scores occur within ±1 standard deviation of the mean.
· Approximately 95% of all scores occur within ±2 standard deviations of the mean.
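These benchmark percentages can be verified with the standard library’s NormalDist (Python 3.8+), parameterized here with the spelling test’s mean of 50 and standard deviation of 15 (the exact values hold for any normal distribution):

```python
from statistics import NormalDist

# The hypothetical National Spelling Test distribution: mean 50, sd 15.
spelling = NormalDist(mu=50, sigma=15)

def pct_within(dist, k):
    # Percentage of scores within +/- k standard deviations of the mean.
    return 100 * (dist.cdf(dist.mean + k * dist.stdev)
                  - dist.cdf(dist.mean - k * dist.stdev))

print(round(pct_within(spelling, 1), 2))  # ~68.27
print(round(pct_within(spelling, 2), 2))  # ~95.45
print(round(pct_within(spelling, 3), 2))  # ~99.73 (quoted as 99.74 above)
```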
A normal curve has two tails. The area on the normal curve between 2 and 3 standard deviations above the mean is referred to as a tail. The area between −2 and −3 standard deviations below the mean is also referred to as a tail. Let’s digress here momentarily for a “real-life” tale of the tails to consider along with our rather abstract discussion of statistical concepts.
As observed in a thought-provoking article entitled “Two Tails of the Normal Curve,” an intelligence test score that falls within the limits of either tail can have momentous consequences in terms of the tale of one’s life:
Individuals who are mentally retarded or gifted share the burden of deviance from the norm, in both a developmental and a statistical sense. In terms of mental ability as operationalized by tests of intelligence, performance that is approximately two standard deviations from the mean (or, IQ of 70–75 or lower or IQ of 125–130 or higher) is one key element in identification. Success at life’s tasks, or its absence, also plays a defining role, but the primary classifying feature of both gifted and retarded groups is intellectual deviance. These individuals are out of sync with more average people, simply by their difference from what is expected for their age and circumstance. This asynchrony results in highly significant consequences for them and for those who share their lives. None of the familiar norms apply, and substantial adjustments are needed in parental expectations, educational settings, and social and leisure activities. (Robinson et al., 2000, p. 1413)
Robinson et al. (2000) convincingly demonstrated that knowledge of the areas under the normal curve can be quite useful to the interpreter of test data. This knowledge can tell us not only something about where the score falls among a distribution of scores but also something about a person and perhaps even something about the people who share that person’s life. This knowledge might also convey something about how impressive, average, or lackluster the individual is with respect to a particular discipline or ability. For example, consider a high-school student whose score on a national, well-respected spelling test is close to 3 standard deviations above the mean. It’s a good bet that this student would know how to spell words like asymptotic and leptokurtic.
Just as knowledge of the areas under the normal curve can instantly convey useful information about a test score in relation to other test scores, so can knowledge of standard scores.
Standard Scores
Simply stated, a standard score is a raw score that has been converted from one scale to another scale, where the latter scale has some arbitrarily set mean and standard deviation. Why convert raw scores to standard scores?
Raw scores may be converted to standard scores because standard scores are more easily interpretable than raw scores. With a standard score, the position of a testtaker’s performance relative to other testtakers is readily apparent.
Different systems for standard scores exist, each unique in terms of its respective mean and standard deviations. We will briefly describe z scores, T scores, stanines, and some other standard scores. First for consideration is the type of standard score scale that may be thought of as the zero plus or minus one scale. This is so because it has a mean set at 0 and a standard deviation set at 1. Raw scores converted into standard scores on this scale are more popularly referred to as z scores.
A z score results from the conversion of a raw score into a number indicating how many standard deviation units the raw score is below or above the mean of the distribution. Let’s use an example from the normally distributed “National Spelling Test” data in Figure 3–7 to demonstrate how a raw score is converted to a z score. We’ll convert a raw score of 65 to a z score by using the formula z = (X − X̄)/s. For these data, z = (65 − 50)/15 = 15/15 = +1.
In essence, a z score is equal to the difference between a particular raw score and the mean divided by the standard deviation. In the preceding example, a raw score of 65 was found to be equal to a z score of +1. Knowing that someone obtained a z score of 1 on a spelling test provides context and meaning for the score. Drawing on our knowledge of areas under the normal curve, for example, we would know that only about 16% of the other testtakers obtained higher scores. By contrast, knowing simply that someone obtained a raw score of 65 on a spelling test conveys virtually no usable information because information about the context of this score is lacking.
In addition to providing a convenient context for comparing scores on the same test, standard scores provide a convenient context for comparing scores on different tests. As an example, consider that Crystal’s raw score on the hypothetical Main Street Reading Test was 24 and that her raw score on the (equally hypothetical) Main Street Arithmetic Test was 42. Without knowing anything other than these raw scores, one might conclude that Crystal did better on the arithmetic test than on the reading test. Yet more informative than the two raw scores would be the two z scores.
Converting Crystal’s raw scores to z scores based on the performance of other students in her class, suppose we find that her z score on the reading test was 1.32 and that her z score on the arithmetic test was −0.75. Thus, although her raw score in arithmetic was higher than in reading, the z scores paint a different picture. The z scores tell us that, relative to the other students in her class (and assuming that the distribution of scores is relatively normal), Crystal performed above average on the reading test and below average on the arithmetic test. An interpretation of exactly how much better she performed could be obtained by reference to tables detailing distances under the normal curve as well as the resulting percentage of cases that could be expected to fall above or below a particular standard deviation point (or z score).
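A sketch of the conversion follows. Crystal’s class means and standard deviations are invented here purely to reproduce the z scores quoted above:

```python
def z_score(raw, mean, sd):
    # z = (X - mean) / s
    return (raw - mean) / sd

# The spelling-test example: raw 65, mean 50, sd 15.
print(z_score(65, mean=50, sd=15))   # 1.0

# Crystal's tests; these class means and standard deviations are
# invented purely to reproduce the z scores quoted in the text.
print(z_score(24, mean=17.4, sd=5))  # reading: 1.32
print(z_score(42, mean=45.0, sd=4))  # arithmetic: -0.75
```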
If the scale used in the computation of z scores is called a zero plus or minus one scale, then the scale used in the computation of T scores can be called a fifty plus or minus ten scale; that is, a scale with a mean set at 50 and a standard deviation set at 10. Devised by W. A. McCall (1922, 1939) and named a T score in honor of his professor E. L. Thorndike, this standard score system is composed of a scale that ranges from 5 standard deviations below the mean to 5 standard deviations above the mean. Thus, for example, a raw score that fell exactly at 5 standard deviations below the mean would be equal to a T score of 0, a raw score that fell at the mean would be equal to a T of 50, and a raw score 5 standard deviations above the mean would be equal to a T of 100. One advantage in using T scores is that none of the scores is negative. By contrast, in a z score distribution, scores can be positive and negative; this can make further computation cumbersome in some instances.
Other Standard Scores
Numerous other standard scoring systems exist. Researchers during World War II developed a standard score with a mean of 5 and a standard deviation of approximately 2. Divided into nine units, the scale was christened a stanine, a term that was a contraction of the words standard and nine.
Stanine scoring may be familiar to many students from achievement tests administered in elementary and secondary school, where test scores are often represented as stanines. Stanines are different from other standard scores in that they take on whole values from 1 to 9, which represent a range of performance that is half of a standard deviation in width (Figure 3–8). The 5th stanine indicates performance in the average range, from 1/4 standard deviation below the mean to 1/4 standard deviation above the mean, and captures the middle 20% of the scores in a normal distribution. The 4th and 6th stanines are also 1/2 standard deviation wide and capture the 17% of cases below and above (respectively) the 5th stanine.
Figure 3–8 Stanines and the Normal Curve
Another type of standard score is employed on tests such as the Scholastic Aptitude Test (SAT) and the Graduate Record Examination (GRE). Raw scores on those tests are converted to standard scores such that the resulting distribution has a mean of 500 and a standard deviation of 100. If the letter A is used to represent a standard score from a college or graduate school admissions test whose distribution has a mean of 500 and a standard deviation of 100, then the following is true: A = 500 + 100z. An A score of 600, for example, falls exactly 1 standard deviation above the mean, and an A score of 400 falls exactly 1 standard deviation below it.
Have you ever heard the term IQ used as a synonym for one’s score on an intelligence test? Of course you have. What you may not know is that what is referred to variously as IQ, deviation IQ, or deviation intelligence quotient is yet another kind of standard score. For most IQ tests, the distribution of raw scores is converted to IQ scores, whose distribution typically has a mean set at 100 and a standard deviation set at 15. Let’s emphasize typically because there is some variation in standard scoring systems, depending on the test used. The typical mean and standard deviation for IQ tests results in approximately 95% of deviation IQs ranging from 70 to 130, which is 2 standard deviations below and above the mean. In the context of a normal distribution, the relationship of deviation IQ scores to the other standard scores we have discussed so far (z, T, and A scores) is illustrated in Figure 3–9 .
Figure 3–9 Some Standard Score Equivalents Note that the values presented here for the IQ scores assume that the intelligence test scores have a mean of 100 and a standard deviation of 15. This is true for many, but not all, intelligence tests. If a particular test of intelligence yielded scores with a mean other than 100 and/or a standard deviation other than 15, then the values shown for IQ scores would have to be adjusted accordingly.
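All of the scales in Figure 3–9 are linear transformations of z, of the general form new score = new mean + z × new standard deviation. A minimal sketch:

```python
def from_z(z, new_mean, new_sd):
    # Linear transformation of a z score onto a new scale.
    return new_mean + z * new_sd

z = 1.0  # e.g., the spelling-test raw score of 65 converted earlier

print(from_z(z, 50, 10))    # T score: 60.0
print(from_z(z, 500, 100))  # A (SAT/GRE-style) score: 600.0
print(from_z(z, 100, 15))   # deviation IQ: 115.0
print(from_z(z, 5, 2))      # stanine scale: 7.0 (reported as whole 1-9)
```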
Standard scores converted from raw scores may involve either linear or nonlinear transformations. A standard score obtained by a linear transformation is one that retains a direct numerical relationship to the original raw score. The magnitude of differences between such standard scores exactly parallels the differences between corresponding raw scores. Sometimes scores may undergo more than one transformation. For example, the creators of the SAT did a second linear transformation on their data to convert z scores into a new scale that has a mean of 500 and a standard deviation of 100.
A nonlinear transformation may be required when the data under consideration are not normally distributed yet comparisons with normal distributions need to be made. In a nonlinear transformation, the resulting standard score does not necessarily have a direct numerical relationship to the original, raw score. As the result of a nonlinear transformation, the original distribution is said to have been normalized.
Normalized standard scores
Many test developers hope that the test they are working on will yield a normal distribution of scores. Yet even after very large samples have been tested with the instrument under development, skewed distributions may still result. What should be done?
One alternative available to the test developer is to normalize the distribution. Conceptually, normalizing a distribution involves “stretching” the skewed curve into the shape of a normal curve and creating a corresponding scale of standard scores, a scale that is technically referred to as a normalized standard score scale.
Normalization of a skewed distribution of scores may also be desirable for purposes of comparability. One of the primary advantages of a standard score on one test is that it can readily be compared with a standard score on another test. However, such comparisons are appropriate only when the distributions from which they derived are the same. In most instances, they are the same because the two distributions are approximately normal. But if, for example, distribution A were normal and distribution B were highly skewed, then z scores in these respective distributions would represent different amounts of area subsumed under the curve. A z score of −1 with respect to normally distributed data tells us, among other things, that about 84% of the scores in this distribution were higher than this score. A z score of −1 with respect to data that were very positively skewed might mean, for example, that only 62% of the scores were higher.
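One common way to carry out such a nonlinear transformation, sketched below, is a rank-based inverse-normal transform: each score’s percentile rank is mapped to the z score a normal distribution would assign at that rank. This is a generic illustration of the idea, not the specific procedure used by any particular test developer.

```python
from statistics import NormalDist

def normalize(scores):
    # Rank-based (nonlinear) transformation: each score's percentile
    # rank is mapped to the z score a normal distribution would assign
    # at that rank. Ties are handled naively here for simplicity.
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i])
    z = [0.0] * n
    for rank, i in enumerate(order, start=1):
        z[i] = NormalDist().inv_cdf((rank - 0.5) / n)
    return z

# A small, positively skewed set of invented scores.
print([round(v, 2) for v in normalize([1, 2, 2, 3, 4, 7, 12, 20])])
```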
JUST THINK . . .
Apply what you have learned about frequency distributions, graphing frequency distributions, measures of central tendency, measures of variability, and the normal curve and standard scores to the question of the data listed in Table 3–1 . How would you communicate the data from Table 3–1 to the class? Which type of frequency distribution might you use? Which type of graph? Which measure of central tendency? Which measure of variability? Might reference to a normal curve or to standard scores be helpful? Why or why not?
For test developers intent on creating tests that yield normally distributed measurements, it is generally preferable to fine-tune the test according to difficulty or other relevant variables so that the resulting distribution will approximate the normal curve. That usually is a better bet than attempting to normalize skewed distributions. This is so because there are technical cautions to be observed before attempting normalization. For example, transformations should be made only when there is good reason to believe that the test sample was large enough and representative enough and that the failure to obtain normally distributed scores was due to the measuring instrument.
Correlation and Inference
Central to psychological testing and assessment are inferences (deduced conclusions) about how some things (such as traits, abilities, or interests) are related to other things (such as behavior). A coefficient of correlation (or correlation coefficient) is a number that provides us with an index of the strength of the relationship between two things. An understanding of the concept of correlation and an ability to compute a coefficient of correlation is therefore central to the study of tests and measurement.
The Concept of Correlation
Simply stated, correlation is an expression of the degree and direction of correspondence between two things. A coefficient of correlation (r) expresses a linear relationship between two (and only two) variables, usually continuous in nature. It reflects the degree of concomitant variation between variable X and variable Y. The coefficient of correlation is the numerical index that expresses this relationship: It tells us the extent to which X and Y are “co-related.”
The meaning of a correlation coefficient is interpreted by its sign and magnitude. If a correlation coefficient were a person asked “What’s your sign?,” it wouldn’t answer anything like “Leo” or “Pisces.” It would answer “plus” (for a positive correlation), “minus” (for a negative correlation), or “none” (in the rare instance that the correlation coefficient was exactly equal to zero). If asked to supply information about its magnitude, it would respond with a number anywhere at all between −1 and +1. And here is a rather intriguing fact about the magnitude of a correlation coefficient: It is judged by its absolute value. This means that to the extent that we are impressed by correlation coefficients, a correlation of −.99 is every bit as impressive as a correlation of +.99. To understand why, you need to know a bit more about correlation.
“Ahh . . . a perfect correlation! Let me count the ways.” Well, actually there are only two ways. The two ways to describe a perfect correlation between two variables are as either +1 or −1. If a correlation coefficient has a value of +1 or −1, then the relationship between the two variables being correlated is perfect—without error in the statistical sense. And just as perfection in almost anything is difficult to find, so too are perfect correlations. It’s challenging to try to think of any two variables in psychological work that are perfectly correlated. Perhaps that is why, if you look in the margin, you are asked to “just think” about it.
JUST THINK . . .
Can you name two variables that are perfectly correlated? How about two psychological variables that are perfectly correlated?
If two variables simultaneously increase or simultaneously decrease, then those two variables are said to be positively (or directly) correlated. The height and weight of normal, healthy children ranging in age from birth to 10 years tend to be positively or directly correlated. As children get older, their height and their weight generally increase simultaneously. A positive correlation also exists when two variables simultaneously decrease. For example, the less a student prepares for an examination, the lower that student’s score on the examination. A negative (or inverse) correlation occurs when one variable increases while the other variable decreases. For example, there tends to be an inverse relationship between the number of miles on your car’s odometer (mileage indicator) and the number of dollars a car dealer is willing to give you on a trade-in allowance; all other things being equal, as the mileage increases, the number of dollars offered on trade-in decreases. And by the way, we all know students who use cell phones during class to text, tweet, check e-mail, or otherwise be engaged with their phone at a questionably appropriate time and place. What would you estimate the correlation to be between such daily, in-class cell phone use and test grades? See Figure 3–10 for one such estimate (and kindly refrain from sharing the findings on Facebook during class).
Figure 3–10 Cell Phone Use in Class and Class Grade This may be the “wired” generation, but some college students are clearly more wired than others. They seem to be on their cell phones constantly, even during class. Their gaze may be fixed on Mech Commander when it should more appropriately be on Class Instructor. Over the course of two semesters, Chris Bjornsen and Kellie Archer (2015) studied 218 college students, each of whom completed a questionnaire on their cell phone usage right after class. Correlating the questionnaire data with grades, the researchers reported that cell phone usage during class was significantly, negatively correlated with grades. © Caia Image/Glow Images RF
If a correlation is zero, then absolutely no relationship exists between the two variables. And some might consider “perfectly no correlation” to be a third variety of perfect correlation; that is, a perfect noncorrelation. After all, just as it is nearly impossible in psychological work to identify two variables that have a perfect correlation, so it is nearly impossible to identify two variables that have a zero correlation. Most of the time, two variables will be fractionally correlated. The fractional correlation may be extremely small but seldom “perfectly” zero.
JUST THINK . . .
Bjornsen & Archer (2015) discussed the implications of their cell phone study in terms of the effect of cell phone usage on student learning, student achievement, and post-college success. What would you anticipate those implications to be?
JUST THINK . . .
Could a correlation of zero between two variables also be considered a “perfect” correlation? Can you name two variables that have a correlation that is exactly zero?
As we stated in our introduction to this topic, correlation is often confused with causation. It must be emphasized that a correlation coefficient is merely an index of the relationship between two variables, not an index of the causal relationship between two variables. If you were told, for example, that from birth to age 9 there is a high positive correlation between hat size and spelling ability, would it be appropriate to conclude that hat size causes spelling ability? Of course not. The period from birth to age 9 is a time of maturation in all areas, including physical size and cognitive abilities such as spelling. Intellectual development parallels physical development during these years, and a relationship clearly exists between physical and mental growth. Still, this doesn’t mean that the relationship between hat size and spelling ability is causal.
Although correlation does not imply causation, there is an implication of prediction. Stated another way, if we know that there is a high correlation between X and Y, then we should be able to predict—with various degrees of accuracy, depending on other factors—the value of one of these variables if we know the value of the other.
The Pearson r
Many techniques have been devised to measure correlation. The most widely used of all is the Pearson r, also known as the Pearson correlation coefficient and the Pearson product-moment coefficient of correlation. Devised by Karl Pearson (Figure 3–11), r can be the statistical tool of choice when the relationship between the variables is linear and when the two variables being correlated are continuous (that is, they can theoretically take any value). Other correlational techniques can be employed with data that are discontinuous and where the relationship is nonlinear. The formula for the Pearson r takes into account the relative position of each test score or measurement with respect to the mean of the distribution.
Figure 3–11 Karl Pearson (1857–1936) Karl Pearson’s name has become synonymous with correlation. History records, however, that it was actually Sir Francis Galton who should be credited with developing the concept of correlation (Magnello & Spies, 1984). Galton experimented with many formulas to measure correlation, including one he labeled r. Pearson, a contemporary of Galton’s, modified Galton’s r, and the rest, as they say, is history. The Pearson r eventually became the most widely used measure of correlation. © TopFoto/Fotomas/The Image Works
A number of formulas can be used to calculate a Pearson r. One formula requires that we convert each raw score to a standard score and then multiply each pair of standard scores. A mean for the sum of the products is calculated, and that mean is the value of the Pearson r. Even from this simple verbal conceptualization of the Pearson r, it can be seen that the sign of the resulting r would be a function of the sign and the magnitude of the standard scores used. If, for example, negative standard score values for measurements of X always corresponded with negative standard score values for Y scores, the resulting r would be positive (because the product of two negative values is positive). Similarly, if positive standard score values on X always corresponded with positive standard score values on Y, the resulting correlation would also be positive. However, if positive standard score values for X corresponded with negative standard score values for Y and vice versa, then an inverse relationship would exist and so a negative correlation would result. A zero or near-zero correlation could result when some products are positive and some are negative.
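That verbal conceptualization translates directly into a few lines of code. The sketch below (Python, with invented paired scores) standardizes each variable and takes the mean of the products of the corresponding standard scores; using the population standard deviation makes that mean exactly equal to the Pearson r.

```python
# A sketch of the "mean of products of standard scores" conceptualization
# of the Pearson r described above. The paired data are invented.
from statistics import mean, pstdev

X = [2, 4, 5, 7, 9]
Y = [1, 3, 6, 8, 10]

zx = [(x - mean(X)) / pstdev(X) for x in X]   # standard scores on X
zy = [(y - mean(Y)) / pstdev(Y) for y in Y]   # standard scores on Y

r = mean(zxi * zyi for zxi, zyi in zip(zx, zy))  # mean of the products
print(round(r, 3))
```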
The formula used to calculate a Pearson r from raw scores is

$$r = \frac{\sum (X - \bar{X})(Y - \bar{Y})}{\sqrt{\left[\sum (X - \bar{X})^2\right]\left[\sum (Y - \bar{Y})^2\right]}}$$

This formula has been simplified for shortcut purposes. One such shortcut is a deviation formula employing “little x,” or x, in place of $X - \bar{X}$, and “little y,” or y, in place of $Y - \bar{Y}$:

$$r = \frac{\sum xy}{\sqrt{\left(\sum x^2\right)\left(\sum y^2\right)}}$$

Another formula for calculating a Pearson r is

$$r = \frac{N\sum XY - \left(\sum X\right)\left(\sum Y\right)}{\sqrt{\left[N\sum X^2 - \left(\sum X\right)^2\right]\left[N\sum Y^2 - \left(\sum Y\right)^2\right]}}$$

Although this formula looks more complicated than the previous deviation formula, it is easier to use. Here N represents the number of paired scores; ΣXY is the sum of the products of the paired X and Y scores; ΣX is the sum of the X scores; ΣY is the sum of the Y scores; ΣX² is the sum of the squared X scores; and ΣY² is the sum of the squared Y scores. Similar results are obtained with the use of each formula.
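For comparison, here is the raw-score (computational) formula just defined, applied in Python to the same invented data as in the previous sketch; the two methods agree.

```python
# A sketch of the raw-score formula for the Pearson r defined above,
# applied to the same invented paired data as the previous sketch.
import math

X = [2, 4, 5, 7, 9]
Y = [1, 3, 6, 8, 10]
N = len(X)

sxy = sum(x * y for x, y in zip(X, Y))                 # sum of XY products
sx, sy = sum(X), sum(Y)                                # sums of X and of Y
sx2, sy2 = sum(x * x for x in X), sum(y * y for y in Y)  # sums of squares

r = (N * sxy - sx * sy) / math.sqrt((N * sx2 - sx**2) * (N * sy2 - sy**2))
print(round(r, 3))   # matches the standard-score method
```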
The next logical question concerns what to do with the number obtained for the value of r. The answer is that you ask even more questions, such as “Is this number statistically significant, given the size and nature of the sample?” or “Could this result have occurred by chance?” At this point, you will need to consult tables of significance for Pearson r—tables that are probably in the back of your old statistics textbook. In those tables you will find, for example, that a Pearson r of .899 with an N = 10 is significant at the .01 level (using a two-tailed test). You will recall from your statistics course that significance at the .01 level tells you, with reference to these data, that a correlation such as this could have been expected to occur merely by chance only one time or less in a hundred if X and Y are not correlated in the population. You will also recall that significance at either the .01 level or the (somewhat less rigorous) .05 level provides a basis for concluding that a correlation does indeed exist. Significance at the .05 level means that the result could have been expected to occur by chance alone five times or less in a hundred.
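If a statistics library such as SciPy is available, the table lookup can be replaced by a function that returns both r and its two-tailed p value. The sketch below uses invented data; scipy.stats.pearsonr is a real function, but treat the surrounding details as illustrative.

```python
# A sketch of testing the significance of r in code rather than by table
# lookup, assuming SciPy is installed. The paired scores are invented.
from scipy.stats import pearsonr

X = [2, 4, 5, 7, 9, 1, 6, 8, 3, 10]
Y = [1, 3, 6, 8, 10, 2, 5, 9, 4, 7]

r, p = pearsonr(X, Y)       # r and its two-tailed p value
print(round(r, 3), round(p, 4))  # significant at the .05 level if p < .05
```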
The value obtained for the coefficient of correlation can be further interpreted by deriving from it what is called a coefficient of determination, or r². The coefficient of determination is an indication of how much variance is shared by the X- and the Y-variables. The calculation of r² is quite straightforward. Simply square the correlation coefficient and multiply by 100; the result is equal to the percentage of the variance accounted for. If, for example, you calculated r to be .9, then r² would be equal to .81. The number .81 tells us that 81% of the variance is accounted for by the X- and Y-variables. The remaining variance, equal to 100(1 − r²), or 19%, could presumably be accounted for by chance, error, or otherwise unmeasured or unexplainable factors.7
Before moving on to consider another index of correlation, let’s address a logical question sometimes raised by students when they hear the Pearson r referred to as the product-moment coefficient of correlation. Why is it called that? The answer is a little complicated, but here goes.
In the language of psychometrics, a moment describes a deviation about a mean of a distribution. Individual deviations about the mean of a distribution are referred to as deviates. Deviates are referred to as the first moments of the distribution. The second moments of the distribution are the moments squared. The third moments of the distribution are the moments cubed, and so forth. The computation of the Pearson r in one of its many formulas entails multiplying corresponding standard scores on two measures. One way of conceptualizing standard scores is as the first moments of a distribution. This is because standard scores are deviates about a mean of zero. A formula that entails the multiplication of two corresponding standard scores can therefore be conceptualized as one that entails the computation of the product of corresponding moments. And there you have the reason r is called product-moment correlation. It’s probably all more a matter of psychometric trivia than anything else, but we think it’s cool to know. Further, you can now understand the rather “high-end” humor contained in the cartoon (below).
The Spearman Rho
The Pearson r enjoys such widespread use and acceptance as an index of correlation that if for some reason it is not used to compute a correlation coefficient, mention is made of the statistic that was used. There are many alternative ways to derive a coefficient of correlation. One commonly used alternative statistic is variously called a rank-order correlation coefficient, a rank-difference correlation coefficient, or simply Spearman’s rho. Developed by Charles Spearman, a British psychologist (Figure 3–12), this coefficient of correlation is frequently used when the sample size is small (fewer than 30 pairs of measurements) and especially when both sets of measurements are in ordinal (or rank-order) form. Special tables are used to determine whether an obtained rho coefficient is or is not significant.
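Although the text does not reproduce the formula, the classic rank-difference version of Spearman’s rho is rho = 1 − 6Σd²/[n(n² − 1)], where d is the difference between the paired ranks and n is the number of pairs; it holds when there are no tied ranks. A minimal Python sketch, with invented, tie-free measurements:

```python
# A sketch of Spearman's rho via the rank-difference formula
# rho = 1 - 6*sum(d^2) / (n*(n^2 - 1)), valid when there are no tied ranks.
# The paired measurements below are invented.
def ranks(values):
    order = sorted(values)
    return [order.index(v) + 1 for v in values]   # 1-based ranks (no ties)

X = [10, 20, 30, 40, 50]
Y = [12, 25, 18, 45, 60]

rx, ry = ranks(X), ranks(Y)
d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))    # squared rank differences
n = len(X)

rho = 1 - 6 * d2 / (n * (n * n - 1))
print(rho)   # 0.9 for these data
```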
Figure 3–12 Charles Spearman (1863–1945) Charles Spearman is best known as the developer of the Spearman rho statistic and the Spearman-Brown prophecy formula, which is used to “prophesize” the accuracy of tests of different sizes. Spearman is also credited with being the father of a statistical method called factor analysis, discussed later in this text. © Atlas Archive/The Image Works Copyright 2016 by Ronald Jay Cohen. All rights reserved.
Graphic Representations of Correlation
One type of graphic representation of correlation is referred to by many names, including a bivariate distribution, a scatter diagram, a scattergram, or—our favorite—a scatterplot. A scatterplot is a simple graphing of the coordinate points for values of the X-variable (placed along the graph’s horizontal axis) and the Y-variable (placed along the graph’s vertical axis). Scatterplots are useful because they provide a quick indication of the direction and magnitude of the relationship, if any, between the two variables. Figures 3–13 and 3–14 offer a quick course in eyeballing the nature and degree of correlation by means of scatterplots. To distinguish positive from negative correlations, note the direction of the curve. And to estimate the strength, or magnitude, of the correlation, note the degree to which the points form a straight line.
Figure 3–13 Scatterplots and Correlations for Positive Values of r
Figure 3–14 Scatterplots and Correlations for Negative Values of r
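For readers who want to do their own eyeballing, the sketch below (Python, assuming matplotlib is installed, with simulated data) draws a scatterplot of a positive, roughly linear relationship of the kind shown in Figure 3–13.

```python
# A sketch of eyeballing correlation with a scatterplot, assuming
# matplotlib is available. Data are simulated: Y is X plus noise, so the
# points should hug an upward-sloping line.
import random
import matplotlib.pyplot as plt

random.seed(1)
X = [random.uniform(0, 10) for _ in range(50)]
Y = [x + random.gauss(0, 1.5) for x in X]

plt.scatter(X, Y)
plt.xlabel("X-variable")
plt.ylabel("Y-variable")
plt.title("Positive, roughly linear relationship")
plt.show()
```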
Scatterplots are useful in revealing the presence of curvilinearity in a relationship. As you may have guessed, curvilinearity in this context refers to an “eyeball gauge” of how curved a graph is. Remember that a Pearson r should be used only if the relationship between the variables is linear. If the graph does not appear to take the form of a straight line, the chances are good that the relationship is not linear (Figure 3–15). When the relationship is nonlinear, other statistical tools and techniques may be employed.8
Figure 3–15 Scatterplot Showing a Nonlinear Correlation
A graph also makes the spotting of outliers relatively easy. An outlier is an extremely atypical point located at a relatively long distance—an outlying distance—from the rest of the coordinate points in a scatterplot (Figure 3–16). Outliers stimulate interpreters of test data to speculate about the reason for the atypical score. For example, consider an outlier on a scatterplot that reflects a correlation between hours each member of a fifth-grade class spent studying and their grades on a 20-item spelling test. And let’s say that one student studied for 10 hours and received a failing grade. This outlier on the scatterplot might raise a red flag and compel the test user to raise some important questions, such as “How effective are this student’s study skills and habits?” or “What was this student’s state of mind during the test?”
Figure 3–16 Scatterplot Showing an Outlier
In some cases, outliers are simply the result of administering a test to a very small sample of testtakers. In the example just cited, if the test were given statewide to fifth-graders and the sample size were much larger, perhaps many more low scorers who put in large amounts of study time would be identified.
As is the case with very low raw scores or raw scores of zero, outliers can sometimes help identify a testtaker who did not understand the instructions, was not able to follow the instructions, or was simply oppositional and did not follow the instructions. In other cases, an outlier can provide a hint of some deficiency in the testing or scoring procedures.
People who have occasion to use or make interpretations from graphed data need to know if the range of scores has been restricted in any way. To understand why this is so necessary to know, consider Figure 3–17 . Let’s say that graph A describes the relationship between Public University entrance test scores for 600 applicants (all of whom were later admitted) and their grade point averages at the end of the first semester. The scatterplot indicates that the relationship between entrance test scores and grade point average is both linear and positive. But what if the admissions officer had accepted only the applications of the students who scored within the top half or so on the entrance exam? To a trained eye, this scatterplot (graph B) appears to indicate a weaker correlation than that indicated in graph A—an effect attributable exclusively to the restriction of range. Graph B is less a straight line than graph A, and its direction is not as obvious.
Figure 3–17 Two Scatterplots Illustrating Unrestricted and Restricted Ranges
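The restriction-of-range effect is easy to demonstrate by simulation. The sketch below (Python, entirely invented data loosely modeled on the Public University example) computes the correlation between entrance scores and GPA for the full applicant pool and then again for only those scoring above the median entrance score; the restricted correlation is typically noticeably smaller.

```python
# A simulated demonstration of restriction of range. All numbers invented.
import random
from statistics import mean, pstdev

def pearson(X, Y):
    zx = [(x - mean(X)) / pstdev(X) for x in X]
    zy = [(y - mean(Y)) / pstdev(Y) for y in Y]
    return mean(a * b for a, b in zip(zx, zy))

random.seed(3)
scores = [random.gauss(500, 100) for _ in range(600)]        # entrance test
gpa = [2.0 + 0.002 * s + random.gauss(0, 0.3) for s in scores]

cut = sorted(scores)[len(scores) // 2]                       # median score
kept = [(s, g) for s, g in zip(scores, gpa) if s >= cut]     # top half only
top_scores = [s for s, _ in kept]
top_gpa = [g for _, g in kept]

print(round(pearson(scores, gpa), 2))          # unrestricted range
print(round(pearson(top_scores, top_gpa), 2))  # restricted range: typically smaller
```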
Generally, the best estimate of the correlation between two variables is most likely to come not from a single study alone but from analysis of the data from several studies. One option to facilitate understanding of the research across a number of studies is to present the range of statistical values calculated from a number of different studies of the same phenomenon. Viewing all of the data from a number of studies that attempted to determine the correlation between variable X and variable Y, for example, might lead the researcher to conclude that “The correlation between variable X and variable Y ranges from .73 to .91.” Another option might be to combine statistically the information across the various studies; that is what is done using a statistical technique called meta-analysis. Using this technique, researchers raise (and strive to answer) the question: “Combined, what do all of these studies tell us about the matter under study?” For example, Imtiaz et al. (2016) used meta-analysis to draw some conclusions regarding the relationship between cannabis use and physical health. Colin (2015) used meta-analysis to study the correlates of use-of-force decisions among American police officers.
Meta-analysis may be defined as a family of techniques used to statistically combine information across studies to produce single estimates of the data under study. The estimates derived, referred to as effect size, may take several different forms. In most meta-analytic studies, effect size is typically expressed as a correlation coefficient.9 Meta-analysis facilitates the drawing of conclusions and the making of statements like, “the typical therapy client is better off than 75% of untreated individuals” (Smith & Glass, 1977, p. 752), there is “about 10% increased risk for antisocial behavior among children with incarcerated parents, compared to peers” (Murray et al., 2012), and “GRE and UGPA [undergraduate grade point average] are generalizably valid predictors of graduate grade point average, 1st-year graduate grade point average, comprehensive examination scores, publication citation counts, and faculty ratings” (Kuncel et al., 2001, p. 162).
MEET AN ASSESSMENT PROFESSIONAL
Meet Dr. Joni L. Mihura
Hi, my name is Joni Mihura, and my research expertise is in psychological assessment, with a special focus on the Rorschach. To tell you a little about me, I was the only woman* to serve on the Research Council for John E. Exner’s Rorschach Comprehensive System (CS) until he passed away in 2006. Due to the controversy around the Rorschach’s validity, I began reviewing the research literature to ensure I was teaching my doctoral students valid measures to assess their clients. That is, the controversy about the Rorschach has not been that it is a completely invalid test—the critics have endorsed several Rorschach scales as valid for their intended purpose—the main problem that they have highlighted is that only a small proportion of its scales had been subjected to “meta-analysis,” a systematic technique for summarizing the research literature. To make a long story short, I eventually published my review of the Rorschach literature in the top scientific review journal in psychology (Psychological Bulletin) in the form of systematic reviews and meta-analyses of the 65 main Rorschach CS variables (Mihura et al., 2013), therefore making the Rorschach the psychological test with the most construct validity meta-analyses for its scales!
My meta-analyses also resulted in two other pivotal events. They formed the backbone for a new scientifically based Rorschach system of which I am a codeveloper—the Rorschach Performance Assessment System (R-PAS; Meyer et al., 2011), and they resulted in the Rorschach critics removing the “moratorium” they had recommended for the Rorschach (Garb, 1999) for the scales they deemed had solid support in our meta-analyses (Wood et al., 2015; also see our reply, Mihura et al., 2015).
I’m very excited to talk with you about meta-analysis. First, to set the stage, let’s take a step back and look at what you might have experienced so far when reading about psychology. When students take their first psychology course, they are often surprised how much of the field is based on research findings rather than just “common sense.” Even so, because undergraduate textbooks have numerous topics about which they cannot cite all of the research, it can appear that the textbook is relying on just one or two studies as the “proof.” Therefore, you might be surprised just how many psychological research studies actually exist! Conducting a quick search in the PsycINFO database shows that over a million psychology journal articles are classified as empirical studies—and that excludes chapters, theses, dissertations, and many other studies not listed in PsycINFO.
Joni L. Mihura, Ph.D. is Associate Professor of Psychology at the University of Toledo in Toledo, Ohio © Joni L. Mihura, Ph.D.
But, good news or bad news, a significant challenge with many research studies is how to summarize results. The classic example of such a dilemma and its eventual solution is a fascinating one that comes from the psychotherapy literature. In 1952, Hans Eysenck published a classic article entitled “The Effects of Psychotherapy: An Evaluation,” in which he summarized the results of a few studies and concluded that psychotherapy doesn’t work! Wow! This finding had the potential to shake the foundation of psychotherapy and even threaten its existence. After all, Eysenck had cited research that suggested that the longer a person was in therapy, the worse-off they became. Notwithstanding the psychotherapists and the psychotherapy enterprise, Eysenck’s publication had sobering implications for people who had sought help through psychotherapy. Had they done so in vain? Was there really no hope for the future? Were psychotherapists truly ill-equipped to do things like reduce emotional suffering and improve people’s lives through psychotherapy?
In the wake of this potentially damning article, several psychologists—and in particular Hans H. Strupp—responded by pointing out problems with Eysenck’s methodology. Other psychologists conducted their own reviews of the psychotherapy literature. Somewhat surprisingly, after reviewing the same body of research literature on psychotherapy, various psychologists drew widely different conclusions. Some researchers found strong support for the efficacy of psychotherapy. Other researchers found only modest support for the efficacy of psychotherapy. Yet other researchers found no support for it at all.
How can such different conclusions be drawn when the researchers are reviewing the same body of literature? A comprehensive answer to this important question could fill the pages of this book. Certainly, one key element of the answer to this question had to do with a lack of systematic rules for making decisions about including studies, as well as lack of a widely acceptable protocol for statistically summarizing the findings of the various studies. With such rules and protocols absent, it would be all too easy for researchers to let their preexisting biases run amok. The result was that many researchers “found” in their analyses of the literature what they believed to be true in the first place.
A fortuitous by-product of such turmoil in the research community was the emergence of a research technique called “meta-analysis.” Literally, “an analysis of analyses,” meta-analysis is a tool used to systematically review and statistically summarize the research findings for a particular topic. In 1977, Mary Lee Smith and Gene V. Glass published the first meta-analysis of psychotherapy outcomes. They found strong support for the efficacy of psychotherapy. Subsequently, others tried to challenge Smith and Glass’ findings. However, the systematic rigor of their meta-analytic technique produced findings that were consistently replicated by others. Today there are thousands of psychotherapy studies, and many meta-analysts ready to research specific, therapy-related questions (like “What type of psychotherapy is best for what type of problem?”).
What does all of this mean for psychological testing and assessment? Meta-analytic methodology can be used to glean insights about specific tools of assessment, and testing and assessment procedures. However, meta-analysis of information related to psychological tests brings new challenges owing, for example, to the sheer number of articles to be analyzed, the many variables on which tests differ, and the specific methodology of the meta-analysis. Consider, for example, that multiscale personality tests may contain over 50, and sometimes over 100, scales that each need to be evaluated separately. Furthermore, some popular multiscale personality tests, like the MMPI-2 and Rorschach, have had over a thousand research studies published on them. The studies typically report findings that focus on varied aspects of the test (such as the utility of specific test scales, or other indices of test reliability or validity). In order to make the meta-analytic task manageable, meta-analyses for multiscale tests will typically focus on one or another of these characteristics or indices.
In sum, a thoughtful meta-analysis of research on a specific topic can yield important insights of both theoretical and applied value. A meta-analytic review of the literature on a particular psychological test can even be instrumental in the formulation of revised ways to score the test and interpret the findings (just ask Meyer et al., 2011). So, the next time a question about psychological research arises, students are advised to respond to that question with their own question, namely “Is there a meta-analysis on that?”
Used with permission of Joni L. Mihura.
*I have also edited the Handbook of Gender and Sexuality in Psychological Assessment (Brabender & Mihura, 2016).
A key advantage of meta-analysis over simply reporting a range of findings is that, in meta-analysis, more weight can be given to studies that have larger numbers of subjects. This weighting process results in more accurate estimates (Hunter & Schmidt, 1990). Some advantages to meta-analyses are: (1) meta-analyses can be replicated; (2) the conclusions of meta-analyses tend to be more reliable and precise than the conclusions from single studies; (3) there is more focus on effect size rather than statistical significance alone; and (4) meta-analysis promotes evidence-based practice , which may be defined as professional practice that is based on clinical and research findings (Sánchez-Meca & Marin-Martinez, 2010). Despite these and other advantages, meta-analysis is, at least to some degree, art as well as science (Hall & Rosenthal, 1995). The value of any meta-analytic investigation is very much a matter of the skill and ability of the meta-analyst (Kavale, 1995), and use of an inappropriate meta-analytic method can lead to misleading conclusions (Kisamore & Brannick, 2008).
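As a bare-bones illustration of that weighting idea, the sketch below (Python, with invented study results) computes a sample-size-weighted average of correlation coefficients. Real meta-analytic methods add further corrections (for example, for measurement error), as the sources cited above discuss; this is only the core arithmetic.

```python
# A sketch of sample-size weighting in meta-analysis: studies with larger
# N contribute more to the combined estimate. The (r, N) pairs are invented.
studies = [(0.73, 40), (0.81, 120), (0.91, 25), (0.78, 300)]

weighted_r = sum(r * n for r, n in studies) / sum(n for _, n in studies)
print(round(weighted_r, 3))   # pulled toward the large-N studies
```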
It may be helpful at this time to review this statistics refresher to make certain that you indeed feel “refreshed” and ready to continue. We will build on your knowledge of basic statistical principles in the chapters to come, and it is important to build on a rock-solid foundation.
Test your understanding of elements of this chapter by seeing if you can explain each of the following terms, expressions, and abbreviations:
Gothic architecture is an architectural style that flourished in Europe during the High and Late Middle Ages. It evolved from Romanesque architecture and was succeeded by Renaissance architecture. Originating in 12th-century France and lasting into the 16th century, Gothic architecture was known during the period as Opus Francigenum ("French work"), with the term Gothic first appearing during the later part of the Renaissance. Its characteristics include the pointed arch, the ribbed vault (which evolved from the joint vaulting of Romanesque architecture) and the flying buttress. Gothic architecture is most familiar as the architecture of many of the great cathedrals, abbeys and churches of Europe. It is also the architecture of many castles, palaces, town halls, guild halls, universities and, to a less prominent extent, private dwellings.
It is in the great churches and cathedrals and in a number of civic buildings that the Gothic style was expressed most powerfully, its characteristics lending themselves to appeals to the emotions, whether springing from faith or from civic pride. A great number of ecclesiastical buildings remain from this period, of which even the smallest are often structures of architectural distinction while many of the larger churches are considered priceless works of art and are listed with UNESCO as World Heritage Sites. For this reason a study of Gothic architecture is often largely a study of cathedrals and churches.
A series of Gothic revivals began in mid-18th century England, spread through 19th century Europe and continued, largely for ecclesiastical and university structures, into the 20th century.
Unlike earlier and later styles of art, such as the Carolingian, Gothic lacks a definite historical or geographical nexus, and, as the French art historian Louis Grodecki notes in his work Gothic Architecture, this results in a weak concept of what truly is Gothic. This is further compounded by the fact that the technical, ornamental, and formal features of Gothic are not entirely unique to it. Though modern historians have invariably accepted the conventional use of "Gothic" as a label, even in formal analysis, owing to a longstanding tradition of doing so, the definition of "Gothic" has historically varied widely.
The term "Gothic architecture" originated as a pejorative description. Giorgio Vasari used the term "barbarous German style" in his 1550 Lives of the Artists to describe what is now considered the Gothic style, and in the introduction to the Lives he attributes various architectural features to "the Goths" whom he held responsible for destroying the ancient buildings after they conquered Rome, and erecting new ones in this style. Vasari was not alone among 15th and 16th Italian writers, as Filarete and Giannozzo Manetti had also written scathing criticisms of the Gothic style, calling it a "barbaric prelude to the Renaissance." Vasari and company were writing at a time when many aspects and vocabulary pertaining to Classical architecture had been reasserted with the Renaissance in the late 15th and 16th centuries, and they had the perspective that the "maniera tedesca" or "maniera dei Goti" was the antithesis of this resurgent style leading to the continuation of this negative connotation in the 17th century. François Rabelais, also of the 16th century, imagines an inscription over the door of his utopian Abbey of Thélème, "Here enter no hypocrites, bigots..." slipping in a slighting reference to "Gotz" and "Ostrogotz."[a] Molière also made this note of the Gothic style in the 1669 poem La Gloire:
(in French): "...fade goût des ornements gothiques, Ces monstres odieux de siècles ignorants, Que de la barbarie ont produit les torrents..."
(in English): "...the insipid taste of Gothic ornamentation, these odious monstrosities of an ignorant age, produced by the torrents of barbarism..."— Molière, La Gloire
In English 17th century usage, "Goth" was an equivalent of "vandal," a savage despoiler with a Germanic heritage, and so came to be applied to the architectural styles of northern Europe from before the revival of classical types of architecture. According to a 19th-century correspondent in the London Journal Notes and Queries:
There can be no doubt that the term 'Gothic' as applied to pointed styles of ecclesiastical architecture was used at first contemptuously, and in derision, by those who were ambitious to imitate and revive the Grecian orders of architecture, after the revival of classical literature. Authorities such as Christopher Wren lent their aid in deprecating the old medieval style, which they termed Gothic, as synonymous with everything that was barbarous and rude.
The first movements to reevaluate medieval art took place in the 18th century, even as the Académie Royale d'Architecture, meeting in Paris on 21 July 1710, discussed, among other subjects, the new fashions of bowed and cusped arches on chimneypieces being employed to "finish the top of their openings. The Academy disapproved of several of these new manners, which are defective and which belong for the most part to the Gothic." Despite resistance in the 19th and 20th centuries, such as in the writings of Wilhelm Worringer, critics like Père Laugier, William Gilpin, August Wilhelm Schlegel and others began to give the term a more positive meaning. Johann Wolfgang von Goethe called Gothic the "deutsche Architektur" and the "embodiment of German genius," while some French writers like Camille Enlart instead nationalised it for France, dubbing it "architecture française." This second group made some of their claims using the chronicle of Burchard von Halle, which tells of the Church of Bad Wimpfen's construction "opere francigeno," or "in the French style." Today, the term is defined with spatial observations and historical and ideological information.
Since the studies of the 18th century, many have attempted to define the Gothic style using a list of characteristic features, principally the pointed arch,[b] the vaulting supported by intersecting arches, and the flying buttress. Eventually, historians composed a fairly large list of features alien to both early medieval and Classical art, including piers with groups of colonettes, pinnacles, gables, rose windows, and openings broken into many different lancet-shaped sections. Certain combinations thereof have been singled out for identifying regional or national sub-styles of Gothic or to follow the evolution of the style. From this emerge labels such as Flamboyant, Rayonnant, and the English Perpendicular, based on the observation of components like window tracery and pier moldings. This idea, dubbed "componential" by Paul Frankl, had also occurred to mid-19th-century writers such as Arcisse de Caumont, Robert Willis and Franz Mertens.[c]
As an architectural style, Gothic developed primarily in ecclesiastical architecture, and its principles and characteristic forms were applied to other types of buildings. Buildings of every type were constructed in the Gothic style, with evidence remaining of simple domestic buildings, elegant town houses, grand palaces, commercial premises, civic buildings, castles, city walls, bridges, village churches, abbey churches, abbey complexes and large cathedrals.
The greatest number of surviving Gothic buildings are churches. These range from tiny chapels to large cathedrals, and although many have been extended and altered in different styles, a large number remain either substantially intact or sympathetically restored, demonstrating the form, character and decoration of Gothic architecture. The Gothic style is most particularly associated with the great cathedrals of Northern France, the Low Countries, England and Spain, with other fine examples occurring across Europe.
The roots of the Gothic style lie in those towns that, since the 11th century, had been enjoying increased prosperity and growth and had begun to experience more and more freedom from traditional feudal authority. At the end of the 12th century, Europe was divided into a multitude of city states and kingdoms. The area encompassing modern Germany, southern Denmark, the Netherlands, Belgium, Luxembourg, Switzerland, Liechtenstein, Austria, Slovakia, the Czech Republic and much of northern Italy (excluding Venice and the Papal States) was nominally part of the Holy Roman Empire, but local rulers exercised considerable autonomy under the system of feudalism. France, Denmark, Poland, Hungary, Portugal, Scotland, Castile, Aragon, Navarre, Sicily and Cyprus were independent kingdoms, as was the Angevin Empire, whose Plantagenet kings ruled England and large domains in what was to become modern France.[d] Norway came under the influence of England, while the other Scandinavian countries and Poland were influenced by trading contacts with the Hanseatic League. Angevin kings brought the Gothic tradition from France to Southern Italy, while Lusignan kings introduced French Gothic architecture to Cyprus. Gothic art is sometimes viewed as the art of the era of feudalism but also as being connected to change in medieval social structure, as the Gothic style of architecture seemed to parallel the beginning of the decline of feudalism. Nevertheless, the influence of the established feudal elite can be seen in the châteaux of French lords and in those churches sponsored by feudal lords.
Throughout Europe at this time there was a rapid growth in trade and an associated growth in towns, which would come to predominate in Europe by the end of the 13th century. Germany and the Low Countries had large flourishing towns that grew in comparative peace, in trade and competition with each other or united for mutual weal, as in the Hanseatic League. Civic building was of great importance to these towns as a sign of wealth and pride. England and France remained largely feudal and produced grand domestic architecture for their kings, dukes and bishops, rather than grand town halls for their burghers. Viollet-le-Duc contended that the blossoming of the Gothic style came about as a result of growing freedoms in the construction professions.
The geographical expanse of the Gothic style is analogous to that of the Catholic Church, which prevailed across Europe at this time and influenced not only faith but also wealth and power. Bishops were appointed by the feudal lords (kings, dukes, and other landowners), and they often ruled as virtual princes over large estates. The early Medieval periods had seen a rapid growth in monasticism, with several different orders being prevalent and spreading their influence widely. Foremost were the Benedictines, whose great abbey churches vastly outnumbered any others in France and England. A part of their influence was that towns developed around them and they became centers of culture, learning and commerce. The Cluniac and Cistercian Orders were prevalent in France, the great monastery at Cluny having established a formula for a well planned monastic site which was then to influence all subsequent monastic building for many centuries. In the 13th century St. Francis of Assisi established the Franciscans, a mendicant order. The Dominicans, another mendicant order founded during the same period but by St. Dominic in Toulouse and Bologna, were particularly influential in the building of Italy's Gothic churches.
The primary use of the Gothic style is in religious structures, naturally leading to an association with the Church; Gothic is considered one of the most formal and coordinated forms of the physical church, thought of as the physical residence of God on Earth. According to Hans Sedlmayr, it was "even considered the temporal image of Paradise, of the New Jerusalem." The horizontal and vertical scope of the Gothic church, filled with light, thought of as a symbol of the grace of God and admitted into the structure through the style's iconic windows, places such buildings among the very best examples of Christian architecture. Grodecki's Gothic Architecture also notes that the glass pieces of various colors that make up those windows have been compared to "precious stones encrusting the walls of the New Jerusalem," and that "the numerous towers and pinnacles evoke similar structures that appear in the visions of Saint John." Another idea, held by Georg Dehio and Erwin Panofsky, is that the designs of Gothic followed the scholastic theological thought of the time. The PBS show NOVA explored the influence of the Holy Bible in the dimensions and design of some cathedrals.
From the 10th to the 13th century, Romanesque architecture had become a pan-European style and manner of construction, affecting buildings in countries as far apart as Ireland and Croatia, and Sweden and Sicily. The same wide geographic area was then affected by the development of Gothic architecture, but the acceptance of the Gothic style and methods of construction differed from place to place, as did the expressions of Gothic taste. The proximity of some regions meant that modern country borders did not define divisions of style. On the other hand, some regions such as England and Spain produced defining characteristics rarely seen elsewhere, except where they were carried by itinerant craftsmen or through the transfer of bishops. Many different factors, whether geographical and geological, economic, social, or political, caused regional differences in the great abbey churches and cathedrals of the Romanesque period that would often become even more apparent in the Gothic. For example, studies of population statistics reveal disparities such as the multitude of churches, abbeys, and cathedrals in northern France, while in more urbanised regions construction activity of a similar scale was reserved to a few important cities. One such example comes from Roberto López: the French city of Amiens was able to fund its architectural projects whereas Cologne could not, because of the economic inequality of the two. This wealth, concentrated in rich monasteries and noble families, would eventually spread to certain Italian, Catalan, and Hanseatic bankers. This pattern would be amended when the economic hardships of the 13th century were no longer felt, allowing Normandy, Tuscany, Flanders, and the southern Rhineland to enter into competition with France.
The local availability of materials affected both construction and style. In France, limestone was readily available in several grades, the very fine white limestone of Caen being favoured for sculptural decoration. England had coarse limestone and red sandstone as well as dark green Purbeck marble, which was often used for architectural features. In Northern Germany, the Netherlands, northern Poland, Denmark, and the Baltic countries local building stone was unavailable, but there was a strong tradition of building in brick. The resultant style, Brick Gothic, is called Backsteingotik in Germany and Scandinavia and is associated with the Hanseatic League. In Italy, stone was used for fortifications, so brick was preferred for other buildings. Because of the extensive and varied deposits of marble, many buildings were faced in marble, or were left with an undecorated façade so that this might be achieved at a later date. The availability of timber also influenced the style of architecture, with timber buildings prevailing in Scandinavia. Availability of timber affected methods of roof construction across Europe. It is thought that the magnificent hammerbeam roofs of England were devised as a direct response to the lack of long straight seasoned timber by the end of the Medieval period, when forests had been decimated not only for the construction of vast roofs but also for ship building.
The pointed arch, one of the defining attributes of Gothic, was earlier incorporated into Islamic architecture following the Islamic conquests of Roman Syria and the Sassanid Empire in the 7th century. The pointed arch and its precursors had been employed in Late Roman and Sassanian architecture; within the Roman context, evidenced in early church building in Syria and occasional secular structures, like the Roman Karamagara Bridge; in Sassanid architecture, in the parabolic and pointed arches employed in palace and sacred construction. Use of the pointed arch seems to have taken off dramatically after its incorporation into Islamic architecture. It begins to appear throughout the Islamic world in close succession after its adoption in the late Umayyad or early Abbasid period. Some examples are the Al-Ukhaidir Palace (775 AD), the Abbasid reconstruction of the Al-Aqsa mosque in 780 AD, the Ramlah Cisterns (789 AD), the Great Mosque of Samarra (851 AD), and the Mosque of Ibn Tulun (879 AD) in Cairo. It also appears in one of the early reconstructions of the Great Mosque of Kairouan in Tunisia, and the Mosque–Cathedral of Córdoba in 987 AD. David Talbot Rice points out that, "The pointed arch had already been used in Syria, but in the mosque of Ibn Tulun we have one of the earliest examples of its use on an extensive scale, some centuries before it was exploited in the West by the Gothic architects."
Increasing military and cultural contacts with the Muslim world, including the Norman conquest of Islamic Sicily in 1090, the Crusades (beginning 1096), and the Islamic presence in Spain, may have influenced Medieval Europe's adoption of the pointed arch, although this hypothesis remains controversial. Certainly, in those parts of the Western Mediterranean subject to Islamic control or influence, rich regional variants arose, fusing Romanesque and later Gothic traditions with Islamic decorative forms, for example in Monreale and Cefalù Cathedrals, the Alcázar of Seville, and Teruel Cathedral.
A number of scholars have cited the Armenian Cathedral of Ani, completed in 1001 or 1010, as a possible influence on the Gothic, especially due to its use of pointed arches and clustered piers. However, other scholars, such as Sirarpie Der Nersessian, rejected this notion, arguing that the pointed arches at Ani did not serve the same function of supporting the vault. Lucy Der Manuelian contends that some Armenians (historically documented as being in Western Europe in the Middle Ages) could have brought the knowledge and technique employed at Ani to the West.
The view held by the majority of scholars however is that the pointed arch evolved naturally in Western Europe as a structural solution to a technical problem, with evidence for this being its use as a stylistic feature in Romanesque French and English churches.
The Gothic style originated in the Ile-de-France region of France in the first half of the 12th century, during the Romanesque era, at the Cathedral of Sens (1130–62) and the Abbey of St-Denis (c. 1130–40 and 1140–44), and did not immediately supersede the older style. An example of this lack of a clean break is the blossoming of the Late Romanesque (German: Spätromanisch) in the Holy Roman Empire under the Hohenstaufens and in the Rhineland even as the Gothic style spread into England and France in the 12th century.
By the 12th century, Romanesque architecture, termed Norman architecture in England, was established throughout Europe and provided the basic architectural forms and units that were to remain in evolution throughout the Medieval period. The important categories of building, namely the cathedral, parish church, monastery, castle, palace, great hall, gatehouse, and civic building, had been established in the Romanesque period.
Many architectural features that are associated with Gothic architecture had been developed and used by the architects of Romanesque buildings, but not fully exploited. These include ribbed vaults, buttresses, clustered columns, ambulatories, wheel windows, spires, stained glass windows, and richly carved door tympana. The rib vault and the pointed arch in particular had been used since the late 11th century in Southern Italy, Durham, and Picardy.
It was principally the widespread introduction of a single feature, the pointed arch, which was to bring about the change that separates Gothic from Romanesque. The technological change permitted a stylistic change which broke the tradition of massive masonry and solid walls penetrated by small openings, replacing it with a style where light appears to triumph over substance. With its use came the development of many other architectural devices, previously put to the test in scattered buildings and then called into service to meet the structural, aesthetic and ideological needs of the new style. These include the flying buttresses, pinnacles and traceried windows which typify Gothic ecclesiastical architecture.
Gothic architecture did not emerge from a dying Romanesque tradition, but from a Romanesque style at the height of its popularity, which it would supplant over many years. This shift in style, beginning in the mid-12th century, came about in an environment of much intellectual and political development as the Catholic Church began to grow into a very powerful political entity. Another transition made by Gothic was the move from the rural monasteries of the Romanesque into urban environments, with new Gothic churches built in wealthy cities by secular clergy well aware of the growing unity and power of the Church. The characteristic forms that were to define Gothic architecture grew out of Romanesque architecture and developed at several different geographic locations, as the result of different influences and structural requirements. While barrel vaults and groin vaults are typical of Romanesque architecture, ribbed vaults were used in many later Romanesque churches. The first examples of the ribbed vault, atop the thick walls of the Romanesque church, appeared at the same time in England and Normandy: at Durham Cathedral (from 1093 to before 1110), Winchester, Peterborough and Gloucester, in Lessay Abbey's choir and transept, and at Duclair and the Church of Saint Paul in Rouen. The geometric ornamentation borne by the moldings of some of these vaults attests to the desire for more decoration, a desire that would later be answered by architects working in the Ile-de-France, Valois, and Vexin. Later French projects, from 1125 to 1135, show the lightening of vaults contoured in a single or double convex profile and thinner walls. The Abbey of Notre Dame de Morienval in Valois is one such example, with vaulting covering trapezoidal bays around an ambulatory, lightened supports, and vaulting that would be copied at Sens Cathedral and Suger's Basilica of Saint-Denis. While Norman architects would also participate in this development, the Romanesque in the Holy Roman Empire and Lombardy would remain the same, with only a little experimentation with vaulting. Two more features of Norman Romanesque, the wall buttress and the thick "double shell" wall at window height, were later to play a role in the birth of Gothic architecture. This double wall, a convenient way to reach the windows, hosted a passageway of recycled space that first appeared in the transepts of Bernay and Jumièges Abbey around 1040–50. This window-level passageway gave an illusion of weightlessness, inspired Noyon Cathedral, and would affect the entirety of the Gothic form of art.
Other characteristics of early Gothic architecture, such as vertical shafts, clustered columns, compound piers, plate tracery and groups of narrow openings, had evolved during the Romanesque period. The west front of Ely Cathedral exemplifies this development. Internally, the three-tiered arrangement of arcade, gallery and clerestory was established. Interiors had become lighter with the insertion of more and larger windows.
All modern historians agree that Suger's St-Denis and Henri Sanglier's Sens Cathedral exemplify the development of Norman Romanesque architectural features into the Gothic through a new ordering of interior space, articulated by freestanding and engaged supports, and through a shift of emphasis from sheer size to the admittance of light. Later additions and remodeling prevent either structure from being observed as it stood at the time of construction, but the original plan of each has nonetheless been reconstructed, and, as Francis Salet points out, Sens (the older of the two) still uses a Romanesque plan with an ambulatory and no transept, its supports echoing the old Norman alternation. Its three-story pointed arcade, openings above the vaulting, and windows are derived not from Burgundy but from the triple division present in Normandy and England. Even the sexpartite vaulting of Sens's nave is likely of Norman origin, though the presence of wall ribbing betrays Burgundian influence in the design. Sens would, in spite of its archaic Norman features, exert much influence: from Sens spread the shrinking or omission of the transept, the sexpartite vault, the alternation of supports, and the three-story elevation of future churches.
The beginning of the Gothic style is held by all modern historians to lie in the first half of the 12th century at the Basilica of St Denis in the Ile-de-France, the royal domain of the Capetian kings, rich in industry and the wool trade. This is chiefly because of the records Abbot Suger left of the reconstruction and of what he desired from the renovation, rather than because contemporary churches did not explore some of the same ideas used at St Denis. Suger believed in the spiritual power of light and colour, following the philosophy of Dionysius the Areopagite, whose identity had become fused with that of the 3rd-century patron saint of Paris, and this belief led him in the end to require large windows of stained glass.[e] The new church also needed to be larger than the previous Carolingian building, to accommodate a greater number of pilgrims inside the church. The solution, Suger found, was to make unprecedented use of the ribbed vault and the pointed arch. St Denis's plan possesses some very irregular bay shapes, prompting its architect to build the arches first, so that arches of different spans could have their keystones at the same height; the infill was then added, a method that proved both to provide more visual interest and to speed up construction.
The choir and west front of the Abbey of Saint-Denis both became the prototypes for further building in the royal domain of northern France and in the Duchy of Normandy. Through the rule of the Angevin dynasty, the new style was introduced to England and spread throughout France, the Low Countries, Germany, Spain, northern Italy and Sicily.
Compared to Sens Cathedral, St-Denis is more complex and innovative. There is an obvious difference between the enclosing ambulatory around the choir, dedicated on 11 June 1144 in the presence of the King,[f] and the pre-Suger narthex, or antenave (1140), which derives from the pre-Romanesque Ottonian westwork; the difference shows in the heavily molded cross-ribbing and the multiple projecting colonnettes positioned directly under the volutes of the ribs' archivolts. However, in iconographical terms, the three portals display, for the first time, sculpture that is demonstrably no longer Romanesque.
Even as the role of the monastic orders seemed to diminish at the dawn of the Gothic era, the orders still had their own parts to play in the spread of the Gothic style, disproving the common characterisation of Romanesque as the rural monastic style and Gothic as the urban ecclesiastical style. Chief among the early promoters of the style were the Benedictines in England, France, and Normandy. Gothic churches that can be associated with them include Durham Cathedral in England and, in France, the Abbey of St Denis, Vezelay Abbey, and the Abbey of Saint-Remi. Later Benedictine projects (constructions and renovations), made possible by the continued prominence of the Benedictine order throughout the Middle Ages, include Reims's Abbey of Saint-Nicaise, Rouen's Abbey of Saint-Ouen, the Abbey of St. Robert at La Chaise-Dieu, and the choir of Mont Saint-Michel in France; English examples are Westminster Abbey and the reconstruction of the Benedictine church at Canterbury. The Cistercians also had a hand in the spread of the Gothic style: having at first built their monasteries in the Romanesque style, as a reflection of their poverty, they went on to become thorough disseminators of the Gothic style as far east and south as Poland and Hungary. Smaller orders, the Carthusians and Premonstratensians, also built some 200 churches (usually near cities), but it was the mendicant orders, the Franciscans and Dominicans, who would do most to carry the change from Romanesque to Gothic in the 13th and 14th centuries. Of the military orders, the Knights Templar contributed little, while the Teutonic Order spread Gothic art into Pomerania, East Prussia, and the Baltic region.
While many secular buildings survive from the Late Middle Ages, it is in the cathedrals and great churches that Gothic architecture displays its pertinent structures and characteristics to the fullest advantage. 19th-century art historians and critics, accustomed to the Baroque or Neoclassical works of the 17th and 18th centuries, were astounded by the soaring heights of a Gothic cathedral and noted the extreme length compared to the proportionally modest width, accentuated by clusters of supporting colonnettes. This emphasis on verticality and light was achieved through the development of certain architectural features of the Gothic style which, taken together, provided inventive solutions to various engineering problems. As Eugène Viollet-le-Duc observed, the Gothic cathedral, almost always laid out in a cruciform shape, was based on a logical skeleton of clustered columns, pointed ribbed vaults and flying buttresses, arranged in a system of diagonal arches and arches enclosing the vault field that allows the outward thrust exerted by the vaults to be channeled away from the walls and into specific points on a supporting mass. The curvature of the vaults and arches of the church cast localised thrusts that architects learned to counter with an opposing thrust, in the form of the flying buttress, and with calculated weight applied via the pinnacle. This dynamic system, in which each constituent element fills a certain role, allowed previously massive walls to be slimmed down or replaced with windows. Gothic churches were also highly ornamented and decorated, serving as a Poor Man's Bible, while the stained glass windows that admit light into the church interior, and some of the gargoyles, record the buildings' construction. These structures, for centuries the principal landmark in a town, were often surmounted by one or more towers and pinnacles and perhaps tall spires.
One of the defining characteristics of Gothic architecture is the pointed (or ogival) arch, which is used in nearly every place a vaulted shape might be called for, whether for structural or decorative reasons: doorways, windows, arcades, and galleries. Gothic vaulting above spaces, regardless of size, is sometimes supported by richly moulded ribs. The constant use of the pointed arch in Gothic arches and tracery eventually gave rise to the now-obsolete term "ogival architecture."
The pointed arch is also a characteristic feature of Near Eastern pre-Islamic Sassanian architecture; it was adopted in the 7th century by Islamic architecture and appears in structures such as the Al-Ukhaidir Palace (775 AD), the Abbasid reconstruction of the Al-Aqsa mosque in 780 AD, the Ramlah Cistern (789 AD), the Great Mosque of Samarra (851 AD), and the Mosque of Ibn Tulun (879 AD) in Cairo. It also appears in the Great Mosque of Kairouan, the Mosque–Cathedral of Córdoba, and several structures of Norman Sicily. It then appeared in some Romanesque works in Italy (Cathedral of Modena) and Burgundy (Autun Cathedral), before being mastered by Gothic architects for the cathedrals of Notre-Dame de Paris and Noyon. The majority view among scholars, however, is that the pointed arch was a simultaneous and natural evolution in Western Europe, a solution to the problem of vaulting spaces of irregular plan, or of bringing transverse vaults to the same height as diagonal vaults, as evidenced by the nave aisles of Durham Cathedral, built in 1093. Pointed arches also occur extensively in Romanesque decorative blind arcading, where semi-circular arches overlap each other in a simple decorative pattern, their points being an accident of the design. In addition to its applicability to rectangular or irregular plans, the pointed arch channels weight onto the bearing piers or columns at a steep angle, enabling architects to raise vaults much higher than was possible in Romanesque architecture. When used with other typical features of Gothic construction, a system of mutual interdependence emerges for dispersing the immense weight of a Gothic cathedral's roof and vaulting.
Rows of pointed arches upon delicate shafts form a typical wall decoration known as blind arcading. Niches with pointed arches and containing statuary are a major external feature. The pointed arch lent itself to elaborate intersecting shapes which developed within window spaces into complex Gothic tracery forming the structural support of the large windows that are characteristic of the style.
The ribbed vault, another key feature of the Gothic style, has a history just as colorful, having been adapted for the Roman (Villa of Sette Bassi), Sassanian, Islamic (Abbas I's Mosque at Isfahan, Mosque of Cristo de la Luz), and Romanesque (L'Hôpital-Saint-Blaise) styles long before the Gothic. Until the height of the Gothic era, few Western rib vaults matched the complexity of Islamic (mostly Moorish) examples, or of the experiments carried out in Armenia and Georgia from the 10th to the 13th centuries, such as ribbed domes (Ani Cathedral and Nikortsminda Cathedral), diagonal arches on a square field (Ani), and arches perpendicular to walls (Homoros Vank). The function of these vaults, however, is entirely structural rather than decorative, as it is in Gothic cathedrals, and their indirect method of supporting the vault via its shoulders has been found at Casale Monferrato, the Tour Guinette, and in a tower at Bayeux Cathedral. One possible explanation is the recorded economic and political exchange between parts of western Europe and Armenia, which might account for the similarities between Armenian architecture and the ribbed vaults at San Nazzaro Sesia and Lodi Vecchio in Lombardy and at the Abbey of Saint Aubin in Angers. Ribbed vaults saw something of a golden age of development in the Anglo-Norman period, which led to the establishment of French Gothic and outlined many future Gothic solutions to the problem of support with buttresses.
A characteristic of Gothic church architecture is its height, both absolute and in proportion to its width, the verticality suggesting an aspiration to Heaven. A section of the main body of a Gothic church usually shows the nave as considerably taller than it is wide. In England the proportion is sometimes greater than 2:1, while the greatest proportional difference achieved is at Cologne Cathedral, with a ratio of 3.6:1. The highest internal vault is at Beauvais Cathedral, at 48 metres (157 ft). The pointed arch, itself a suggestion of height, has its appearance characteristically enhanced further by both the architectural features and the decoration of the building.
Verticality is emphasised on the exterior in a major way by the towers and spires, a characteristic of Gothic churches both great and small that varies from church to church, and in a lesser way by strongly projecting vertical buttresses; by narrow half-columns called attached shafts, which often pass through several storeys of the building; by long narrow windows; by vertical mouldings around doors; and by figurative sculpture that emphasises the vertical and is often attenuated. The roofline, gable ends, buttresses and other parts of the building are often terminated by small pinnacles, Milan Cathedral being an extreme example of this form of decoration. In Italy, the tower, if present, is almost always detached from the building, as at Florence Cathedral, and is often from an earlier structure. In France and Spain, two towers on the front is the norm. In England, Germany and Scandinavia this is often the arrangement, but an English cathedral may also be surmounted by an enormous tower at the crossing. Smaller churches usually have just one tower, but this may also be the case at larger buildings, such as Salisbury Cathedral, or Ulm Minster in Ulm, Germany, completed in 1890 and possessing the tallest spire in the world, slightly exceeding that of Lincoln Cathedral, at 160 metres (520 ft) the tallest spire actually completed during the medieval period.
On the interior of the building attached shafts often sweep unbroken from floor to ceiling and meet the ribs of the vault, like a tall tree spreading into branches. The verticals are generally repeated in the treatment of the windows and wall surfaces. In many Gothic churches, particularly in France, and in the Perpendicular period of English Gothic architecture, the treatment of vertical elements in gallery and window tracery creates a strongly unifying feature that counteracts the horizontal divisions of the interior structure.
Most large Gothic churches and many smaller parish churches are of the Latin cross (or "cruciform") plan, with a long nave making the body of the church, a transverse arm called the transept and, beyond it, an extension which may be called the choir, chancel or presbytery. There are several regional variations on this plan.
The nave is generally flanked on either side by aisles, usually single, but sometimes double. The nave is generally considerably taller than the aisles, having clerestory windows which light the central space. Gothic churches of the Germanic tradition, like St. Stephen of Vienna, often have nave and aisles of similar height and are called Hallenkirche. In the South of France there is often a single wide nave and no aisles, as at Sainte-Marie in Saint-Bertrand-de-Comminges.
In some churches with double aisles, like Notre Dame, Paris, the transept does not project beyond the aisles. In English cathedrals transepts tend to project boldly and there may be two of them, as at Salisbury Cathedral, though this is not the case with lesser churches.
The eastern arm shows considerable diversity. In England it is generally long and may have two distinct sections, both choir and presbytery. It is often square ended or has a projecting Lady Chapel, dedicated to the Virgin Mary. In France the eastern end is often polygonal and surrounded by a walkway called an ambulatory and sometimes a ring of chapels called a "chevet." While German churches are often similar to those of France, in Italy, the eastern projection beyond the transept is usually just a shallow apsidal chapel containing the sanctuary, as at Florence Cathedral.
Another very characteristic feature of the Gothic style, domestic and ecclesiastical alike, is the division of interior space into individual cells according to the building's ribbing and vaults, regardless of whether or not the structure actually has a vaulted ceiling. This system of cells of varying size and shape, juxtaposed in various patterns, was entirely unknown in antiquity and the Early Middle Ages, and scholars, Frankl included, have emphasised the mathematical and geometric nature of the design. Frankl in particular thought of this layout as "creation by division" rather than the Romanesque's "creation by addition." Others, namely Viollet-le-Duc, Wilhelm Pinder, and August Schmarsow, instead proposed the term "articulated architecture." The opposing theory, suggested by Henri Focillon and Jean Bony, is one of "spatial unification": the creation of an interior made for sensory overload through the interaction of many elements and perspectives. Interior and exterior partitions, often extensively studied, have been found at times to contain features, such as thoroughfares at window height, that create an illusion of thickness. Additionally, the piers separating the aisles eventually ceased to be part of the walls, becoming instead independent objects that stand out from the actual aisle wall itself.
One of the most ubiquitous elements of Gothic architecture is the shrinking of wall surfaces and the insertion of large windows. Notables such as Viollet-le-Duc, Focillon, Aubert, and Max Dvořák contended that this is one of the most universal features of the Gothic style. In yet another departure from the Romanesque, windows grew in size as the Gothic style evolved, eventually almost eliminating the wall-space altogether, as in Paris's Sainte-Chapelle, and admitting immense amounts of light into the church. This expansive interior light has been a feature of Gothic cathedrals since their inception, and contemporary texts very widely treat the space of a Gothic cathedral as a function of its light. The metaphysics of light in the Middle Ages led to clerical belief in its divinity and in the importance of its display in holy settings. Much of this belief was based on the writings of Pseudo-Dionysius, a 6th-century mystic whose book, The Celestial Hierarchy, was popular among monks in France. Pseudo-Dionysius held that all light, even light reflected from metals or streamed through windows, was divine. To promote such faith, the abbot in charge of the Saint-Denis church on the north edge of Paris, the Abbot Suger, encouraged the architects remodeling the building to make the interior as bright as possible.
Ever since the remodeled Basilica of Saint-Denis opened in 1144, Gothic architecture has featured expansive windows, as at Sainte Chapelle, York Minster, and Gloucester Cathedral. The increase in window size between the Romanesque and Gothic periods is related to the use of the ribbed vault, and in particular the pointed ribbed vault, which channeled the weight to a supporting shaft with less outward thrust than a semicircular vault. Walls therefore did not need to be so massive.
A further development was the flying buttress which arched externally from the springing of the vault across the roof of the aisle to a large buttress pier projecting well beyond the line of the external wall. These piers were often surmounted by a pinnacle or statue, further adding to the downward weight, and counteracting the outward thrust of the vault and buttress arch as well as stress from wind loading.
The internal columns of the arcade with their attached shafts, the ribs of the vault and the flying buttresses, with their associated vertical buttresses jutting at right-angles to the building, created a stone skeleton. Between these parts, the walls and the infill of the vaults could be of lighter construction. Between the narrow buttresses, the walls could be opened up into large windows.
Through the Gothic period, thanks to the versatility of the pointed arch, the structure of Gothic windows developed from simple openings to immensely rich and decorative sculptural designs. The windows were very often filled with stained glass which added a dimension of colour to the light within the building, as well as providing a medium for figurative and narrative art.
The façade of a large church or cathedral, often referred to as the West Front, is generally designed to create a powerful impression on the approaching worshipper, demonstrating both the might of God and the might of the institution that it represents. One of the best known and most typical of such façades is that of Notre Dame de Paris.
Central to the façade is the main portal, often flanked by additional doors. In the arch of the door, the tympanum, is often a significant piece of sculpture, most frequently Christ in Majesty and Judgment Day. If there is a central doorjamb or a trumeau, then it frequently bears a statue of the Madonna and Child. There may be much other carving, often of figures in niches set into the mouldings around the portals, or in sculptural screens extending across the façade.
Above the main portal there is generally a large window, like that at York Minster, or a group of windows such as those at Ripon Cathedral. In France there is generally a rose window, like that at Reims Cathedral. Rose windows are also often found in the façades of churches in Spain and Italy, but are rarer elsewhere and are not found on the façades of any English cathedrals. The gable is usually richly decorated with arcading or sculpture or, in the case of Italy, may be decorated, along with the rest of the façade, with polychrome marble and mosaic, as at Orvieto Cathedral.
The West Front of a French cathedral and many English, Spanish and German cathedrals generally have two towers, which, particularly in France, express an enormous diversity of form and decoration. However some German cathedrals have only one tower located in the middle of the façade (such as Freiburg Münster).
The way in which the pointed arch was drafted and utilised developed throughout the Gothic period. There were fairly clear stages of development, which did not progress at the same rate or in the same way in every country. Moreover, the names used to define various periods or styles within Gothic architecture differ from country to country. The work of art historians Hans R. Hahnloser and Robert Branner in studying manuscripts and architectural drawings showed that the use of geometric shapes and proportions based on squares, circles, semi-circular shapes, and equilateral triangles, abandoned in the Renaissance, was a constant practice throughout the Middle Ages.
Transverse arches, perpendicular to the upper level of the walls and hidden under the gallery roofing, appeared circa 1100 at Durham Cathedral and at Cérisy-la-Forêt, and are thought to have been used to facilitate roofing and the construction of wall buttressing, as there was no need to give further support to already thick Romanesque walls. Used in the nave of Durham and at Caen's Abbey of Sainte-Trinité, this practice would also be adopted by Gothic architects at Saint-Germer-de-Fly Abbey and Laon Cathedral. Further application and refinement of the technique from the 11th century onwards made the purpose of the transverse arch clearer, culminating in the late 12th century as architects used the gallery to buttress the upper levels of a church.
The simplest shape is the long opening with a pointed arch known in England as the lancet. Lancet openings are often grouped, usually as a cluster of three or five. Lancet openings may be very narrow and steeply pointed. Lancet arches are typically defined as two-centered arches whose radii are larger than the arch's span.
Salisbury Cathedral is famous for the beauty and simplicity of its Lancet Gothic, known in England as the Early English Style. York Minster has a group of lancet windows each fifty feet high and still containing ancient glass. They are known as the Five Sisters. These simple undecorated grouped windows are found at Chartres and Laon Cathedrals and are used extensively in Italy.
Many Gothic openings are based upon the equilateral form. In other words, when the arch is drafted, the radius is exactly the width of the opening and the centre of each arch coincides with the point from which the opposite arch springs. This makes the arch higher in relation to its width than a semi-circular arch which is exactly half as high as it is wide.
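As a quick worked check of that claim (a calculation added here for illustration, not taken from the source): for an opening of width w, each arc is drawn with radius w centred on the opposite springing point, so the apex of an equilateral arch lies at height

h = √(w² − (w/2)²) = (√3/2)·w ≈ 0.87·w

noticeably taller than the h = w/2 = 0.5·w of a semi-circular arch over the same span.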
The Equilateral Arch gives a wide opening of satisfying proportion useful for doorways, decorative arcades and large windows.
The structural beauty of the Gothic arch means, however, that no set proportion had to be rigidly maintained. The Equilateral Arch was employed as a useful tool, not as a principle of design. This meant that narrower or wider arches were introduced into a building plan wherever necessity dictated. In the architecture of some Italian cities, notably Venice, semi-circular arches are interspersed with pointed ones.
The Equilateral Arch lends itself to filling with tracery of simple equilateral, circular and semi-circular forms. The type of tracery that evolved to fill these spaces is known in England as Geometric Decorated Gothic and can be seen to splendid effect at many English and French Cathedrals, notably Lincoln and Notre Dame in Paris. Windows of complex design and of three or more lights or vertical sections, are often designed by overlapping two or more equilateral arches.
The Flamboyant Arch is one that is drafted from four points, the upper part of each main arc turning upwards into a smaller arc and meeting at a sharp, flame-like point. These arches create a rich and lively effect when used for window tracery and surface decoration. The form is structurally weak and has very rarely been used for large openings except when contained within a larger and more stable arch. It is not employed at all for vaulting.
Some of the most beautiful and famous traceried windows of Europe employ this type of tracery. It can be seen at St Stephen's in Vienna, Sainte Chapelle in Paris, at the Cathedrals of Limoges and Rouen in France. In England the most famous examples are the West Window of York Minster with its design based on the Sacred Heart, the extraordinarily rich nine-light East Window at Carlisle Cathedral and the exquisite East window of Selby Abbey.
Doorways surmounted by Flamboyant mouldings are very common in both ecclesiastical and domestic architecture in France. They are much rarer in England. A notable example is the doorway to the Chapter Room at Rochester Cathedral.
The style was much used in England for wall arcading and niches. Prime examples are in the Lady Chapel at Ely, the screen at Lincoln and, externally, on the façade of Exeter Cathedral. In German and Spanish Gothic architecture it often appears as openwork screens on the exterior of buildings. The style was used to rich and sometimes extraordinary effect in both these countries, notably on the famous pulpit in Vienna Cathedral.
The depressed or four-centred arch is much wider than its height and gives the visual effect of having been flattened under pressure. Its structure is achieved by drafting two arcs which rise steeply from each springing point on a small radius and then turn into two arches with a wide radius and much lower springing point.
This type of arch, when employed as a window opening, lends itself to very wide spaces, provided it is adequately supported by many narrow vertical shafts. These are often further braced by horizontal transoms. The overall effect produces a grid-like appearance of regular, delicate, rectangular forms with an emphasis on the perpendicular. It is also employed as a wall decoration in which arcade and window openings form part of the whole decorative surface.
The style, known as Perpendicular, that evolved from this treatment is specific to England, although very similar to contemporary Spanish style in particular, and was employed to great effect through the 15th century and first half of the 16th as Renaissance styles were much slower to arrive in England than in Italy and France.
It can be seen notably at the East End of Gloucester Cathedral where the East Window is said to be as large as a tennis court. There are three very famous royal chapels and one chapel-like Abbey which show the style at its most elaborate: King's College Chapel, Cambridge; St George's Chapel, Windsor; Henry VII's Chapel at Westminster Abbey and Bath Abbey. However very many simpler buildings, especially churches built during the wool boom in East Anglia, are fine examples of the style.
The Gothic cathedral represented the universe in microcosm, and each architectural concept, including the loftiness and huge dimensions of the structure, was intended to convey a theological message: the great glory of God. The building becomes a microcosm in two ways. Firstly, the mathematical and geometrical nature of the construction is an image of the orderly universe, in which an underlying rationality and logic can be perceived.
Secondly, the statues, sculptural decoration, stained glass and murals incorporate the essence of creation in depictions of the Labours of the Months and the Zodiac[h] and sacred history from the Old and New Testaments and Lives of the Saints, as well as reference to the eternal in the Last Judgment and Coronation of the Virgin.
Many churches were very richly decorated, both inside and out. Sculpture and architectural details were often bright with coloured paint of which traces remain at the Cathedral of Chartres. Wooden ceilings and panelling were usually brightly coloured. Sometimes the stone columns of the nave were painted, and the panels in decorative wall arcading contained narratives or figures of saints. These have rarely remained intact, but may be seen at the Chapterhouse of Westminster Abbey.
Some important Gothic churches could be severely simple such as the Basilica of Mary Magdalene in Saint-Maximin, Provence where the local traditions of the sober, massive, Romanesque architecture were still strong.
Wherever Gothic architecture is found, it is subject to local influences, and frequently the influence of itinerant stonemasons and artisans, carrying ideas between cities and sometimes between countries. Certain characteristics are typical of particular regions and often override the style itself, appearing in buildings hundreds of years apart.
The distinctive characteristic of French cathedrals, and of those in Germany and Belgium that were strongly influenced by them, is their height and their impression of verticality. Each French cathedral tends to be stylistically unified in appearance when compared with an English cathedral, where there is great diversity in almost every building. French cathedrals are compact, with slight or no projection of the transepts and subsidiary chapels. The west fronts are highly consistent, having three portals surmounted by a rose window, and two large towers. Sometimes there are additional towers on the transept ends. The east end is polygonal, with an ambulatory and sometimes a chevet of radiating chapels. In the south of France, many of the major churches are without transepts, and some are without aisles.
The distinctive characteristic of English cathedrals is their extreme length, and their internal emphasis upon the horizontal, which may be emphasised visually as much as or more than the vertical lines. Each English cathedral (with the exception of Salisbury) has an extraordinary degree of stylistic diversity when compared with most French, German and Italian cathedrals. It is not unusual for every part of the building to have been built in a different century and in a different style, with no attempt at creating a stylistic unity. Unlike French cathedrals, English cathedrals sprawl across their sites, with double transepts projecting strongly and Lady Chapels tacked on at a later date, as at Westminster Abbey. In the west front, the doors are not as significant as in France, the usual congregational entrance being through a side porch. The west window is very large and never a rose, rose windows being reserved for the transept gables. The west front may have two towers like a French cathedral, or none. There is nearly always a tower at the crossing, and it may be very large and surmounted by a spire. The distinctive English east end is square, but it may take a completely different form. Both internally and externally, the stonework is often richly decorated with carvings, particularly the capitals.
Romanesque architecture in Germany, Poland and the Czech Republic (the historical Bohemia) is characterised by its massive and modular nature. This characteristic is also expressed in the Gothic architecture of Central Europe in the huge size of the towers and spires, often planned but not always completed.[i] Gothic design in Germany and the Czech lands generally follows the French formula, but the towers are much taller and, if complete, are surmounted by enormous openwork spires that are a regional feature. Because of the size of the towers, the section of the façade between them may appear narrow and compressed. The distinctive character of the interior of German Gothic cathedrals is their breadth and openness. This is the case even when, as at Cologne, they have been modelled upon a French cathedral. German and Czech cathedrals, like the French, tend not to have strongly projecting transepts. There are also many hall churches (Hallenkirchen) without clerestory windows. In contrast to the Gothic designs of the western German and Czech areas, which followed French patterns, Brick Gothic was particularly prevalent in Poland and northern Germany. Polish Gothic architecture is characterised by its utilitarian nature, with very limited use of sculpture and a heavy exterior design.
The distinctive characteristic of Gothic cathedrals of the Iberian Peninsula is their spatial complexity, with many areas of different shapes leading from each other. They are comparatively wide, and often have very tall arcades surmounted by low clerestories, giving a similar spacious appearance to the Hallenkirche of Germany, as at the Church of the Batalha Monastery in Portugal. Many of the cathedrals are completely surrounded by chapels. Like English cathedrals, each is often stylistically diverse. This expresses itself both in the addition of chapels and in the application of decorative details drawn from different sources. Among the influences on both decoration and form are Islamic architecture and, towards the end of the period, Renaissance details combined with the Gothic in a distinctive manner. The West front, as at Leon Cathedral, typically resembles a French west front, but wider in proportion to height and often with greater diversity of detail and a combination of intricate ornament with broad plain surfaces. At Burgos Cathedral there are spires of German style. The roofline often has pierced parapets with comparatively few pinnacles. There are often towers and domes of a great variety of shapes and structural invention rising above the roof.
In the Crown of Aragon and the territories under its influence (Aragon, Catalonia, Northern Catalonia in France, the Balearic Islands, and the Valencian Country, among others in the Italian islands), the Gothic style suppressed the transept and made the aisles almost as high as the main nave, creating very wide spaces with few ornaments; this is called the Catalan Gothic style (distinct from the style of the Kingdom of Castile or the French style).
The most important examples of the Catalan Gothic style are the cathedrals of Girona, Barcelona, Perpignan and Palma (in Mallorca), the basilica of Santa Maria del Mar (in Barcelona), the Basílica del Pi (in Barcelona), and the church of Santa Maria de l'Alba in Manresa.
The distinctive characteristic of Italian Gothic is the use of polychrome decoration, both externally, as marble veneer on the brick façade, and internally, where the arches are often made of alternating black and white segments, the columns may be painted red, the walls decorated with frescoes and the apse with mosaic. The plan is usually regular and symmetrical, and Italian cathedrals have few and widely spaced columns. The proportions are generally mathematically equilibrated, based on the square and the concept of "armonìa," and, except in Venice where flamboyant arches were beloved, the arches are almost always equilateral. Colours and moldings define the architectural units rather than blending them. Italian cathedral façades are often polychrome and may include mosaics in the lunettes over the doors. The façades have projecting open porches and ocular or wheel windows rather than roses, and do not usually have a tower. The crossing is usually surmounted by a dome. There is often a free-standing tower and baptistry. The eastern end usually has an apse of comparatively low projection. The windows are not as large as in northern Europe and, although stained glass windows are often found, the favourite narrative medium for the interior is the fresco.
Synagogues were commonly built in the Gothic style in Europe during the Medieval period. A surviving example is the Old New Synagogue in Prague built in the 13th century.
The Palais des Papes in Avignon is the best surviving complete large royal palace, alongside the Royal Palace of Olite, built during the 13th and 14th centuries for the kings of Navarre. The Malbork Castle, built for the master of the Teutonic Order, is an example of Brick Gothic architecture. Partial survivals of former royal residences include the Doge's Palace of Venice, the Palau de la Generalitat in Barcelona, built in the 15th century for the kings of Aragon, and the famous Conciergerie, former palace of the kings of France, in Paris.
Secular Gothic architecture can also be found in a number of public buildings such as town halls, universities, markets and hospitals. The Gdańsk, Wrocław and Stralsund town halls are remarkable examples of northern Brick Gothic built in the late 14th century. The Belfry of Bruges and Brussels Town Hall, built during the 15th century, are associated with the increasing wealth and power of the bourgeoisie in the late Middle Ages; by the 15th century, the traders of the trade cities of Burgundy had acquired such wealth and influence that they could afford to express their power by funding lavishly decorated buildings of vast proportions. Such expressions of secular and economic power are also found in other late medieval commercial cities, including the Llotja de la Seda of Valencia, Spain, a purpose-built silk exchange dating from the 15th century; the partial remains of Westminster Hall in the Houses of Parliament in London; and the Palazzo Pubblico in Siena, Italy, a 13th-century town hall built to host the offices of the then prosperous republic of Siena. Other Italian cities such as Florence (Palazzo Vecchio), Mantua and Venice also host remarkable examples of secular public architecture.
By the late Middle Ages, university towns had grown in wealth and importance as well, and this was reflected in the buildings of some of Europe's ancient universities. Particularly remarkable examples still standing today include the Collegio di Spagna of the University of Bologna, built during the 14th and 15th centuries; the Collegium Carolinum of the University of Prague in Bohemia; the Escuelas Mayores of the University of Salamanca in Spain; the chapel of King's College, Cambridge; and the Collegium Maius of the Jagiellonian University in Kraków, Poland.
In addition to monumental secular architecture, examples of the Gothic style in private buildings can be seen in surviving medieval portions of cities across Europe, above all the distinctive Venetian Gothic such as the Ca' d'Oro. The house of the wealthy early 15th century merchant Jacques Coeur in Bourges, is the classic Gothic bourgeois mansion, full of the asymmetry and complicated detail beloved of the Gothic Revival.
Other cities with a concentration of secular Gothic include Bruges and Siena. Most surviving small secular buildings are relatively plain and straightforward; most windows are flat-topped with mullions, with pointed arches and vaulted ceilings often only found at a few focal points. The country-houses of the nobility were slow to abandon the appearance of being a castle, even in parts of Europe, like England, where defence had ceased to be a real concern. The living and working parts of many monastic buildings survive, for example at Mont Saint-Michel.
Exceptional works of Gothic architecture can also be found on the islands of Sicily and Cyprus, in the walled cities of Nicosia and Famagusta. Also, the roofs of the Old Town Hall in Prague and Znojmo Town Hall Tower in the Czech Republic are an excellent example of late Gothic craftsmanship.
In 1663 at the Archbishop of Canterbury's residence, Lambeth Palace, a Gothic hammerbeam roof was built to replace that destroyed when the building was sacked during the English Civil War. Also in the late 17th century, some discrete Gothic details appeared on new construction at Oxford University and Cambridge University, notably on Tom Tower at Christ Church, Oxford, by Christopher Wren. It is not easy to decide whether these instances were Gothic survival or early appearances of Gothic revival.
Ireland was a focus for Gothic architecture in the 17th and 18th centuries. Derry Cathedral (completed 1633), Sligo Cathedral (c. 1730), and Down Cathedral (1790-1818) are notable examples. The term "Planter's Gothic" has been applied to the most typical of these.
In England in the mid-18th century, the Gothic style was more widely revived, first as a decorative, whimsical alternative to Rococo that is still conventionally termed 'Gothick', of which Horace Walpole's Twickenham villa, Strawberry Hill, is the familiar example.
The middle of the 19th century was a period marked by the restoration, and in some cases modification, of ancient monuments and by the construction of neo-Gothic edifices such as the nave of Cologne Cathedral and the Sainte-Clotilde of Paris, as the study of medieval architecture turned from speculation to technical consideration. London's Palace of Westminster, St Pancras railway station, and New York's Trinity Church and St Patrick's Cathedral are also famous examples of Gothic Revival buildings. The style also reached the Far East in this period, for instance in the Anglican St John's Cathedral at the centre of Victoria City in Central, Hong Kong.
While some credit for this new ideation can reasonably be assigned to German and English writers, namely Johannes Vetter, Franz Mertens, and Robert Willis respectively, the emerging movement's champion was Eugène Viollet-le-Duc, whose lead was followed by archaeologists, historians, and architects such as Jules Quicherat, Auguste Choisy, and Marcel Aubert. In the last years of the 19th century, a trend emerged in German art history of treating a building, in Henri Focillon's phrase, as an interpretation of space. When this was applied to Gothic cathedrals, historians and architects accustomed to the dimensions of 17th- and 18th-century Baroque or Neoclassical structures were astounded by the height and extreme length of the cathedrals compared to their proportionally modest width. Goethe, in the preceding century, had been mesmerised by the space within a Gothic church, and succeeding historians like Georg Dehio, Walter Ueberwasser, Paul Frankl, and Maria Velte sought to rediscover the methodology used in their construction by making measurements and drawings of the buildings and by reading, and making conjectures from, documents and treatises pertaining to their construction.
In England, partly in response to a philosophy propounded by the Oxford Movement and others associated with the emerging revival of 'high church' or Anglo-Catholic ideas during the second quarter of the 19th century, neo-Gothic began to become promoted by influential establishment figures as the preferred style for ecclesiastical, civic and institutional architecture. The appeal of this Gothic revival (which after 1837, in Britain, is sometimes termed Victorian Gothic), gradually widened to encompass "low church" as well as "high church" clients. This period of more universal appeal, spanning 1855–1885, is known in Britain as High Victorian Gothic.
The Houses of Parliament in London by Sir Charles Barry, with interiors by a major exponent of the early Gothic Revival, Augustus Welby Pugin, is an example of the Gothic Revival style from its earlier period in the second quarter of the 19th century. Examples from the High Victorian Gothic period include George Gilbert Scott's design for the Albert Memorial in London, and William Butterfield's chapel at Keble College, Oxford. From the second half of the 19th century onwards it became more common in Britain for neo-Gothic to be used in the design of non-ecclesiastical and non-governmental building types. Gothic details even began to appear in working-class housing schemes subsidised by philanthropy, though, given the expense, less frequently than in the design of upper- and middle-class housing.
In France, simultaneously, the towering figure of the Gothic Revival was Eugène Viollet-le-Duc, who outdid historical Gothic constructions to create a Gothic as it ought to have been, notably at the fortified city of Carcassonne in the south of France and in some richly fortified keeps for industrial magnates. Viollet-le-Duc compiled and coordinated an Encyclopédie médiévale that was a rich repertory his contemporaries mined for architectural details. He effected vigorous restoration of crumbling detail on French cathedrals, including the Abbey of Saint-Denis and, famously, Notre Dame de Paris, many of whose most "Gothic" gargoyles are Viollet-le-Duc's. He taught a generation of reform-Gothic designers and showed how to apply the Gothic style to modern structural materials, especially cast iron.
In Germany, the great cathedral of Cologne and the Ulm Minster, left unfinished for 600 years, were brought to completion, while in Italy, Florence Cathedral finally received its polychrome Gothic façade. New churches in the Gothic style were created all over the world, including Mexico, Argentina, Japan, Thailand, India, Australia, New Zealand, Hawaii and South Africa.
As in Europe, the United States, Canada, Australia and New Zealand utilised Neo-Gothic for the building of universities, a fine example being the University of Sydney by Edmund Blacket. In Canada, the Canadian Parliament Buildings in Ottawa designed by Thomas Fuller and Chilion Jones with its huge centrally placed tower is influenced by Flemish Gothic buildings.
Although falling out of favour for domestic and civic use, Gothic for churches and universities continued into the 20th century with buildings such as Liverpool Cathedral, the Cathedral of Saint John the Divine, New York and São Paulo Cathedral, Brazil. The Gothic style was also applied to iron-framed city skyscrapers such as Cass Gilbert's Woolworth Building and Raymond Hood's Tribune Tower.
Post-Modernism in the late 20th and early 21st centuries has seen some revival of Gothic forms in individual buildings, such as the Gare do Oriente in Lisbon, Portugal and a finishing of the Cathedral of Our Lady of Guadalupe in Mexico.
To Near Eastern scholars, the Armenian cathedral at Ani (989–1001), designed by Trdat (972–1036), seemed to anticipate Gothic.
The bright side of Earth
Observed from space, the Earth’s northern and southern hemispheres appear equally bright. This symmetrical brightness is scientifically unexpected because the Southern Hemisphere is mostly covered with dark oceans, whereas the Northern Hemisphere has vast and much brighter land. In a new study, published in the Proceedings of the National Academy of Sciences, Weizmann researchers revealed a possible answer to this mystery: a strong correlation between storm intensity, cloudiness, and the solar energy reflection rate in each hemisphere. This correlation may also indicate how climate change could alter the reflection rate in the future.
Reflectivity of solar radiation is known in scientific lingo as “albedo.” To appreciate albedo, think about driving at night: It’s easy to spot the intermittent white lane markings, which reflect the car’s headlights well, but difficult to discern the dark asphalt. The same is true when observing Earth from space: The ratio of the solar energy reflected by each region to the energy hitting it is determined by various factors, one of which is the proportion of dark ocean to bright land. The land area of the Northern Hemisphere is about twice as large as that of the Southern, and when albedo is measured near the surface of the Earth under clear skies, the hemispheres differ by more than 10%. Still, both hemispheres appear equally bright from space.
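As a rough illustration of the definition (the flux numbers below are invented for the example, not taken from the study), albedo is simply the fraction of incoming solar energy that a region reflects back to space:

```python
# Illustrative only: albedo = reflected flux / incident flux.
# The numbers are hypothetical, not measurements from the study.
incident_flux = 340.0   # W/m^2, assumed incoming solar flux
reflected_flux = 102.0  # W/m^2, assumed reflected flux

albedo = reflected_flux / incident_flux
print(f"albedo = {albedo:.2f}")  # 0.30
```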
In this study, Prof. Yohai Kaspi of the Department of Earth and Planetary Sciences and Or Hadas, a graduate student in his lab, focused on another factor influencing albedo, one located at high altitudes and reflecting solar radiation: clouds. The Kaspi team, with colleagues in Germany, Sweden, and France, analyzed cloud data collected by NASA satellites, together with global weather data from airborne and ground sources dating back to 1950. The scientists classified storms from the last 50 years into three categories according to intensity and discovered a direct link between a storm’s intensity and the number of clouds forming around it. While comparatively weak storms prevail over the Northern Hemisphere and over land areas in general, moderate and strong storms dominate above the oceans of the Southern Hemisphere. Data analysis showed that the link between storm intensity and cloudiness accounts for the difference in albedo between the hemispheres.
Earth has been undergoing rapid changes in climate in recent years. To examine whether and how such changes could affect hemispheric albedo symmetry, the scientists used a set of models run by climate modeling centers around the world to simulate climate change.
The models predict that global warming will result in a decreased frequency of all storms above the Northern Hemisphere and of weak and moderate storms above the Southern. However, the strongest storms of the Southern Hemisphere will intensify. One might speculate that this difference should break hemispheric albedo symmetry; but studies show that a further increase in storm intensity might not change the degree of cloudiness in the Southern Hemisphere because cloud amounts reach saturation in very strong storms. Thus, symmetry might be preserved.
“This research solves a basic scientific question and deepens our understanding of Earth’s radiation balance and its effectors,” says Prof. Kaspi. “As global warming continues, geoengineered solutions will become vital for human life to carry on alongside it. I hope that a better understanding of basic climate phenomena, such as the hemispheric albedo symmetry, will help in developing these solutions.”
Yohai Kaspi is supported by:
- Susanne and René Braginsky
- Helen Kimmel Center for Planetary Science
1. Classify chemical reactions as Synthesis (Combination), Decomposition, Single Displacement (Replacement), Double Displacement (Replacement) and Combustion. [5.2, Pg.72]
2. Explain the significance of the coefficients of a balanced chemical reaction (number of particles, moles, and volume)
3. Balance chemical equations (by applying the law of conservation of mass and constant/definite proportion)[5.1, Pg.72]
4. Calculate the mass-to-mass stoichiometry for a chemical reaction. [5.5, Pg.72]
5. Calculate percent yield in a chemical reaction [5.6, Pg.72]
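The short Python sketch below works a made-up example of objectives 4 and 5; the reaction is real, but the masses and the measured yield are hypothetical, chosen only to illustrate the arithmetic.

```python
# Hypothetical worked example: mass-to-mass stoichiometry and percent yield
# for the balanced reaction 2 H2 + O2 -> 2 H2O.
M_H2 = 2.016    # molar mass of H2 (g/mol)
M_H2O = 18.015  # molar mass of H2O (g/mol)

grams_H2 = 4.00                    # given mass of H2 (assumed limiting)
moles_H2 = grams_H2 / M_H2         # grams -> moles
moles_H2O = moles_H2 * (2 / 2)     # mole ratio from the balanced coefficients
theoretical_g = moles_H2O * M_H2O  # moles -> grams: the theoretical yield

actual_g = 30.1                    # hypothetical measured (actual) yield
percent_yield = actual_g / theoretical_g * 100  # objective 5

print(f"theoretical yield = {theoretical_g:.1f} g")  # ~35.7 g
print(f"percent yield = {percent_yield:.1f} %")      # ~84.2 %
```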
All information after this line, to the end of the page, is not on the MCAS.
From previous objectives, not covered on the last test; now on this test.
3. Understand the difference between an Empirical Formula (EF) and a Molecular Formula (MF) and how each relates to the subscripts of the chemical formula.
4. Calculate EF and MF. This includes determining the EF from mass data or percent composition, and the MF from mass data, percent composition, and EF data.
5. Predicting Products of:
6. Explain the difference between a complete chemical reaction and a net ionic reaction
Updated 2006 - (5.6) Calculate percent yield in a chemical reaction (also in Limiting Reactant & Redox section)
Things that need to be memorized
The previous article covered the basics of Probability Distributions and discussed the Uniform Probability Distribution. This article covers the Exponential Probability Distribution, which, like the Uniform Distribution, is a continuous distribution.
Suppose we are posed with the question- How much time do we need to wait before a given event occurs?
The answer to this question can be given in probabilistic terms if we model the given problem using the Exponential Distribution.
Since the time we need to wait is unknown, we can think of it as a Random Variable. If the probability of the event happening in a given interval is proportional to the length of the interval, then the Random Variable has an exponential distribution.
The support (set of values the Random Variable can take) of an Exponential Random Variable is the set of all positive real numbers.
Probability Density Function –
For a positive real number x, the probability density function of an exponentially distributed random variable is given by:
f(x) = λe^(−λx), for x ≥ 0
Here λ > 0 is the rate parameter. Larger values of λ concentrate the density near zero, while smaller values spread it out over longer waiting times.
To check that the above function is a legitimate probability density function, we verify that its integral over the support equals 1:
∫₀^∞ λe^(−λx) dx = [−e^(−λx)]₀^∞ = 0 − (−1) = 1
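A minimal symbolic check of this normalisation, assuming the SymPy library is available (the variable names here are ours, not from the original article):

```python
# Verify that the exponential density integrates to 1 over its support.
import sympy as sp

x = sp.symbols('x', nonnegative=True)
lam = sp.symbols('lambda', positive=True)

pdf = lam * sp.exp(-lam * x)             # f(x) = lambda * e^(-lambda * x)
print(sp.integrate(pdf, (x, 0, sp.oo)))  # prints 1
```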
Cumulative Distribution Function –
As we know, the cumulative distribution function gives the probability of the random variable taking a value up to a certain point x. For the Exponential distribution it is given by:
F(x) = P(X ≤ x) = 1 − e^(−λx), for x ≥ 0
Expected Value –
To find the expected value, we multiply the probability density function by x and integrate over all possible values (the support). Integration by parts gives:
E[X] = ∫₀^∞ x·λe^(−λx) dx = 1/λ
Variance and Standard deviation –
The variance of the Exponential distribution is given by:
Var(X) = 1/λ²
The standard deviation of the distribution is therefore:
SD(X) = √Var(X) = 1/λ
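As a quick numerical sanity check (assuming NumPy is available; note that NumPy parameterises the distribution by the scale 1/λ rather than the rate):

```python
# Draw exponential samples and compare the sample mean and standard
# deviation with the theoretical value 1/lambda for both.
import numpy as np

rate = 2.0
rng = np.random.default_rng(seed=0)
samples = rng.exponential(scale=1/rate, size=1_000_000)

print(samples.mean())  # close to 1/rate = 0.5
print(samples.std())   # also close to 1/rate = 0.5
```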
- Example – Let X denote the time between detections of a particle with a Geiger counter and assume that X has an exponential distribution with E(X) = 1.4 minutes. What is the probability that we detect a particle within 30 seconds of starting the counter?
- Solution – Since the random variable X, denoting the time between successive detections of particles, is exponentially distributed, the rate parameter follows from the expected value:
λ = 1/E(X) = 1/1.4 ≈ 0.714 detections per minute
To find the probability of detecting the particle within 30 seconds of the start of the experiment, we use the cumulative distribution function discussed above, converting the given 30 seconds to minutes since the rate parameter is expressed per minute:
P(X ≤ 0.5) = 1 − e^(−0.5/1.4) ≈ 1 − e^(−0.357) ≈ 0.30
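A minimal sketch of the same computation in Python, using only the standard library:

```python
# Geiger-counter example: probability of a detection within 30 seconds
# when the mean waiting time E(X) is 1.4 minutes.
import math

mean_wait = 1.4        # E(X) in minutes
rate = 1 / mean_wait   # lambda = 1 / E(X)
t = 30 / 60            # 30 seconds, converted to minutes

prob = 1 - math.exp(-rate * t)  # exponential CDF evaluated at t
print(round(prob, 3))           # approximately 0.3
```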
Lack of Memory Property –
Now consider that in the above example, after detecting a particle at the 30-second mark, no particle is detected for the next three minutes.
Because we have been waiting for the past 3 minutes, we may feel that a detection is due, i.e. that the probability of detecting a particle in the next 30 seconds should be higher than 0.3. However, this is not true for the exponential distribution. We can show this by expressing the probability of the above scenario as a conditional probability:
P(X < 3.5 | X > 3) = P(3 < X < 3.5) / P(X > 3) = (e^(−3λ) − e^(−3.5λ)) / e^(−3λ) = 1 − e^(−0.5λ) = P(X < 0.5) ≈ 0.30
The fact that we have waited three minutes without a detection does not change the probability of a detection in the next 30 seconds. Therefore, the probability only depends on the length of the interval being considered.
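The lack-of-memory property can also be confirmed numerically from the closed-form CDF (a short check reusing the same λ as the example above):

```python
# P(X < 3.5 | X > 3) should equal the unconditional P(X < 0.5).
import math

rate = 1 / 1.4  # lambda from the Geiger-counter example

def cdf(x):
    """Exponential CDF: P(X <= x)."""
    return 1 - math.exp(-rate * x)

conditional = (cdf(3.5) - cdf(3.0)) / (1 - cdf(3.0))
print(round(conditional, 3))  # ~0.3
print(round(cdf(0.5), 3))     # the same value: ~0.3
```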
Developing critical thinking skills is crucial for students, but actually measuring and assessing these skills can be tricky. This comprehensive guide will walk you through everything you need to know about evaluating critical thinking. Let’s dive in!
- Critical thinking involves skills like analyzing arguments, interpreting information, problem solving, and making decisions.
- Assessing critical thinking goes beyond traditional standardized tests and requires evaluating students’ thought processes and reasoning.
- Common assessment methods include open-ended questions, discussions, projects, concept maps, rubrics, and observation.
- It’s important to assess critical thinking to identify gaps, improve instruction, and prepare students for higher education and careers.
- Effective assessment requires clearly defining skills, using multidimensional tools, and providing opportunities to demonstrate reasoning.
What is Critical Thinking?
Before we can measure critical thinking skills, we need to define what we mean by “critical thinking.”
Critical thinking refers to the ability to carefully evaluate information and make reasoned judgments. It involves skills like:
- Analyzing arguments and claims
- Interpreting data, patterns, and evidence
- Solving problems
- Making logical connections and identifying assumptions
- Reflecting on different perspectives
- Drawing conclusions based on evidence
Critical thinkers can understand complex ideas, apply knowledge, and explain their thought process. They don’t take information at face value but question and scrutinize it.
These skills allow students to succeed not just in school but also in higher education, careers, and life. That’s why teaching and assessing critical thinking is so important!
Why Assess Critical Thinking Skills?
Here are some of the key reasons educators should formally assess students’ critical thinking abilities:
Identify Gaps in Understanding
Assessment reveals areas where students are struggling with critical thinking. This allows teachers to address gaps through targeted instruction.
Improve Teaching Methods
Assessment provides feedback to educators on the effectiveness of their teaching methods in developing critical thinking. Teachers can use results to adjust approaches.
Prepare for Higher Education
Critical thinking is essential for success in college. Assessing these skills while students are still in K-12 schools helps prepare them for higher academic rigor.
Meet Standards and Requirements
Many education standards and frameworks emphasize critical thinking skills. Assessment is necessary to track students’ progress on meeting expected competencies.
Predict Future Performance
Performance on critical thinking assessments can indicate how students may handle the complex cognitive tasks required in future careers.
Support Long-Term Development
Regular assessment allows teachers to monitor growth in critical thinking over time, ensuring continued enrichment of these essential skills.
Challenges of Assessing Critical Thinking
Measuring any complex cognitive skill presents challenges. Here are some factors that make assessing critical thinking uniquely difficult:
Difficult to Define and Measure
There is no one universally accepted definition of critical thinking. This ambiguity makes it hard to establish assessment criteria.
Goes Beyond Content Knowledge
Critical thinking involves broader reasoning skills that cannot be captured by traditional fact-based tests.
Requires Multidimensional Tools
Simple assessments like multiple choice questions do not provide enough insights into student thinking. Multifaceted tools are needed.
Demands Flexible Thinking
Set formulas and standard algorithms cannot easily assess how students think through abstract or novel problems.
Cumbersome to Administer
Comprehensive critical thinking assessments require time-intensive open-ended tasks. Shorter tests offer limited insights.
Qualitative Judgments Required
Scoring student reasoning often relies on subjective human judgment, making standardized measurement difficult.
Despite these challenges, creating a thoughtful assessment approach makes critical thinking measurable.
Methods for Assessing Critical Thinking
While a single perfect assessment may not exist, using a combination of different tools can provide a well-rounded evaluation. Here are some of the most effective strategies:
Open-Ended Questions
Questions with no one right answer allow students to demonstrate their logic, interpretation, and reasoning skills. Example: “What conclusions can you draw about Character X based on Events A, B and C?”
Discussions and Debates
Back-and-forth conversation reveals thought processes. Teachers can pose probing follow-up questions to dig deeper. Example: “Why do you think that?”
Problem-Solving Scenarios
Multi-step scenarios that require strategic thinking assess analysis, inference, and decision-making skills. Real-world situations are ideal.
Projects
Open-ended projects let students show information literacy, evaluation of sources, synthesis of ideas, and drawing of conclusions.
Concept Maps
Having students graphically organize and connect concepts provides insights into their understanding and mental frameworks.
Rubrics
Scoring guides with preset criteria for reasoning, argumentation, and drawing conclusions help standardize the evaluation of open-ended work.
Written Responses
Writing tasks that require explanation of thought processes and justification of conclusions based on evidence showcase critical thinking.
Observation
Documenting the thinking skills students display in classroom discussions, group work, and other learning activities provides ongoing insights.
This mix of qualitative and quantitative tools from simple observation to complex projects gives a multidimensional perspective on students’ abilities.
Best Practices for Assessment
Following research-based best practices will improve the quality of critical thinking assessment:
- Clearly define skills – Target specific critical thinking skills with each assessment tool or task.
- Use multidimensional assessments – No single method gives the full picture; use a variety.
- Align to learning goals – Tailor assessment to the critical thinking skills the course aims to develop.
- Provide meaningful contexts – Situate tasks in real-world scenarios relevant to students’ lives.
- Allow demonstration of reasoning – Open-ended assessments provide insights into thought processes.
- Establish clear criteria – Use rubrics, checklists, or question guides to standardize evaluation.
- Check for misconceptions – Assessments can reveal flawed thinking and gaps in understanding.
- Require justification – Ask students to explain their logic and provide evidence for conclusions.
- Analyze patterns in responses – Identify common errors or weaknesses to address through instruction.
- Provide actionable feedback – Give students clear guidance for developing their thinking skills.
Following best practices takes time, but results in data that provides real insight into students’ abilities.
Why Critical Thinking Matters
In our complex and ever-changing world, strong critical thinking skills empower students to excel. They need these cognitive tools to tackle real-world ambiguities and make sound judgments.
By assessing their progress, we not only identify instructional needs, but pave the way for their long-term success in academics, careers, and life. With some thoughtful planning, critical thinking is within reach for today’s students.
Frequently Asked Questions
What are some examples of critical thinking assessment questions?
Some examples include:
- Analyze the argument made in this passage. Do you find it convincing? Why or why not?
- How would you solve this real-world problem? Explain your approach.
- Here is a scenario. What conclusions can you draw? What additional information would help?
How can you assess critical thinking in young students?
For younger students, focus on skills like:
- Making comparisons
- Sorting objects by common characteristics
- Asking questions about stories
- Explaining their reasoning in simple terms
Should critical thinking assessment be standardized?
Not necessarily. Standardized tests offer limited insights. Using a variety of qualitative and quantitative assessments tailored to learning goals gives a better perspective.
What if students struggle with writing and verbal skills?
Allow flexibility in how students demonstrate critical thinking, like through hands-on tasks, concept maps, drawings, etc. Focus on evaluating their thought process, not just communication skills.
How often should critical thinking be assessed?
Ideally, integrate frequent low-stakes assessments through discussions, projects, etc. to monitor ongoing development. Conduct more formal assessments at key milestones. |
With this activity we are teaching the scientific process and encouraging kids to use inquiry based activities to prove theories.
Yesterday I posted about an experiment we did testing liquid density with water balloons. This activity was a huge hit, and shortly after doing it we came up with the idea to test some science we had researched in the past. This led to some interesting discussions about scientific theories and how we can question and test them ourselves. This project turned into a wonderful chance for teaching the scientific process.
Water Balloon Science Experiment
Disclaimer: This article may contain commission or affiliate links. As an Amazon Associate I earn from qualifying purchases.
Not seeing our videos? Turn off any adblockers to ensure our video feed can be seen.
You may remember an activity we did this past summer where we tested whether cans of soda pop float or sink. The results were amazing! Diet pop floated, while regular pop sank. When we researched the science, we discovered it was because the sugar is heavier and creates a higher liquid density than diet sweeteners do.
At the time my kids walked away happy.
Then we did a water balloon liquid density experiment and it started my oldest questioning. How did we know for sure the science we read about was right?
Great question! So we set up our own experiment to test the theory and, in the process, teach the scientific method.
Introducing the Scientific Method
With what we learned from those previous activities, we developed our theories and hypotheses. We didn’t have any reason not to believe the scientists who explained why the pop cans floated or sank, but we still wanted to prove it for ourselves. So, using concepts from the two activities, we designed our own experiment. This allowed us to come up with a way to prove the theory of why diet pop floats and regular pop sinks.
Our experiment was quite simple, using the strategies we used in the water balloon science experiment we prepared two types of water balloons, a sugar balloon and an artificial sweetener balloon (we used Splenda). To fill the balloons we used a 60 mL catheter syringe and it worked beautifully! One syringe full made a nice egg sized water balloon.
One variable we needed to consider was the saturation of our liquids. So we made a few different sugar and diet solutions, varying the dilution (the amount of product added to the water).
After filling our balloons and labeling them, we prepared to place them in our testing chamber (fancy science speak for my hurricane jar!).
First the sugar balloons. And they sank.
Then the diet balloons. And they floated!
And the scientists were right! The sugar ones sank and the diet ones floated!
This was a fantastic approach to teaching the scientific process and exploring how we can prove theories ourselves.
Plus, I want to raise kids who are critical thinkers, able to gather information and verify the veracity of details so they can come to educated conclusions. There is so much information available that knowing how to prove what is right will be very important in their lives. Today’s activity was a fantastic step in the right direction.
Oh, and it was tons of fun having a water fight with the balloons after, even if we did get a little extra sticky from the sugar!
A great addition to this activity is our scientific method printables. Members of the STEAM Powered Family mailing list get this resource and many others for free. Join to access your free educational resources now. |
Plato’s argument simply assumes that the soul exists; it only tries to prove that the soul is eternal. But the prior question, and the main area of discussion in the philosophy of mind, is whether the soul exists at all.
One doctrine that holds that the soul does exist is called dualism; its name comes from the fact that it postulates that humans consist of two substances: body and soul. Arguments in favor of dualism are indirect arguments in favor of eternity, or at least they support the possibility of surviving death. For if the soul exists, it is an immaterial substance; and insofar as it is immaterial, it is not subject to the decomposition of material objects; hence, it is eternal.
Most dualists agree that the soul is identical with the mind but distinct from the brain and its functions. Some dualists believe that the mind may be emergent from the brain: it depends on the brain, but it is not identical to the brain or its processes. That position is often labeled property dualism; here, however, we are discussing substance dualism, the doctrine that the mind is a separate substance, not merely a separate property of the body, and can therefore survive the death of the body.
Descartes’s argument for Dualism
René Descartes is usually regarded as the father of dualism because he presents some very sharp arguments in favor of the existence of the soul as a separate substance (Descartes, 1980). In his most famous argument, Descartes invites a thought experiment: imagine that we exist, but our bodies do not. We wake up in the morning, but when we approach the mirror, we do not see ourselves there. We try to touch our face with our hands, but feel only thin air. We try to scream, but no sound comes out. And so on.
Now, Descartes believes that it is indeed possible to imagine such a scenario. But if one can imagine the existence of a person without the existence of a body, then persons are not constituted by their bodies, and therefore mind and body are two different substances. If the mind were identical with the body, it would be impossible to imagine the existence of the mind without at the same time imagining the existence of the body.
This argument has been widely discussed. Dualists certainly believe it is valid, but it is not without critics. Descartes seems to assume that anything that can be imagined is possible. Indeed, many philosophers have long agreed that imagination is a good guide to what is possible (Hume, 2010).
But, this criterion is still being debated. Imagination seems to be a psychological process, and therefore not entirely a logical process. Therefore, maybe we can imagine a scenario that is actually not possible.
Descartes presents another argument. As Leibniz would later formalize in the Principle of the Identity of Indiscernibles, two entities can be considered identical if, and only if, they share exactly the same attributes. Descartes exploited this principle and sought some property of the mind that is not possessed by the body (or vice versa), in order to conclude that they are not identical and are therefore separate substances.
“There is a great difference between mind and body, inasmuch as the body is by its nature always divisible, while the mind is utterly indivisible. . . . Insofar as I am only a thinking thing, I cannot distinguish any parts within myself. . . . Although the whole mind seems to be united to the whole body, if a foot or an arm or any other part of the body is cut off, I know that nothing has thereby been taken away from the mind” – Descartes, 1980: 97
Descartes believed, then, that mind and body could not be the same substance. He put forward another, similar argument: the body has extension in space, and as such can bear physical properties; but the mind has no extension, and therefore has no physical nature. It makes no sense to ask what color the desire to eat strawberries is, or how much the Communist ideology weighs. If the body has extension and the mind has none, then the mind can be considered a separate substance.
Yet another of Descartes’ arguments draws a difference between mind and body. Descartes famously pondered the possibility that an evil demon might be deceiving him about the world: perhaps this world is not real. Entertaining such possibilities, Descartes believed that one might doubt the existence of one’s own body. However, he argued, one cannot doubt the existence of one’s own mind. For if someone doubts, he thinks; and if someone thinks, then it is certain that his mind exists.
This argument is not without critics either. Indeed, the Principle of the Identity of Indiscernibles would make us think that, insofar as mind and body do not share exactly the same properties, they cannot be the same substance. However, in some contexts it seems possible for A and B to be identical even if not everything predicated of A can be predicated of B.
For example, consider a masked man who robbed a bank. If we ask a witness whether the masked man robbed the bank, the witness will answer “yes!”. But if we ask the witness whether his father robbed the bank, he might answer “no”. That does not imply, however, that the witness’s father was not the bank robber: perhaps the masked man was the witness’s father, and the witness did not realize it. This is what is called the ‘Masked Man Fallacy’.
This case forces us to refine Leibniz’s Law: A is identical to B not when everything believed of A is believed of B, but when A and B genuinely share the same properties. And what people believe about a thing is not a property of it. Being the object of doubt is not, strictly speaking, a property, but an intentional relation. So, in our case, being able to doubt the existence of the body, but not the existence of the mind, does not imply that mind and body are not the same substance.
Other Dualism Arguments
In more recent times, Descartes’ strategy has been used by other dualist philosophers to mark differences between mind and body. Some philosophers argue that the mind is private, while the body is not: anyone can observe the state of a body, but no one except the person himself can really know the state of his own mind.
Some philosophers point to ‘intentionality’ as another difference between mind and body. The mind has intentionality, while the body does not: thoughts are about something, whereas parts of the body are not. And insofar as thoughts have intentionality, they may also have truth values. Not all thoughts are true or false, of course, but at least those that purport to represent the world are; a physical state, by contrast, has no truth value.
Again, these arguments exploit differences between mind and body. But, as with Descartes’ own arguments, it is not entirely clear that they avoid the Masked Man Fallacy.
Arguments against Dualism
Opponents of dualism not only reject its arguments; they also highlight conceptual and empirical problems with the doctrine.
Most opponents of dualism are materialists.
They believe that mental states are really identical to brain states or, at most, an epiphenomenon of the brain. Materialism limits the prospect of eternity: if the mind is not a substance separate from the brain, then at the moment of brain death the mind also becomes extinct, and hence the person does not survive death. Materialism need not undermine all hope of an afterlife, but it does rule out the immortality of the soul.
The main difficulty with dualism is the so-called ‘problem of interaction’. If the mind is an immaterial substance, how can it interact with material substances?
A desire may move our hands, but how exactly does that happen? There seems to be an inconsistency in the immateriality of the mind: at one moment, the mind is immaterial and unaffected by material conditions; at the next, it manages to come into contact with the body and cause its movement.
Daniel Dennett has mocked this inconsistency by invoking the cartoon character Casper. This friendly ghost can pass through walls, yet, suddenly, he can also catch a ball. The same inconsistency arises with dualism: the mind supposedly does not interact with matter, and yet in its dealings with the body it does (Dennett, 1992).
Dualists have offered several solutions to this problem. Occasionalists argue that God directly causes material events; thus, mind and body never really interact. Similarly, parallelists argue that mental and physical events are coordinated by God so that they merely seem to cause each other, while in reality they do not. These alternatives are rejected by most contemporary philosophers.
However, some dualists might answer that our inability to fully explain how body and soul interact does not imply that the interaction does not occur. We know that many things happen in the universe even though we do not know how they happen.
Richard Swinburne, for example, argues as follows:
“That bodily events cause brain events, and that these cause pains, images, and beliefs (to which their subjects have privileged access, as they do not to the former), is one of the clearest phenomena of human experience. If we cannot explain how it happens, we must not pretend that it does not happen. We should simply admit that humans are not omniscient and cannot understand everything” (Swinburne, 1997: xii).
On the other hand, dualism postulates the existence of incorporeal minds, but it is not clear that this is a coherent concept. In the opinion of most dualists, the incorporeal mind perceives; but it is not clear how a mind could perceive without sense organs. Descartes, at least, seemed to have no problem imagining such intangible existence in his thought experiments.
Perhaps the most serious objection to dualism, and a substantial argument in favor of materialism, is the correlation of the mind with the brain. Recent developments in neuroscience increasingly confirm that mental states depend on brain states. Neurologists have been able to identify certain areas of the brain that are associated with specific mental dispositions. And insofar as there seems to be a strong correlation between mind and brain, it appears that the mind may be reducible to the brain, and hence that they are not separate substances.
In recent decades, neuroscience has collected data confirming that brain damage has a major influence on a person’s mental constitution. The case of Phineas Gage is famous in this regard: Gage was a responsible, well-regarded railroad worker who suffered an accident that damaged the frontal lobes of his brain. From then on, Gage became an aggressive and irresponsible person, unrecognizable to his friends (Damasio, 2006).
Starting from the Gage case, scientists have concluded that the frontal regions of the brain shape personality. And if mental contents can be severely damaged by brain injury, it does not seem right to postulate that minds are immaterial substances. If, as dualism proposes, Gage had an immaterial, immortal soul, why did his soul not remain intact after his brain injury?
Similar difficulties arise when we consider degenerative neurological diseases, such as Alzheimer’s disease. As is widely known, this disease erodes the mental contents of patients until they lose their memory almost completely. If most memories eventually disappear, what remains of the soul? When a patient suffering from Alzheimer’s dies, what survives, if most of his memory has already been lost? Of course, correlation is not identity, and the fact that the brain empirically correlates with thought does not imply that the mind is the brain. Still, many contemporary philosophers of mind adhere to what is called the ‘identity theory’: mental states are exactly the same thing as the firing of certain neurons.
Dualists can respond by claiming that the brain is merely an instrument of the soul. If the brain does not work well, the soul cannot operate properly, but brain damage does not imply a deterioration of the soul. Consider, for example, a violinist. If the violin does not play accurately, the violinist will not perform well; but that does not imply that the violinist has lost his talent. In the same way, a person may have a damaged brain and yet keep his soul intact.
Dualists might also suggest that the mind is not identical with the soul. Indeed, while many philosophers tend to consider the soul and the mind identical, various religions assume that a person actually consists of three substances: body, mind, and soul. On such a view, even if the mind degenerates, the soul remains. However, it would then be far from clear what the soul really is, if it is not identical to the mind.
Every philosophical discussion about eternity touches on the fundamental problem of personal identity. If we hope to be saved from death, we want to make sure that people who continue to live after death are the same people who existed before death. And, for religions that postulate the Last Judgment, this is an important issue: if God wants to apply justice, the person who is rewarded or punished in the hereafter must be the same person whose actions determine the outcome.
The question of personal identity concerns the criteria under which a person remains the same, that is, numerically identical, over time. Traditionally, philosophers have discussed three main criteria: soul, body, and psychological continuity.
The soul criterion holds that a person remains the same over time if, and only if, they keep the same soul (Swinburne, 2004). Philosophers who adhere to this criterion usually do not consider the soul identical with the mind. Very few philosophers favor the soul criterion, because it faces a great difficulty: if the soul is an unobservable immaterial substance (precisely insofar as it is not identical with the mind), how can we be sure that a person remains the same? We cannot know whether, in the middle of the night, our neighbor’s soul has been transferred into another body. Even if our neighbor’s body and mental contents remain the same, we would never know whether the soul is the same. Under this criterion, there seems to be no way to ensure that someone is always the same person.
However, there are arguments that lend some support to the soul criterion. Richard Swinburne proposes the following thought experiment: suppose A’s brain were split in two and, as a result, we got two persons, one with the left hemisphere of A’s brain, the other with the right. Now, which one is A? Both have part of A’s brain, and both retain some of A’s mental contents. So one of them must be A, but which one?
Unlike the body and the mind, the soul cannot be divided or duplicated. So, even though we do not know which of the two persons is A, we know that only one of them is; and that will be the one who retains A’s soul, even if we have no way to identify him. Thus, even complete knowledge of A’s body and mind would not tell us which person is A; therefore, A’s identity lies not in his mind or body, but in his soul.
Common sense tells us that people are their bodies and, although many philosophers dispute this, ordinary people generally hold that view. Under the bodily criterion, then, a person remains the same if, and only if, they preserve the same body. Of course, the body changes, and eventually all of its cells are replaced. This evokes an ancient philosophical conundrum known as the Ship of Theseus: the planks of Theseus’ ship are gradually replaced until none of the originals remain. Is it still the same ship? There has been much discussion about this, but most philosophers agree that, in the case of the human body, the total replacement of atoms and slight changes in shape do not change the numerical identity of the human body.
However, the bodily criterion soon runs into difficulties. Imagine two patients, A and B, who undergo surgery simultaneously. Inadvertently, their brains are exchanged, each being placed in the wrong body. Thus A’s brain is placed in B’s body; call this person C. Naturally, having A’s brain, C will have A’s memories, mental contents, and so on. Now, who is C? Is C person B with A’s brain, or person A with B’s body? Most people would say the latter, since the brain is the center of consciousness.
Thus it would appear that the bodily criterion must give way to a brain criterion: a person remains the same if, and only if, he preserves the same brain. But once again we face difficulties. What if the brain is divided and each half is placed in a new body? As a result, we would have two people each claiming to be the original person, yet, by the transitivity of identity, we know that they cannot both be. And it seems arbitrary that one of them should be the original person and not the other. This difficulty invites consideration of other criteria of personal identity.
John Locke famously asked what we would think if one day a prince woke up in the body of a cobbler, and the cobbler in the body of the prince (Locke, 2009). Although his fellow cobblers would recognize the man in the workshop as the cobbler, that man would have the prince’s memories. Now, if before that event the prince had committed a crime, who should be punished? The man in the palace, who remembers being a cobbler; or the man in the workshop, who remembers being a prince, including his memory of the crime?
It seems that the man in the workshop should be punished for the prince’s crime because, even though he does not have the prince’s original body, he is the prince insofar as he keeps the prince’s memories. Locke therefore held that a person remains the same if, and only if, he maintains psychological continuity.
Although it seems an improvement over the two previous criteria, the psychological criterion also faces several problems. Suppose someone today claimed to be the reincarnation of A and retained, very clearly and accurately, the memories of that seventeenth-century conspirator. By the psychological criterion, such a person would indeed be A. But what if, simultaneously, another person made the same claim with the same degree of accuracy? Obviously, the two cannot both be A, and it seems arbitrary to conclude that one of them is A but the other is not. It seems more plausible that neither of them is A, and therefore that psychological continuity is not a good criterion of personal identity.
Given the difficulties with the above criteria, some philosophers have argued that, in a certain sense, persons do not exist; or rather, that there is no unchanging self. In the words of David Hume, a person “is nothing but a bundle or collection of different perceptions, which succeed each other with an inconceivable rapidity, and are in a perpetual flux and movement” (Hume, 2010: 178). This is called the ‘bundle theory’ of the self.
As a corollary, Derek Parfit argues that, when considering survival, personal identity is not what truly matters (Parfit, 1984). What matters is psychological continuity. Parfit asks us to consider the following example.
Suppose a person enters a booth where, when he presses a button, a scanner records the state of all the cells in his brain and body, destroying both while doing so. This information is then transmitted at the speed of light to another planet, where a replicator produces a perfect organic copy of him. Since the replica’s brain is exactly like his, it will seem to remember living his life up to the moment he pressed the button, its character will be just like his, and it will be in every other way psychologically continuous with him. (Parfit, 1997: 311)
Now, under the psychological criterion, such a replica really is him. But what if the machine did not destroy the original body, or made more than one replica? In such cases there would be two people claiming to be him. As we have seen, this is a major problem for the psychological criterion. But Parfit argues that, even if the person being replicated is not the same person who entered the booth, the replica is psychologically continuous with him, and that is what truly matters.
Parfit’s position has important implications for the discussion of eternity. On this view, a person in the afterlife is not the same person who lived before. But that should not concern us: what we should care about is the prospect that, in the hereafter, there will be at least one person who is psychologically continuous with us. |
With a deeper understanding of algorithms and their basic concepts, our programming assignment experts will help you define the groundwork and design algorithms based on your theory. We can help you create efficient and correct algorithms that achieve the goal of your application or operation.
Algorithms are the foundation of your application, and creating powerful algorithms helps you achieve the goals of your system.
An algorithm is a methodical approach for solving a problem.
A good algorithm should be written in such a way that it can be applied in any programming language: the same algorithm can serve many languages and achieve similar output from each. Every step in an algorithm must be clear and essential, and an algorithm must have a definite starting and stopping point. Students sometimes confuse an algorithm with code, but an algorithm is just the set of instructions given to the computer to obtain an expected result. There are various ways to classify algorithms. Some of the popular methodologies are:
Logical:
The logic component expresses the axioms that may be used in the computation, and the control component determines the way in which deduction is applied to the axioms. This is the basis of the logic programming paradigm. In pure logic programming languages, the control component is fixed and algorithms are specified by supplying only the logic component.
Serial, Parallel or Distributed:
Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time. Those computers are sometimes called serial computers. An algorithm designed for such an environment is called a serial algorithm, as opposed to parallel algorithms or distributed algorithms. Parallel algorithms take advantage of computer architectures where several processors can work on a problem at the same time, whereas distributed algorithms utilize multiple machines connected with a network.
Parallel and distributed algorithms divide the problem into symmetric or asymmetric subproblems and collect the results back together.
Deterministic or Non-deterministic:
Deterministic algorithms solve the problem with exact decision at every step of the algorithm whereas non-deterministic algorithms solve problems via guessing although typical guesses are made more accurate through the use of heuristics.
Exact or Approximate:
While many algorithms reach an exact solution, approximation algorithms seek an approximation that is closer to the true solution. Approximation can be reached by either using a deterministic or a random strategy. Such algorithms have practical value for many hard problems.
Divide and Conquer:
A divide and conquer algorithm repeatedly reduces an instance of a problem to one or more smaller instances of the same problem until the instances are small enough to solve easily. One such example is merge sort: the data is divided into segments, each segment is sorted, and the fully sorted data is obtained in the conquer phase by merging the segments.
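A minimal sketch of that merge sort in Python (written here for illustration, not taken from any particular assignment):

def merge_sort(data):
    # Divide: split the list until segments are trivially sorted
    if len(data) <= 1:
        return data
    mid = len(data) // 2
    left = merge_sort(data[:mid])
    right = merge_sort(data[mid:])
    # Conquer: merge the two sorted segments back together
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))      # [1, 2, 5, 7, 9]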
Search and Enumeration:
Many problems (such as playing chess) can be modeled as problems on graphs. A graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes search algorithms, branch and bound enumeration and backtracking.
Randomized:
Such algorithms make some choices randomly. They can be very useful in finding approximate solutions to problems for which finding an exact solution can be impractical.
Reduction of Complexity:
This technique involves solving a difficult problem by transforming it into a better known problem for which we have asymptotically optimal algorithms. The goal is to find a reducing algorithm whose complexity is not dominated by the resulting reduced algorithm's.
Linear Programming:
When searching for optimal solutions to a linear function bound by linear equality and inequality constraints, the constraints of the problem can be used directly in producing the optimal solutions.
Dynamic Programming:
When a problem shows optimal substructure (meaning the optimal solution to the problem can be constructed from optimal solutions to subproblems) and overlapping subproblems (meaning the same subproblems are used to solve many different problem instances), a quicker approach called dynamic programming avoids recomputing solutions that have already been computed.
Dynamic programming reduces the exponential nature of many problems to polynomial complexity.
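A standard illustration (our example, not from the text above) is computing Fibonacci numbers: the naive recursion recomputes the same subproblems exponentially often, while caching them makes the computation linear. A sketch in Python:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # fib(n-1) and fib(n-2) overlap heavily; caching turns
    # the O(2^n) recursion into O(n) work overall.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))                          # 12586269025, computed instantly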
The Greedy Method:
A greedy algorithm is similar to a dynamic programming algorithm in that it works by examining substructures, in this case not of the problem but of a given solution. Such algorithms start with some solution, which may be given or have been constructed in some way, and improve it by making small modifications.
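A classic textbook illustration of the greedy method (again our example, not tied to the description above) is making change with the fewest coins by always taking the largest coin that still fits:

def make_change(amount, coins=(25, 10, 5, 1)):
    result = []
    for coin in coins:
        # Greedy choice: take as many of the largest remaining coin as possible
        while amount >= coin:
            result.append(coin)
            amount -= coin
    return result

print(make_change(63))                  # [25, 25, 10, 1, 1, 1]

Note that the greedy choice happens to be optimal for this coin system; for arbitrary coin systems, dynamic programming is needed to guarantee the minimum number of coins.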
Properties of Algorithms
Consider, for example, a simple algorithm that reads five numbers and prints their sum:
1. Initialize sum = 0 and count = 0
2. Enter n (I/O)
3. Add n to sum, then increment count by 1
4. If count < 5, go to step 2
5. Print sum (I/O)
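A direct Python rendering of these steps (the variable names are ours; we use total rather than sum to avoid shadowing Python’s built-in sum):

total = 0                               # step 1: initialize sum and count
count = 0
while count < 5:                        # step 4: repeat until five numbers are read
    n = int(input("Enter n: "))         # step 2: input
    total += n                          # step 3: accumulate and count
    count += 1
print(total)                            # step 5: output the sum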
If you are struggling with your programming assignment, want help with your algorithm project with a to-the-point algorithm assignment solution, or have very little time to complete the assignment, our experts will help you with it.
All you need to do is send your inquiry, along with the deadline, to us at email@example.com for the Algorithm Assignment solution.
Alpha Beta Pruning Algorithm
Analysis of Algorithms
Bellman Ford Algorithm
Binary Search Algorithms
Bucket Sort Algorithm
Bubble Sort Algorithm
Burrows Wheeler Algorithm
Comb Sort Algorithm
Counting Sort Algorithm
Depth First Search Algorithm
Dikin Ellipsoid Algorithm
Huffman Code Algorithm
Huffman Tree Algorithm
Insertion Sort Algorithm
Linear Search Algorithm
Merge Sort Algorithm
NP Completeness Algorithm
Selection Sort Algorithm
Sequential Search Algorithm
Sort Merge Joins Algorithm
Topological Sort Algorithm
Tree Sort Algorithm |
Understanding Python Async Programming
Python, being one of the most popular programming languages, has various libraries and technologies associated with it. Two of the most popular among them are asynchronous programming and multithreading. Both of these enable an application to execute its code concurrently and improve its performance significantly. But curious minds wonder, “What is the difference between Asynchronous Programming and Multithreading?”
Before we dive deep into their differences, let’s first define what asynchronous programming and multithreading really mean. Asynchronous programming is a programming paradigm that enables an application to perform multiple tasks out of order, with the results collected later, once each task completes. Multithreading, meanwhile, refers to running two or more threads concurrently to achieve fast performance, with each thread performing a specific task in isolation from the others.
Now that we have understood both approaches, let’s compare them based on a few factors.
Speed and Performance
One of the critical factors that differentiate asynchronous programming and multithreading is the speed and performance of the application. In terms of speed, asynchronous programming appears to be faster than multithreading. The reason is that async programming allows multiple tasks to run at once without blocking the execution, whereas multithreading relies on multi-tasking, which leads to the creation of multiple threads. Since creating and synchronizing these threads can be resource-intensive and require significant time, it may negatively impact the application’s speed and performance compared to the Asynchronous Programming model.
However, this does not mean that multithreading is inferior in every situation. Multithreading also works well when the application performs many I/O-bound operations, such as waiting for network requests to complete. In such cases, multiple requests can be distributed across different threads that run independently and simultaneously, improving performance dramatically.
Error Handling
The next thing to consider is the error-handling capability of both approaches. Asynchronous programming has excellent error-handling capability, as the exception-handling mechanism takes care of any exception that occurs. Since tasks are executed independently, an error in one task does not affect the other tasks, allowing the application to continue running without disruption.
Multithreading, on the other hand, can be more complicated to handle errors than async programming. Since threads operate concurrently, an error in one thread may affect others, causing severe issues like deadlocks, memory leaks, etc.
Resource Utilization
Finally, resource utilization is an essential factor to consider when choosing between asynchronous programming and multithreading. Since multithreading creates a new thread for each task it executes, it demands more memory than its asynchronous counterpart; if that memory is not managed properly, the application’s performance can suffer. Async programming, on the other hand, does not create new threads: it can handle many tasks at a time using the same thread, making it more memory-efficient. This efficient resource utilization helps keep the application’s performance consistent.
In conclusion, Asynchronous programming and multithreading are powerful tools available to developers that can significantly improve an application’s performance if used properly. The optimal application depends on the type of problem that needs to be solved. By selecting the right approach and optimizing accordingly, an application can reach its full potential.
Introduction to Threading in Python
Threading in Python is a way of controlling multiple threads in a program using the threading module. A thread can be thought of as a separate flow of execution within a program. Each thread can be executing instructions independently of each other, allowing for simultaneous tasks to be performed.
In Python, there are two types of threads: the main thread and secondary threads. The main thread is created automatically when a program starts and all the code in the main thread executes in a sequential manner. Secondary threads are created by the programmer using the threading module and can run concurrently with the main thread.
The primary benefit of threading in Python is that it allows for processes to be run simultaneously, thus improving the overall efficiency of the program. Additionally, threading can be used to perform operations that would normally block the main thread, such as I/O operations.
However, threading in Python has its limitations. It is not suitable for CPU-intensive tasks, as the Global Interpreter Lock (GIL) in Python restricts the ability of multiple threads to execute Python code simultaneously.
For I/O-intensive tasks, however, threading can be a more efficient alternative to synchronous programming.
Creating threads in Python involves defining a new function, known as the thread’s entry point. In the simplest case, the entry point takes no arguments and returns nothing; arguments can also be passed via the args parameter of the Thread constructor. Once a new thread is started, it begins executing the entry point function.
The following code demonstrates how to create a new thread in Python:
import threading

def my_function():
    ...  # the thread's work goes here
    return

my_thread = threading.Thread(target=my_function)  # create the thread object
my_thread.start()                                 # begin running my_function
In the above code, Thread() creates a new thread object set to run the my_function() function. The start() method is then called to begin executing the thread.
Implementing threading in Python can be tricky due to the potential for race conditions and deadlocks. It is important to ensure that shared resources, such as variables and objects, are accessed by only one thread at a time to avoid these issues.
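For instance, a shared counter updated by several threads needs a lock; without it, increments can be lost. A minimal sketch (the names here are ours):

import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:                      # only one thread updates the counter at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                            # wait for all threads to finish
print(counter)                          # 400000, with no lost updates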
In summary, threading in Python provides a convenient way to control multiple threads of execution, allowing for simultaneous tasks to be performed. While it has its limitations, it can be an efficient alternative to synchronous programming for I/O-intensive tasks.
Differences Between Python Async and Threading
Asynchronous programming has become a popular programming paradigm in recent years as developers seek to improve the overall performance of their programs. But what exactly is asynchronous programming, and how does it differ from thread-based concurrency? In this article, we’ll explore the differences between Python Async and Threading.
1. Concurrency vs. Parallelism
Concurrency and parallelism are often used interchangeably, but they are actually quite different. Concurrency is the ability of a program to make progress on multiple tasks at the same time, while parallelism is the ability of a program to split a task into smaller subtasks that can be executed simultaneously on multiple CPUs or cores. In Python, threading provides concurrency across OS threads (with true parallelism limited by the GIL in CPython), while async programming provides concurrency via coroutines.
2. CPU-bound vs. I/O-bound tasks
CPU-bound tasks require more processing power, while I/O-bound tasks are dominated by input/output operations, like reading data from a database or processing requests from multiple clients. Threading is often suggested for CPU-bound tasks because threads can run in parallel, though in CPython the GIL limits this, so heavy CPU-bound work usually calls for multiple processes. Async programming is best suited for I/O-bound tasks because it provides concurrency without using threads.
3. How Async Programming Works
Async programming in Python allows developers to write code that can perform multiple tasks simultaneously without traditional threading. Async programming follows a single-threaded, event-driven model that supports non-blocking I/O operations. Instead of using threads to handle multiple tasks, async programming relies on coroutines that can execute efficiently without blocking the main thread.
When a coroutine encounters an I/O operation, it can suspend the current task and allow another coroutine to execute, improving overall performance. Once the I/O operation is complete, the coroutine can resume from where it left off, allowing the program to continue executing without blocking.
One of the benefits of async programming is that it is more efficient at handling I/O-bound tasks since it doesn’t require multiple threads or processes to handle tasks concurrently. Async programming also has less overhead than threading since it relies on coroutines, which are lighter weight and easier to manage than threads.
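A minimal asyncio sketch of this model (asyncio.sleep stands in for a real network or disk wait; the names are ours):

import asyncio

async def fetch(name, delay):
    # await suspends this coroutine so others can run in the meantime
    await asyncio.sleep(delay)
    return f"{name} done after {delay}s"

async def main():
    # Both coroutines wait concurrently on a single thread
    results = await asyncio.gather(fetch("a", 1), fetch("b", 2))
    print(results)                      # finishes in about 2s, not 3s

asyncio.run(main())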
In the battle between Python Async and threading, the choice between the two ultimately depends on the type of tasks you need to perform. If you’re working with CPU-bound tasks, threading might be the best choice since it can provide true parallelism. If you’re working with I/O-bound tasks, however, async programming is better suited since it provides concurrency via coroutines while avoiding the thread overhead of traditional threading techniques.
Pros and Cons of Using Python Async vs Threading
Python is a high-level, interpreted, general-purpose programming language. It is widely used in web development, data analytics, machine learning, and artificial intelligence. Python offers two ways of dealing with concurrent programming: Async and Threading.
Async and Threading are different approaches to achieve concurrency in Python. Async allows the program to continue executing other tasks while waiting for I/O operations to complete, while Threading allows multiple threads to execute concurrently. Both have their own set of strengths and weaknesses. In this article, we will discuss the pros and cons of using Python Async vs Threading.
Pros and Cons of Python Async
- High-performance: Async can provide better performance than threading in I/O-bound applications because of its non-blocking I/O model.
- Easy to learn: Compared to threading, Async is easy to learn and implement due to its higher-level abstractions.
- Memory efficient: Async consumes less memory than threading because it runs everything in a single thread.
- Scalability: Async can handle large numbers of I/O-bound tasks simultaneously and thus can scale better.
- Not suitable for CPU-bound tasks: Async is not an ideal choice for CPU-bound tasks because it runs everything in a single thread.
- Debugging can be difficult: When dealing with complex Async code, debugging can be a challenge.
Pros and Cons of Python Threading
- Suitable for CPU-bound tasks: Threading is commonly used for CPU-bound tasks because of its ability to run multiple threads simultaneously (though in CPython the GIL constrains true parallelism).
- Easy to debug: Threading is easier to debug than Async because it is based on more familiar constructs like threads and locks.
- Can help improve program responsiveness: Threading can help improve the responsiveness of a program by allowing the GUI to remain responsive while long-running tasks are executed in the background.
- Potentially slower than Async: Threading can be slower than Async in I/O-bound applications because of its blocking I/O model.
- Can be difficult to implement: Threading can be challenging to implement due to its lower-level abstractions.
- Memory-intensive: Threading consumes more memory than Async due to its need to create multiple threads.
- Concurrency issues: Threading can introduce concurrency issues like deadlocks and race conditions that can be difficult to debug.
Choosing between Async and Threading requires a thorough analysis of the requirements of your application. Async is suitable for I/O-bound applications, whereas Threading is ideal for CPU-bound tasks. Async is easy to learn and debug, whereas Threading can be more challenging to implement and can be prone to concurrency issues. Both have their own set of strengths and weaknesses, and the choice ultimately depends on the specific use case.
Regardless of which approach you choose, it is essential to keep in mind the scalability and reliability of your application. By keeping these factors in mind, you will be able to choose the right concurrency model for your Python application and ensure optimal performance.
When to Use Async or Threading in Python
Python provides two different approaches to handling multiple tasks simultaneously: async and threading. Both have their specific purposes and advantages, but choosing which one to use can be a daunting task. Therefore, in this article, we’ll take a closer look at when to use async or threading in Python.
1. I/O-Bound Tasks
Async is the preferable approach when dealing with I/O-bound tasks that mainly rely on waiting for external operations such as file I/O, network I/O, or fetching APIs. Async utilizes a single thread to execute multiple functions, where the thread moves on to the next function once the current function issues an I/O request, thus avoiding unwanted waiting time. By doing so, async also enables other useful features such as context switching, cooperative multitasking, and non-blocking I/O operations.
2. CPU-Bound Tasks
Threading should be used for CPU-bound tasks that require significant computation and processing power, such as machine learning algorithms, data classification, or image processing. In such cases, threading enables multiple threads to perform multiple tasks simultaneously, increasing performance and speed. However, it’s important to note that using too many threads can result in performance degradation through thread congestion, context-switching overhead, and decreased processing speed.
3. Concurrency
Concurrency is the ability of multiple tasks to make progress at the same time. It’s typically achieved in Python by using async or threading. Async is preferable when dealing with I/O-bound tasks, as it enables multiple functions to run on a single thread with the help of non-blocking I/O operations. Threading, on the other hand, can be more effective for CPU-bound tasks by utilizing multiple threads to perform multiple tasks simultaneously. Choosing between async and threading therefore depends on the task and its requirements for concurrency.
4. Type of Application
The type of application also plays a critical role in determining whether to use async or threading. If the application relies heavily on I/O-bound tasks such as downloading files, retrieving data from an external API, or fetching data from a database, then async is the preferable approach. However, if the application involves complex computations and algorithms that consume significant CPU resources, then threading could be a better solution.
5. Debugging and Maintenance
Debugging and maintenance are essential aspects of software development, and while both async and threading can be effective in handling tasks, they differ significantly here. Threads can be challenging to debug when issues arise, because they share the same memory space, making it hard to identify where a problem originates. Async, by contrast, employs a single thread, and each task runs within its own context, making it easier to detect, isolate, and resolve issues as they arise. Maintenance matters as well: async code can be challenging to maintain, especially in more complex applications, due to its elaborate design, whereas threading has a simpler design and may be easier to maintain efficiently over extended periods.
In conclusion, choosing between async or threading in Python depends on several factors. When dealing with I/O-bound tasks, async is the preferable approach, while threading is more effective in addressing CPU-bound tasks. The type of application also plays a massive role in determining which approach to use. Debugging and maintenance are also crucial aspects to consider when selecting between async or threading. Therefore, it’s essential to evaluate the program’s demands and infrastructure before selecting between asyncio and multithreading to ensure the optimal performance and smooth maintenance of the application. |
Conflict in Kosovo
When in 1945 the six republics were created, two areas within Serbia had been accorded distinctive constitutional status—the Autonomous Province of Vojvodina and the Autonomous Region of Kosovo-Metohija. (The latter also was made an autonomous province under the constitutional revision of 1963.) The creation of the autonomous provinces was intended to reflect their special circumstances as areas of ethnic complexity rather than any status as quasi-republics that might serve as “homelands” for the Hungarians (Magyars) or Albanians. In the decade after World War II, the communist regime considered its acknowledgment of ethnicity to be just a way-stage en route to the eventual creation of a broader Yugoslav identity. The Kosovar Albanians always presented a particular threat to this ambition. Even before the war’s end, a revolt had broken out in Uroševac in support of the unification of Kosovo with Albania, and it was suppressed only in the summer of 1945. Under the direction of Ranković, many thousands of Kosovar Albanian Muslims were subsequently deported to Turkey, their religious affiliation being used to justify their “repatriation.” Kosovar Albanian protests in 1968 ushered in a federal plan of accommodation. A separate Albanian-language university opened in Priština in 1969, but economic disadvantages, compared with facilities for Serbs in Kosovo as well as the rest of Yugoslavia, led to student protest and brutal suppression in 1981.
Economic growth and vulnerability
Measured in economic growth rates, the reforms of the 1950s and ’60s were a success, and there was unparalleled prosperity. Yugoslavia emerged as a major international tourist destination, and some manufactures, such as metal goods and textiles, became highly profitable on both the domestic and foreign markets. Industrialization and urbanization created a society that was radically different from the economically backward peasant economy of the prewar years.
Yet beneath this growth were certain fundamental weaknesses. Most seriously, the country’s northern republics of Slovenia and Croatia, as well as the Vojvodina, became steadily more prosperous than the other republics. Across a wide range of economic indexes, Serbia was invariably at or close to the Yugoslav average. Kosovo, on the other hand, was almost invariably at the bottom of the scale. An attempt to resolve these disparities was made through a Federal Fund for the Development of the Underdeveloped Areas of Yugoslavia. After enormous sums were redistributed between 1965 and 1988, however, this controversial fund was abandoned, no appreciable impact having been made upon the problem it was set up to address. Serbia’s role as the “hinge” of the redistribution process placed it in a particularly sensitive position. To the developed regions, which resented the diversion of profits from their enterprises, Serbia came to be identified with the potential use of federal power against republican autonomy. Within Kosovo itself, the experience of continuing underdevelopment suggested that the funds were being disbursed more for political reasons than for economic efficiency. As a result, Serbs were placed on the defensive at both levels—a situation that intensified into open struggle with the onset of further economic crisis.
By 1981 the unsupervised pursuit of foreign loans at the federal, republican, and local levels had made Yugoslavia one of the most heavily indebted states of Europe. International funding organized by the United States and the “Friends of Yugoslavia”—an informal collection of lenders assembled by the U.S. ambassador to Belgrade, Lawrence Eagleburger—rescheduled the short-term debt. Yugoslavia’s federal executive council also accepted standby funding from the International Monetary Fund (IMF) but abandoned it in 1986 rather than enforce the demanded terms on domestic credit. Although the system of self-managed enterprises acknowledged the market mechanism to a greater degree than in any Soviet bloc regime, Yugoslavia was still a long way from being a market economy. The decentralization of economic authority had allowed the republics’ political authorities to promote local monopolies for party favorites. The 1976 subdivision of workers’ councils into basic organizations of associated labour proved to be another damaging decentralization that discouraged competition at the enterprise level.
The rise of Slobodan Milošević
Mounting inflation and the failure of federal commissions on the economy (1983) and the political framework (1986) to change the country’s course opened the way for new party leaders at the republic level. Within Serbia, demands by the Kosovar Albanian majority for greater representation or even formal status as a republic faced growing protests from the Serb minority there. Slobodan Milošević, one of the “postliberal” generation of local leaders, skillfully used these protests to rise to power. By 1987 he had brushed aside his former mentor Ivan Stambolić and was championing party reform as an “antibureaucratic revolution.” He used this slogan to replace party leaderships in the Vojvodina, Montenegro, and Kosovo, as well as in Serbia, with his supporters. In 1990 he abolished the provincial autonomy of the Vojvodina and Kosovo.
By taking effective control of four of the eight constituent communist parties, Milošević confronted the republics of Bosnia and Herzegovina, Croatia, Macedonia, and Slovenia with the threat of political as well as economic centralization stemming from Belgrade. His effort to convene a full congress of the League of Communists in Belgrade in January 1990 ended abruptly with the dissolution of the party. The multiparty elections throughout the republics later that year generally resulted in communist defeats. In Serbia, however, Milošević simply changed his party’s name to the Socialist Party of Serbia (Socijalistička partija Srbije; SPS) and used a media monopoly and heavy-handed intimidation to win a large parliamentary majority in belated December elections. Relying on the Serbian domination of the Yugoslav People’s Army (YPA) to hold the federation together, he confronted the secession of Slovenia, Croatia, and Macedonia in 1991 and of Bosnia and Herzegovina in 1992.
Welcome to “Building Strong Foundations: A Comprehensive English Curriculum.” In this in-depth and carefully crafted guide, we will take you on a journey of creating an English curriculum that nurtures language proficiency and engages students in meaningful learning experiences. Designed to provide a comprehensive understanding of each step, we will explore examples and explanations to ensure you develop an effective curriculum that meets the diverse needs of your students. Whether you are a seasoned educator or new to curriculum design, this guide will equip you with the knowledge and strategies to build a solid foundation in English language education. Together, let’s embark on this exciting journey of curriculum development!
Step 1: Identify Learning Objectives
The first step in creating an English curriculum is to establish clear and specific learning objectives. These objectives serve as guiding beacons, defining what you want your students to achieve in terms of language skills, knowledge, and competencies. By setting measurable goals, you provide a clear direction for your curriculum design, ensuring that every step contributes to the overall learning outcomes. For instance, you may focus on enhancing reading comprehension, developing writing proficiency, improving grammar accuracy, expanding vocabulary, or refining oral communication skills. Each objective brings a unique dimension to your curriculum, addressing the multifaceted nature of language learning.
Step 2: Assess Students’ Needs and Abilities
To effectively meet your students’ needs, it is essential to assess their current English language abilities and identify their specific learning requirements. This stage involves conducting comprehensive assessments, such as diagnostic tests, interviews, and observations, to gather data on students’ language proficiency, learning styles, and individual strengths and weaknesses. By understanding their existing knowledge and learning preferences, you gain valuable insights that inform your curriculum design. This student-centered approach ensures that your curriculum is tailored to address the unique challenges and aspirations of each learner, fostering an inclusive and supportive learning environment.
Step 3: Determine Scope and Sequence
With learning objectives and student assessments in hand, you are ready to determine the scope and sequence of your curriculum. This step involves carefully selecting the topics, themes, and language skills to be covered in each unit or module, as well as establishing the order in which they will be taught. The scope and sequence should follow a logical progression, ensuring that each new concept builds upon previously acquired knowledge. By organizing your curriculum in a coherent manner, you create a scaffolding structure that facilitates seamless transitions between different language components, enabling students to develop a holistic understanding of the English language.
Step 4: Select Appropriate Resources and Materials
A well-designed curriculum requires the selection of appropriate resources and materials that align with your learning objectives and engage students’ interests. These resources can include textbooks, authentic texts, online platforms, multimedia materials, and interactive activities. By carefully curating a diverse range of resources, you cater to different learning styles and provide students with varied opportunities to explore, practice, and apply their language skills. Engaging and relevant materials not only enhance students’ motivation but also expose them to real-world language use, fostering authentic learning experiences.
Step 5: Plan Engaging Lesson Activities
To bring your curriculum to life, it is crucial to develop detailed lesson plans that incorporate a wide range of activities. These activities should align with the curriculum objectives, provide ample opportunities for students to engage in reading, writing, listening, and speaking, and foster critical thinking and problem-solving skills. By utilizing instructional techniques such as group work, pair work, discussions, debates, presentations, and role-plays, you create a dynamic and interactive learning environment that encourages active participation and collaboration. Engaging lesson activities not only deepen students’ understanding but also make learning enjoyable and memorable.
Step 6: Assess and Evaluate Student Progress
Assessment plays a vital role in tracking student progress and measuring the effectiveness of your curriculum. Regularly assess and evaluate students’ understanding and mastery of the curriculum through a combination of formative and summative assessments. Formative assessments, such as quizzes, classroom observations, and projects, provide ongoing feedback that helps guide instruction and address individual needs. Summative assessments, such as exams or portfolios, measure overall achievement and serve as milestones in students’ language learning journey. By utilizing a range of assessment methods, you gain a comprehensive view of students’ strengths, areas for improvement, and growth over time, allowing you to make informed instructional decisions and tailor interventions as necessary.
Step 7: Provide Ongoing Feedback and Support
Creating an effective English curriculum extends beyond planning and delivering lessons. It involves providing ongoing feedback and support to students throughout their learning journey. Regularly offer constructive feedback that highlights students’ achievements and identifies areas for improvement. Encourage self-assessment and reflection, empowering students to take ownership of their learning and set personal goals. Provide individualized support and additional resources to students who require further assistance, ensuring that no learner is left behind. By creating a nurturing and supportive classroom environment, you foster a growth mindset and cultivate a love for lifelong learning.
Step 8: Continuous Curriculum Review and Improvement
A successful English curriculum is a dynamic entity that evolves with the changing needs and advancements in language education. Regularly review and evaluate the effectiveness of your curriculum, seeking feedback from students, colleagues, and your own observations. Stay updated with the latest research, teaching methodologies, and technological advancements, integrating innovative practices into your curriculum design. Embrace a growth mindset, continuously refining and improving your curriculum to meet the evolving needs of your students. By embracing a culture of continuous improvement, you ensure that your curriculum remains relevant, engaging, and effective in nurturing strong foundations in English language education.
Learning Objective: Enhance Writing Proficiency
Assessment: Diagnostic Writing Task
- Students will complete a diagnostic writing task to assess their writing skills, grammar usage, vocabulary, and organization. This task will provide valuable insights into their current abilities and learning needs.
Unit 1: Introduction to Narrative Writing
- Theme: Personal Stories
- Skills Covered: Descriptive language, plot development, character development
- Scope: Students will learn to write personal narratives by incorporating descriptive language, developing engaging plots, and creating well-rounded characters.
Resources and Materials:
- Book: “The House on Mango Street” by Sandra Cisneros
- Short stories and personal narratives from various authors
Lesson Activity: Analyzing Descriptive Language
- Read a passage from “The House on Mango Street” that exemplifies descriptive language.
- Discuss the impact of vivid descriptions on the reader’s experience.
- Engage students in a group activity where they analyze descriptive language in short stories and personal narratives, identifying techniques used by the authors.
Assessment: Descriptive Writing Assignment
- Students write a descriptive paragraph or short story using the techniques discussed in class. Provide feedback on their use of descriptive language, structure, and creativity.
Unit 2: Argumentative Writing
- Theme: Social Issues
- Skills Covered: Claim development, evidence analysis, counterarguments
- Scope: Students will develop skills in constructing persuasive arguments by formulating clear claims, supporting them with evidence, and addressing counterarguments.
Resources and Materials:
- Articles and opinion pieces from reputable sources on social issues (easily available online)
Lesson Activity: Analyzing Arguments
- Read and analyze an article or opinion piece on a social issue, identifying the author’s claim, supporting evidence, and counterarguments.
- Engage students in a discussion, encouraging them to critically evaluate the effectiveness of the argument.
Assessment: Argumentative Essay
- Students write an argumentative essay on a social issue of their choice, presenting a clear claim, supporting it with evidence, and addressing counterarguments. Provide feedback on their argument development, logical reasoning, and use of evidence.
Continuous Curriculum Review and Improvement:
- Regularly assess student progress through formative assessments, such as writing samples and classroom discussions.
- Collect feedback from students on their learning experiences and adjust instructional strategies accordingly.
- Stay updated with educational research and resources to enhance and refine the curriculum over time.
The example English curriculum presented here aims to cultivate strong foundations in writing proficiency by incorporating narrative and argumentative writing. Through a step-by-step approach, teachers can guide students in developing their writing skills while fostering creativity, critical thinking, and effective communication.
By beginning with a diagnostic writing task, teachers can assess students’ current abilities and tailor the curriculum to their specific needs. The curriculum progresses through two units, starting with narrative writing centered around the theme of personal stories. Utilizing resources such as the book “The House on Mango Street” and various short stories and personal narratives, students learn to incorporate descriptive language, develop engaging plots, and create well-rounded characters.
The second unit focuses on argumentative writing, exploring social issues. Through the analysis of articles and opinion pieces, students develop the skills to formulate clear claims, support them with evidence, and address counterarguments. This unit encourages critical thinking, persuasive writing, and the ability to articulate informed opinions.
Throughout the curriculum, engaging activities such as analyzing descriptive language and arguments, as well as formative assessments and personalized feedback, promote active learning and continuous improvement. Teachers continuously review and refine the curriculum based on student performance, feedback, and ongoing educational research, ensuring its effectiveness and relevance.
By implementing this example curriculum, teachers provide students with the necessary tools to excel in writing while fostering their creativity, critical thinking, and communication skills. This curriculum not only enhances writing proficiency but also nurtures a love for language and empowers students to express their ideas confidently and effectively. As students progress through the curriculum, they build a solid foundation in writing, enabling them to thrive in academic, professional, and personal contexts. |
More on Triangles
Label Angles and Triangles
acute angle - an angle that is less than 90°.
equilateral triangle - a triangle whose sides are all the same length.
isosceles triangle - a triangle that has two sides the same length (and two angles the same).
obtuse angle - an angle that is greater than 90°.
right angle - an angle that measures exactly 90°.
right triangle - a triangle that has one interior angle that is exactly 90°. Note: some right triangles are also scalene triangles.
scalene triangle - a triangle whose sides are all different lengths. Note: some scalene triangles are also right triangles.
straight angle - an angle that measures 180°.
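The glossary definitions map directly onto a short classification routine. Here is a minimal Python sketch (the function names are illustrative, and the thresholds follow the entries above):

```python
def classify_angle(degrees):
    """Classify an angle measure using the glossary definitions above."""
    if 0 < degrees < 90:
        return "acute angle"
    if degrees == 90:
        return "right angle"
    if 90 < degrees < 180:
        return "obtuse angle"
    if degrees == 180:
        return "straight angle"
    raise ValueError("expected a measure between 0 and 180 degrees")

def classify_triangle_by_sides(a, b, c):
    """Classify a triangle by its side lengths."""
    if a == b == c:
        return "equilateral triangle"
    if a == b or b == c or a == c:
        return "isosceles triangle"
    return "scalene triangle"   # may also be a right triangle (e.g. 3-4-5)

print(classify_angle(45))                    # acute angle
print(classify_triangle_by_sides(3, 4, 5))   # scalene triangle
```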
The next variation of voltage contrast is biased voltage contrast. Biased voltage contrast is the imaging of voltages on a device with a bias applied to one or more connections. For instance, a CMOS integrated circuit can be observed using biased voltage contrast by connecting VDD to power and VSS to ground. The mechanism is the same as that of passive voltage contrast. The secondary electrons are sensitive to the local electrical potentials on the conductors. A conductor at ground emits more secondary electrons, resulting in a light contrast. A conductor at a positive voltage emits fewer electrons, resulting in a dark contrast. Biased voltage contrast can be used to isolate failure sites to a logic block or even down to a metal trace.
The graph in Figure 8 shows the energies of emitted secondary electrons. The number of secondary electrons peaks at around 2 electron volts, and a long tail extends out beyond 10 electron volts. If an interconnect is biased at zero volts, then secondary electrons of any energy can escape the conductor and be collected by the secondary electron detector. If a line is biased at 5 volts, then only secondary electrons with energies greater than 5 electron volts can escape, so fewer electrons escape than from a line at 0 volts. A large number of secondary electrons escaping creates a bright image, while a small number escaping creates a dark image. Also, notice from the shape of the curve that it will be difficult to detect the difference between 3 volts and 3.1 volts: the slight change in the number of escaping secondary electrons creates little discernible difference in contrast. This fact will be important later when we discuss electron beam probing.
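To see why nearby voltages are hard to distinguish, one can integrate a model of the secondary electron energy spectrum. The sketch below assumes a Chung-Everhart-like shape, N(E) ∝ E/(E + φ)⁴, with an illustrative work function φ of 4.5 eV; it is a toy model of the curve in Figure 8, not a calibrated detector response:

```python
import numpy as np

def escape_fraction(bias_volts, phi=4.5, e_max=50.0, n=200_000):
    """Fraction of emitted secondary electrons energetic enough to escape
    a conductor biased at +bias_volts, under the assumed spectrum."""
    dE = e_max / n
    E = (np.arange(n) + 0.5) * dE        # midpoint energies, eV
    spectrum = E / (E + phi) ** 4        # Chung-Everhart-like shape
    return spectrum[E >= bias_volts].sum() / spectrum.sum()

for v in (0.0, 3.0, 3.1, 5.0):
    print(f"bias {v:4.1f} V -> escape fraction {escape_fraction(v):.3f}")
```

With these assumptions, the escape fraction falls by roughly half between 0 and 5 volts, while the change between 3.0 and 3.1 volts is only about one percentage point--consistent with the contrast argument above.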
As the primary electrons interact with the device, secondary electrons are given off. The number of secondary electrons that make it to the detector is a function of their energies--as discussed in the previous paragraph--a function of the position of the detector, and a function of the location where the primary electron beam strikes the sample (Figure 9a). One should also be aware that this holds only when the metal conductors are exposed. If the conductors are covered by a dielectric layer, the secondary emission will be quite different. Instead of being determined by an electric field based on the voltages at the conductors, the secondary emission is determined by the potential on the surface of the dielectric due to charging (Figure 9b). The initial opposite-polarity image charge from the voltages on the conductors is replaced by a slight positive charge from the electron beam-surface interaction. As a result, the voltage contrast image fades while an area is imaged.
Figure 10 shows an example image demonstrating biased voltage contrast. In this image one can see a portion of a circuit and two bond wires. The top layer of dielectric has been removed so that the voltage contrast is permanently visible. The bond wires are bright, indicating a ground potential. A portion of the interconnect is bright and a portion is dark. The bright interconnect is at ground, while the dark interconnect is at 5 volts.
Figure 11 is the same circuit and the same field of view with a different set of conditions applied to the pins. Note that some lines have toggled bright, while others have toggled dark.
Figure 12 is a third set of input conditions. Notice that still other lines have changed from bright to dark--or zero to 5 volts--and others have changed from dark to bright, or 5 volts to ground.
One technique frequently used with voltage contrast is to switch one or more inputs and then look for discontinuities in the interconnect. To perform this method, hook a function generator up to a single input pin--as shown in Figure 13--or hook a pattern generator to a group of inputs and drive a pattern designed to look for defects. The interaction of the function generator with the scan rate creates the bright and dark stripes seen in the image. Some analysts refer to the phenomenon as "barber poling." The size of the stripes can be altered by changing the frequency of the input pattern. In the image, the arrow indicates the location of the open circuit: the place where the "barber poling" stops, but the interconnect continues.
Another form of voltage contrast is capacitive coupling voltage contrast. The technique is sometimes abbreviated CCVC or called stroboscopic voltage contrast. CCVC permits imaging and measurement of dynamic voltages on structures beneath the overlying dielectric layers. The technique uses the top glass layer as a discharging capacitor. Because of the tendency for the primary beam to charge the top dielectric and remove the image charge, one must use fast scan rates and low primary beam currents. One must also create an electrical condition such that the interconnect of interest changes periodically. These requirements create tradeoffs among the signal to noise ratio, timing resolution, and length of the vector loop. We discuss those tradeoffs further in this presentation. One must pay attention to local electric fields; cross talk between adjacent interconnect can distort signals. One must also pay attention to charging and contamination. These problems can degrade or obscure the voltage contrast signals. The best policy is to use a primary electron beam of 1kV or less to avoid charging and damage.
The image in Figure 14 shows an example of capacitive coupling voltage contrast. The arrow indicates the open in the interconnect line. Capacitive coupling voltage contrast images have less signal to noise than static voltage contrast images for three reasons. First, the primary beam current must be kept low to sustain the voltage contrast effect, and a low primary beam current yields a poor image at TV frame rates. Second, the primary beam voltage is quite low, around 1kV, and the resolution of an SEM at 1kV is not as good as it is at 30kV. Third, each CCVC image is a single frame: since the image is constantly changing, one must capture individual frames to see the image. One can increase the signal to noise ratio somewhat by averaging multiple frames during the same clock cycle in the vector set.
The technique for obtaining clearer capacitive-coupled voltage contrast images is to use a device called a beam blanker. Beam blankers are used on devices that operate at higher frequencies (greater than 1MHz). When on, the device bends the electron beam away from a sample. The beam blanker is then turned off for a particular vector, allowing the electron beam to hit the sample. If one creates a loop of vectors and ties the "off" cycle of the beam blanker to the vector of interest, an image of the circuit in a particular vector state can be created. One can then change the off state of the beam blanker to correspond to a different vector to view the logic state at that particular vector.
If one uses a beam blanker, a pulse from the test system is used to blank the beam except for the vector of interest (Figure 15). If the beam is being blanked, the voltage in the beam blanking hardware bends the beam to one side, causing it not to go through the final aperture. If the beam blanking hardware is off, the beam travels down through the column through the final aperture, hitting the sample. From there, the secondary electrons will be collected in the photomultiplier tube and amplified to create an image similar to the one seen on the right.
The fourth variant of voltage contrast is the ability to obtain waveforms from the voltage contrast data via an electron beam probing system. Waveform measurement was developed at Cambridge in the early 1980s and incorporated into a system called the Cambridge DVCS-1500. In the mid-1980s, Neil Richardson and Stefano Concina at Schlumberger added computer-aided design navigation features and computer control to create the first modern electron beam probing system, the IDS-5000. The machines were widely used in the 1990s on one, two, and three-level metal ICs before chemical mechanical planarization and flip chip packaging made frontside analysis difficult. Part of the reason for the tools’ wide acceptance in the industry was the user-friendly computer interface. Another was that they could be driven using the computer-aided design database from the chip. The layout could be locked to the SEM image, which in turn could be locked to the netlist and the schematic. This feature made tracing signals much easier; before the IDS-5000, the analyst had to trace signals on the chip by hand. The tools were used not only in failure analysis laboratories but also in design debug activities. The ability of the instrument to act as an oscilloscope inside the chip proved invaluable to designers attempting to debug complex chip designs.
Figure 16 is an example of the Schlumberger IDS-10000 user interface. The IDS-10k runs on Unix to take advantage of connections to computer-aided design software. The interface here shows four windows or tools. The SEM tool shows a secondary electron image of the surface of the device. At higher magnifications a spot is present that can be moved around the image and located on an interconnect segment of interest, much like one would touch a scope probe on a board trace of interest. The scope tool is an oscilloscope-like window that shows voltages as a function of time. The waveform corresponds to the location of the spot in the SEM tool window. The schematic tool shows the schematic of the device under test. The layout tool shows a CAD rendition of the area displayed in the SEM Tool.
Figure 17 is an example of the types of waveforms that can be obtained from an electron beam probing system. These represent waveforms under ideal conditions. The instrument is capable of approximately 50 millivolts resolution and 20 picoseconds timing accuracy. The waveforms shown here are a clock signal at 2.5GHz and three signals at 1GHz, 400MHz, and 200MHz respectively. The waveforms have a peak-to-peak voltage of 3.3 volts, the operating voltage of the 0.35µm device on which the signals were obtained.
Most signals acquired on an electron beam probe system are not that clean. A number of factors can make the signals worse, including the depth of the buried conductor, probing, the proximity to adjacent conductors, and various settings on the electron beam prober itself. The waveforms shown in Figure 18 are more indicative of the types of waveforms one will see in a practical application. The waveform at the top has some noise in it. This is typical, even for a waveform that has been averaged a number of times. The waveform immediately below is the same signal, but from a line buried more deeply. In this case, the signal came from metal 5 in a six-layer metal device. It can be almost impossible to obtain a waveform from more than two levels below the surface. The green waveform shows the effects of cross talk from an adjacent line. Notice the depressed peaks indicated by the arrows. The signal at the secondary detector is being altered by an adjacent line and its electric fields. Finally, some signals can simply be too degraded to determine the behavior. In the red signal at the bottom, this is caused by a combination of system noise, depth, a long vector loop, and cross talk.
CAD navigation is an important aspect of electron beam probing and many other fault localization techniques. The technique is becoming increasingly important for complex integrated circuits for several reasons. One is that most integrated circuits are now planarized. It can be quite difficult to locate features in an SEM on a planarized IC. Another is that the feature sizes on integrated circuits are quite small. Optical microscopy cannot resolve features below about 0.25µm. It can also be quite difficult to locate features from the backside due to wavelength limitations and substrate doping. As a result, fault localization without CAD navigation is much like driving around in an unfamiliar city without a map. Generating databases for CAD navigation is not trivial; it requires some planning upfront during the design cycle. This means that the failure analysis and design departments must coordinate the transfer of the appropriate intermediate design files. The design tools must also be compatible with the CAD navigation tools. Most CAD navigation tools use Dracula or some type of layout versus schematic routine to lock the layout to the netlist and schematic. The design department must therefore save the netlist, layout, and schematics in a form that the CAD navigation tools can use.
Figure 19 is a pictorial diagram of the setup process for CAD navigation. The design department will need to supply three or four main files for the process: a technology file that defines the layers and connections; an optional layout versus schematic text file that tells the computer what features constitute electrical primitives such as transistors, resistors, and diodes; the layout database that contains the polygon information for each layer; and the netlist, which contains the electrical connectivity of the primitive elements. Ideally, the design department should provide netlists and schematics before the hierarchy has been flattened to a single level to make navigation easier on complex chips. Once the files are available, they are run through a series of batch processes to link features within the layout, netlist, and schematic. The layout is processed with a colormap to make layers easily visible on the screen. Finally, the netlist is processed with an index file to complete the link with the layout.
This technical tidbit covers the taxonomy of resistors. Resistors are ubiquitous, even in today’s advanced electronics. They play an integral role in signal integrity, protection, and signal formation, and can be used not only at the board level, but also as an element within a packaged integrated circuit.
Component suppliers manufacture resistors in several different formats. They include composition, metal or carbon film, thin film, thick film, and wire wound.
Here we show resistors by their taxonomy. Resistors can be divided into two major groupings: linear and non-linear. The non-linear group contains devices such as thermistors, photo-resistors, varistors, and surface mount devices. Linear resistors can be further divided into fixed and variable groupings. Variable resistors include potentiometers, rheostats, and trimmers. The fixed resistor category is the biggest, and includes carbon composition resistors, wire-wound resistors, thick film and thin film resistors. There are other sub-groups beyond what we show on this slide, but these are the major ones.
We can also divide resistors by application type. Some major groupings would include surface mount resistors, leaded resistors, high power resistors, high voltage resistors, current sense and shunt resistors, precision resistors, custom resistors, wirewound resistors, and pulse protection resistors.
Q: Is there a relationship between the EFO wand length and the lifetime of the wand?
A: Normally, the wand tip is made from a very high temperature material like iridium oxide. The wand length may slowly shorten over time, which affects its lifetime; another factor is the formation of bumps on the tip. These bumps create an irregular electric field at the tip end, causing fluctuations in the spark gap voltage. One might use the SEM to examine the end of the wand to determine how the bumps develop over time. This might allow the user to better understand the overall lifetime of the wand tip.
Please visit http://www.semitracks.com/courses/analysis/failure-and-yield-analysis.php to learn more about this exciting course!
Failure and Yield Analysis on January 30-February 2, 2017 (Mon.-Thurs.) in Portland, OR, USA
Advanced CMOS/FinFET Fabrication on February 6, 2017 (Mon.) in Portland, OR, USA
Semiconductor Statistics on February 7-8, 2017 (Tues.-Wed.) in Portland, OR, USA
Semiconductor Reliability on March 13-15, 2017 (Mon.-Wed.) in Singapore and Malaysia
Defect-Based Testing on May 3-4, 2017 (Wed.-Thurs.) in Munich, Germany
Failure and Yield Analysis on May 8-11, 2017 (Mon.-Thurs.) in Munich, Germany
Semiconductor Reliability and Product Qualification on May 15-18, 2017 (Mon.-Thurs.) in Munich, Germany
If you have a suggestion or a comment regarding our courses, online training, discussion forums, reference materials, or if you wish to suggest a new course or location, please feel free to call us at 1-505-858-0454, or e-mail us at email@example.com.
To submit questions to the Q&A section, inquire about an article, or suggest a topic you would like to see covered in the next newsletter, please contact Jeremy Henderson by e-mail (firstname.lastname@example.org).
We are always looking for ways to enhance our courses and educational materials.
Decimal representation worksheets are very useful to kids who would like to practice problems on decimals and rational numbers.
Before we look at the worksheet, let us come to know some basic stuff about "Decimal representation"
If we have a rational number written as a fraction p/q, we can get the decimal representation by long division.
When we divide p by q using long division method either the remainder becomes zero or the remainder never becomes zero and we get a repeating string of remainders.
Let us express 7/16 in decimal form. Then 7/16 = 0.4375.
In this example, we observe that the remainder becomes zero after a few steps.
Also the decimal expansion of 7/16 terminates.
Similarly, using long division method we can express the following rational numbers in decimal form as
1/2 = 0.5
7/5 = 1.4
-8/25 = -0.32
In the above examples, the decimal expansion terminates or ends after a finite number of steps.
Does every rational number have a terminating decimal expansion?
Before answering the question, let us express 5/11 and 7/6 in decimal form.
When we carry out the long division, we observe that the remainders never become zero; moreover, the remainders repeat after some steps, so we get a repeating (recurring) block of digits in the quotient.
Thus, the decimal expansion of a rational number need not terminate.
To simplify the notation, we place a bar over the first block of the repeating (recurring) part and omit the remaining blocks.
So, we can write the expansions of 5/11 and 7/6 as follows: 5/11 = 0.4545... (with a bar over the repeating block 45) and 7/6 = 1.1666... (with a bar over the repeating digit 6).
The following table shows the decimal representation of the reciprocals of the first ten natural numbers. We know that the reciprocal of a number n is 1/n. Obviously, the reciprocals of natural numbers are rational numbers.
|n||1/n in decimal form|
|1||1|
|2||0.5|
|3||0.3333... (3 repeats)|
|4||0.25|
|5||0.2|
|6||0.1666... (6 repeats)|
|7||0.142857142857... (142857 repeats)|
|8||0.125|
|9||0.1111... (1 repeats)|
|10||0.1|
Thus we see that,
A rational number can be expressed by either a terminating or a non-terminating and recurring (repeating) decimal expansion.
The converse of this statement is also true.
That is, if the decimal expansion of a number is terminating or non-terminating and recurring (repeating), then the number is a rational number.
Express the following rational numbers as decimal numbers.
1) 3/4 = 0.75
2) 5/8 = 0.625
3) 9/16 = 0.5625
4) 7/25 = 0.28
5) 47/99 = 0.474747.........
6) 1/999 = 0.001001001........
7) 26/45 = 0.577777........
8) 27/110 = 0.2454545.........
9) 2/3 = 0.6666........
10) 14/9 = 1.5555..........
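The long-division procedure described above can also be automated. Here is a minimal Python sketch for nonnegative p and positive q: it tracks remainders digit by digit, and when a remainder repeats, the digits between the two occurrences form the repeating block (shown in parentheses, standing in for the bar notation):

```python
def decimal_expansion(p, q, max_digits=30):
    """Long division of p/q; a repeated remainder marks the start
    of the repeating (recurring) block."""
    whole, rem = divmod(p, q)
    digits, seen = [], {}
    while rem and rem not in seen and len(digits) < max_digits:
        seen[rem] = len(digits)          # remember where this remainder occurred
        d, rem = divmod(rem * 10, q)     # one long-division step
        digits.append(str(d))
    if rem == 0:
        return f"{whole}." + ("".join(digits) or "0")      # terminating
    if rem not in seen:                  # cut off before a repeat appeared
        return f"{whole}." + "".join(digits) + "..."
    start = seen[rem]
    return f"{whole}." + "".join(digits[:start]) + "(" + "".join(digits[start:]) + ")"

print(decimal_expansion(7, 16))   # 0.4375  (terminating)
print(decimal_expansion(5, 11))   # 0.(45)  (45 repeats)
print(decimal_expansion(7, 6))    # 1.1(6)  (6 repeats)
```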
After having gone through the stuff given above, we hope that the students would have understood "Decimal representation worksheets".
Whether the universe’s mass is constant is a question that has perplexed many great minds throughout history. Indeed, the notion of a static, unchanging universe was a prevailing scientific belief for centuries until later discoveries and theories challenged it. In this article, we shall provide a comprehensive response to this inquiry by delving into the historical background of the subject, examining the idea of a static universe, exploring the implications of General Relativity and the Big Bang Theory, and ultimately addressing the question at hand.
Let’s get the facts right
Before we address the title problem, we need to clarify some terminology: it is a common misunderstanding to conflate mass with weight. Mass is a fundamental property of matter and refers to the amount of material in an object. In contrast, weight measures the force exerted on an object due to gravity. The weight of an object changes depending on its location in the universe, but its mass remains constant.
For example, consider a 5-kilogram mass on Earth. The force of gravity acting on this mass gives it a weight of approximately 49 Newtons. However, if the same mass were taken to the moon, its weight would be reduced to only 1/6th of that on Earth due to the weaker gravitational pull. Nevertheless, its mass would remain constant at 5 kilograms.
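As a quick check of that arithmetic, here is a short sketch using the standard approximations g ≈ 9.81 m/s² on Earth and about one sixth of that on the Moon:

```python
mass_kg = 5.0
g_earth = 9.81          # m/s^2, standard approximation
g_moon = g_earth / 6    # roughly one sixth of Earth's gravity

print(f"Weight on Earth: {mass_kg * g_earth:.1f} N")  # ~49.1 N
print(f"Weight on Moon:  {mass_kg * g_moon:.1f} N")   # ~8.2 N
print(f"Mass everywhere: {mass_kg} kg")
```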
It is, therefore, essential to differentiate between mass and weight because they have different physical meanings and implications. Mass is conserved in ordinary physical and chemical processes, and it is an intrinsic property of an object that determines its inertia and gravitational attraction. In contrast, weight is not conserved, and it depends on the gravitational field in which the object is located.
The notion of a static universe has its roots in the early Greek philosophers who believed in a perfect, unchanging cosmos. This idea continued to dominate scientific thought for centuries until the development of modern astronomy in the 20th century. The discovery of galaxies and their motion challenged the notion of a static universe, as they appeared to be moving away from each other. This finding led to the formulation of the Hubble Law, which showed that galaxies were receding from each other at a rate proportional to their distance (recession velocity v = H₀d).
This discovery was the first indication that the universe was expanding, leading to the formulation of the Big Bang Theory. The theory suggests that the universe emerged from a singularity, a point of essentially infinite density and temperature, and expanded rapidly in a Big Bang. The universe’s expansion continues today, as evidenced by the redshift of light from distant galaxies.
Big Bang, again…
The Big Bang Theory also has implications for the universe’s mass. According to this theory, the universe began as a singularity containing all of its mass-energy. As the universe expanded, its matter diluted, so the mass per unit volume decreased over time. The total mass-energy budget, however, need not stay fixed: in the current standard picture, for example, the density of dark energy remains roughly constant while the volume of space grows, so the total energy content of the expanding universe changes. In this sense, the universe’s mass-energy is not constant but has grown as it expands.
Another theory that has implications for the question of the mass of the universe is General Relativity. This theory describes the relationship between gravity, space, and time. According to General Relativity, gravity results from the curvature of spacetime by mass and energy. The theory predicts that the distribution of mass and energy in the universe will determine the curvature of spacetime, affecting objects’ motion.
The distribution of mass and energy in the universe is not uniform, and this non-uniformity affects the curvature of spacetime. Therefore, the distribution of mass and energy in the universe affects the motion of objects within it. This implies that the universe’s mass is not constant, as the universe’s distribution of mass and energy is not constant.
Furthermore, the presence of dark matter and dark energy in the universe also suggests that the universe’s mass is not constant. These invisible forms of matter and energy are thought to make up approximately 95% of the total mass energy of the universe. The discovery of dark matter and energy has important implications for the structure and evolution of the universe and our understanding of the fundamental laws of physics.
Current estimates place the universe’s mass between 10^53 kg and 10^60 kg. However, it is paramount to note that these estimates are based on various assumptions and extrapolations and, thus, may be subject to revision as new data and theories emerge.
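For intuition on where numbers in this range come from, one can multiply the critical density of a flat universe, ρ_c = 3H²/(8πG), by the volume of the observable universe. The sketch below uses approximate values for H₀ and the comoving radius; it is a back-of-the-envelope estimate, not a measurement:

```python
import math

G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
H0 = 70 * 1000 / 3.086e22      # Hubble constant, 70 km/s/Mpc converted to 1/s
rho_crit = 3 * H0**2 / (8 * math.pi * G)   # critical density, ~9e-27 kg/m^3

r = 4.4e26                     # approx. comoving radius of observable universe, m
volume = (4 / 3) * math.pi * r**3

print(f"critical density: {rho_crit:.2e} kg/m^3")
print(f"mass-energy within the observable universe: {rho_crit * volume:.1e} kg")
# ~3e54 kg, inside the 10^53 - 10^60 kg range quoted above
```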
One of the main challenges in estimating the universe’s mass is that much of its matter is dark matter, which cannot be directly observed or measured using current techniques. Dark matter is a hypothesized form of matter that does not interact with light or other forms of electromagnetic radiation and is believed to make up about 27% of the total mass-energy content of the universe.
Another challenge in estimating the universe’s mass is that it is difficult to observe and measure very distant and faint objects, such as galaxies and galaxy clusters. Astronomers use various techniques to estimate the mass of these objects, such as measuring the velocities of stars and galaxies within them and observing the gravitational lensing effect that occurs when their gravity bends the light of more distant objects.
Despite these challenges, astronomers have made significant progress in estimating the universe’s mass over the past century. Early estimates based on observations of individual galaxies suggested that the universe was composed mainly of visible matter and had a relatively small mass. However, the discovery of dark matter and the development of more sophisticated observational techniques have led to higher estimates of the total mass.
To conclude, whether the universe’s mass is constant has been a subject of scientific inquiry for centuries. The discovery of the expansion of the universe and the development of the Big Bang Theory challenged the prevailing belief in a static universe. General Relativity also suggests that the universe’s mass is not constant, as the distribution of mass and energy affects the curvature of spacetime. The presence of dark matter and dark energy further supports this notion. Therefore, it can be concluded that the universe’s mass is not constant; rather, its mass-energy content has grown as the universe expands and is shaped by the distribution of mass and energy within it.
★ ★ ★ ★ ★
This is an original article published exclusively by Space Expert. You may cite it as:
"Constant mass or changing universe?" in Space Expert, 2023 |
let’s see what’s out there
NASA’s James Webb Space Telescope is the largest and most complex space telescope ever built, and it is expected to send images back to Earth this week.
The revolutionary technology of the James Webb Space Telescope will study all phases of cosmic history, from the interior of our solar system to the most distant observable galaxies in the early universe.
Webb’s Infrared Telescope will explore a wide range of scientific questions to help us understand the origins of the universe.
- First light and reionization
- first galaxies in the universe
- How galaxies evolve
- Birth of stars and planets.
- near infrared camera
- near infrared spectrograph
- mid-infrared instrument
- Near-infrared imager and slitless spectrograph with fine guidance sensor
Webb is an international collaboration between NASA and its partners, the European Space Agency and the Canadian Space Agency.
Thousands of engineers and hundreds of scientists worked to make Webb a reality, along with more than 300 universities, organizations, and companies from 29 US states and 14 countries.
Development began in 1996 for a launch initially planned for 2007 with a budget of $500 million. There were many delays and cost overruns, including a major redesign in 2005, a torn sunshield during a practice deployment, recommendations from an independent review board, a threat from the US Congress to cancel the project, the COVID-19 pandemic, and problems with the telescope itself.
Construction was completed in late 2016, followed by years of extensive testing prior to launch. The total cost of the project is expected to be around $9.7 billion.
Some Webb developments have had indirect benefits. One example helps surgeons performing LASIK eye surgery: Engineers developed a technique to accurately and quickly measure mirrors to guide their grinding and polishing.
Since then, this technology has been adapted to create high-definition maps of patients’ eyes to improve surgical precision.
The observatory’s temperature ranges from about -390 degrees Fahrenheit on the cold side to 260 degrees on the Sun-facing side. It will operate at about -370 degrees.
Webb will look back in time to when the universe was young, more than 13.5 billion years ago--a few hundred million years after the big bang--to search for the first galaxies in the universe.
Webb is so sensitive that it could theoretically detect the heat signature of a bumblebee at the distance of the Moon.
Why infrared?
Webb will study infrared light from celestial objects with much greater clarity and sensitivity than ever before.
Unlike the tight, short wavelengths of visible light, the longer wavelengths of infrared light slip through dust more easily.
Thus, the star- and planet-forming universe hidden behind the dust clouds appears clearly to the view of Webb’s infrared instruments.
Historyoftelescope.com Timeline: Significant Events in Telescope History
1608 — German-Dutch eyeglass maker Hans Lippershey applies for a patent on what is now known as a telescope. He managed to beat two other Dutch scientists (Jacob Metius and Zacharias Janssen) who also tried to register their own inventions.
1611 — The name “telescope” is coined by the Greek mathematician Giovanni Demisiani during his visit to the Italian science academy “Accademia dei Lincei,” which housed one of Galileo Galilei’s telescopes. The word combines “tele” (far) and “skopein” (to look or see).
1803 — The oldest observatory in the Americas is founded in Bogotá, Colombia.
1970 — Uhuru, the first satellite dedicated to X-ray astronomy, is launched. Since 1970, NASA and ESA have launched more than 90 space telescopes into orbit--an average of about two per year. Some last longer than others: 61 are no longer active, 26 are still active.
1975 — BTA-6 is the first major telescope to use an altazimuth mount, which is mechanically simpler but requires computer control for precise pointing.
1990 — The Hubble Space Telescope is launched into Earth orbit. It quickly became one of the most famous and important telescopes ever built.
2003 — The Spitzer Space Telescope, formerly the Space Infrared Telescope Facility, an infrared space observatory, is launched. It is the fourth and last in NASA’s Great Observatories program.
2008 — Max Tegmark and Matias Zaldarriaga propose the fast Fourier transform telescope.
2009 — The Kepler telescope is launched into space with the aim of locating planets orbiting our neighboring stars. It has a 1.4 m diameter primary mirror.
2011 — NASA announces plans to launch the most ambitious space telescope of all time in 2018. The James Webb Space Telescope will operate in deep space and will have an astonishing 6.5 m diameter mirror.
2021 — The James Webb Space Telescope is launched by NASA on December 25.
in july skies
July 13 – Full Buck Moon supermoon at 2:38 p.m. While this will be the “largest” full Moon of 2022, the variation in the Moon’s distance will not be apparent to observers. However, the Moon’s close proximity to Earth dramatically affects tides, which can cause severe coastal flooding. At 3 a.m. on this day, the Moon will reach perigee, its closest approach to Earth in all of 2022, at 221,994 miles away. The combined gravitational pull of the Moon and Sun creates extremely high and low ocean tides; such a tide is known as a perigean spring tide.
July 29 – Peak of two meteor showers. Several long-lasting meteor showers appear to radiate from the southern part of the sky from mid-July to August. The Moon in its “new” phase combined with two peaking meteor showers makes for a great opportunity to catch a shooting star. The South Delta Aquarid meteor shower, emanating from the constellation Aquarius, will peak this morning (start searching after midnight). Also peaking around this time is the Alpha Capricornid shower, which emanates from the constellation Capricornus and produces a high proportion of bright meteors. Look to the southern sky.
Sources: NASA, History of Telescope, Space.com, The Associated Press. The image above is an artist’s conception (NASA GSFC/CIL/Adriana Manrique Gutierrez).
Retrotransposons (also called transposons via RNA intermediates) are genetic elements that can amplify themselves in a genome and are ubiquitous components of the DNA of many eukaryotic organisms. These DNA sequences use a "copy-and-paste" mechanism, whereby they are first transcribed into RNA, then converted back into identical DNA sequences using reverse transcription, and these sequences are then inserted into the genome at target sites.
Retrotransposons are particularly abundant in plants, where they are often a principal component of nuclear DNA. In maize, 49–78% of the genome is made up of retrotransposons. In wheat, about 90% of the genome consists of repeated sequences and 68% of transposable elements. In mammals, almost half the genome (45% to 48%) is transposons or remnants of transposons. Around 42% of the human genome is made up of retrotransposons, while DNA transposons account for about 2–3%.
The retrotransposons' replicative mode of transposition by means of an RNA intermediate rapidly increases the copy numbers of elements and thereby can increase genome size. Like DNA transposable elements (class II transposons), retrotransposons can induce mutations by inserting near or within genes. Furthermore, retrotransposon-induced mutations are relatively stable, because the sequence at the insertion site is retained as they transpose via the replication mechanism.
Retrotransposons copy themselves to RNA and then back to DNA that may integrate back to the genome. The second step of forming DNA may be carried out by a reverse transcriptase, which the retrotransposon encodes. Transposition and survival of retrotransposons within the host genome are possibly regulated both by retrotransposon- and host-encoded factors, to avoid deleterious effects on host and retrotransposon as well. The understanding of how retrotransposons and their hosts' genomes have co-evolved mechanisms to regulate transposition, insertion specificities, and mutational outcomes in order to optimize each other's survival is still in its infancy.
Because of accumulated mutations, most retrotransposons are no longer able to retrotranspose.
Retrotransposons, also known as class I transposable elements, consist of two subclasses, the LTR retrotransposons (with long terminal repeats) and the non-LTR retrotransposons. Classification into these subclasses is based on the phylogeny of the reverse transcriptase, which goes in line with structural differences such as the presence or absence of long terminal repeats, as well as the number and types of open reading frames, encoded domains, and target site duplication lengths.
LTR retrotransposons have direct LTRs that range from ~100 bp to over 5 kb in size. LTR retrotransposons are further sub-classified into the Ty1-copia-like (Pseudoviridae), Ty3-gypsy-like (Metaviridae), and BEL-Pao-like groups based on both their degree of sequence similarity and the order of encoded gene products. Ty1-copia and Ty3-gypsy groups of retrotransposons are commonly found in high copy number (up to a few million copies per haploid nucleus) in animal, fungal, protist, and plant genomes. BEL-Pao-like elements have so far only been found in animals.
Although retroviruses are often classified separately, they share many features with LTR retrotransposons. A major difference with Ty1-copia and Ty3-gypsy retrotransposons is that retroviruses have an envelope protein (ENV). A retrovirus can be transformed into an LTR retrotransposon through inactivation or deletion of the domains that enable extracellular mobility. If such a retrovirus infects and subsequently inserts itself in the genome in germ line cells, it may become transmitted vertically and become an Endogenous Retrovirus (ERV). Endogenous retroviruses make up about 8% of the human genome and approximately 10% of the mouse genome.
In plant genomes, LTR retrotransposons are the major repetitive sequence class, e.g. able to constitute more than 75% of the maize genome.
Endogenous retroviruses (ERV)
Endogenous retroviruses are an important type of LTR retrotransposon in mammals, including in humans where the Human ERVs make up 8% of the genome.
Non-LTR retrotransposons consist of two sub-types, long interspersed elements (LINEs) and short interspersed elements (SINEs). They can also be found in high copy numbers, as has been shown in plant species. Non-long terminal repeat (non-LTR) retroposons are widespread in eukaryotic genomes. LINEs possess two ORFs, which encode all the functions needed for retrotransposition. These functions include reverse transcriptase and endonuclease activities, in addition to a nucleic acid-binding property needed to form a ribonucleoprotein particle. SINEs, on the other hand, co-opt the LINE machinery and function as nonautonomous retroelements. While historically viewed as "junk DNA", recent research suggests that, in some rare cases, both LINEs and SINEs were incorporated into novel genes to evolve new functionality.
Long INterspersed Elements (LINEs) are a group of genetic elements that are found in large numbers in eukaryotic genomes, comprising 17% of the human genome (99.9% of which is no longer capable of retrotransposition and is therefore considered "dead" or inactive). Among the LINEs there are several subgroups, such as L1, L2 and L3. Human protein-coding L1 elements begin with an untranslated region (UTR) that includes an RNA polymerase II promoter, continue with two non-overlapping open reading frames (ORF1 and ORF2), and end with another UTR. Recently, a new open reading frame at the 5' end of LINE elements has been identified on the reverse strand; it has been shown to be transcribed, and endogenous proteins have been observed. The name ORF0 was coined because of its position with respect to ORF1 and ORF2. ORF1 encodes an RNA-binding protein, and ORF2 encodes a protein having an endonuclease (e.g. RNase H) as well as a reverse transcriptase. The reverse transcriptase has a higher specificity for the LINE RNA than for other RNAs, and makes a DNA copy of the RNA that can be integrated into the genome at a new site. The endonuclease encoded by non-LTR retroposons may be of the AP (apurinic/apyrimidinic) type or the REL (restriction endonuclease-like) type. Elements in the R2 group have a REL-type endonuclease, which shows site specificity in insertion.
The 5' UTR contains the promoter sequence, while the 3' UTR contains a polyadenylation signal (AATAAA) and a poly-A tail. Because LINEs (and other class I transposons, e.g. LTR retrotransposons and SINEs) move by copying themselves (instead of moving by a cut and paste like mechanism, as class II transposons do), they enlarge the genome. The human genome, for example, contains about 500,000 LINEs, which is roughly 17% of the genome. Of these, approximately 7,000 are full-length, a small subset of which are capable of retrotransposition.
Interestingly, it was recently found that specific LINE-1 retroposons in the human genome are actively transcribed and the associated LINE-1 RNAs are tightly bound to nucleosomes and essential in the establishment of local chromatin environment.
SINEs are the only TEs that are non-autonomous by nature, meaning that they did not evolve from autonomous elements. They are small (80–500 bases) and rely in trans on functional LINEs for their replication, but their evolutionary origin is very distinct. SINEs can be found in very diverse eukaryotes, but they have accumulated to impressive numbers only in mammals, where they represent between 5 and 15% of the genome, with millions of copies.
Structure and propagation
SINEs typically possess a “head” with an RNA pol III promoter that enables autonomous transcription, and a body of various composition. SINEs are postulated to originate from the accidental retrotransposition of various RNA pol III transcripts, and have appeared separately numerous times in evolution history. The type of RNA pol III promoter defines the different superfamilies and reveal their origin: tRNA, 5S ribosomal RNA or signal recognition particle 7SL RNA.
SINEs do not encode a functional reverse transcriptase protein and rely on other mobile elements for the transposition, especially LINEs. SINE RNAs form a complex with LINE ORF2 proteins and are inserted into the genome by target primed reverse transcription, creating short TSDs upon insertion. Some SINE families are thought to rely on specific LINEs for their replication, while others seem to be more generalist.
Alu and B1 elements, with their 1.1 million and 650,000 copies in the human and mouse genomes, respectively, harbor a 7SL promoter. The 350,000 copies of B2 SINEs in the mouse are on the other hand tRNA-related.
Alu and B1 elements, with their 1.1 million and 650,000 copies in the human and mouse genomes, respectively, harbor a 7SL promoter.
The 350,000 copies of B2 SINEs in the mouse are on the other hand tRNA- related.
The most common SINE in primates is Alu. Alu elements are approximately 350 base pairs long, do not contain any coding sequences, and can be recognized by the restriction enzyme AluI (hence the name). The distribution of these elements has been implicated in some genetic diseases and cancers.
Hominid genomes contain also original elements termed SVA. They are composite transposons formed by the fusion of a SINE-R and an Alu, separated by a variable number of tandems repeats. Less than 3kb in length and apparently mobilized using LINE1 machinery, they are around 2500-3000 copies in human or gorilla genomes, and less than 1000 in orangutan. SVA are one of the youngest transposable element in great apes genome and among the most active and polymorphic in the human population.
- Endogenous retrovirus
- Insertion sequences
- Copy-number variation
- Genomic organization
- Interspersed repeat
- Retrotransposon markers, a powerful method of reconstructing phylogenies.
- SanMiguel P, Bennetzen JL (1998). "Evidence that a recent increase in maize genome size was caused by the massive amplification of intergene retrotranposons" (PDF). Annals of Botany. 82 (Suppl A): 37–44. doi:10.1006/anbo.1998.0746.
- Li W, Zhang P, Fellers JP, Friebe B, Gill BS (November 2004). "Sequence composition, organization, and evolution of the core Triticeae genome". Plant J. 40 (4): 500–11. doi:10.1111/j.1365-313X.2004.02228.x. PMID 15500466.
- Lander ES, Linton LM, Birren B, et al. (February 2001). "Initial sequencing and analysis of the human genome". Nature. 409 (6822): 860–921. doi:10.1038/35057062. PMID 11237011.
- Dombroski BA, Feng Q, Mathias SL, et al. (July 1994). "An in vivo assay for the reverse transcriptase of human retrotransposon L1 in Saccharomyces cerevisiae". Mol. Cell. Biol. 14 (7): 4485–92. doi:10.1128/mcb.14.7.4485. PMC . PMID 7516468.
- Xiong, Y; Eickbush, TH (October 1990). "Origin and evolution of retroelements based upon their reverse transcriptase sequences". The EMBO Journal. 9 (10): 3353–62. PMC . PMID 1698615.
- Copeland CS, Mann VH, Morales ME, Kalinna BH, Brindley PJ (2005). "The Sinbad retrotransposon from the genome of the human blood fluke, Schistosoma mansoni, and the distribution of related Pao-like elements". BMC Evol. Biol. 5 (1): 20. doi:10.1186/1471-2148-5-20. PMC . PMID 15725362.
- Wicker T, Sabot F, Hua-Van A, et al. (December 2007). "A unified classification system for eukaryotic transposable elements". Nat. Rev. Genet. 8 (12): 973–82. doi:10.1038/nrg2165. PMID 17984973.
- McCarthy EM, McDonald JF (2004). "Long terminal repeat retrotransposons of Mus musculus". Genome Biol. 5 (3): R14. doi:10.1186/gb-2004-5-3-r14. PMC . PMID 15003117.
- Baucom, RS; Estill, JC; Chaparro, C; Upshaw, N; Jogi, A; Deragon, JM; Westerman, RP; Sanmiguel, PJ; Bennetzen, JL (November 2009). "Exceptional diversity, non-random distribution, and rapid evolution of retroelements in the B73 maize genome". PLoS Genetics. 5 (11): e1000732. doi:10.1371/journal.pgen.1000732. PMC . PMID 19936065.
- "Transposon regulation upon dynamic loss of DNA methylation (PDF Download Available)". ResearchGate. doi:10.13140/rg.2.2.18747.21286.
- Schmidt, Thomas (1999-08-01). "LINEs, SINEs and repetitive DNA: non-LTR retrotransposons in plant genomes". Plant Molecular Biology. 40 (6): 903–910. doi:10.1023/A:1006212929794. ISSN 0167-4412.
- Yadav, VP; Mandal, PK; Rao, DN; Bhattacharya, S (December 2009). "Characterization of the restriction enzyme-like endonuclease encoded by the Entamoeba histolytica non-long terminal repeat retroposon EhLINE1". The FEBS Journal. 276 (23): 7070–82. doi:10.1111/j.1742-4658.2009.07419.x. PMID 19878305.
- Santangelo AM, de Souza FS, Franchini LF, Bumaschny VF, Low MJ, Rubinstein M (October 2007). "Ancient Exaptation of a CORE-SINE Retroposon into a Highly Conserved Mammalian Neuronal Enhancer of the Proopiomelanocortin Gene". PLoS Genetics. Public Library of Science. 3 (10): 1813–26. doi:10.1371/journal.pgen.0030166. PMC . PMID 17922573. Retrieved 2007-12-31.
- Liang, Kung-Hao; Yeh, Chau-Ting (2013). "A gene expression restriction network mediated by sense and antisense Alu sequences located on protein-coding messenger RNAs". BMC Genomics. 14: 325. doi:10.1186/1471-2164-14-325. PMC . PMID 23663499. Retrieved 2013-05-11.
- Singer MF (March 1982). "SINEs and LINEs: highly repeated short and long interspersed sequences in mammalian genomes". Cell. 28 (3): 433–4. doi:10.1016/0092-8674(82)90194-5. PMID 6280868.
- Doucet AJ, Hulme AE, Sahinovic E, Kulpa DA, Moldovan JB, Kopera HC, Athanikar JN, Hasnaoui M, Bucheton A, Moran JV, Gilbert N (October 7, 2010). "Characterization of LINE-1 ribonucleoprotein particles". PLOS Genetics. 6 (10): e1001150. doi:10.1371/journal.pgen.1001150. PMC . PMID 20949108.
- Denli, AM; Narvaiza, I; Kerman, BE; Pena, M; Benner, C; Marchetto, MC; Diedrich, JK; Aslanian, A; Ma, J; Moresco, JJ; Moore, L; Hunter, T; Saghatelian, A; Gage, FH (22 October 2015). "Primate-Specific ORF0 Contributes to Retrotransposon-Mediated Diversity". Cell. 163 (3): 583–93. doi:10.1016/j.cell.2015.09.025. PMID 26496605.
- Ohshima K, Okada N (2005). "SINEs and LINEs: symbionts of eukaryotic genomes with a common tail". Cytogenet. Genome Res. 110 (1–4): 475–90. doi:10.1159/000084981. PMID 16093701.
- Yadav, VP; Mandal, PK; Rao, DN; Bhattacharya, S (December 2009). "Characterization of the restriction enzyme-like endonuclease encoded by the Entamoeba histolytica non-long terminal repeat retrotransposon EhLINE1". The FEBS Journal. 276 (23): 7070–82. doi:10.1111/j.1742-4658.2009.07419.x. PMID 19878305.
- Deininger PL, Batzer MA (October 2002). "Mammalian retroelements". Genome Res. 12 (10): 1455–65. doi:10.1101/gr.282402. PMID 12368238.
- Richard Cordaux; Mark Batzer (October 2009). "The impact of retrotransposons on human genome evolution". Nature Reviews Genetics. 10 (10): 691–703. doi:10.1038/nrg2640. PMC . PMID 19763152.
- Griffiths, Anthony J. (2008). Introduction to genetic analysis (9th ed.). New York: W.H. Freeman. p. 505. ISBN 0-7167-6887-9.
- Rangwala S, Kazazian HH (2009). "Many LINE1 elements contribute to the transcriptome of human somatic cells". Genome Biology. 10 (9): R100. doi:10.1186/gb-2009-10-9-r100. PMC . PMID 19772661.
- Chueh, A.C.; Northrop, Emma L.; Brettingham-Moore, Kate H.; Choo, K. H. Andy; Wong, Lee H. (Jan 2009). Bickmore, Wendy A., ed. "LINE Retrotransposon RNA Is an Essential Structural and Functional Epigenetic Component of a Core Neocentromeric Chromatin". PLoS Genetics. 5 (1): e1000354. doi:10.1371/journal.pgen.1000354. PMC . PMID 19180186.
- Stansfield, William D.; King, Robert C. (1997). A dictionary of genetics (5th ed.). Oxford [Oxfordshire]: Oxford University Press. ISBN 0-19-509441-7.
- KRAMEROV, D; VASSETZKY, N. "Short Retroposons in Eukaryotic Genomes". International Review of Cytology. 247: 165–221. doi:10.1016/s0074-7696(05)47004-7.
- Dewannieux, Marie; Esnault, Cécile; Heidmann, Thierry. "LINE-mediated retrotransposition of marked Alu sequences". Nature Genetics. 35 (1): 41–48. doi:10.1038/ng1223. |
In mathematics, a recurrence relation is an equation that recursively defines a sequence or multidimensional array of values, once one or more initial terms are given: each further term of the sequence or array is defined as a function of the preceding terms.
The term difference equation sometimes (and for the purposes of this article) refers to a specific type of recurrence relation. However, "difference equation" is frequently used to refer to any recurrence relation.
- 1 Examples
- 2 Relationship to difference equations narrowly defined
- 3 Solving
- 3.1 Solving homogeneous linear recurrence relations with constant coefficients
- 3.2 Solving non-homogeneous linear recurrence relations with constant coefficients
- 3.3 Solving first-order non-homogeneous recurrence relations with variable coefficients
- 3.4 Solving general homogeneous linear recurrence relations
- 3.5 Solving first-order rational difference equations
- 4 Stability
- 5 Relationship to differential equations
- 6 Applications
- 7 See also
- 8 Notes
- 9 References
- 10 External links
An example of a recurrence relation is the logistic map:
with a given constant r; given the initial term x0 each subsequent term is determined by this relation.
Solving a recurrence relation means obtaining a closed-form solution: a non-recursive function of n.
The recurrence satisfied by the Fibonacci numbers is the archetype of a homogeneous linear recurrence relation with constant coefficients (see below). The Fibonacci sequence is defined using the recurrence
with seed values
Explicitly, the recurrence yields the equations
We obtain the sequence of Fibonacci numbers, which begins
- 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
The recurrence can be solved by methods described below yielding Binet's formula, which involves powers of the two roots of the characteristic polynomial t2 = t + 1; the generating function of the sequence is the rational function
A simple example of a multidimensional recurrence relation is given by the binomial coefficients , which count the number of ways of selecting k out of a set of n elements. They can be computed by the recurrence relation
with the base cases . Using this formula to compute the values of all binomial coefficients generates an infinite array called Pascal's triangle. The same values can also be computed directly by a different formula that is not a recurrence, but that requires multiplication and not just addition to compute:
Relationship to difference equations narrowly defined
The second difference is defined as
which can be simplified to
More generally: the kth difference of the sequence an is written as is defined recursively as
(The sequence and its differences are related by a binomial transform.) The more restrictive definition of difference equation is an equation composed of an and its kth differences. (A widely used broader definition treats "difference equation" as synonymous with "recurrence relation". See for example rational difference equation and matrix difference equation.)
Actually, it is easily seen that Thus, a difference equation can be defined as an equation that involves an, an-1, an-2 etc. (or equivalenty an, an+1, an+2 etc.)
Since difference equations are a very common form of recurrence, some authors use the two terms interchangeably. For example, the difference equation
is equivalent to the recurrence relation
Thus one can solve many recurrence relations by rephrasing them as difference equations, and then solving the difference equation, analogously to how one solves ordinary differential equations. However, the Ackermann numbers are an example of a recurrence relation that do not map to a difference equation, much less points on the solution to a differential equation.
From sequences to grids
Single-variable or one-dimensional recurrence relations are about sequences (i.e. functions defined on one-dimensional grids). Multi-variable or n-dimensional recurrence relations are about n-dimensional grids. Functions defined on n-grids can also be studied with partial difference equations.
Solving homogeneous linear recurrence relations with constant coefficients
Roots of the characteristic polynomial
An order-d homogeneous linear recurrence with constant coefficients is an equation of the form
where the d coefficients ci (for all i) are constants.
A constant-recursive sequence is a sequence satisfying a recurrence of this form. There are d degrees of freedom for solutions to this recurrence, i.e., the initial values can be taken to be any values but then the recurrence determines the sequence uniquely.
The same coefficients yield the characteristic polynomial (also "auxiliary polynomial")
whose d roots play a crucial role in finding and understanding the sequences satisfying the recurrence. If the roots r1, r2, ... are all distinct, then each solution to the recurrence takes the form
where the coefficients ki are determined in order to fit the initial conditions of the recurrence. When the same roots occur multiple times, the terms in this formula corresponding to the second and later occurrences of the same root are multiplied by increasing powers of n. For instance, if the characteristic polynomial can be factored as (x−r)3, with the same root r occurring three times, then the solution would take the form
As well as the Fibonacci numbers, other constant-recursive sequences include the Lucas numbers and Lucas sequences, the Jacobsthal numbers, the Pell numbers and more generally the solutions to Pell's equation.
For order 1, the recurrence
has the solution an = rn with a0 = 1 and the most general solution is an = krn with a0 = k. The characteristic polynomial equated to zero (the characteristic equation) is simply t − r = 0.
Solutions to such recurrence relations of higher order are found by systematic means, often using the fact that an = rn is a solution for the recurrence exactly when t = r is a root of the characteristic polynomial. This can be approached directly or using generating functions (formal power series) or matrices.
Consider, for example, a recurrence relation of the form
When does it have a solution of the same general form as an = rn? Substituting this guess (ansatz) in the recurrence relation, we find that
must be true for all n > 1.
Dividing through by rn−2, we get that all these equations reduce to the same thing:
which is the characteristic equation of the recurrence relation. Solve for r to obtain the two roots λ1, λ2: these roots are known as the characteristic roots or eigenvalues of the characteristic equation. Different solutions are obtained depending on the nature of the roots: If these roots are distinct, we have the general solution
while if they are identical (when A2 + 4B = 0), we have
This is the most general solution; the two constants C and D can be chosen based on two given initial conditions a0 and a1 to produce a specific solution.
In the case of complex eigenvalues (which also gives rise to complex values for the solution parameters C and D), the use of complex numbers can be eliminated by rewriting the solution in trigonometric form. In this case we can write the eigenvalues as Then it can be shown that
can be rewritten as:576–585
Here E and F (or equivalently, G and δ) are real constants which depend on the initial conditions. Using
one may simplify the solution given above as
where a1 and a2 are the initial conditions and
In this way there is no need to solve for λ1 and λ2.
In all cases—real distinct eigenvalues, real duplicated eigenvalues, and complex conjugate eigenvalues—the equation is stable (that is, the variable a converges to a fixed value [specifically, zero]) if and only if both eigenvalues are smaller than one in absolute value. In this second-order case, this condition on the eigenvalues can be shown to be equivalent to |A| < 1 − B < 2, which is equivalent to |B| < 1 and |A| < 1 − B.
The equation in the above example was homogeneous, in that there was no constant term. If one starts with the non-homogeneous recurrence
with constant term K, this can be converted into homogeneous form as follows: The steady state is found by setting bn = bn−1 = bn−2 = b* to obtain
Then the non-homogeneous recurrence can be rewritten in homogeneous form as
which can be solved as above.
The stability condition stated above in terms of eigenvalues for the second-order case remains valid for the general nth-order case: the equation is stable if and only if all eigenvalues of the characteristic equation are less than one in absolute value.
Given a homogeneous linear recurrence relation with constant coefficients of order d, let p(t) be the characteristic polynomial (also "auxiliary polynomial")
such that each ci corresponds to each ci in the original recurrence relation (see the general form above). Suppose λ is a root of p(t) having multiplicity r. This is to say that (t−λ)r divides p(t). The following two properties hold:
- Each of the r sequences satisfies the recurrence relation.
- Any sequence satisfying the recurrence relation can be written uniquely as a linear combination of solutions constructed in part 1 as λ varies over all distinct roots of p(t).
As a result of this theorem a homogeneous linear recurrence relation with constant coefficients can be solved in the following manner:
- Find the characteristic polynomial p(t).
- Find the roots of p(t) counting multiplicity.
- Write an as a linear combination of all the roots (counting multiplicity as shown in the theorem above) with unknown coefficients bi.
- This is the general solution to the original recurrence relation. (q is the multiplicity of λ*)
- 4. Equate each from part 3 (plugging in n = 0, ..., d into the general solution of the recurrence relation) with the known values from the original recurrence relation. However, the values an from the original recurrence relation used do not usually have to be contiguous: excluding exceptional cases, just d of them are needed (i.e., for an original homogeneous linear recurrence relation of order 3 one could use the values a0, a1, a4). This process will produce a linear system of d equations with d unknowns. Solving these equations for the unknown coefficients of the general solution and plugging these values back into the general solution will produce the particular solution to the original recurrence relation that fits the original recurrence relation's initial conditions (as well as all subsequent values of the original recurrence relation).
The method for solving linear differential equations is similar to the method above—the "intelligent guess" (ansatz) for linear differential equations with constant coefficients is eλx where λ is a complex number that is determined by substituting the guess into the differential equation.
This is not a coincidence. Considering the Taylor series of the solution to a linear differential equation:
it can be seen that the coefficients of the series are given by the nth derivative of f(x) evaluated at the point a. The differential equation provides a linear difference equation relating these coefficients.
This equivalence can be used to quickly solve for the recurrence relationship for the coefficients in the power series solution of a linear differential equation.
The rule of thumb (for equations in which the polynomial multiplying the first term is non-zero at zero) is that:
and more generally
Example: The recurrence relationship for the Taylor series coefficients of the equation:
is given by
This example shows how problems generally solved using the power series solution method taught in normal differential equation classes can be solved in a much easier way.
Example: The differential equation
The conversion of the differential equation to a difference equation of the Taylor coefficients is
It is easy to see that the nth derivative of eax evaluated at 0 is an
Solving via linear algebra
A linearly recursive sequence y of order n
is identical to
Expanded with n-1 identities of kind this n-th order equation is translated into a matrix difference equation system of n first order linear equations]],
Observe that the vector can be computed by n applications of the companion matrix, C, to the initial state vector, . Thereby, n-th entry of the sought sequence y, is the top component of .
Eigendecomposition, into eigenvalues, , and eigenvectors, , is used to compute Thanks to the crucial fact that system C time-shifts every eigenvector, e, by simply scaling its components λ times,
that is, time-shifted version of eigenvector,e, has components λ times larger, the eigenvector components are powers of λ, and, thus, recurrent homogeneous linear equation solution is a combination of exponential functions, . The components can be determined out of initial conditions:
Solving for coefficients,
This also works with arbitrary boundary conditions , not necessary the initial ones,
This description is really no different from general method above, however it is more succinct. It also works nicely for situations like
where there are several linked recurrences.
Solving with z-transforms
Certain difference equations - in particular, linear constant coefficient difference equations - can be solved using z-transforms. The z-transforms are a class of integral transforms that lead to more convenient algebraic manipulations and more straightforward solutions. There are cases in which obtaining a direct solution would be all but impossible, yet solving the problem via a thoughtfully chosen integral transform is straightforward.
Solving non-homogeneous linear recurrence relations with constant coefficients
If the recurrence is non-homogeneous, a particular solution can be found by the method of undetermined coefficients and the solution is the sum of the solution of the homogeneous and the particular solutions. Another method to solve an non-homogeneous recurrence is the method of symbolic differentiation. For example, consider the following recurrence:
This is an non-homogeneous recurrence. If we substitute n ↦ n+1, we obtain the recurrence
Subtracting the original recurrence from this equation yields
This is a homogeneous recurrence, which can be solved by the methods explained above. In general, if a linear recurrence has the form
where are constant coefficients and p(n) is the inhomogeneity, then if p(n) is a polynomial with degree r, then this non-homogeneous recurrence can be reduced to a homogeneous recurrence by applying the method of symbolic differencing r times.
is the generating function of the inhomogeneity, the generating function
of the non-homogeneous recurrence
with constant coefficients ci is derived from
If P(x) is a rational generating function, A(x) is also one. The case discussed above, where pn = K is a constant, emerges as one example of this formula, with P(x) = K/(1−x). Another example, the recurrence with linear inhomogeneity, arises in the definition of the schizophrenic numbers. The solution of homogeneous recurrences is incorporated as p = P = 0.
Solving first-order non-homogeneous recurrence relations with variable coefficients
Moreover, for the general first-order non-homogeneous linear recurrence relation with variable coefficients:
there is also a nice method to solve it:
Solving general homogeneous linear recurrence relations
Many homogeneous linear recurrence relations may be solved by means of the generalized hypergeometric series. Special cases of these lead to recurrence relations for the orthogonal polynomials, and many special functions. For example, the solution to
is given by
the Bessel function, while
is solved by
Solving first-order rational difference equations
A first order rational difference equation has the form . Such an equation can be solved by writing as a nonlinear transformation of another variable which itself evolves linearly. Then standard methods can be used to solve the linear difference equation in .
Stability of linear higher-order recurrences
The linear recurrence of order d,
has the characteristic equation
The recurrence is stable, meaning that the iterates converge asymptotically to a fixed value, if and only if the eigenvalues (i.e., the roots of the characteristic equation), whether real or complex, are all less than unity in absolute value.
Stability of linear first-order matrix recurrences
In the first-order matrix difference equation
with state vector x and transition matrix A, x converges asymptotically to the steady state vector x* if and only if all eigenvalues of the transition matrix A (whether real or complex) have an absolute value which is less than 1.
Stability of nonlinear first-order recurrences
Consider the nonlinear first-order recurrence
This recurrence is locally stable, meaning that it converges to a fixed point x* from points sufficiently close to x*, if the slope of f in the neighborhood of x* is smaller than unity in absolute value: that is,
A nonlinear recurrence could have multiple fixed points, in which case some fixed points may be locally stable and others locally unstable; for continuous f two adjacent fixed points cannot both be locally stable.
A nonlinear recurrence relation could also have a cycle of period k for k > 1. Such a cycle is stable, meaning that it attracts a set of initial conditions of positive measure, if the composite function
with f appearing k times is locally stable according to the same criterion:
where x* is any point on the cycle.
In a chaotic recurrence relation, the variable x stays in a bounded region but never converges to a fixed point or an attracting cycle; any fixed points or cycles of the equation are unstable. See also logistic map, dyadic transformation, and tent map.
Relationship to differential equations
with Euler's method and a step size h, one calculates the values
by the recurrence
Systems of linear first order differential equations can be discretized exactly analytically using the methods shown in the discretization article.
Some of the best-known difference equations have their origins in the attempt to model population dynamics. For example, the Fibonacci numbers were once used as a model for the growth of a rabbit population.
The logistic map is used either directly to model population growth, or as a starting point for more detailed models. In this context, coupled difference equations are often used to model the interaction of two or more populations. For example, the Nicholson-Bailey model for a host-parasite interaction is given by
with Nt representing the hosts, and Pt the parasites, at time t.
Recurrence relations are also of fundamental importance in analysis of algorithms. If an algorithm is designed so that it will break a problem into smaller subproblems (divide and conquer), its running time is described by a recurrence relation.
A simple example is the time an algorithm takes to find an element in an ordered vector with elements, in the worst case.
A naive algorithm will search from left to right, one element at a time. The worst possible scenario is when the required element is the last, so the number of comparisons is .
A better algorithm is called binary search. However, it requires a sorted vector. It will first check if the element is at the middle of the vector. If not, then it will check if the middle element is greater or lesser than the sought element. At this point, half of the vector can be discarded, and the algorithm can be run again on the other half. The number of comparisons will be given by
which will be close to .
Digital signal processing
In digital signal processing, recurrence relations can model feedback in a system, where outputs at one time become inputs for future time. They thus arise in infinite impulse response (IIR) digital filters.
For example, the equation for a "feedforward" IIR comb filter of delay T is:
Where is the input at time t, is the output at time t, and α controls how much of the delayed signal is fed back into the output. From this we can see that
Recurrence relations, especially linear recurrence relations, are used extensively in both theoretical and empirical economics. In particular, in macroeconomics one might develop a model of various broad sectors of the economy (the financial sector, the goods sector, the labor market, etc.) in which some agents' actions depend on lagged variables. The model would then be solved for current values of key variables (interest rate, real GDP, etc.) in terms of exogenous variables and lagged endogenous variables. See also time series analysis.
- Partial difference equations, Sui Sun Cheng, CRC Press, 2003, ISBN 978-0-415-29884-1
- Greene, Daniel H.; Knuth, Donald E. (1982), "2.1.1 Constant coefficients – A) Homogeneous equations", Mathematics for the Analysis of Algorithms (2nd ed.), Birkhäuser, p. 17.
- Chiang, Alpha C., Fundamental Methods of Mathematical Economics, third edition, McGraw-Hill, 1984.
- Papanicolaou, Vassilis, "On the asymptotic stability of a class of linear difference equations," Mathematics Magazine 69(1), February 1996, 34–43.
- Maurer, Stephen B.; Ralston, Anthony (1998), Discrete Algorithmic Mathematics (2nd ed.), A K Peters, p. 609, ISBN 9781568810911.
- Cormen, T. et al, Introduction to Algorithms, MIT Press, 2009
- R. Sedgewick, F. Flajolet, An Introduction to the Analysis of Algorithms, Addison-Wesley, 2013
- Stokey, Nancy L.; Lucas, Robert E., Jr.; Prescott, Edward C. (1989). Recursive Methods in Economic Dynamics. Cambridge: Harvard University Press. ISBN 0-674-75096-9.
- Ljungqvist, Lars; Sargent, Thomas J. (2004). Recursive Macroeconomic Theory (Second ed.). Cambridge: MIT Press. ISBN 0-262-12274-X.
- Batchelder, Paul M. (1967). An introduction to linear difference equations. Dover Publications.
- Miller, Kenneth S. (1968). Linear difference equations. W. A. Benjamin.
- Fillmore, Jay P.; Marx, Morris L. (1968). "Linear recursive sequences". SIAM Rev. 10 (3). pp. 324–353. JSTOR 2027658.
- Brousseau, Alfred (1971). Linear Recursion and Fibonacci Sequences. Fibonacci Association.
- Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 1990. ISBN 0-262-03293-7. Chapter 4: Recurrences, pp. 62–90.
- Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1994). Concrete Mathematics: A Foundation for Computer Science (2 ed.). Addison-Welsey. ISBN 0-201-55802-5.
- Enders, Walter (2010). Applied Econometric Times Series (3 ed.).
- Cull, Paul; Flahive, Mary; Robson, Robbie (2005). Difference Equations: From Rabbits to Chaos. Springer. ISBN 0-387-23234-6. chapter 7.
- Jacques, Ian (2006). Mathematics for Economics and Business (Fifth ed.). Prentice Hall. pp. 551–568. ISBN 0-273-70195-9. Chapter 9.1: Difference Equations.
- Minh, Tang; Van To, Tan (2006). "Using generating functions to solve linear inhomogeneous recurrence equations" (PDF). Proc. Int. Conf. Simulation, Modelling and Optimization, SMO'06. pp. 399–404.
- Polyanin, Andrei D. "Difference and Functional Equations: Exact Solutions". at EqWorld - The World of Mathematical Equations.
- Polyanin, Andrei D. "Difference and Functional Equations: Methods". at EqWorld - The World of Mathematical Equations.
- Wang, Xiang-Sheng; Wong, Roderick (2012). "Asymptotics of orthogonal polynomials via recurrence relations". Anal. Appl. 10 (2): 215–235. doi:10.1142/S0219530512500108.
- Hazewinkel, Michiel, ed. (2001), "Recurrence relation", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
- Weisstein, Eric W., "Recurrence Equation", MathWorld.
- Mathews, John H. "Homogeneous Difference Equations".
- "OEIS Index Rec". OEIS index to a few thousand examples of linear recurrences, sorted by order (number of terms) and signature (vector of values of the constant coefficients) |
1 CCGPS . Frameworks Student Edition Mathematics First Grade Unit Two Developing Base Ten Number Sense Georgia Department of Education common core Georgia Performance Standards framework First Grade Mathematics Unit 2. Unit 2: Developing Base Ten Number Sense TABLE OF CONTENTS. Overview ..3. Standards for Mathematical Content ..4. Standards for Mathematical Practice ..5. Enduring Understanding ..5. Essential Questions ..6. Concepts and Skills to Maintain ..6. Selected Terms and Symbols ..7. Strategies for Teaching and Learning ..7. common Misconceptions ..9. Evidence of Learning ..9. Tasks ..10. Button, Button!..11. Count it, Graph it ..14. One Minute Challenge ..19. More or Less Revisited ..25. Close, Far and in Between ..28. Finding Neighbors ..32. Make it Straight.
2 36. Number Hotel ..40. Mystery Number ..46. Tens and Some More ..48. Dropping Tens ..52. Riddle Me This ..57. Drop it, Web it, Graph it ..60. Mathematics GRADE 1 UNIT 2: Developing Base Ten Number Sense Georgia Department of Education Dr. John D. Barge, State School Superintendent May 2012 Page 2 of 64. All Rights Reserved Georgia Department of Education common core Georgia Performance Standards framework First Grade Mathematics Unit 2. OVERVIEW. Many of the skills and concepts in this unit are readdressed from Unit 1. Even though they are revisited, it is important to note that they are not necessarily presented in the same way as in Unit 1. In this unit, students will: rote count forward to 120 by counting on from any number less than 120. represent the number of a quantity using numerals.
3 Locate 0-100 on a number line. use the strategies of counting on and counting back to understand number relationships. explore with the 99 chart to see patterns between numbers, such as, all of the numbers in a column on the hundreds chart have the same digit in the ones place, and all of the numbers in a row have the same digit in the tens place. read, write and represent a number of objects with a written numeral (number form or standard form). build an understanding of how the numbers in the counting sequence are related each number is one more, ten more (or one less, ten less) than the number before (or after). work with categorical data by organizing, representing and interpreting data using charts and tables. pose questions with 3 possible responses and then work with the data that they collect.
4 All mathematical tasks and activities should be meaningful and interesting to students . Posing relevant questions, collecting data related to those questions, and analyzing the data creates a real world connection to counting. The meaning students attach to counting is the key conceptual idea on which all other number concepts are developed. students begin thinking of counting as a string of words, but then they make a gradual transition to using counting as a tool for describing their world. They must construct the idea of counting using manipulatives and have opportunities to see numbers visually (dot cards, tens frames, number lines, 0-99 chart, hundreds charts, arithmetic rack- ex: small frame abacus and physical groups of tens and ones). To count successfully, students must remember the rote counting sequence, assign one counting number to each object counted, and at the same time have a strategy for keeping track of what has already been counted and what still needs to be counted.
5 Only the counting sequence is a rote procedure. Most students can count forward in sequence. Counting on and counting back are difficult skills for many students . students will develop successful and meaningful counting strategies as they practice counting and as they listen to and watch others count. They should begin using strategies of skip counting by 2's, 5's, and 10's. The use of a 99 chart is an extremely useful tool to help students identify number relationships and patterns. Listed below are several reasons that support use of a 99 chart: A 0-99 chart begins with zero where a hundred chart begins with 1. We need to include zero because it is one of the ten digits and just as important as 1-9. A 100 chart puts the decade numerals (10, 20, 30, etc.) in the wrong row.
6 For instance, on a hundred chart, 20 appears at the end of the teens row, where it simply doesn't belong because Mathematics GRADE 1 UNIT 2: Developing Base Ten Number Sense Georgia Department of Education Dr. John D. Barge, State School Superintendent May 2012 Page 3 of 64. All Rights Reserved Georgia Department of Education common core Georgia Performance Standards framework First Grade Mathematics Unit 2. it is not a teen number. The number 20 is the beginning of the 20's decade; therefore it should be in the beginning of the 20's row as in a 99 chart. A 0-99 chart ends with the last two-digit number, 99, whereas a hundred chart ends in 100. Again, this is the wrong place for the number 100, it should begin a whole new chart because it is the first three-digit number.
7 0 1 2 3 4 5 6 7 8 9. 10 11 12 13 14 15 16 17 18 19. 20 21 22 23 24 25 26 27 28 29. 30 31 32 33 34 35 36 37 38 39. 40 41 42 43 44 45 46 47 48 49. 50 51 52 53 54 55 56 57 58 59. 60 61 62 63 64 65 66 67 68 69. 70 71 72 73 74 75 76 77 78 79. 80 81 82 83 84 85 86 87 88 89. 90 91 92 93 94 95 96 97 98 99. As students in first grade begin to count larger amounts, they should group concrete materials into tens and ones to keep track of what they have counted. This is an introduction to the concept of place value. students must learn that digits have different values depending on their position in numbers. Although the units in this instructional framework emphasize key standards and big ideas at specific times of the year, routine topics such as counting, time, money, positional words, patterns, and tallying should be addressed on an ongoing basis through the use of routines, centers, and games.
8 This first unit should establish these routines, allowing students to gradually understand the concept of number and time. students in first grade are only asked to construct tables and charts. Picture graphs and bar graphs are not introduced until 2nd grade. Although students are not expected to count money in first grade, they should use money as a manipulative for patterns, skip counting and any counting additional counting activities. STANDARDS FOR MATHEMATICAL CONTENT. Extend the counting sequence. Count to 120, starting at any number less than 120. In this range, read and write numerals and represent a number of objects with a written numeral. Represent and interpret data. Organize, represent, and interpret data with up to three categories; ask and answer questions about the total number of data points, how many in each category, and how many more or less are in one category than in another.
9 Mathematics GRADE 1 UNIT 2: Developing Base Ten Number Sense Georgia Department of Education Dr. John D. Barge, State School Superintendent May 2012 Page 4 of 64. All Rights Reserved Georgia Department of Education common core Georgia Performance Standards framework First Grade Mathematics Unit 2. STANDARDS FOR MATHEMATICAL PRACTICE. The Standards for Mathematical Practice describe varieties of expertise that Mathematics educators at all levels should seek to develop in their students . These practices rest on important processes and proficiencies with longstanding importance in Mathematics education. students are expected to: 1. Make sense of problems and persevere in solving them. 2. Reason abstractly and quantitatively. 3. Construct viable arguments and critique the reasoning of others.
10 4. Model with Mathematics . 5. Use appropriate tools strategically. 6. Attend to precision. 7. Look for and make use of structure. 8. Look for and express regularity in repeated reasoning. **Mathematical Practices 1 and 6 should be evident in EVERY lesson**. ENDURING UNDERSTANDINGS. students can count on starting at any number less than 120. Read, write and represent a number of objects with a written numeral Quantities can be compared using matching and words. Recognize and understand patterns on a 99 chart. (tens and ones). A number line can represent the order of numbers. Problems can be solved in different ways. Important information can be found in representations of data such as tallies, tables, and charts. Tables and charts can help make solving problems easier. |
3.3 Soil Water Retention
If you have ever taken a walk along a sandy beach, you probably observed that there is a place quite near the water’s edge where the ground is dry enough and firm enough to easily walk on. In contrast, if you have walked along the edge of a lake or pond where the surrounding soil was fine-textured, you probably found that the ground near the water’s edge was wet and muddy. The differences you experienced in those two cases can be partly explained by the differing capabilities of coarse- and fine-textured soils to retain, or store, water. These capabilities are described by a relationship called the soil water retention curve. The soil water retention curve is the relationship between soil water content and matric potential. Understanding this relationship is crucial to understanding processes such as soil water storage, water flow, and plant water uptake.
3.3.1 Features of Soil Water Retention Curves
The most fundamental concept to understand about soil water retention is that soil water content is positively related to soil matric potential. As soil water content decreases, matric potential also decreases, becoming more negative. When all the pores in a soil are filled with water, the soil is at its saturated water content (θs) and the matric potential is 0. Consider the water retention curve for the Rothamsted loam shown in Fig. 3‑6. The intersection of the solid curve with the left-hand y-axis shows that for this soil θs is approximately 0.51 cm3 cm-3.
As we move to the right along the solid curve, we are moving toward more negative values of matric potential. The absolute value of matric potential, rather than matric potential itself, is plotted on the x-axis in this figure, as is common for water retention curve plots. The absolute value of matric potential is sometimes called suction. Using the absolute value for matric potential allows us to use a logarithmic scale for matric potential to compensate for its large numerical range relative to that of soil water content.
The water retention curve for the Rothamsted loam is flat between 100 cm (i.e. 1 cm) and approximately 102 cm (100 cm), then at lower matric potentials the curve bends downward. The highest matric potential at which air has displaced water in some of the pores of a previously saturated soil is called the air-entry potential (ψe). For this Rothamsted loam the air-entry potential was estimated to be -128 cm of water.
As we follow the water retention curve toward the right from the air-entry potential, we encounter a region where the decrease in water content is relatively large for each corresponding decrease in matric potential. There is a subtle inflection point approximately halfway down the descending limb of the water retention curve where the shape changes from concave to convex. The location of this inflection point may have some practical significance for soil management. The water content at this inflection point may be the optimum water content for tillage, resulting in the greatest proportion of small aggregates , and the slope of the curve at the inflection point may be a useful indicator of soil quality .
To the right of the inflection point, the steep portion of the curve tapers off into a relatively flat portion of the curve when the matric potential takes on large negative values. In this tail of the water retention curve, large decreases in matric potential are associated with only small decreases in soil water content.
3.3.2 Soil Properties Affecting Soil Water Retention
Another fundamental characteristic of soil water retention curves is that coarse-textured soils retain less water than fine textured soils at the same matric potential. Consider the substantial differences in the curves for the sand (L-soil), sandy loam (Royal), and loam (Rothamsted) textured soils in Fig. 3‑6. The sand exhibits a much lower saturated water content than the loam, in this case 0.18 cm3 cm-3 versus 0.51 cm3 cm-3. The sand also has a higher (less negative) air-entry potential than the loam, -32 cm versus -128 cm. The water retention for the medium-textured sandy loam soil is intermediate between those of the other two soils. Throughout the subsequent chapters, one common theme will be how these substantial differences in water retention between different soil textures dramatically influence water movement, plant growth, and related processes in both managed and natural ecosystems.
A secondary influence on soil water retention is the soil bulk density (Fig. 3‑7). If you compare compacted and un-compacted samples of the same soil, the compacted soil will typically have a lower porosity, lower saturated water content, and lower air-entry potential. Sufficiently compacted soils can also have higher water contents for matric potentials below the air-entry potential than a similar un-compacted soil. This pattern is evident for the samples with the highest bulk density in Fig. 3‑7 .
Advocates for conservation tillage, cover crops, soil quality, and, more recently, soil health have often stated that increasing soil organic matter improves soil water retention. However, the scientific evidence for this claim is somewhat unclear. While a number of studies have found that increasing organic matter increases soil water retention, a similar number of studies have found no such effect . One plausible hypothesis is that in some soils increasing organic matter results in decreased bulk density, leading indirectly to positive effects on water retention similar to those shown in Fig. 3‑7.
3.3.3 Hysteresis in Soil Water Retention
The soil water retention curve can also be influenced by whether the soil is undergoing wetting (sorption) or drying (desorption). When the soil water retention curve differs between wetting and drying, that phenomenon is called hysteresis. This phenomenon has a number of important effects on soil water dynamics. For example, hysteresis in the water retention curve can increase the amount of water that is stored near the soil surface after an infiltration and drainage event . Hysteresis can also slow the rate of solute leaching in soil under natural rainfall conditions with greater effects in coarse-textured than fine-textured soils . In subsequent chapters, we will further consider the effects of hysteresis. For now, we will examine its nature and causes.
For a soil exhibiting hysteresis, the equilibrium water content associated with any particular matric potential will be lower for a wetting curve than for a drying curve (Fig. 3‑8). The initial water content for the wetting or drying process also plays a role. Notice in Fig. 3‑8 the clear difference in the drying curve for the silty clay loam soil when the drying process began from full saturation compared to when the drying process began at a lower water content indicated by the point labeled “B” .
Hysteresis in the soil water retention curve has multiple possible causes including: air entrapment, contact angle hysteresis, and the “ink bottle” effect. Air-entrapment occurs when a partially-drained soil is rewetted and small pockets of air become trapped in the interior pore spaces. This entrapped air cannot easily be removed, even if the soil is submerged underwater. As a result, higher water contents occur along the primary drainage curve from a fully saturated condition than those that occur during subsequent re-wetting (e.g. Fig. 3‑8). Due to air-entrapment during re-wetting, the soil water content approaches a maximum value below the true saturated water content and this lower value is sometimes called the satiated water content. The image in Fig. 3‑9 was generated by X-ray computed tomography and shows air-entrapment in the complex macropore network of a satiated soil column . Soil chemical, physical, and biological processes can alter the amount and distribution of entrapped air over time, so the impact of air-entrapment on soil water retention can change with each subsequent re-wetting cycle.
A second potential cause of hysteresis in the soil water retention curve is a phenomenon known as contact angle hysteresis. The contact angle is the angle at which a liquid-gas interface meets a solid surface (Fig. 3‑10). In our context, this means the angle at which the interface between the soil solution and the soil gas phase contacts the soil solids. Mineral soils often have contact angles <90° and are classified as hydrophilic, i.e. having affinity for water. Organic soils and mineral soils in which much of the surface area becomes covered with organic coatings can have contact angles >90°, making them hydrophobic, i.e. tending to repel water.
To visualize contact angle hysteresis and how it may affect soil water retention, a thought experiment may help. Imagine if we added a sufficiently small volume of liquid to the drop in Fig. 3‑10a, the edge of the drop would not move but the contact angle would increase slightly. Likewise if we removed a sufficiently small amount of liquid, the contact angle would decrease slightly. Thus, contact angles for wetting and drying processes are different, i.e. contact angles exhibit hysteresis. The larger contact angles during wetting versus drying lead to higher (less negative) pressure potentials for the same water contents, consistent with Fig. 3‑8.
A third potential cause for hysteresis is the ink bottle effect, which refers to the way in which drainage from a relatively large cavity, such as the body of an old-fashioned ink bottle, can be restricted if the fluid must drain through a relatively narrow opening, such as the neck of an inverted ink bottle. The analogy is somewhat helpful, but to better understand how this phenomenon influences soil water retention, we need to understand an important related phenomenon called capillary rise. Capillary rise is the rise of liquid against the force of gravity due to the upward force produced by the attraction of the liquid molecules to a solid surface and to each other.
When you insert a small diameter tube, or capillary, into a fluid, such as water, the surface of the fluid inside the capillary may rise above that of the surrounding fluid, and the height (h) of this capillary rise is described by:
where γ is the surface tension of the fluid (N m-1), α is the contact angle of the liquid-gas interface on the wall of the tube, ρ is the fluid density (kg m-3), g is the acceleration due to gravity (m s-2), and r is the radius of the capillary (m). Thus, the smaller the radius of the capillary, the greater the height of the capillary rise. To better understand this equation, watch this video. The pressure potential just below the capillary meniscus is simply the negative of the capillary rise.
In Fig. 3‑11, two capillary tubes have been inserted into water. The height of the resulting capillary rise was greater for the uniformly narrow tube on the right than for the non-uniform tube on the left. During this filling or wetting phase, capillary rise could only raise water to the bottom of the tube section with the enlarged diameter. If instead both tubes had drained from an initially water filled condition, then the enlarged section would have remained water-filled and height of water in both tubes would have been equal. Thus, for capillary tubes or soil pores with non-uniform radii, that non-uniformity can cause hysteresis in the water retention curve .
3.3.4 Measuring Soil Water Retention Curves
Because of the complexity of soil pore networks, we are currently unable to theoretically predict soil water retention curves from first principles, although progress has been made and is being made toward that goal [26, 27]. Until that goal is achieved, we will continue to determine soil water retention curves primarily by empirical methods, i.e. methods based on measurements and experience rather than theory or logical reasoning. Measurements of soil water retention are typically, but not always, performed in the laboratory with different methods being suitable for different portions of the possible range in soil matric potential. Near saturation, intact soil samples should be used because the soil structure and inter-aggregate pores can strongly influence water retention. At matric potentials below approximately –15 kPa, the effects of soil structure on water retention appear to be negligible and smaller homogenized soil samples are typically used.
For matric potentials between 0 kPa and approximately -10 kPa, a simple hanging water column or tension table is often used to precisely control a sample’s matric potential (Fig. 3‑12a). When the sample reaches equilibrium with the imposed matric potential, i.e. when water stops flowing, the water content of the sample can be determined by the change in the mass of the sample. For matric potentials between -10 and -100 kPa, small pressurized chambers often called Tempe cells work well, particularly for intact soil samples (Fig. 3‑12b). A special porous ceramic plate at the bottom of the chamber, when saturated, allows water, but not air, to flow out of the chamber. The air pressure is increased to the absolute value of the desired matric potential, and once equilibrium is reached, the water content of the sample is determined based on the volume of water which flowed out of the sample or the change in mass of the sample. For matric potentials between -100 and -1500 kPa, specialized pressure plates in larger chambers have often been used (Fig. 3‑12c). The principle of operation is similar to that of Tempe cells, but smaller samples of homogenized soil are used with each chamber housing multiple samples, and sometimes even multiple pressure plates. At these low matric potentials, true equilibrium may take many weeks or may never be reached, and a growing body of research suggests that data from pressure plate measurements may be unreliable at matric potentials below -100 kPa [29-31]. Dewpoint potentiometers (Fig. 3‑5) offer one alternative measurement approach in this matric potential range.
3.3.5 Mathematical Functions for Soil Water Retention
Once we have measured soil water retention at several values of matric potential, we often need to fit a mathematical function to the measurements to allow calculation of water content for all other possible values of matric potential. One of the earliest widely-used water retention functions, developed by Brooks and Corey , is defined by:
where θr is the residual water content, which is conceptually the water content below which liquid water flow in the soil is no longer possible, ψe is the air-entry potential, and λ is a number related to the pore size distribution of the soil. Larger values of l indicate more uniformly-sized pores, while small values indicate a wide distribution of pore sizes are present.
A slightly simpler water retention function that is more convenient to use when performing calculations by hand was developed by Campbell and is defined by:
where again b is a parameter related to the pore size distribution. The Campbell water retention function does not include a residual water content.
A more flexible and more widely-used water retention function was developed later by van Genuchten . That function is defined by:
where α is a parameter that is inversely related to the air-entry potential, n is a pore size distribution index similar to λ, and m is a parameter often defined by m = 1 – 1/n.
The most accurate way to estimate the parameters needed for these water retention functions is to obtain measurements of soil water retention across a broad range of matric potentials and then to adjust the parameters to achieve the best possible agreement with the measured values. Measured water retention curves for a loamy sand and a silt loam soil are shown in Fig. 3‑13 along with best-fits of the Brooks and Corey, Campbell, and van Genuchten water retention functions. All three functions fit the data reasonably, with the primary difference in this case being the sharp drop in water content at the air-entry potential predicted by the Brooks and Corey and the Campbell functions. The optimized parameters for each function are listed in Table 3‑1 along with the root mean square error (RMSE), which is a measure of the error in the water content values estimated using the fitted function. For these two soils, all three functions fit the measured data well, but the van Genuchten function has the lowest RMSE.
If you do not have measurements of soil water retention for a particular soil, you can get a general idea of the shape of the water retention curve simply by knowing the soil textural class. Table 3‑2 provides estimates of the parameters for the Brooks and Corey, van Genuchten, and Campbell water retention functions based on
Table 3‑1. Best fit parameters for the Brooks and Corey, Campbell, and van Genuchten water retention functions for samples of Tifton sandy loam and Waukegan silt loam. The root mean square error (RMSE) is also shown to indicate the quality of the fit.
soil textural class alone. This table is one simple type of pedotransfer function, a statistical tool for estimating unknown soil properties from known soil properties. The values in this table are suitable for educational purposes and general approximations of soil water retention behavior but not for many research or design purposes. For more reliable parameter estimates, you can use more complex and more accurate pedotransfer functions if you know additional soil properties such as percent sand, silt, and clay or bulk density or if you have one or more measurements of soil water retention available [35-37].
Table 3‑2. Average parameters for the Brooks and Corey, van Genuchten, and Campbell soil water retention functions by USDA soil textural class. The residual water content (qr), saturated water content (qs), α, and n values were based on Schaap et al. (2001), the air-entry potential (ψe) and λ values were taken from Rawls et al. (1982), and the b values were taken from Rawls et al. (1992). Variables followed by a * are the back-transformed log mean for the textural class. |
Students are introduced to the formal process of solving an equation: starting from the assumption that the original equation has a solution. Students explain each step as following from the properties of equality.
Students identify equations that have the same solution set.
Lesson 12 Summary
If is a solution to an equation, it will also be a solution to the new equation formed when the same number is added to (or subtracted from) each side of the original equation, or when the two sides of the original equation are multiplied by (or divided by) the same non-zero number. These are referred to as the Properties of Equality.
If one is faced with the task of solving an equation, that is, finding the solution set of the equation:
Use the commutative, associative, distributive properties, AND use the properties of equality (adding, subtracting, multiplying by non-zeros, dividing by non-zeros) to keep rewriting the equation into one whose solution set you easily recognize. (We believe that the solution set will not change under these operations.)
Determine which of the following equations have the same solution set by recognizing properties, rather than solving.
a. 2x + 3 = 13 - 5x
b. 6 + 4x = -10x + 26
c. 6x + 9 = 13/5 - x
d. 0.6 + 0.4x = -x + 2.6
e. 3(2x + 3) = 13/5 - x
f. 4x = -10x + 20
g. 15(2x + 3) = 13 - 5x
h. 15(2x + 3) + 97 = 110 - 5x
Rotate to landscape screen format on a mobile phone or small tablet to use the Mathway widget, a free math problem solver that answers your questions with step-by-step explanations.
We welcome your feedback, comments and questions about this site or page. Please submit your feedback or enquiries via our Feedback page. |
The genetic sequence that encodes for every protein is different, and cycle sequencing allows scientists to extract the exact DNA sequence from an unknown DNA strand. Cycle sequencing utilizes special kinds of nucleotides known as di-deoxynucleotides (along with normal deoxynucleotides) in order to get short replicas of the complimentary DNA strand of the parent DNA which has to be studied. The dideoxynucleotides (didNTP) quench the reaction because of their lack of the 3′-hydroxyl group, which is required to form the next phosphodiester linkage with the next nucleotide. The lack of this 3′-hydroxyl disables the extension of the DNA strand any further making shorter replicas of the parent molecule. The dideoxynucleotides (didNTP) are also generally labelled with a fluorescent tag. This allows the observation and characterization of the DNA sequence. The animation below demonstrates the whole process and will allow you to understand better of how cycle sequencing helps in the sequencing of the DNA.
We also recommend that you check out another animation on Sanger Sequencing – Early DNA Sequencing.
AnimationSorry, either Adobe flash is not installed or you do not have it enabled
The sequencing method developed by Fred Sanger forms the basis of automated “cycle” sequencing reactions today. Fluorescent dyes are added to the reactions, and a laser within an automated DNA sequencing machine is used to analyze the DNA fragments produced.
- To sequence a piece of DNA, you need:
- The DNA you want to sequence (template DNA)
- A short DNA “primer” that is complimentary to the DNA you want to sequence,
- An enzyme called DNA polymerase, four nucleotides.
To this mix, we also add a second type of nucleotide; one that has a slightly different chemical formula. These “dideoxynucleotides” can be recognised by a DNA sequencer.
To start the sequencing reaction, this mixture is heated, so the template DNA’s two complementary strand separate.
Then the temperature is lowered, so that the short “primer” sequence finds its complementary sequence in the template DNA.
Finally, the temperature is raised slightly. This allows the enzyme to bind the DNA and create a new strand of DNA.
The sequence of this new DNA is complimentary to the original DNA strand.
The enzyme makes no distinction between dNTPs or didNTPs. Each time a didNTP is incorporated, in this case didATP, the synthesis stops.
Because billions of DNA molecules are present in the test tube, the strand can be terminated at any position. This results in collections of DNA strands of many different lengths.
The sequencing reaction is transferred from the tube to a lane of a polyacrylamide gel.
The gel is placed into a DNA sequencer for electrophoresis and analysis.
The fragments migrate according to size , and each is detected as it passed a laser beam at the bottom of the gel.
Each type of dideoxynucleotide emits a colored light of a characteristic wavelength and is recorded band on a simulated gel image.
The computer program interprets the raw data and outputs an electropherogram with colored peaks representing each letter in the sequence.
If these are the fragments from the sequencing reaction, how would they sort out?
The simulated gel image is read from bottom to top, starting with the smallest fragment. |
Moving a nanosatellite around in space takes only a tiny amount of thrust. Engineers from Michigan Technological University and the University of Maryland teamed up, put a nanoscale rocket under a microscope, and watched what happened.
To Infinity and Beyond with Nanosatellites
When a satellite is placed into orbit by a rocket, its journey has only just begun. Released into space on its own, the satellite needs an on-board thruster so it can navigate to its desired location and then remain there despite the many things that do their best to kick it off course.
"Space isn't the empty vacuum of nothingness many of us assume," says Kurt Terhune, a mechanical engineering graduate student and the lead author on a new study published in Nanotechnology this week. "Space actually has a small amount of atmosphere that causes drag, solar winds that push satellites off course and space debris that present a constant hazard."
This is especially important in the new era of space exploration. Dozens of companies plan to launch thousands of tiny satellites—some as small as shoe boxes—within the next five years. Each of these nanosatellites will need its own tiny thruster. One solution comes in the form of an electrospray thruster that Terhune studies along with his advisor, L. Brad King, the Ron and Elaine Starr Professor of Space Systems Engineering. The propellants for these thrusters are called “ionic liquids,” which are room-temperature liquid salts.
"Much like the sodium chloride table salt many of us enjoy on French fries, ionic liquids are comprised of roughly equal numbers of positively and negatively charged ions," Terhune says, explaining that electric fields, supplied by spacecraft batteries, can exert forces on these ions and eject them into space at great velocity. The emitted ion beam can provide the gentle thrust that the nanosatellite needs.
Many of these tiny electrospray thrusters packed together could propel a spacecraft over great distances, maybe even to the nearest exoplanet. Electrospray thrusters are currently being tested on the European Space Agency’s LISA Pathfinder, which hopes to poise objects in space so precisely that they would only be disturbed by gravitational waves.
But these droplet engines have a problem: sometimes they form needle-like spikes that disrupt the way the thruster works—they get in the way of the ions flowing outward and turn the liquid to solid. Terhune and King wanted to find out how this actually happens.
"The challenge is obtaining images of a material in the presence of such a strong electric field, which is why we turned to John Cumings at the University of Maryland," King says, explaining that Cumings is known for his work with challenging materials. To make things harder, the tip of the droplet can move around by a few microns while the thruster is operating. A few microns is a small distance, but compared with the features that the team needed to observe, this made the experiment like trying to find a needle in a haystack.
"Finding the actual nano-scale tip of the droplet with an electron microsope is like trying to look through a soda straw to find a penny somewhere on the floor of a room," King says. "And if that penny moves, like the tip of the molten salt droplet does—then it's off camera, and you have to start searching all over again."
At the Advanced Imaging and Microscopy Lab at the University of Maryland, Cumings put the tiny thruster in a transmission electron microscope (TEM)—an advanced scope that can see things down to millionths of a meter. They watched as the droplet elongated and sharpened to a point, and then started emitting ions. Then the tree-like defects began to appear.
Back in Orbit
The researchers say that figuring out why these branched structures grow could help prevent them from forming. The problem occurs as the microscope's high-energy electron beam exposes the fluid to radiation, breaking some of the bonds between atoms in the ions. This damages the molten salt's molecular structure, so it gels and piles up.
"We were able to watch the dendritic structures accumulate in real time," Terhune says. "The specific mechanism still needs to be investigated, but this could have importance for spacecraft in high-radiation environments."
He adds that the microscope's electron beam is more powerful than natural settings, but the gelling could affect the lifetime of electrospray engines in deep space and geosynchronous orbits where most of the planet's satellites circle. And you don't have to be a rocket scientist to know figuring out the physics to improve that lifetime is a good idea.
Michigan Technological University is a public research university, home to more than 7,000 students from 54 countries. Founded in 1885, the University offers more than 120 undergraduate and graduate degree programs in science and technology, engineering, forestry, business and economics, health professions, humanities, mathematics, and social sciences. Our campus in Michigan’s Upper Peninsula overlooks the Keweenaw Waterway and is just a few miles from Lake Superior. |
Presentation on theme: "Types of Chemical Reactions And Solution Stoichiometry"— Presentation transcript:
1 Types of Chemical Reactions And Solution Stoichiometry Chapter 4
2 Section 4.1: Water, The Common Solvent Hydration of an ionic compound will occur when the partial positive end of a water becomes attracted to the anions in the compound; likewise for the partial negative center of the water and the cations.Solubility depends on the strength of the intermolecular attractions between the ions and water, as well as the intramolecular attractions of the cations and anions of the compound.Water has polar, covalent bonds. The oxygen atom is more electronegative, making electrons more attracted to it than to the hydrogen. This will create a dipole on the molecule, with a more positive end and a more negative end.When an ionic compound dissolves in water, the ions dissociate completely, as shown in the equation.NH4NO3(s) NH4+(aq) + NO3-(aq)
3 What can dissolve in H2O? WHY? Insoluble Soluble Fats Alcohols ex: bacon greaseOilsex: cooking oilNon-Polar Substancesex: turpentineSolubleAlcoholsex: C2H5OHSugarsex: C6H12O6Ionic compoundsex: NaCl, KOH, LiBrDraw the Lewis structures for alcohol, glucose, and water. Observe how the highly EN O and less EN H are present in each of these structures (hydrogen bonding).Like Dissolves Like speaks only about polarity of a molecule. Polar things are attracted to other polar molecules; non polar molecules are attracted to other non-polar molecules. It has nothing to do with shape.WHY?Because of intermolecular forces: the OH group on the sugars and alcohols is particularly attractive to a water molecule.Generally speaking: “Like Dissolves Like”
4 Section 4.2: Strong and Weak Electrolytes Solute + Solvent = SolutionStrong electrolytes conduct electricityWeak electrolytes barely conduct electricityConductivity depends upon ionizationSolute is what is being dissolved (not necessarily solid).Solvent is doing the dissolving (typically more of this present).Solution is homogeneous mixture.More ions present, more conductivity because the ions are the substances capable of carrying a charge and complete the circuit.
5 All of these dissociate completely in water. Weak Electrolytes Strong ElectrolytesSoluble saltsStrong acidsStrong basesAll of these dissociate completely in water.Weak ElectrolytesWeak acidsWeak basesAll of these partially dissociate in waterHCl H+ + Cl-NaOH Na+ + OH-HC2H3O H+ + C2H3O2Weak electrolytes are represented at equilibrium because the ions combine to make the molecule again. In the case of ammonia, the ionization occurs very slowly since it is not favored to occur.Non-Electrolytes are completely molecular substances in water (not even a little dissociation); Non polar substances.
6 Section 4.3: Composition of Solutions Concentration is measured in molarity, molality, and many others.Concentration DOES NOT directly express the number of ions present in a solution.M= moles soluteliters solutionMgCl2 Mg Cl-1.0 M M M
7 Sample ProblemsCalculate the number of moles of Cl- ions in 1.75 L of 1 x 10-3 M ZnCl2.A chemist needs 1.0 L of 0.20 M K2Cr2O7 solution. How much solid K2Cr2O7 must be weighed out to make this solution?DUH! This is chem 1 material.
8 Standard Solution: a solution whose concentration is accurately known. Example: M HCl; M NaOHCreating dilutionsChemical analysis of a compoundTheoretical CalculationsWhat would you do to prepare a standard solution? In your answer, include specific pieces of glassware, techniques, or equipment you should use.ANSWERNOW
9 moles before dilution = moles after dilution DilutionsDilution is the process used to make the solution less concentrated.moles before dilution = moles after dilutionBecause M =mol/L,V1(M1) = V2(M2)Lab Technique: Use a pipet to deliver the correct amount of original solution to a volumetric flask. Add some water, swirl. Fill to line, invert.
10 You have a large quantity of 1. 5 M NaOH solution available You have a large quantity of 1.5 M NaOH solution available. Dilute this to 100.0mL of a 0.05 M solution. Submit your calculations and store your final product for use in our first lab.DONOW
11 Section 4.4: Types of Chemical Reactions There are more than just these few types, but in this chapter we will cover…PrecipitationAcid-baseOxidation-Reduction
12 Section 4.5: Precipitation Reactions Precipitation Reactions (double displacement)Forms a solid precipitate from aqueous reactants.Color of precipitate can help in identificationSolubility rules help BUNCHESMORE…
13 Solubility RULESAll compounds containing alkali metal cations and the ammonium ion are soluble.All compounds containing NO3-, ClO4-, ClO3-, and C2H3O2- anions are soluble.All chlorides, bromides, and iodides are soluble except those containing Ag+, Pb2+, and Hg2+.All sulfates are soluble except those containing Hg2+, Pb2+, Sr2+, Ca2+, and Ba2+.All hydroxides are only slightly soluble, except those containing an alkali metal, Ca2+, Ba2+,and Sr2+. NaOH and KOH are the most soluble hydroxides.All compounds containing PO43-, S2-, CO32-, and SO32- are only slightly soluble except for those containing alkali metals or the ammonium ion.
14 Practice Predicting Potassium nitrate and barium chloride Sodium sulfate and lead (II) nitratePotassium hydroxide and iron (III) nitrate
15 ALL REACTIONS SHOULD BE WRITTEN IN NET IONIC FORM
16 Section 4.7: Stoichiometry of Precipitation Reactions Stoichiometry in a precipitation reaction is performed just like stoichiometry for a molecular reaction.You need to know which ion comes from which molecular formula.
17 Sample problemCalculate the mass of solid NaCl needed to add to 1.5 L of 0.1 M silver nitrate solution to precipitate all Ag+ ions in the form of AgCl.Net Ionic Eq: Ag+ + Cl- AgCl
18 General Format Write the Net Ionic Equation Calculate the moles presentIdentify the Limiting Reactant*Use Mole Ratio(s)Fancy-fy your answer (put in correct units)
19 Try Me!What mass of precipitate will be produced when 50.0 mL of 0.200M aluminum nitrate is added to mL of M potassium hydroxide?
20 Section 4.8: Acid-Base Reactions Acids yield H+Bases yield OH -Definitions of acid and base vary.Arrhenius and Bronsted/Lowry are common theories.Acid-Base rxns are called NEUTRALIZATIONSBases are proton acceptorsAcidsare protondonors
21 Strong Acid-Strong Base (HCl) (NaOH) Both dissociate completelyH+ + OH- H2ONa+ and Cl- are spectators.Weak Acid - Strong Base(HC2H3O2) (KOH)Acetic acid will not dissociateKOH will completelyHC2H3O2 + OH- H2O + C2H3O2-K+ is a spectator.
22 Stoichiometry sampleWhat volume of M HCl is needed to neutralize 25 mL of 0.35 M NaOH?H+ + OH- H2O
24 To complete a successful titration… The reaction between the titrant and the analyte should be known (you should know WHAT substances you have)The equivalence point should be marked accurately (you should use the right indicator)Volume of the titrant needed to reach the equivalence point should be recorded accurately (you should use a buret!)
26 Titration Try Me Calc 1A 50.0 mL sample of a sodium hydroxide solution is to be standardized M of KHP (potassium hydrogen phthalate, KHC8H4O4) is used as the titrant. KHP has one acidic hydrogen mL of the KHP solution is used to titrate the sodium hydroxide solution to the endpoint. What is the resulting concentration of the analyte?
27 Titration Try Me Calc 2How many milliliters of a M sodium hydroxide solution are needed to neutralize 20.0 mL of a M sulfuric acid solution?
28 Norton TutorialGo to the websiteFind the tutorial on Acid/Base ionization.Complete the tutorial question form.
29 Section 4.9: Redox Reactions What is it??-A reaction that occurs in conjunction with a transfer of electrons.We assign oxidation states to individual atoms in a reaction to observe the change in electrons.Oxidation statesare written withthe +/- signbefore the quantity.Ion charges arewritten with the+/- sign behindthe quantity.
30 Assigning Oxidation States The Oxidation State of…Quantity of Oxid. StateExamplesAn atom in element formZeroNa(s), O2(g)A monatomic ionEqual to the charge on the ionNa+, Cl-Fluorine in a compound-1 , alwaysHF, PF3Oxygen in a compound-2, except in peroxide where it is -1H2O, CO2, H2O2Hydrogen in a compound+1, alwaysH2O, HCl, NH3
31 Oxidation= an increase in the oxidation state Reduction = a decrease in the oxidation stateoxidation2Na(s) + Cl2(g) 2NaCl(s)reduction
32 The metal is oxidized and the other substance is reduced. Metal AtomOxidized Substance:Loss of electronsOxidation state increasesGets SmallerCalled the Reducing AgentOther Atome-Other IonMetalIonReduced Substance:Gain of electronsOxidation state decreasesGets BiggerCalled the Oxidizing AgentThe metal is oxidized and the other substance is reduced.
33 Section 4.10: Balancing Redox How To, in Acid:Write the ½ reactionsBalance the non-H and non-O atomsBalance O by adding H2O where neededBalance H by adding H+ where neededBalance charge using e-Multiply by coefficients until both e- are equal for each ½ reactionAdd the ½ reactions together (cancel stuff)
Your consent to our cookies if you continue to use this website. |
AI bias stands for irregularities in the output of machine learning algorithms. These irregularities could be due to the prejudiced assumptions which are made during the development process of an algorithm. Biases in Artificial Intelligence could also exist because of the prejudices in the training data. The problem of bias in Artificial intelligence does have a historical precedence. For example, Back in 1988, the United Kingdom Commission for Racial Equality investigated a British medical school and found them guilty of discrimination. The computer program that the institution was using to analyze the interview of the candidates was determined to be biased against women and those with non-European names. Discrimination predominantly exists because of data in AI and Machine learning programs. This “algorithmic bias” arises when AI and computing systems do not act in complete objective equality instead they act in accordance with the prejudices and stereotypes that exist within the human who formulated, cleaned and structured their data. In addition to that, there is technical bias which arises from technical limitations, whether they are known or not. Technical bias includes the tools and algorithms which are frequently used by an AI system. Another form of bias is the emergent bias which occurs only in the context of using the system, emergent bias is involved whenever some information is introduced or when there’s a user and system design mismatch.
WHY BIAS ARISES IN AI SYSTEMS ?
There are two reasons due to which biases arise in AI systems:
● Cognitive biases: The effective feelings towards a person or a group based on their perceptions is known as cognitive biases. Psychologists have concluded that there are more than 180 human biases. These biases have been defined and classified and each can affect individuals. They could be introduced into machine learning algorithms via either by designers who unknowingly introduce them into the model or a training data set which include those biases.
● Lack of complete data: Biases can occur if the data is not complete, since data cannot be representative if it is not complete and therefore it may include bias. For example, most psychology research studies include results from undergraduate students which are a specific group and do not represent the whole population
EXAMPLES OF AI BIAS
● There are numerous examples of human bias we observe which are happening in tech platforms. Since data on tech platforms is later used to train machine learning models, these biases, in the longer run, produce biased machine learning models.
● Up until 2019, Facebook was allowing its advertisers to target adverts according to gender, race, and religion on purpose. For example, women were prioritized in job advertisements for roles in nursing or secretarial work, on the other hand job ads for janitors and taxi drivers had been mostly shown to men, particularly men from minority communities. Later on, Facebook decided to not allow employers to specify age, gender or race which can lead to targeting of this sort in its ads anymore.
● AI Bias was found in a risk assessment software known as COMPAS. Jurists used it to potentially predict which criminals were most likely to offend. When news organization ProPublica compared COMPAS risk assessments for 10,000 people arrested in one county in Florida with data showing which ones went on to reoffend, it was discovered that with a right algorithm the decision making was fair. But when the algorithm was wrong, people of color were almost twice as likely to be labeled a higher risk, yet they did not re-offend.
● The May 2016 accident involving a Tesla Model S and a tractor trailer in Williston, Florida is one such instance of technical bias. In the accident the Tesla driver succumbed to the injuries, the driver had autopilot engaged when a tractor trailer drove across a divided highway perpendicular to the car. Tesla later shared in an article “Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied.”
● In 2016, Microsoft launched an AI-based conversational chatbot on Twitter that was designed to interact with people through tweets and direct messages. However, it started replying with highly offensive and racist messages shortly after its release.
● The chatbot was trained on anonymous public data and had an in-built internal learning feature, which led to a targeted attack by a group of people to introduce racist bias in the system. Some users were able to inundate the bot with misogynistic, racist and anti-semitic language.
● On June 30, 2020, the Association for Computing Machinery in New York City ordered the cessation of private and government use of facial recognition technologies due to what they described as a "clear bias based on ethnic, racial, gender and other human characteristics."
● The ACM said that the bias had an exclusive effect particularly on the lives, livelihoods and fundamental rights of individuals in specific demographic groups. Due to the pervasion pushed on due to the nature of AI, it is crucial to address the algorithmic bias issues to make the systems more fair and inclusive.
SOLUTION TO AI BIAS:-
● We must hold accountability while assessing the numerous ways through which AI can improve on conventional human decision-making. Machine learning systems disregard or in other ways discredit variables that do not accurately predict consequences in the data available to them. This is in stark contrast to humans, who could potentially lie about or not even realize the factors that led them to have a bias while, for instance, hiring or disregarding a particular job candidate.
● Numerous resources are available to assist developers with their data production lines these days, but not all tools are built on the same line. Some unconventional crowdsourcing models come with inherent risks because they remain a mystery for everyone. With no relationship established with the workers who process the data, there is absolutely no way to amend subtle problems that might emerge. As a result of this, bias in important datasets is inevitable to occur. This renders those tools inadequate for any business looking to offload enterprise-grade work.
● In order to mitigate the unintended bias developers have to develop systems very attentively. This is independent of the way organizations decide to manage their data. Accountability is a relationship-driven business model, so developers must identify ways to strategically deploy people in the data annotation process.
● Communication and ability to evolve processes important for developers to ensure that their AI systems consume training data that reflects accurately on the ground. Developers should also have the ability to initiate roadblocks and make necessary amendments so that potential bias in the data could be eliminated.
● Investing more in diversifying the AI field itself can also be an option to check bias in the AI system. A more diverse AI community would be better equipped to anticipate, review, and spot bias and engage communities affected. This will require investments in education and opportunities.
● We need to consider how humans and machines can work together so that biases could be mitigated. Some “human-in-the-loop” systems can potentially make recommendations or provide options so that humans could double-check or can choose from a range of options. Transparency about these algorithms’ confidence in its recommendation can potentially help humans to understand. The gravity of the problem that these biases could have on AI systems.
Overall, diversity and proper representation of the marginalised communities can solve the problem of bias in the AI lifecycle. Many of such issues were covered in Discriminating Systems, a major report from the AI now Institute in 2019, which concluded that diversity and AI bias issues should not be considered separately because “they are two sides of the same problem”. |
Learn something new every day More Info... by email
Cross tabulation is a method used when creating graphs which display how different items inter-relate. This allows those creating and reviewing the graphs to see where two or more pieces of data directly relate to or affect one another. Cross tabulation is typically used in surveys, market research and sometimes even financial reports when it is clear that the multiple pieces of information affect each others’ outcome.
The use of cross tabulation is also sometimes referred to as a chi-square. The way these tables are set up shows the results of multiple variables when compared against each other. For example, a person may be creating a table of how many men and women drive green cars versus blue cars. To gather the data, this person would have to interview several people and write the information down. If he interviewed 40 people, the table he created from this data may look something like this:
|Gender||Green Car||Blue Car|
Cross tabulation has benefits even in everyday life. A person may use it to track her family’s monthly spending over a period of time, and even school children are often taught to compile simple data this way. A visual representation of large amounts of variable data is typically easier for people to understand than pages and pages of written data.
In industry, data like this is very important when attempting to forecast markets trends, review financial variables over a long period of time and even track the health records of an entire country. It is very common for these tables to include many variables, and several different tables may even be grouped together which compare and contrast dozens of elements. Cross tabulation has use in many areas like marketing, product management and sometimes even in staff research.
The process of cross tabulation tends to give a more complete picture of past and current trends as well as possible future outcomes. It helps those analyzing the data gain an understanding of what factors affect their given elements, form hypothesis of what may happen if new information or an element is added, and see what information is not in their forecasts that should be.
The cross tabulation method has been around for many years, but in the past, people were forced to manually gather, organize and compile all the data into hand-made tables and reports. In modern times, this is generally done with computer programs which can organize the data, add up complex mathematical problems and create full tables almost automatically. Then it is simply up to the user to check the information for discrepancies or errors and present the information to the appropriate people. |
A torn (perforated) eardrum is not usually serious and often heals on its own without any complications. Complications sometimes occur such as hearing loss and infection in the middle ear. A small procedure to repair a perforated eardrum is an option if it does not heal by itself, especially if you have hearing loss.
What is the eardrum and how do we hear?
The eardrum (also called the tympanic membrane) is a thin skin-like structure in the ear. It lies between the outer (external) ear and the middle ear.
The ear is divided into three parts - the outer, middle and inner ear. Sound waves come into the outer ear and hit the eardrum, causing the eardrum to vibrate.
Behind the eardrum are three tiny bones (ossicles). The vibrations pass from the eardrum to these middle ear bones. The bones then transmit the vibrations to the cochlea in the inner ear. The cochlea converts the vibrations to sound signals which are sent down a nerve to the brain, which we 'hear'.
The middle ear behind the eardrum is normally filled with air. The middle ear is connected to the back of the nose by the Eustachian tube. This allows air in and out of the middle ear.
What is a perforated eardrum and what problems can it cause?
A perforated eardrum is a hole or tear that has developed in the eardrum. It can affect hearing. The extent of hearing loss can vary greatly. For example, tiny perforations may only cause minimal loss of hearing. Larger perforations may affect hearing more severely. Also, if the tiny bones (ossicles) are damaged in addition to the eardrum then the hearing loss would be much greater than, say, a small perforation which is not close to the ossicles.
With a perforation, you are at greater risk of developing an ear infection. This is because the eardrum normally acts as a barrier to bacteria and other germs that may get into the middle ear.
What can cause a perforated eardrum?
- Infections of the middle ear, which can damage the eardrum. In this situation you often have a discharge from the ear as pus runs out from the middle ear.
- Direct injury to the ear - for example, a punch to the ear.
- A sudden loud noise - for example, from a nearby explosion. The shock waves and sudden sound waves can tear (perforate) the eardrum. This is often the most severe type of perforation and can lead to severe hearing loss and ringing in the ears (tinnitus).
- Barotrauma. This occurs when you suddenly have a change in air pressure and there is a sharp difference in the pressure of air outside the ear and in the middle ear. For example, when descending in an aircraft. Pain in the ear due to a tense eardrum is common during height (altitude) changes when flying. However, a perforated eardrum only happens rarely in extreme cases. See separate leaflet called Barotrauma of the Ear for more details.
- Poking objects into the ear. This can sometimes damage the eardrum.
- Grommets. These are tiny tubes that are placed through the eardrum. They are used to treat glue ear, as they allow any mucus that is trapped in the middle ear to drain out from the ear. When a grommet falls out, there is a tiny gap left in the eardrum. This heals quickly in most cases.
How is a perforated eardrum diagnosed?
A doctor can usually diagnose a torn (perforated) eardrum simply by looking into the ear with a special torch called an otoscope. However, sometimes it is difficult to see the eardrum if there is a lot of inflammation, wax or infection present in the ear.
What is the treatment for a perforated eardrum?
No treatment is needed in most cases
A torn (perforated) eardrum will usually heal by itself within 6-8 weeks. It is a skin-like structure and, like skin that is cut, it will usually heal. In some cases, a doctor may prescribe antibiotic medicines if there is an infection or risk of infection developing in the middle ear whilst the eardrum is healing.
It is best to avoid water getting into the ear whilst it is healing. For example, your doctor may advise that you put some cotton wool or similar material into your outer ear whilst showering or washing your hair. It is best not to swim until the eardrum has healed.
Occasionally, a perforated eardrum gets infected and needs antibiotics. Some ear drops can occasionally damage the nerve supply to the ear. Your doctor will select a type that does not have this risk, or may give you medication by mouth.
Surgical treatment is sometimes considered
A small operation is an option to treat a perforated drum that does not heal by itself. There are various techniques which may be used to repair the eardrum, depending on how severe the damage is. This operation may be called a myringoplasty or a tympanoplasty. These operations are usually successful in fixing the perforation and improving hearing.
However, not all people with an unhealed perforation need treatment. Many people have a small permanent perforation with no symptoms or significant hearing loss. Treatment is mainly considered if there is hearing loss, as this may improve if the perforation is fixed. Also, swimmers may prefer to have a perforation repaired, as getting water in the middle ear can increase the risk of having an ear infection.
If you have a perforation that has not healed by itself, a doctor who is an ear specialist will advise on whether treatment is necessary.
Did you find this information useful?
Further reading & references
- Castro O, Perez-Carro AM, Ibarra I, et al; Myringoplasties in children: our results. Acta Otorrinolaringol Esp. 2013 Mar-Apr 64(2):87-91. doi: 10.1016/j.otorri.2012.06.012. Epub 2012 Dec 20.
- Kumar N, Madkikar NN, Kishve S, et al; Using middle ear risk index and et function as parameters for predicting the outcome of tympanoplasty. Indian J Otolaryngol Head Neck Surg. 2012 Mar 64(1):13-6. doi: 10.1007/s12070-010-0115-4. Epub 2011 Feb 2.
- British National Formulary; NICE Evidence Services (UK access only)
- Venekamp RP, Prasad V, Hay AD; Are topical antibiotics an alternative to oral antibiotics for children with acute otitis media and ear discharge? BMJ. 2016 Feb 4 352:i308. doi: 10.1136/bmj.i308.
Disclaimer: This article is for information only and should not be used for the diagnosis or treatment of medical conditions. EMIS has used all reasonable care in compiling the information but make no warranty as to its accuracy. Consult a doctor or other health care professional for diagnosis and treatment of medical conditions. For details see our conditions. |
Definition: The Sampling Distribution of Proportion measures the proportion of success, i.e. a chance of occurrence of certain events, by dividing the number of successes i.e. chances by the sample size ’n’. Thus, the sample proportion is defined as p = x/n.
The sampling distribution of proportion obeys the binomial probability law if the random sample of ‘n’ is obtained with replacement. Such as, if the population is infinite and the probability of occurrence of an event is ‘π’, then the probability of non-occurrence of the event is (1-π). Now consider all the possible sample size ‘n’ drawn from the population and estimate the proportion ‘p’ of success for each. Then the mean (?p) and the standard deviation (σp) of the sampling distribution of proportion can be obtained as:
?p = mean of proportion
π = population proportion which is defined as π = X/N, where X is the number of elements that possess a certain characteristic and N is the total number of items in the population.
σp = standard error of proportion that measures the success (chance) variations of sample proportions from sample to sample
n= sample size, If the sample size is large (n≥30), then the sampling distribution of proportion is likely to be normally distributed.
The following formula is used when population is finite, and the sampling is made without the replacement:
Leave a Reply |
This parallelogram is a rhomboid as it has no right angles and unequal sides.
|Edges and vertices||4|
|Symmetry group||C2, +, (22)|
|Area||b × h (base × height);
ab sin θ (product of adjacent sides and sine of any vertex angle)
In Euclidean geometry, a parallelogram is a simple (non-self-intersecting) quadrilateral with two pairs of parallel sides. The opposite or facing sides of a parallelogram are of equal length and the opposite angles of a parallelogram are of equal measure. The congruence of opposite sides and opposite angles is a direct consequence of the Euclidean parallel postulate and neither condition can be proven without appealing to the Euclidean parallel postulate or one of its equivalent formulations.
By comparison, a quadrilateral with just one pair of parallel sides is a trapezoid in American English or a trapezium in British English.
The three-dimensional counterpart of a parallelogram is a parallelepiped.
The etymology (in Greek παραλληλ-όγραμμον, a shape "of parallel lines") reflects the definition.
- 1 Special cases
- 2 Characterizations
- 3 Other properties
- 4 Area formula
- 5 Proof that diagonals bisect each other
- 6 Parallelograms arising from other figures
- 7 See also
- 8 References
- 9 External links
- Rhomboid – A quadrilateral whose opposite sides are parallel and adjacent sides are unequal, and whose angles are not right angles
- Rectangle – A parallelogram with four angles of equal size
- Rhombus – A parallelogram with four sides of equal length.
- Square – A parallelogram with four sides of equal length and angles of equal size (right angles).
- Two pairs of opposite sides are equal in length.
- Two pairs of opposite angles are equal in measure.
- The diagonals bisect each other.
- One pair of opposite sides are parallel and equal in length.
- Adjacent angles are supplementary.
- Each diagonal divides the quadrilateral into two congruent triangles.
- The sum of the squares of the sides equals the sum of the squares of the diagonals. (This is the parallelogram law.)
- It has rotational symmetry of order 2.
- The sum of the distances from any interior point to the sides is independent of the location of the point. (This is an extension of Viviani's theorem.)
Thus all parallelograms have all the properties listed above, and conversely, if just one of these statements is true in a simple quadrilateral, then it is a parallelogram.
- Opposite sides of a parallelogram are parallel (by definition) and so will never intersect.
- The area of a parallelogram is twice the area of a triangle created by one of its diagonals.
- The area of a parallelogram is also equal to the magnitude of the vector cross product of two adjacent sides.
- Any line through the midpoint of a parallelogram bisects the area.
- Any non-degenerate affine transformation takes a parallelogram to another parallelogram.
- A parallelogram has rotational symmetry of order 2 (through 180°) (or order 4 if a square). If it also has exactly two lines of reflectional symmetry then it must be a rhombus or an oblong (a non-square rectangle). If it has four lines of reflectional symmetry, it is a square.
- The perimeter of a parallelogram is 2(a + b) where a and b are the lengths of adjacent sides.
- Unlike any other convex polygon, a parallelogram cannot be inscribed in any triangle with less than twice its area.
- The centers of four squares all constructed either internally or externally on the sides of a parallelogram are the vertices of a square.
- If two lines parallel to sides of a parallelogram are constructed concurrent to a diagonal, then the parallelograms formed on opposite sides of that diagonal are equal in area.
- The diagonals of a parallelogram divide it into four triangles of equal area.
All of the area formulas for general convex quadrilaterals apply to parallelograms. Further formulas are specific to parallelograms:
A parallelogram with base b and height h can be divided into a trapezoid and a right triangle, and rearranged into a rectangle, as shown in the figure to the left. This means that the area of a parallelogram is the same as that of a rectangle with the same base and height:
The base × height area formula can also be derived using the figure to the right. The area K of the parallelogram to the right (the blue area) is the total area of the rectangle less the area of the two orange triangles. The area of the rectangle is
and the area of a single orange triangle is
Therefore, the area of the parallelogram is
Another area formula, for two sides B and C and angle θ, is
The area of a parallelogram with sides B and C (B ≠ C) and angle at the intersection of the diagonals is given by
When the parallelogram is specified from the lengths B and C of two adjacent sides together with the length D1 of either diagonal, then the area can be found from Heron's formula. Specifically it is
where and the leading factor 2 comes from the fact that the number of congruent triangles that the chosen diagonal divides the parallelogram into is two.
Area in terms of Cartesian coordinates of vertices
Let vectors and let denote the matrix with elements of a and b. Then the area of the parallelogram generated by a and b is equal to .
Let vectors and let . Then the area of the parallelogram generated by a and b is equal to .
Let points . Then the area of the parallelogram with vertices at a, b and c is equivalent to the absolute value of the determinant of a matrix built using a, b and c as rows with the last column padded using ones as follows:
Proof that diagonals bisect each other
- (alternate interior angles are equal in measure)
- (alternate interior angles are equal in measure).
(since these are angles that a transversal makes with parallel lines AB and DC).
Also, side AB is equal in length to side DC, since opposite sides of a parallelogram are equal in length.
Therefore triangles ABE and CDE are congruent (ASA postulate, two corresponding angles and the included side).
Since the diagonals AC and BD divide each other into segments of equal length, the diagonals bisect each other.
Separately, since the diagonals AC and BD bisect each other at point E, point E is the midpoint of each diagonal.
Parallelograms arising from other figures
An automedian triangle is one whose medians are in the same proportions as its sides (though in a different order). If ABC is an automedian triangle in which vertex A stands opposite the side a, G is the centroid (where the three medians of ABC intersect), and AL is one of the extended medians of ABC with L lying on the circumcircle of ABC, then BGCL is a parallelogram.
The midpoints of the sides of an arbitrary quadrilateral are the vertices of a parallelogram, called its Varignon parallelogram. If the quadrilateral is convex or concave (that is, not self-intersecting), then the area of the Varignon parallelogram is half the area of the quadrilateral.
Tangent parallelogram of an ellipse
For an ellipse, two diameters are said to be conjugate if and only if the tangent line to the ellipse at an endpoint of one diameter is parallel to the other diameter. Each pair of conjugate diameters of an ellipse has a corresponding tangent parallelogram, sometimes called a bounding parallelogram, formed by the tangent lines to the ellipse at the four endpoints of the conjugate diameters. All tangent parallelograms for a given ellipse have the same area.
It is possible to reconstruct an ellipse from any pair of conjugate diameters, or from any tangent parallelogram.
Faces of a parallelepiped
- Owen Byer, Felix Lazebnik and Deirdre Smeltzer, Methods for Euclidean Geometry, Mathematical Association of America, 2010, pp. 51-52.
- Zalman Usiskin and Jennifer Griffin, "The Classification of Quadrilaterals. A Study of Definition", Information Age Publishing, 2008, p. 22.
- Chen, Zhibo, and Liang, Tian. "The converse of Viviani's theorem", The College Mathematics Journal 37(5), 2006, pp. 390–391.
- Dunn, J.A., and J.E. Pretty, "Halving a triangle", Mathematical Gazette 56, May 1972, p. 105.
- Weisstein, Eric W. "Triangle Circumscribing". Wolfram Math World.
- Weisstein, Eric W. "Parallelogram." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/Parallelogram.html
- Mitchell, Douglas W., "The area of a quadrilateral", Mathematical Gazette, July 2009.
|Wikimedia Commons has media related to Parallelograms.|
- Parallelogram and Rhombus - Animated course (Construction, Circumference, Area)
- Weisstein, Eric W. "Parallelogram". MathWorld. |
12. Given triangle ABC. Construct the Orthocenter H. Let points D, E, and F be the feet of the perpendiculars from A, B, and C respectfully. Prove:
Click HERE for a GSP sketch. What if ABC is an obtuse triangle?
One way to approach this problem is to think of the area of triangle ABC as a sum of the area of the three triangles above which can be recognized with the three colors.
So the area of triangle ABC can be written as:
Also, the area of triangle ABC divided by itself is 1
There are several ways to express the area of triangle ABC.
Another way to express the area of triangle ABC divided by the area of triangle ABC
The goal is to reduce this. The first step in the process is
This can be reduced to
1/2CF(AB) = 1/2AD(BC)
1/2AD(BC) =1/2 AC(BE)
We can substitute into the equation above. Therefore,
This reduces to
Therefore this proves our statement through a series of substitutions and simple geometric relationships.
Next, we want to show that
Let’s go back to the picture:
Looking at the picture above, we see that
AH = AD – HD
BH = BE – HE
CH = CF – HF
These can be substituted into the fractions to get
This can be rewritten as
And then reduced to
Now this brings us to the question, does this apply to obtuse triangles?
To find out, I constructed the orthocenter for an obtuse triangle. The orthocenter lies outside the triangle when it is obtuse.
The red triangle represents the given obtuse triangle, and H is the orthocenter. I then found that the orthocenter of the triangle BCH was point A. Based on this, I measured segments HF, CF, HE, BE, HD, and AD to see if the relationship held.
They did not. Based on this and the fact that the orthocenter lies outside the obtuse triangle, this relationship does not hold for obtuse triangles.
To check this out yourself, click here. |
Best Results From Wikipedia Yahoo Answers Youtube
Although only the net chemical change is directly observable for most chemical reactions, experiments can often be designed that suggest the possible sequence of steps in a reaction mechanism. Recently, electrospray ionization mass spectrometry has been used to corroborate the mechanism of several organic reaction proposals.
A chemical mechanism describes in detail exactly what takes place at each stage of an overall chemical reaction (transformation). It also describes each reaction intermediate, activated complex, and transition state, and which bonds are broken (and in what order), and which bonds are formed (and in what order). A complete mechanism must also account for all reactants used, the function of a catalyst, stereochemistry, all products formed and the amount of each, and what the relative rates of the steps are. Reaction intermediates are chemical species, often unstable and short-lived, which are not reactants or products of the overall chemical reaction, but are temporary products and reactants in the mechanism's reaction steps. Reaction intermediates are often free radicals or ions. Transition states can be unstable intermediate molecular states even in the elementary reactions. Transition states are commonly molecular entities involving an unstable number of bonds and/or unstable geometry which may be at chemical potential maxima.
A reaction mechanism must also account for the order in which molecules react. Often what appears to be a single step conversion is in fact a multistep reaction.
Consider the following reaction:
- CO + NO2→ CO2 + NO
In this case, it has been experimentally determined that this reaction takes place according to the rate law R = k[NO_2]^2. Therefore, a possible mechanism by which this reaction takes place is:
- 2 NO2→ NO3 + NO (slow)
- NO3 + CO → NO2 + CO2 (fast)
When determining the overall rate law for a reaction, the slowest step is the step that determines the reaction rate. Because the first step (in the above reaction) is the slowest step, it is the rate-determining step. Because it involves the collision of two NO2 molecules, it is a bimolecular reaction with a rate law of R = k[NO_2]^2. If we were to cancel out all the molecules that appear on both sides of the reaction, we would be left with the original reaction.
A correct reaction mechanism is an important part of accurate predictive modelling. For many combustion and plasma systems, detailed mechanisms are not available or require development.
Even when information is available, identifying and assembling the relevant data from a variety of sources, reconciling discrepant values and extrapolating to different conditions can be a difficult process without expert help. Rate constants or thermochemical data are often not available in the literature, so computational chemistry techniques or group-additivity methods must be used to obtain the required parameters.
At the different stages of a reaction mechanism's elaboration, appropriate methods must be used.
- A reaction involving one molecular entity is called unimolecular.
- A reaction involving two molecular entities is called bimolecular.
- A reaction involving three molecular entities is called termolecular.
From Yahoo Answers
Answers:All reactions require a little bit of activation energy to get it going. Generally, however, Endothermic reactions speed up due to rise in temperature ex. reaction inside an ice pack (therefore ice pack is not cold for long in hotter weather) Exothermic reactions slow down due to rise in temperature ex. dissolving of oxygen into water
Answers:i guess you mean speed. fast: industrial reactions, using any fuel ie in a car, cooking slow: stop microbial growth in fridge, slow rust of metal objects, anti aging in us.
Answers:1. 2 H2(g) + O2(g)--->2 H2O,(l) synthesis 2 .2 H2O-(l)--> O2)g)+2 H2 (g) , decomposition 3. 2NaCl(aq)+ F2(g)--->2 NaF(aq)+ Cl2(g), single displacement 4. AgNO3(aq)+ NaCl(aq)---> AgCl(s) + NaNO3(aq), double displacement
Answers:Isomerisation, in which a chemical compound undergoes a structural rearrangement without any change in its net atomic composition Direct combination or synthesis, in which two or more chemical elements or compounds unite to form a more complex product: N2 + 3 H2 2 NH3 Chemical decomposition or analysis, in which a compound is decomposed into smaller compounds or elements: 2 H2O 2 H2 + O2 Single displacement or substitution, characterized by an element being displaced out of a compound by a more reactive element: 2 Na(s) + 2 HCl(aq) 2 NaCl(aq) + H2(g) Metathesis or Double displacement reaction, in which two compounds exchange ions or bonds to form different compounds: NaCl(aq) + AgNO3(aq) NaNO3(aq) + AgCl(s) Precipitation (chemistry) Reactions where species in solution combine to form a solid product (precipitate). A typical example would be the reaction of methatesis described above. Acid-base reactions, broadly characterized as reactions between an acid and a base, can have different definitions depending on the acid-base concept employed. Some of the most common are: Arrhenius definition: Acids dissociate in water releasing H3O+ ions; bases dissociate in water releasing OH- ions. Br nsted-Lowry definition: Acids are proton (H+) donors; bases are proton acceptors. Includes the Arrhenius definition. Lewis definiton: Acids are electron-pair acceptors; bases are electron-pair donors. Includes the Br nsted-Lowry definition. Redox reactions, in which changes in oxidation numbers of atoms in involved species occur. Those reactions can often be interpreted as transferences of electrons between different molecular sites or species. A typical example of redox rection is: 2 S2O32 (aq) + I2(aq) S4O62 (aq) + 2 I (aq) In which I2 is reduced to I- and S2O32- (thiosulfate anion) is oxidized to S4O62-. Combustion, a kind of redox reaction in which any combustible substance combines with an oxidizing element, usually oxygen, to generate heat and form oxidized products. The term combustion is used usually only large-scale oxidation of whole molecules, i.e. a controlled oxidation of a single functional group is not combustion. C10H8+ 12 O2 10 CO2 + 4 H2O CH2S + 6 F2 CF4 + 2 HF + SF6 Organic reactions encompass a wide assortment of reactions involving compounds which have carbon as the main element in their molecular structure. The reactions an organic compound may take part are largely defined by its functional groups. Defined in opposition to inorganic reactions. |
If we could observe our galaxy from outside, hundreds of thousands of light years, we would see several spiral arms that form an approximately circular disk. But when placing it on the edge, we would check that the disc is not perfect: in addition to having a center bulged by the concentration of stars and gas, the galactic plane is warped, like a wooden board that has warped in the rain.
This deformity of the Milky Way has been known since the late twentieth century, but since astronomers can only observe the edges of the galaxy from within, they have not been able to describe it accurately. A new map published in Nature Astronomy located in three dimensions the position of 1,339 stars of the Milky Way to offer the first faithful view of the warping.
The shape it has is "as if one takes a flexible plastic disc and, raising one end, bends the other down", illustrates Francisco Garzón, an astronomer at the Institute of Astrophysics of the Canary Islands who was not involved in the research. Several decades ago it was discovered, from the distribution of hydrogen gas in its periphery, that the galaxy was not flat. But the new study is not based on gas observations; instead, it maps the positions of individual stars that serve as reference points distributed across the disk.
The research team, made up of scientists from China and Australia, focused on stars called Cepheid variables. These are stars that pulsate radially, like a lighthouse, with a very stable period that is directly related to their luminosity. This makes it possible to determine the intrinsic brightness of a star simply by timing the interval between its pulsations; its distance from Earth can then be calculated by comparing the observed brightness with the real one. Thanks to these properties, discovered by the human computer Henrietta Leavitt between 1908 and 1912, Cepheids have become ideal objects for measuring astronomical distances.
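For reference, the standard distance-modulus relation (textbook astronomy, not spelled out in the article) connects the apparent magnitude m, the absolute magnitude M inferred from the pulsation period, and the distance d:

$$ m - M = 5\,\log_{10}\!\left(\frac{d}{10\ \mathrm{pc}}\right) \quad\Longrightarrow\quad d = 10^{\,(m-M+5)/5}\ \mathrm{pc} $$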
"It is notoriously difficult to determine distances from the Sun to areas of the outer disk of the Milky Way without knowing what the shape of that disk is," explains Xiaodian Chen, the lead author of the study, from the Chinese Academy of Sciences in Beijing. To achieve this, he and his companions chose Cepheids that are between four and 20 times more massive than our Sun, and up to 100,000 times brighter. Some are observable in the visible spectrum and others only in the infrared.
The information they analyzed comes from the WISE astronomical space telescope, launched by NASA in December 2009. In addition, they relied on data from the Gaia space probe, the astrometry mission of the European Space Agency, active since 2013. They used this data to eliminate noise from the original sample and thus produce a clean map that "is not blurred by clustered objects," according to Garzón.
The model confirms that the irregularities of the galactic edge are due to the interaction between the gravitational and centrifugal forces of the dense core of stars. These deformities were not present when the Milky Way was younger. Thanks to its accuracy, the new map makes it possible to rule out some models of the formation and evolution of galaxies, as well as to refine the predictions of others.
For the first time, the investigation reveals a previously unknown precession of the warp's own axis. This means that, in addition to the edges of the galaxy bending in opposite directions, they also twist perpendicular to the disk at the ends. To visualize this phenomenon, the Milky Way can be pictured as a disk composed of concentric rings, each of which bends differently, reaching steeper angles the farther they are from the center.
This map covers a radius of roughly 20 kiloparsecs from the galactic center, or about 65,230 light years. It is the most complete picture of the shape of the Milky Way, but the actual extent of the galaxy is known to be greater, with a radius of at least 25 kiloparsecs. Our Sun is about eight kiloparsecs from the center. Although warping does not occur in all galaxies, it is not unique to the Milky Way. In some galaxies with a favorable orientation (those we see edge-on), the stars can be observed to trace a kind of S in the sky. However, none has been studied as accurately as ours. |
In physics, natural units are physical units of measurement based only on universal physical constants. For example the elementary charge e is a natural unit of electric charge, or the speed of light c is a natural unit of speed. A purely natural system of units is defined in such a way that some set of selected universal physical constants are normalized to unity; that is, their numerical values in terms of these units become exactly 1.
- 1 Introduction
- 2 Notation and use
- 3 Advantages and disadvantages
- 4 Choosing constants to normalize
- 5 Electromagnetism units
- 6 Systems of natural units
- 7 See also
- 8 References
- 9 External links
Natural units are intended to elegantly simplify particular algebraic expressions appearing in physical law or to normalize some chosen physical quantities that are properties of universal elementary particles and that may be reasonably believed to be constant. However, what may be believed and forced to be constant in one system of natural units can very well be allowed or even assumed to vary in another natural unit system.
Natural units are natural because the origin of their definition comes only from properties of nature and not from any human construct. Planck units are often, without qualification, called "natural units", when in fact they constitute only one of several systems of natural units, albeit the best known such system. Planck units (up to a simple multiplier for each unit) might be considered one of the most "natural" systems in that the set of units is not based on properties of any prototype, object, or particle but are solely derived from the properties of free space.
As with other systems of units, the base units of a set of natural units will include definitions and values for length, mass, time, temperature, and electric charge (in lieu of electric current). Some physicists do not recognize temperature as a fundamental physical quantity, since it simply expresses the energy per degree of freedom of a particle, which can be expressed in terms of energy (or mass, length, and time). Virtually every system of natural units normalizes Boltzmann's constant kB to 1, which can be thought of as simply a way of defining the unit temperature.
In the SI unit system, electric charge is a separate fundamental dimension of physical quantity, but in natural unit systems charge is expressed in terms of the mechanical units of mass, length, and time, similarly to cgs. There are two common ways to relate charge to mass, length, and time: In Lorentz–Heaviside units (also called "rationalized"), Coulomb's law is F=q1q2/(4πr2), and in Gaussian units (also called "non-rationalized"), Coulomb's law is F=q1q2/r2. Both possibilities are incorporated into different natural unit systems.
Notation and use
Natural units are most commonly used by setting the units to one. For example, many natural unit systems include the equation c = 1 in the unit-system definition, where c is the speed of light. If a velocity v is half the speed of light, then from the equations v = 1⁄2c and c = 1, the consequence is v = 1⁄2. The equation v = 1⁄2 means "the velocity v has the value one-half when measured in Planck units", or "the velocity v is one-half the Planck unit of velocity".
The equation c = 1 can be plugged in anywhere else. For example, Einstein's equation E = mc2 can be rewritten in Planck units as E = m. This equation means "The rest-energy of a particle, measured in Planck units of energy, equals the rest-mass of a particle, measured in Planck units of mass."
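As a quick worked illustration (a standard exercise, not from the text above), the factors of c can always be restored by dimensional analysis when converting back to conventional units. For a mass of 1 kg:

$$ E = m \;\longrightarrow\; E = mc^2 = (1\ \mathrm{kg})\,(2.998\times10^{8}\ \mathrm{m/s})^2 \approx 9.0\times10^{16}\ \mathrm{J} $$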
Advantages and disadvantages
Compared to SI or other unit systems, natural units have both advantages and disadvantages:
- Simplified equations: By setting constants to 1, equations containing those constants appear more compact and in some cases may be simpler to understand. For example, the special relativity equation E2 = p2c2 + m2c4 appears somewhat complicated, but the natural units version, E2 = p2 + m2, appears simpler.
- Physical interpretation: Natural unit systems automatically incorporate dimensional analysis. For example, in Planck units, the units are defined by properties of quantum mechanics and gravity. Not coincidentally, the Planck unit of length is approximately the length where quantum gravity effects become important. Likewise, atomic units are based on the mass and charge of an electron, and not coincidentally the atomic unit of length is the Bohr radius describing the orbit of the electron in a hydrogen atom.
- No prototypes: A prototype is a physical object that defines a unit, such as the International Prototype Kilogram, a certain cylinder whose mass is by definition exactly one kilogram. A prototype definition always has imperfect reproducibility between different places and between different times, and it is an advantage of natural unit systems that they use no prototypes. (They share this advantage with other non-natural unit systems, such as conventional electrical units.)
- Less precise measurements: SI units are designed to be used in precision measurements. For example, the second is defined by an atomic transition frequency in cesium atoms, because this transition frequency can be precisely reproduced with atomic clock technology. Natural unit systems are generally not based on quantities that can be precisely reproduced in a lab. Therefore, a quantity measured in natural units can have fewer digits of precision than the same quantity measured in SI. For example, Planck units use the gravitational constant G, which is measurable in a laboratory only to four significant digits.
- Greater ambiguity: Consider the equation a = 10¹⁰ in Planck units. If a represents a length, then the equation means a = 1.6×10⁻²⁵ m. If a represents a mass, then the equation means a = 220 kg. Therefore, if the variable a was not clearly defined, the equation a = 10¹⁰ might be misinterpreted. By contrast, in SI units, the equation would be a = 220 kg, and it would be automatically clear that a represents a mass, not a length or anything else. (The arithmetic is spelled out below.) In fact, natural units are especially useful when this ambiguity is deliberate: for example, in special relativity space and time are so closely related that it can be useful not to specify whether a variable represents a distance or a time.
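A minimal worked version of the two conversions above, using the Planck length and Planck mass from the table further down:

$$ 10^{10}\,\ell_P = 10^{10}\times 1.616\times10^{-35}\ \mathrm{m} \approx 1.6\times10^{-25}\ \mathrm{m}, \qquad 10^{10}\,m_P = 10^{10}\times 2.176\times10^{-8}\ \mathrm{kg} \approx 220\ \mathrm{kg} $$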
Choosing constants to normalize
Out of the many physical constants, natural unit systems choose a few to normalize (set equal to 1). It is not possible to normalize just any set of constants. For example, the mass of a proton and the mass of an electron cannot both be normalized: if the mass of an electron is defined to be 1, then the mass of a proton has to be ≈1836. In a less trivial example, the fine-structure constant, α ≈ 1/137, cannot be set to 1, because it is a dimensionless number. The fine-structure constant is related to other fundamental constants by

α = ke e² / (ℏc),

where ke is the Coulomb constant, e is the elementary charge, ℏ is the reduced Planck constant, and c is the speed of light. Therefore it is not possible to simultaneously normalize all four of the constants c, ℏ, e, and ke.
For any natural unit system, electromagnetism units are treated in one of two ways:
- Lorentz–Heaviside units (classified as a rationalized system of electromagnetism units): the vacuum permittivity is normalized, ε0 = 1, so Coulomb's law reads F = q1q2/(4πr²).
- Gaussian units (classified as a non-rationalized system of electromagnetism units): the Coulomb constant is normalized, ke = 1/(4πε0) = 1, so Coulomb's law reads F = q1q2/r².
In Lorentz–Heaviside units, there are factors of 4π in Coulomb's law and the Biot–Savart law but not in Maxwell's equations; in Gaussian units, it is the reverse. Both systems are used, although Heaviside-Lorentz is more common. In either unit system, electric charge is expressible in terms of the "mechanical" units (mass, length, time). In fact, with ℏ = c = 1 the elementary charge e satisfies e = √(4πα) ≈ 0.3028 in Lorentz–Heaviside units and e = √α ≈ 0.0854 in Gaussian units.
Electromagnetism units are more complicated than mechanical units because there are different forms of the electromagnetic equations themselves. For example, Newton's law is F = ma in any system of units. However Coulomb's law is F = q1q2/4πr2 in Lorentz–Heaviside units, but F = q1q2/r2 in Gaussian units. Additionally, Maxwell's equations in CGS units (Gaussian or Lorentz-Heaviside) cannot be derived from the equivalent SI equations merely by normalizing some constants: the constant c appears explicitly despite normalizing ε0 and μ0.
Systems of natural units
|Length (L)||1.616×10−35 m||Planck length|
|Mass (M)||2.176×10−8 kg||Planck mass|
|Time (T)||5.3912×10−44 s||Planck time|
|Temperature (Θ)||1.417×1032 K||Planck temperature|
|Electric charge (Q)||(L–H)||5.291×10−19 C|
Planck units are defined by c = G = ℏ = kB = 1 (with either ke = 1 or ε0 = 1 for the electromagnetic sector, depending on whether the Gaussian or the Lorentz–Heaviside convention is chosen).
Planck units are a system of natural units that is not defined in terms of properties of any prototype, physical object, or even elementary particle. They only refer to the basic structure of the laws of physics: c and G are part of the structure of spacetime in general relativity, and ℏ captures the relationship between energy and frequency which is at the foundation of quantum mechanics. This makes Planck units particularly useful and common in theories of quantum gravity, including string theory.
Some may consider Planck units to be "more natural" even than other natural unit systems discussed below. For example, some other systems use the mass of an electron as a parameter to be normalized. But the electron is just one of 15 known massive elementary particles, all with different masses, and there is no compelling reason, within fundamental physics, to emphasize the electron mass over some other elementary particle's mass.
"Natural units" (particle physics)
|1 eV−1 of length||1.97×10−7 m|
|1 eV of mass||1.78×10−36 kg|
|1 eV−1 of time||6.58×10−16 s|
|1 eV of temperature||1.16×104 K|
|1 unit of electric charge (L–H)||e/√(4πα) ≈ 5.29×10−19 C|
|1 unit of electric charge (Gaussian)||e/√α ≈ 1.88×10−18 C|
This system normalizes c = ℏ = kB = 1, which leaves one independent dimension, so one more unit is needed. Most commonly, the electron-volt (eV) is used, despite the fact that this is not a "natural" unit in the sense discussed above. (The SI-prefixed multiples of eV are used as well: keV, MeV, GeV, etc.)
|Length (L)||1.381×10−36 m|
|Mass (M)||1.859×10−9 kg|
|Time (T)||4.605×10−45 s|
|Temperature (Θ)||1.210×1031 K|
|Electric charge (Q)||1.602×10−19 C|
Stoney units are defined by: c = G = ke = e = kB = 1.
George Johnstone Stoney was the first physicist to introduce the concept of natural units. He presented the idea in a lecture entitled "On the Physical Units of Nature" delivered to the British Association in 1874. Stoney units differ from Planck units by fixing the elementary charge at 1, instead of Planck's constant (only discovered after Stoney's proposal).
Stoney units are rarely used in modern physics for calculations, but they are of historical interest.
Atomic units
|Quantity||Metric value (Hartree atomic units)|
|Length (L)||5.292×10−11 m|
|Mass (M)||9.109×10−31 kg|
|Time (T)||2.419×10−17 s|
|Electric charge (Q)||1.602×10−19 C|
|Temperature (Θ)||3.158×105 K|
There are two closely related types of atomic units. Hartree atomic units set e = me = ℏ = ke = 1 (so that c = 1/α ≈ 137).
Rydberg atomic units set e/√2 = 2me = ℏ = ke = 1 (so that c = 2/α ≈ 274).
These units are designed to simplify atomic and molecular physics and chemistry, especially the hydrogen atom, and are widely used in these fields. The Hartree units were first proposed by Douglas Hartree, and are more common than the Rydberg units.
The units are designed especially to characterize the behavior of an electron in the ground state of a hydrogen atom. For example, using the Hartree convention, in the Bohr model of the hydrogen atom, an electron in the ground state has orbital velocity = 1, orbital radius = 1, angular momentum = 1, ionization energy = ½, etc.
The unit of energy is called the Hartree energy in the Hartree system and the Rydberg energy in the Rydberg system. They differ by a factor of 2. The speed of light is relatively large in atomic units (137 in Hartree or 274 in Rydberg), which comes from the fact that an electron in hydrogen tends to move much slower than the speed of light. The gravitational constant is extremely small in atomic units (around 10−45), which comes from the fact that the gravitational force between two electrons is far weaker than the Coulomb force. The unit of length is the Bohr radius, a0.
The values of c and e shown above imply that ke = 1, as in Gaussian units, not Lorentz–Heaviside units. However, hybrids of the Gaussian and Lorentz–Heaviside units are sometimes used, leading to inconsistent conventions for magnetism-related units.
Quantum chromodynamics (QCD) system of units
|Length (L)||2.103 × 10−16 m|
|Mass (M)||1.673 × 10−27 kg|
|Time (T)||7.015 × 10−25 s|
|Temperature (Θ)||1.089 × 1013 K|
|Electric charge (Q)||(L–H)||5.291×10−19 C|
This system, sometimes called strong units, is defined by c = ℏ = mp = kB = 1; the electron mass of the atomic system is replaced with that of the proton. Strong units are "convenient for work in QCD and nuclear physics, where quantum mechanics and relativity are omnipresent and the proton is an object of central interest".
The geometrized unit system, used in general relativity, is not a completely defined system. In this system, the base physical units are chosen so that the speed of light and the gravitational constant are set equal to unity. Other units may be treated however desired. By normalizing appropriate other units, geometrized units become identical to Planck units.
[Table: comparison of the values of the speed of light in vacuum, the reduced Planck constant, and the von Klitzing constant across the systems of natural units above; the column data did not survive extraction.]
- α is the fine-structure constant, approximately 0.007297,
- αG is the gravitational coupling constant, αG = Gme²/(ℏc) ≈ 1.75×10−45.
- Kowalski, Ludwik, 1986, "A Short History of the SI Units in Electricity," The Physics Teacher 24(2): 97-99. Alternate web link (subscription required)
- Greiner, Neise and Stöcker, Thermodynamics and Statistical Mechanics, p. 385. http://books.google.com/books?id=12DKsFtFTgYC&pg=PA385
- Gauge field theories: an introduction with applications, by Guidry, Appendix A
- An introduction to cosmology and particle physics, by Domínguez-Tenreiro and Quirós, p422
- Ray, T.P. (1981). "Stoney's Fundamental Units". Irish Astronomical Journal. 15: 152.
- Turek, Ilja (1997). Electronic structure of disordered alloys, surfaces and interfaces (illustrated ed.). Springer. p. 3. ISBN 978-0-7923-9798-4
- Reiher, Markus and Wolf, Alexander, Relativistic Quantum Chemistry: The Fundamental Theory of Molecular Science, p. 7. http://books.google.com/books?id=YwSpxCfsNsEC&pg=PA7
- A note on units lecture notes. See the atomic units article for further discussion.
- Wilczek, Frank, 2007, "Fundamental Constants," Frank Wilczek web site.
- The NIST website (National Institute of Standards and Technology) is a convenient source of data on the commonly recognized constants.
- K.A. Tomilin, "Natural Systems of Units: To the Centenary Anniversary of the Planck System". A comparative overview/tutorial of various systems of natural units having historical use.
- Pedagogic Aides to Quantum Field Theory Click on the link for Chap. 2 to find an extensive, simplified introduction to natural units. |
Covalent Bonding Worksheet
Covalent bonding occurs when two or more nonmetals share electrons, attempting to attain a stable octet in their outer shell for at least part of the time. Draw a dot diagram for each element listed, and circle the unpaired electrons that will be shared between the elements.
Some of the worksheets for this concept are: chapters and practice work on covalent bonds, work on chemical bonding (ionic and covalent), covalent bonds, a science term work booklet, and bonding basics.
List of Covalent Bonding Worksheet
Found the worksheet you are looking for? Click on the pop-out icon or print icon to print or download it. Bonding basics (covalent bonds, answer notes): complete the chart for each element, then follow your teacher's directions to complete each covalent bond.
Hydrogen is a diatomic element: write the symbols for each element and use fruity pebbles (or similar) to model the bonds. Covalent bonding occurs when two or more nonmetals share electrons, attempting to attain a stable octet of electrons at least part of the time.
1. 2 Page Worksheet Reviews Skill Divided Parts 1 Concise Fill Covalent Bonding Ionic Bonds Chemical Bond
Chemical bonding practice modified determine whether it is a covalent bond or an ionic bond.Help your students understand chemical bonding with this information text and worksheet for chemical bonding. the reading passage tells the story of how the different types of chemical bonding were discovered, and worksheets allow students to practice modeling the chemical bonds.
2. Page Worksheet Covering Identification Metals Covalent Bonding Chemical Bond
3. Ionic Covalent Bonding Worksheet Luxury Teaching Resources Worksheets
Give students the covalent bonding worksheet and key, and guide them through it. The covalent bond is the part where the circles touch or overlap. Each of the fluorine atoms has put one of its electrons into the shared part of the shell, so this is called a single covalent bond, which can be written as F–F.
4. Ionic Covalent Bonds Color Number Bonding Activity
You can do the same thing with compounds. carbon dioxide is carbon and oxygen, and they are both nonmetals.Ionic bond and covalent bond worksheet chemical bonds ionic bonding and covalent this download now includes a page student workbook its a wonderful, beautiful thing.
5. Ionic Covalent Bonds Worksheet Bonding
6. Ionic Covalent Bonds Worksheet Chemical Formula Sorting Bonding
Grade and grade learners are expected to match the structural formula to the molecular formula.Oct, empirical and molecular formula worksheet answers. ask about our room carpet cleaning special. academic essays from the past exams critically discuss essay meaning quantitative data analysis tools autism topics to write about review paper introduction common app act writing score best acknowledgements myself in for.
7. Ionic Covalent Metallic Bonding Properties Laboratory
Differentiate between ionic and covalent bonds with this printable diagram on molecular physics. this chemistry resource can be used as a class handout or as a transparency. worksheets. what is a covalent bond in this physical science printable, students evaluate statements about ionic compounds, covalent compounds, and.
8. Ionic Covalent Metallic Bonds Bonding Chemistry
9. Ionic Covalent Naming Worksheet Bonding Bonds
10. Naming Covalent Compounds Worksheet Binary Bonding Grief Worksheets
11. Naming Ionic Compounds Nomenclature Chemistry Ion Worksheet
12. Naming Understanding Covalent Bonding Molecules Bond Length
Ionic and covalent bonds worksheet. Pure covalent bonding only occurs when two nonmetal atoms of the same kind bind to each other. When two different nonmetal atoms are bonded, or a nonmetal and a metal are bonded, the bond is a mixture of covalent and ionic bonding called polar covalent bonding.
13. Practice Naming Writing Chemical Compounds Distance Learning Covalent Bonding Bond
14. Ionic Bonng Worksheet Key Covalent Answer Bonds
a. Ca and Cl; b. C and S; c. Mg and F; d. N and O; e. H and O; f. S and O; g. and Cl; h. F and O; i. P and S; j. H and Cl; k. C and H; l. H and H. We begin our discussion of the relationship between structure and bonding in covalent compounds by describing the interaction between two identical neutral atoms, for example the H2 molecule, which contains a purely covalent bond.
15. Professionally Designed Worksheets
Expectations are referred to only fleetingly in the compatibility code.I encourage you to get the marriage expectation worksheet to help you and your partner work through each step in discovering, then sharing your expectations for each other, as well as your expectations for yourselves.
17. Reading Ice Cream Sales Data Tables Rhyming Words Worksheets Writing Thesis Statement Covalent Bonding Worksheet
The other three pairs of electrons on each chlorine atom are called lone pairs: pairs of electrons in a structure that are not involved in covalent bonding. To see an image from the covalent bonding worksheet more plainly, you may click on the wanted image to see it at its original size or in full.
18. Saber Worksheet Beautiful Covalent Bonding Relationship Worksheets Persuasive Writing
A person can also look at the covalent bonding worksheet image gallery to find the image you are interested in. A coordinate covalent bond is a covalent bond in which one atom contributes both bonding electrons. Key terms: structural formula, single covalent bond, ion, bond dissociation energy, coordinate covalent bond.
19. Simple Directions Worksheet Ideas Covalent Bonding Ionic
20. Students Interact Combinations Atoms Learning Ionic Covalent Bonding Resource Classroom
Covalent bonds are bonds formed between nonmetals. The nonmetals all have high ionization energies, so octets of valence electrons are obtained by sharing electrons instead of gaining or losing them. An example of covalent bonding is F2, fluorine gas.
21. Students Practice Chemical Bonding Basics Electron Dot Diagrams Instructions Sh Practices Worksheets Chemistry Covalent Worksheet
22. Worksheet Answer Key Ideas Worksheets Answers Keys
23. Worksheet Answers Covalent Bonding Worksheets
This lesson aligns with performance expectation construct and revise an explanation for the.By the way, related with types of chemical bonds worksheet answers, scroll down to see various similar photos to give you more ideas. balancing chemical equations worksheet answer key, chemical bonding worksheet answers and chemical bonding worksheet answer key are three of main things we want to show you based on the gallery title.
24. Worksheet People Kids Worksheets Geography Covalent Bonding
25. Ionic Covalent Bonding Worksheet Inspirational Teaching Methods
26. Ionic Bonding Worksheet Answer Key Print Yup Covalent
27. 9 Collisions Chemistry Covalent Bonding Ideas Molecular Shapes Octet Rule
28. Covalent Bonding Activity Worksheet Oxygen Molecule Activities Electron Configuration
29. 9 Ionic Covalent Bonds Ideas Chemistry Bonding Teaching
30. Atoms Ions Worksheet Answers Covalent Bonding Nouns Verbs Worksheets
31. Building Molecular Models Activity Geometry Covalent Bonding Worksheet Worksheets
32. Chemical Bonding Fish Games Ionic Covalent Bonds Teaching Chemistry
33. Chemical Bonding Fish Games Ionic Covalent Bonds Teaching Chemistry Lessons
34. Chemical Bonding Lesson Chemistry Classroom Physical Science High School Covalent
Missing addend worksheets. finding domain and range worksheet. hundreds tens and ones worksheets. the purchase worksheet answers. parent functions and transformations worksheet. short vowel review worksheets. character and setting worksheets. ions ionic bonding worksheet classroom chemist chemical answers.
35. Chemical Bonding Notes Theory Chemistry Worksheets
36. Chemical Bonding Worksheet Sierras Chemistry Blog Types Bonds Covalent Bond Compounds
37. Chemical Bonds Flow Chart Graphic Organizer Bond Organizers Teaching Chemistry
Sources for these covalent bonding worksheets include guillermotull.com, cathhsli.org and tes.com.
Chapter practice worksheet on covalent bonds and molecular structure: How are ionic bonds and covalent bonds different? Ionic bonds result from the transfer of electrons from one atom to another; covalent bonds result from two atoms sharing electrons. Describe the relationship between the length of a bond and the strength of that bond.
38. Chemical Bonds Printable Teaching Bond Covalent Bonding Chemistry
Apr, that area is the chemical bonding area of the molecule. using the appropriate eraser, the student can then erase the negative space so that he or she can see what the positive areas look like. ionic bonding practice worksheet answers lovely worksheet ideas from ionic bonding practice worksheet answers, sourcetherlsh.
39. Covalent Bonding Practice Worksheet Practices Worksheets
S chemistry website home. covalent bonding practice worksheet covalent bonding practices worksheets physical science. saber answers image pixels scaled covalent bonding worksheet word problem worksheets persuasive writing.Dec, students practice chemical bonding basics by using electron dot diagrams.
40. Ionic Bonding Puzzle Activity Covalent
Glue the periodic table on the poster board with the completed puzzle pieces. turn the worksheet and poster board into. ionic bonding cutouts and periodic table coloring worksheet. use your puzzle pieces to combine the following ions to show how they make a Chemical bonding types of bonding the different types of chemical bonding are determined by how the valence electrons are shared among the bonded atoms.
41. Covalent Bonding Worksheet Answer Key Ionic Bonds Student Exploration Gizmo
42. Covalent Bonding Worksheet Answer Key Sierras Chemistry Blog Types Chemical Bonds Bond
Its a book. you can highlight. there are notes. you can draw. there are practices lots of practice there are explanations, examples and covalent compounds practice worksheet chemical bonds ionic bonding and covalent this download now contains a page student workbook its a magnificent, beautiful thing.
its a book. you can point it out. there are notes. you can draw. there is a practice lots and covalent bonding worksheets is a good first step for anyone wanting to learn about the subject. these basic worksheets include vital information that helps you determine whether you really do need to learn more about the process of ionic and covalent bonding.
43. Covalent Bonding Worksheet Ionic
Best images of ionic and covalent bonding practice worksheet. bonding packet. ionic and covalent bonding worksheet answer key luxury a ionic.Beautiful ionic bonds worksheet answers from ionic bonding practice worksheet, sourceduboismuseumassociation.
44. Diagrams Ionic Covalent Bonds Worksheet 6 Bonding
Org. module bonding revision notes in all levels chemistry from types of chemical bonds worksheet, sourcegetrevising.co.ukAug, chemical bonds worksheet answers from chemical bonds worksheet, sourcefacialreviveserum.com. if chemistry workbook a from chemical bonds worksheet, sourceslideshare.
net. types chemical bonds worksheet bonding ionic and from chemical bonds worksheet, sourceguillermotull.comAbout this quiz worksheet. this quiz and corresponding worksheet will help you gauge your understanding of covalent chemical bonds. topics need to know to pass the quiz include.
45. Dot Structure Mini Lesson Worksheet Chemistry Worksheets Classroom Lessons
46. Electron Configurations Periodic Table Configuration Chemistry Education Classroom
48. Identifying Ionic Covalent Bonds Bonding Chemistry Worksheets
49. Image Result Naming Covalent Compounds Worksheet Bonding Chemical
50. Intro Ionic Covalent Compounds Bonding
P o. naming ionic compounds practice worksheet name the following ionic compounds. antimony chlorine dioxide hydrogen iodide hi While we talk related with ionic and covalent bonding practice worksheet answers, below we will see several similar images to add more info.
ionic and covalent bonding worksheet, naming ionic compounds worksheet answers and ions ionic compounds worksheet are some main things we want to present to you based on the post title.Lab modeling ionic and covalent bonds. hand out the worksheet ionic versus covalent bonding versus covalent bonding and key.
51. Ionic Bonding Note Covalent Worksheet
52. Worksheet Polarity Bonds Answers Luxury Bond Ionic Bonding Covalent Practices Worksheets
For example, note that hydrogen is content with 2, not 8, electrons. Show how covalent bonding occurs in each of the following pairs of atoms. This worksheet and answer key is a great way to assess students' prior knowledge of ionic and covalent bonding.
It is great for high school chemistry classes, and a wonderful review activity for middle and high school classes that have already learned about bonding. For each of the following covalent bonds: write the symbols for each element and draw a dot structure for the valence shell of each element. |
Home > Flashcards > Print Preview
The flashcards below were created by user
on FreezingBlue Flashcards. What would you like to do?
Quantitative variables
Involve numeric characteristics such as age, height, sales revenue or business profits, and so on
Qualitative variables
Involve non-numeric characteristics such as race, gender or hair color
Parameters
Characteristics of a population
Inferential statistics
Statistics that determine something about an entire group (population) based on looking at part of the group (sample)
Descriptive statistics
Statistics that organize, summarize and present data
Causation
A change in one variable will cause a change in another variable
Correlation
Variables appear to have a relationship to each other, but one variable does not necessarily cause a change in the other
Statistics
The science of gathering, organizing, analyzing and presenting data
Sample
A portion or a subset of a population
Population
The entire set of items, people or measurements that are studied
Mutually exclusive
When the occurrence of one event prevents the other events from happening
Exhaustive
All observations will be placed in a category; there are no other options
What are the two types of quantitative variables?
Discrete and continuous
Discrete variable
Can only assume specific values, e.g. the population of a city. Discrete variables are counted.
Continuous variable
A variable that can take any value within a certain range, such as weight or interest rates
Levels of measurements
Used to classify data; also called scales of measurement. Indicates how data is calculated, summarized, measured and tested
What are the 4 levels of measurement?
Nominal, ordinal, interval, ratio
What 2 levels of measurement are used in qualitative variables?
Nominal and ordinal
What 2 levels of measurement are used in quantitative variables?
Interval and ratio
Nominal level data
Mutually exclusive and exhaustive, with no logical sequence. Classified and counted. Ex: number of boys and girls in a class
Ordinal level data
Mutually exclusive and exhaustive, and can be ranked or ordered. Ex: grades A, B, C, D, F
Interval level data
Mutually exclusive and exhaustive; can be ranked or ordered; the difference between classifications is a consistent unit of measure; zero does not mean nothing is present. Ex: dress size and temperature
Arithmetic mean
In a population it is the sum of all the values divided by the number of items. In a sample it is the sum of the values of the items selected divided by the number of items. Ratio-level data uses the arithmetic mean to represent the center.
Characteristics of a Arithmetic Mean
A mean uses all values in a sample or a population
A mean might be distorted by large or small values called outliers
All interval and ratio data have a mean
Each set of data has a single unique mean
If you sum each item's deviation from the mean, it equals zero
Ratio level data
Mutually exclusive and exhaustive; can be ranked or ordered; consistent unit of measurement; zero means none; the ratio between 2 classifications is meaningful. Ex: gross pay, hours worked, test scores
Weighted mean
A computation of the arithmetic mean used when you have multiple observations of the same value in the population or sample.
Median
The midpoint of the values after they have been arranged in order from smallest to largest (or largest to smallest). Ordinal data uses the median to represent the center.
Geometric mean
A special form of mean used when you are computing averages that compound on each other, or when you want to compute the rate of change of an item over time.
Mode
The value of an observation that occurs most frequently. Nominal data uses the mode to represent the center.
The population mean is the mean of all the values in a population. The formula is expressed as follows:
- μ = Population mean
- Σ = Sum
- X = Population value
- N = Number of values in the population
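In symbols (a standard rendering of the formula the card describes):

$$ \mu = \frac{\sum X}{N} $$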
Characteristics of the Median
It is not affected by extremely high or extremely low scores
Each set of data has a single median
It can be computed on ratio-level, interval-level, and ordinal-level data
Characteristics of a mode
It is used with all types of data
It is not affected by extremely high or low values
If there are no recurring values, there is no mode
It is possible to have multiple modes
In a symmetrical distribution, the arithmetic mean, median, and mode are equal. You can use any one of these measures to represent the center.
A symmetrical distribution is where the histogram has the same shape on each side of the center point. In other words, if you cut the histogram in half at the center point, you get two identical pieces. Not all distributions are symmetrical
Positively Skewed Distribution
The arithmetic mean is larger than the median or the mode due to one or more large values
Negatively skewed distributions
The arithmetic mean is smaller than the median or the mode due to one or more small values
Measures of dispersion
Measures of dispersion tell you about the spread in the data. There are 4 measures of dispersion: range, mean deviation, variance and standard deviation.
The difference between the highest and lowest values in the data
The arithmetic mean of the absolute values of the deviation of each observation from the arithmetic mean
The arithmetic mean of the squared deviations of the observations from the mean
The square root of the variance
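In symbols (standard population formulas, consistent with the flashcard descriptions above):

$$ \sigma^2 = \frac{\sum (X - \mu)^2}{N}, \qquad \sigma = \sqrt{\sigma^2} $$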
P.L. Chebyshev was a mathematician who developed a theorem regarding standard deviation. His theorem states that for any population or sample, the proportion of values that lie within plus and minus k standard deviations of the mean is at least 1 − 1/k², where:
- k = Number of standard deviations
- Note: For this theorem to work, k must be greater than one.
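For example, with k = 2 (a worked instance of the formula above):

$$ 1 - \frac{1}{k^2} = 1 - \frac{1}{2^2} = 0.75 $$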
According to the theorem, at least 75% of this data falls within two standard deviations of the mean.
The empirical rule (also called the normal rule) applies only to a symmetrical, bell-shaped distribution
Approximately 68% of the data is within one plus or minus standard deviation from the mean.
Approximately 95% of the data is within two plus or minus standard deviations from the mean
Approximately 99.7% of the data is within three plus or minus standard deviations from the mean. |
Since the earliest days of microprocessors, system designers have been plagued by a problem in which the speed of the CPU's operation exceeded the bandwidth of the memory subsystem to which it was connected. To avoid wasting CPU cycles while waiting for the memory to fetch the requested data, the universally adopted solution was to use an area of faster (and thus more expensive) memory to cache main memory data. This solution allowed the CPU to operate at its natural speed as long as the data it required was available in the cache.
The purpose of this article is to explain caching from the point of view of a kernel programmer. I also explain some of the common terms used to describe caches. This article is divided into sections whose kernel programming relevance is indicated; that is, some sections explain that cache properties are irrelevant to understanding the essentials of how the kernel handles caching. If you're coming from an Intel IA32 background, caching is practically transparent to you. In order to write kernel code that operates correctly on all the architectures Linux supports, however, you need to know the essentials of how caching works in general.
Simply put, a cache is a place that buffers memory accesses and may have a copy of the data you are requesting. Usually one thinks of caches (there may be more than one) as being stacked; the CPU is at the top, followed by layers of one or more caches and then the main memory. In this hierarchy, caches are quantified by their level. The cache closest to the CPU is called level one, L1 for short, and caches increase in level until the main memory is reached.
A cache line is the smallest unit of memory that can be transferred to or from a cache. The essential elements that quantify a cache are called the read and write line widths. These signify the minimum amount of data the cache must read or write from the memory or cache below it. Frequently, these quantities are the same, so caches often are quantified simply by the line width. Even if they differ, the longest width usually is called the line width.
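To make line-granularity concrete, here is a small userspace illustration (a sketch, written in R because that is the language used for code elsewhere in this document; the effect itself is language-independent, and all variable names are my own). Summing a large vector through a sequential index visits each cache line once, while the same sum through a randomly permuted index touches lines in scattered order and misses far more often:

n <- 2^24                      # 16M doubles, about 128MB: far larger than any CPU cache
x <- runif(n)
idx_seq <- seq_len(n)          # sequential access pattern
idx_rand <- sample.int(n)      # random permutation of the same indices
system.time(sum(x[idx_seq]))   # linear walk: each cache line is fetched once
system.time(sum(x[idx_rand]))  # scattered gathers: many more cache-line misses

Both calls compute the same sum; the timing difference reflects only the memory access pattern.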
The next property that quantifies a cache is its size. This number is an indication of how much data could be stored in the cache. Often, the performance rule of thumb is the bigger the cache, the better the benchmarks.
A multilevel cache can be either inclusive or exclusive. Exclusive means a particular cache line may be present in exactly one of the cache levels and no more than one. Inclusive means the line may be present simultaneously in more than one level of cache. Nothing prevents the line widths from being different in differing cache levels.
Finally, a particular cache can be either write through or write back. Write through means the cache may store a copy of the data, but the write must be completed at the next level down before it can be signaled as complete to the layer above. Write back means a write may be considered complete as soon as the data is stored in the cache. For a write back cache, as long as the written data is not transmitted, the cache line is considered dirty, because it ultimately must be written out to the level below.
One of the most basic problems with caches is coherency. A cache line is termed coherent when the data in the line is identical to the data stored in the main memory being cached. If this is not true, the cache line is termed incoherent. Lack of coherency can cause two particular problems. The first problem, which may occur for all caches, is stale data. In this situation, data has changed in main memory but the cache hasn't been updated to reflect the change. This usually manifests itself as an incorrect read, as illustrated in Figure 1. This is a transient error, because the correct data is sitting in main memory; the cache simply needs to be told to bring it in.
The second problem, which occurs only with write back caches, can cause actual destruction of data and is much more insidious. As illustrated in Figure 2, the data has been changed in memory, and it also has been changed separately by a CPU write to the cache. Because the cache must write out one line at a time, there now is no way to reconcile the changes—either the cache line must be purged without being written, losing the CPU's change, or the line must be written out, thus losing the changes made to main memory. All programmers must avoid reaching the point where data destruction becomes inevitable; they can do this through the judicious use of the various cache management APIs.
Black holes are thought to be the result of collapsing matter following the explosion of large stars into supernovae. For a star to be capable of compaction into a singularity, it must have a mass greater than 3.4 times that of the Sun. Specifically, if the remnants of a star which has exhausted the energy available from nuclear fusion reactions are greater than about 3.4 times the mass of the sun, electron degeneracy and neutron degeneracy are insufficient to prevent the star from collapsing into a black hole. Recent cosmology has considered the possibility of smaller black holes forming in the very early history of the universe, due to fluctuations in mass distribution when the density of the universe was significantly higher than is observed now.
There are both rotating and stationary black holes, with a singularity and event horizon(s) being the major features of both. The event horizon is the boundary of a black hole where gravitational forces become so strong that not even light can escape. General relativity states that the singularity is a point of infinite spacetime curvature, and the singularity of a black hole is covered by the event horizon. To an outside observer, objects falling into a black hole take an infinite amount of time to reach the event horizon. However, the amount of time as measured by the falling object itself can be very short. A rotating black hole, according to the Kerr solutions of general relativity, has two event horizons, and there are spacetime paths through the event horizon which do not intersect the singularity.
According to quantum mechanics, the location of the matter within a black hole is uncertain. Additionally, a phenomenon called Hawking radiation predicts that black holes can "leak" a very small amount of mass. So theoretically, black holes are not truly "black" due to emitted radiation. Black holes have a surface temperature defined by their mass. The larger the mass of a black hole, the larger the diameter, and the lower the amount of energy which escapes, thus the lower the temperature, and the longer the time it takes for the black hole to "evaporate".
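For reference, the standard Hawking temperature formula (not stated explicitly in the text) makes the inverse mass dependence explicit:

$$ T_H = \frac{\hbar c^3}{8\pi G M k_B} $$

Doubling the mass halves the temperature, consistent with the larger-is-colder, slower-to-evaporate behavior described above.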
Detection and Observation of Black Holes
Methods of detecting and observing black holes include imaging using radio telescopes and also gravitational wave detection.
In 2015, the two LIGO (Laser Interferometer Gravitational-Wave Observatory) detectors in the USA made the first-ever detection of gravitational waves, which were emitted by two colliding black holes that were approximately 29 and 36 times as massive as the Sun. Since then, the Virgo detector in Italy has also contributed to gravitational wave detection.
In 2019, a direct image of a black hole was made using the Event Horizon Telescope, which is actually a worldwide network of radio telescopes. This black hole is 6.5 billion times more massive than the Sun and located 55 million light-years away in the galaxy M87. |
CCSS.Math.Content.3.NF.A.1 Understand a fraction 1/b as the quantity formed by 1 part when a whole is partitioned into b equal parts
CCSS.MATH.CONTENT.4.NF.B.3 Understand a fraction a/b with a > 1 as a sum of fractions 1/b.
The student will learn the definition of fraction, parts of fractions and how fractions have been used in past and present. This lesson begins with a video example of how fractions could be used by Native Americans to keep track of time. Next, a presentation is used to give a definition of fraction, numerator and denominator. Both the presentation and the second video use one-half as an example of a fraction. Other videos and presentation in the lesson divide a whole into fourths. The entire lesson takes 30-40 minutes.
1. Video: An example of how our ancestors used fractions
This video explains how a whole area, such as a lake, could be broken into equal parts and how that knowledge could be applied to tell time, thereby avoiding the danger of going home in the dark.
2. Presentation: Definitions of fractions, numerator and denominator
This presentation, with 25 slides, defines a fraction and each of its parts. One-half is used as an example of a fraction. You can access this presentation as Google Slides or PowerPoint. We estimate it takes about 7 minutes, with pauses for student input.
3. Video: Is one-half fair?
How many times have you heard kids insist something wasn't fair? This video uses fractions and the concept of one-half to determine if two people are doing their fair share of the work and getting their fair share of a pile of blankets.
4. Video: What is half
In this example of meeting between two camps, students will learn the definition of one-half and how to apply this knowledge to determine if the distribution of effort is fair. The video provides both examples of one-half – a whole divided into two equal parts – and non-examples, when a whole is divided into two unequal parts.
5. Presentation: Using fractions
This presentation, with 13 slides, gives an example of dividing a trail into four equal parts, fourths, or quarters. Zoongey Giniw sets his snares at four spots, equal distances apart on the trail. The presentation is available in PowerPoint or Google Slides. We estimate it takes about 5 minutes.
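As a quick rendering of the arithmetic in this example (mine, not the presentation's): each snare section is one fourth of the trail, and the four fourths make up the whole trail:

$$ \tfrac{1}{4} + \tfrac{1}{4} + \tfrac{1}{4} + \tfrac{1}{4} = \tfrac{4}{4} = 1 $$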
6. Video: Why Snare Rabbits?
Why is Zoongey Giniw snaring rabbits? As Turtle Mountain elder, Deb Gourneau explains in this video, when the Ojibwe people on the Turtle Mountain reservation did not have deer to eat and could not leave the reservation, they escaped starvation by snaring rabbits.
7. Game Play: Fish Lake
Students can play Fish Lake on Mac or Windows computers or iPad. Fish Lake covers a long list of fractions standards. Recommended time: 15 minutes. Teachers in the Growing Math program receive licenses for Fish Lake for all of their students. If you need a license, please email firstname.lastname@example.org
8. Game Play: Forgotten Trail
If your students don’t have access to iPads, Mac or Windows computers and are using Chromebooks, they can play Forgotten Trail, which teaches this fraction standard, as well as standards for measurement and data. You can see the full list here. Recommended time: 15 minutes. Teachers in the Growing Math program receive licenses for Forgotten Trail for all of their students. If you need a license, please email email@example.com
9. Next lesson: Adding fractions with like denominators
Once you have introduced fractions, the next step we recommend is adding and comparing fractions with a common denominator.
Assessment is built into the presentation as students are asked how they would write Long Foot’s portion of the buffalo as a fraction. There is a test of all of the fractions standards taught in Fish Lake here. It can be used as a pre- and post-test to show growth or at the end of a unit on fractions.
Minnesota Math Standard 220.127.116.11 – Understand that the size of a fractional part is relative to the size of the whole. |
Dummy Variables in R
In this section we explain how dummy variables can be used in Regressions and we will utilise the Baseball Wages dataset for this purpose.
Econometricians think of dummy variables as binary (0/1) variables. And in some datasets you will find the data presented as such right from the start. This is, for instance, the case for the Baseball wages dataset. Importing the dataset you will find information on the position each player takes in their team. These are first base (frstbase), second base (scndbase), third base (thrdbase), short stop (shrtstop), outfield (outfield) and catcher (catcher). Each player is assigned exactly one of these positions.
setwd("YOUR DIRECTORY PATH") # This sets the working directory load("mlb1.RData") # Opens mlb1 dataset from R datafile
If you now look at the data (the data themselves are stored in data, and the variable descriptions in desc) you will find them looking something like this
You can see that the first player is a second base player (1 for scndbase and 0 for all other positional variables) and the second player is a short stop.
Dummy variables as independent variables
If the data come as predefined dummy variables, then it is rather straightforward to use these in regressions.
reg_ex1 <- lm(lsalary~years+gamesyr+frstbase+scndbase+thrdbase+shrtstop+catcher,data=data) print(summary(reg_ex1))
Here we are running a regression in which we explain variation in log salary by using the explanatory variables years of major league experience and games played per year plus a set of dummy variables (in bold) for all positions but the outfield position (beware the dummy variable trap!).
What we get is the following output:
Call: lm(formula = lsalary ~ years + gamesyr + frstbase + scndbase + thrdbase + shrtstop + catcher, data = data)
Residuals: Min 1Q Median 3Q Max -2.71524 -0.46973 -0.00695 0.45610 2.73707 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 11.222840 0.125818 89.199 < 2e-16 *** years 0.067257 0.012551 5.359 1.54e-07 *** gamesyr 0.021095 0.001412 14.935 < 2e-16 *** frstbase -0.060406 0.128470 -0.470 0.6385 scndbase -0.340685 0.139059 -2.450 0.0148 * thrdbase 0.002862 0.142958 0.020 0.9840 shrtstop -0.232334 0.124566 -1.865 0.0630 . catcher 0.129668 0.126458 1.025 0.3059 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.7455 on 345 degrees of freedom Multiple R-squared: 0.6105, Adjusted R-squared: 0.6026 F-statistic: 77.24 on 7 and 345 DF, p-value: < 2.2e-16
As you can see, and as we surely would have expected, years of major league experience has a positive effect on salary (although we may really need to consider a quadratic effect), as does the games per year variable. The included dummy variables indicate that, compared to outfield players (the base category, as that dummy variable was omitted), only second base players seem to have a significantly (at 5 per cent) different salary. The results indicate that they, ceteris paribus, earn about 34 per cent less than outfield players (reading the log-salary coefficient directly as a percentage, as is common for small effects; the exact figure is exp(−0.341) − 1 ≈ −29 per cent).
When you just include straight dummy variables you allow for intercept shifts according to the relevant categories (here positions on the field). This may often be inadequate and we may really want interaction terms. These may be interactions between different dummy variables (for instance, if we are interested in whether it is really only black second base players that earn less, we would include scndbase*black) or interactions between a dummy and another explanatory variable to allow for changing slope coefficients (e.g. if we want to figure out whether experience counts differently for catchers, we would include years:catcher). A sketch of both appears below.
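A hedged sketch of such a specification, using the variable names from this dataset (my own illustration, not a model proposed in the text; reg_int is a name of my choosing):

# scndbase*black expands to scndbase + black + scndbase:black,
# while years:catcher adds only the cross term (a catcher-specific slope on years)
reg_int <- lm(lsalary~years+gamesyr+scndbase*black+catcher+years:catcher,data=data)
print(summary(reg_int))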
When learning how to use interaction terms we will actually encounter another quirk of R. To see this it is instructional to first start with an extremely simple model, one which would really make no economic sense.
reg_ex1 <- lm(lsalary~(years*black),data=data) print(summary(reg_ex1))
Intuitively we would think that this should estimate a model with a constant and one explanatory variable, years*black. But when we look at the result we can see that R has taken it upon itself to extend the model:
Call: lm(formula = lsalary ~ (years * black), data = data)
Residuals: Min 1Q Median 3Q Max -3.0165 -0.7867 -0.1900 0.7537 1.9904 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 12.307118 0.117606 104.65 <2e-16 *** years 0.178426 0.016394 10.88 <2e-16 *** black 0.248952 0.214635 1.16 0.247 years:black -0.009502 0.027919 -0.34 0.734 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.9628 on 349 degrees of freedom Multiple R-squared: 0.3427, Adjusted R-squared: 0.3371 F-statistic: 60.66 on 3 and 349 DF, p-value: < 2.2e-16
It has included the simple explanatory variables years and black. To understand this we need to know that, in the context of model building (which is what we do here), R understands the operator * as an invitation to include the variables themselves and their cross term. This is, at times, very convenient as this is often what you want to do.
But if we want to include the cross term only, we need to use the operator : instead of *. The command
reg_ex1 <- lm(lsalary~(years:black),data=data) print(summary(reg_ex1))
will deliver a regression model with a constant and the cross term only.
Using Categorical/Factor variables in regressions
As we discussed in the Data Section, when you import categorical data from csv files they will usually be imported as factor variables into R. In the data analysis section we already learned how to get frequency counts of categorical variables using the table() or summary() commands.
When using such categorical variables in regressions as explanatory variables we will use them in the form of dummy variables (binary 0/1 variables). When importing the Baseball salary dataset there were two categorical variables, playing position and ethnicity/race. But both these were already transformed to individual dummy variables as discussed above.
What would we do if, as is often the case, the categorical variable is imported as one variable and not as separate dummies? Download this csv file, which presents the position and race variables as categorical and also includes the variables lsalary, gamesyr and years (all other variables have been deleted from this datafile).
Read the csv into R,
setwd("YOUR DIRECTORY PATH") # This sets the working directory mydata <- read.csv("mlb1_cat_test.csv")
if you now look at the dataset it will look like this:
and inspecting the variables and their datatypes by using str(mydata) we find that the variables position and race are indeed factor variables.
'data.frame': 353 obs. of 5 variables:
$years : int 12 8 5 8 12 17 4 10 4 3 ...
$position: Factor w/ 6 levels "catcher","first base",..: 4 5 2 6 3 3 3 1 5 3 ...
$race : Factor w/ 3 levels "black","hispan",..: 3 1 3 3 1 1 2 3 2 1 ...
$gamesyr : num 142.1 114.8 150.2 132 99.7 ...
$lsalary : num 15.7 15 14.9 14.9 14.3 ...
From here there are two ways to go if you want to use dummy variables based on either of these variables in a regression.
Translating into dummy variables
We can translate the factor variable into dummy variables. We can do this using the following type of command:

mydata$frstbase <- as.numeric(mydata$position == "first base")   # as.numeric translates from logical to numerical

which creates a new variable in the data frame called frstbase that takes a value of 1 if the player is a first base player and 0 otherwise. Other dummy variables can be created accordingly. Once you have done this you can proceed as in the previous sections.
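If you want dummies for every position at once, one possible shortcut is base R's model.matrix function. This is just a sketch; the resulting column names follow R's level-naming convention:

# one 0/1 column per factor level; "- 1" drops the intercept column
pos_dummies <- model.matrix(~ position - 1, data = mydata)
head(pos_dummies)   # columns like positioncatcher, positionfirst base, ...

# attach the dummy columns to the data frame
mydata <- cbind(mydata, pos_dummies)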
Using factor variables directly
One very nice aspect of R is that you can use such factor variables directly in regressions. For instance we could estimate a regression explaining lsalary using years and gamesyr as explanatory variables, but also include intercept dummies for the different positions. The straightforward way to do that is as follows:
reg_ex1 <- lm(lsalary~years+gamesyr+position,data=mydata)
print(summary(reg_ex1))
Call:
lm(formula = lsalary ~ years + gamesyr + position, data = mydata)

Residuals:
     Min       1Q   Median       3Q      Max
-2.71524 -0.46973 -0.00695  0.45610  2.73707

Coefficients:
                     Estimate Std. Error t value Pr(>|t|)
(Intercept)         11.352508   0.129846  87.430  < 2e-16 ***
years                0.067257   0.012551   5.359 1.54e-07 ***
gamesyr              0.021095   0.001412  14.935  < 2e-16 ***
positionfirst base  -0.190074   0.157450  -1.207  0.22818
positionoutfielder  -0.129669   0.126458  -1.025  0.30590
positionsecond base -0.470353   0.167849  -2.802  0.00536 **
positionshort stop  -0.362002   0.150584  -2.404  0.01674 *
positionthird base  -0.126807   0.168252  -0.754  0.45156
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.7455 on 345 degrees of freedom
Multiple R-squared: 0.6105, Adjusted R-squared: 0.6026
F-statistic: 77.24 on 7 and 345 DF, p-value: < 2.2e-16
When you compare the summary statistics to those of the first regression we estimated in this dummy variable section, you will realise that they are identical; we essentially estimated the same model. There is, however, one difference. In the previous estimation we used outfielders as the base category (i.e. the respective dummy variable was excluded). Here we can see that R automatically includes dummy variables for all the different positions but one, here the catcher position. R chose to drop the catcher position as this is the position which comes first in the alphabet.
Inherent in a factor variable in R is that R uses one of the values as its reference value, and by default this is the value that comes first in the alphabet, as we saw in the above regression. There is, however, a way to tell R to change the reference value. The way to do this is as follows:
mydata$position <- relevel(mydata$position, ref = "outfielder")
This ensures that from now on R will use "outfielder" as the reference. If you now run the same regression as above,
reg_ex1 <- lm(lsalary~years+gamesyr+position,data=mydata)
print(summary(reg_ex1))
you will find that the outfielder dummy variable will be omitted.
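If you want to double-check which category currently serves as the reference, you can inspect the factor's levels; the first level listed is the reference (a quick sketch):

levels(mydata$position)
# after the relevel above, "outfielder" should be listed first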
- We lay no claim that this is the best possible model to explain salary.
- Check the dataset to find the dummy variable
- Has anyone ever seen a programming language without quirks? I haven't.
- See for details.
- Alternatively you could use I(years*black), where the I() function ensures that R understands the multiplication as a literal mathematical operation.
RECONSTRUCTION KEY TERMS
Reconstruction, Thirteenth Amendment, Freedmen, Freedmen's Bureau, Andrew Johnson, Presidential Reconstruction, Radical Republicans, Black Codes, Congressional Reconstruction, Civil Rights Act, Fourteenth Amendment, Due Process, Equal Protection, Fifteenth Amendment, Impeachment, Carpetbagger, Scalawag, Hiram Rhodes Revels, New South, Sharecropping, Debt Peonage, Ku Klux Klan, Literacy Test, Grandfather Clauses, Poll Taxes, Solid South, Jim Crow Laws, Segregation, Plessy v. Ferguson
President Lincoln had been assassinated by John Wilkes Booth, and Andrew Johnson was the new President (Lincoln's Vice President). Reconstruction (1865-1877): to rebuild the South, Americans had to overcome a series of major political, economic, and social hurdles.

ISSUES OF RECONSTRUCTION
What to do with former slaves? How to rebuild the Southern economy and rebuild the Union? The Freedmen's Bureau helped former slaves adjust to freedom.

LINCOLN'S PLAN
Lincoln favored a lenient Reconstruction plan, the Ten-Percent Plan: the government would pardon all Confederates, except high-ranking officials and those accused of crimes against prisoners of war. Ten percent of voters had to swear allegiance to the Union; as soon as this happened, the Confederate state could form a new state government and send representatives to Congress. Radical Republicans, by contrast, wanted to destroy the political power of former slaveholders, wanted African Americans to be given full citizenship and the right to vote, and wanted to punish the South.

13TH AMENDMENT
In April 1864 the U.S. Senate proposed the Thirteenth Amendment, prohibiting slavery throughout the United States:
"Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction."

JOHNSON'S PLAN
Johnson mostly followed Lincoln's plan. The major difference was that Johnson tried to break the planters' power by excluding high-ranking Confederates and wealthy Southern landowners from taking the oath needed for voting privileges. Each high-ranking official had to personally request amnesty. Johnson pardoned more than 13,000 former Confederates and hoped for reconciliation between Northern and Southern whites only.

BLACK CODES
The end of the Civil War marked the end of slavery for 4 million black Southerners, but the war also left them landless and with little money to support themselves. White Southerners, seeking to control the freedmen (former slaves), devised special state law codes. Many Northerners saw these codes as blatant attempts to restore slavery. Each Southern state wrote its own black codes. Common black codes defined freedmen as "persons of color" and: 1. prevented them from voting; 2. barred them from serving on juries; 3. barred them from testifying in court against whites; 4. barred them from holding office; 5. barred them from serving on state militias; 6. regulated freedmen's marriages; 7. regulated labor contracts; 8. made it illegal for freedmen to travel freely; 9. made it illegal for them to leave their jobs. The codes forced former slaves to stay on plantations as workers, and black workers could be whipped for showing disrespect to their employers, often their former masters. The whole aim of the Black Codes was to preserve the structure of Southern society with as little disruption as possible.

CONGRESSIONAL RECONSTRUCTION
Most Republicans were outraged at the actions of President Johnson. Moderate Republicans and Radical Republicans worked together to shift the control of power from the executive branch to the legislative branch. The Civil Rights Act was a bill to enlarge the Freedmen's Bureau and prohibit discrimination based on race, thus overturning the Black Codes. It made all persons born in the United States citizens, including freedmen, and guaranteed them the same rights as white citizens. President Johnson vetoed the Civil Rights Act, but in mid-1866 Congress overrode the President's vetoes.

14TH AMENDMENT
To counter the President's veto of the Civil Rights Act, Congress rewrote the terms of the Civil Rights Act into the 14th Amendment. The Amendment prevents states from denying African Americans or other minorities the rights and privileges of citizens, including a fair trial and equal protection of the law. To be readmitted to the Union, each Southern state was forced to ratify the Fourteenth Amendment: "All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the state wherein they reside. No state shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any state deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws."

JOHNSON IMPEACHED
Johnson failed to win support in the 1866 mid-term election. Congress passed the Tenure of Office Act, stating that the President could not fire members of his Cabinet. Johnson ignored the law and fired a member of his cabinet, and was impeached by the House of Representatives in February 1868 (though the Senate failed to convict him). Later that same year Ulysses S. Grant was elected President.

RECONSTRUCTION ACT OF 1867
The Act divided the former Confederate states into five military districts. The states were required to grant African American men the vote and to ratify the Fourteenth Amendment in order to reenter the Union.

15TH AMENDMENT
In 1870 the 15th Amendment was ratified, stating that no one can be kept from voting because of race, color, or previous condition of servitude. The 13th, 14th, and 15th Amendments were put in place as a punishment to the South.

RECONSTRUCTION GOVERNMENT
Under the Reconstruction Act of 1867, most Southern governments were made up of former Whigs, a few former Democrats, black and white newcomers from the North, and Southern African Americans. Carpetbaggers were new arrivals from the North; some came to exploit the South and help the freedmen, while others came for new business opportunities. Scalawags were Southern whites who supported Reconstruction. The most important aspect of Reconstruction was the active participation of African Americans in state and local government: 600 served as state legislators, and Hiram Rhodes Revels became the first African American to sit in Congress.

SHARECROPPING
With the end of slavery most landowners were forced to sell off sections of their land or enter into sharecropping. Landowners would divide their land and assign each head of household a few acres, along with seed and tools. Sharecroppers kept a small share of their crops and gave the rest to the landowners. If a sharecropper owed any money at all to the landlord for cash loans or the use of tools, he could not leave until the debt was paid, in effect tying the freedmen to the land in a system of debt peonage.

THE NEW SOUTH
The end of slavery did not mean the end of cotton; it just meant that cotton would no longer be the number one crop. The cultivation of new crops like fruits and vegetables was added to cotton. Most important of all, railroads, cotton mills, and steel furnaces were built, and more people moved into Southern cities. Manufacturing increased in the South after the Civil War.

AFTERMATH OF RECONSTRUCTION
The system that replaced Reconstruction in the South was one of racial segregation and white supremacy. Historians refer to this period in American history as the Nadir, or low point, in American race relations.

VOTING
Southerners passed a series of laws in the 1890s designed to prevent African Americans from voting without violating the 14th and 15th Amendments. Literacy tests: African Americans had to pass a reading test before they could vote. Poll taxes: registration fees for voting, which many African Americans could not afford to pay. Grandfather clauses: these laws allowed people who had been qualified to vote at the beginning of 1867, and their descendants, to vote without passing a literacy test or paying a poll tax.

SEGREGATION LAWS
These laws separated blacks from whites. Whites and blacks attended different schools, rode in separate railway cars, ate in different restaurants, and used different public toilets and water fountains.

JIM CROW
The laws establishing racial segregation in the South became known as the Jim Crow laws, named after a character in earlier song-and-dance shows. In 1890, Louisiana passed a Jim Crow law requiring railroad companies to provide "equal but separate" facilities to members of different races.

PLESSY V. FERGUSON
Homer Plessy sat in a railroad car reserved for whites; Plessy was arrested and sent to jail. The case went all the way to the Supreme Court, which ruled that it was legal to separate the races, making segregation legal.

END OF RECONSTRUCTION
In 1877 Northern troops left the South and government entirely returned to local white Southern rule. There are several reasons why Reconstruction failed to achieve complete equality for African Americans: 1. a legacy of racism; 2. the economic dependence of African Americans; 3. white terrorism (the Klan); 4. loss of Northern interest in Southern Reconstruction.
There are a few curve balls, but on average most third, fourth, and fifth graders should be able to solve multiplication word problems. Unlike simple equations, word problems contain extra words, numbers, and descriptions that have seemingly no relevance to the question; solving them takes deductive reasoning and a process of elimination of extraneous information. Students should have a concrete understanding of the meaning of multiplication before attempting these worksheets. For example: for your birthday, 7 friends will get a surprise bag. How many prizes will you need to buy to fill the surprise bags? [Grade 1] [Grade 2] [Grade 3] [Grade 4] [Grade 5] The word problems are listed by grade and, within each grade, by theme.

Challenge your children using our KS2 maths word problems resources. They require children to use their reading comprehension skills while also applying everything they have learned in math class. The student should read the word problem and derive a multiplication equation from it; he or she can then solve the problem by mental multiplication and express the answer in the appropriate units. Most multiplication word problems are pretty straightforward, but without knowing what is being asked, students may have trouble making sense of all the important information in the question. Word problems take math understanding to the next level. For example, suppose you have 4 boxes of 12 cookies and want to know whether 24 children can each have two cookies: the total number of cookies you have is 48, since 4 x 12 = 48, and to find out if each child can have two cookies, check that 24 x 2 = 48. These worksheets contain simple multiplication word problems. Here you can find a wide array of maths word problems, from multiplication and division to fractions and more. All are designed to help your Key Stage 2 pupils develop their problem-solving skills! The words in a particular problem will not change, but the numbers will. Children who struggle to convert a word problem into a math equation will find it reassuring (a confidence builder) to revisit the same verbal clues with different numbers, so consider printing a couple of regenerations of each problem. Word problems often trip up even the best math students; many get stumped trying to figure out what they are looking to solve.
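To make the derivation step concrete, here is the surprise-bag example worked through; note that the number of prizes per bag is an assumed value, since the problem as quoted does not state one:

$$7 \text{ bags} \times 4 \text{ prizes per bag (assumed)} = 28 \text{ prizes to buy}$$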
In order to understand this problem, we must make some assumptions. If we first assume that the rate of fuel consumption is constant, and if we further assume that the spacecraft experiences a positive thrust due to its fuel consumption, we can analyze the motion.
In order to leave the surface of the earth, the constant acceleration produced by the rocket engine's thrust must be greater than g at the surface of the earth: it must be greater than 9.81 m/s^2. The net acceleration of the spacecraft will be the difference between the acceleration from the thrust and that of g.
According to Newton's Universal Law of Gravity, as the spaceship gets further from the earth, the acceleration due to gravity (g) will decrease; thus the difference between the acceleration due to thrust and that due to gravity gets larger, and therefore the spacecraft will experience a greater net acceleration.
We should also notice that the mass of the spacecraft will decrease as the spacecraft burns fuel. The result will be that the same thrust will produce a greater acceleration, so again the spacecraft will experience an even greater net acceleration.
Third, we know that the density of air decreases as the spacecraft gets higher into the atmosphere. Therefore the same thrust will more efficiently move the spacecraft.
The result of these three changes is that the spacecraft will continue at ever greater accelerations until it leaves the atmosphere.
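These effects can be summarised in a rough expression for the net acceleration. This is a sketch that ignores drag, where T is the constant thrust force, m(t) the decreasing mass of the spacecraft, and r(t) its distance from the earth's centre:

$$a_{\text{net}}(t) = \frac{T}{m(t)} - \frac{G M_E}{r(t)^2}$$

As m(t) falls and r(t) grows, both terms change in the direction of a larger net acceleration.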
Any hearing loss or disorder affects our ability to communicate. It is therefore important to protect your hearing and take good care of your ears.
The ear consists of 3 parts:
- External Ear
- Middle Ear
- Inner Ear
The external ear
consists of the auricle and auditory canal. The external ear collects sound waves and directs them into the middle ear. The auricle collects sounds from every direction. The ear canal, or external auditory meatus, carries sound to the eardrum.
The middle ear
lies between the external ear and the inner ear. It consists of an air-filled cavity and includes the auditory ossicles. The middle ear consists of:
1. Ossicles in the form of three small bones: the malleus, incus and stapes.
These three bones are connected. The malleus is directly connected to the eardrum, while the stapes is connected to the inner ear. As sound waves vibrate the eardrum, it in turn moves the malleus, which transmits the vibrations via the incus to the stapes. These vibrations are then ultimately transmitted to the membrane-covered opening that leads from the middle ear to the vestibule of the inner ear, called the oval window.
2. The Eustachian tube is a narrow passage leading from the pharynx to the cavity of the middle ear, permitting the equalization of pressure on each side of the eardrum.
The inner ear
is made up of two parts:
- Cochlea for auditory portion
- Semicircular canal and otolithic organ for balance and motion
The cochlea consists of hair cells, sensory cells and fluid-filled spaces. Oscillations in the middle ear set the fluid in the cochlea in motion. Hair cells in the cochlea perform the transduction of the sound waves into electrical impulses. Auditory nerve fibers transmit these electrical signals along the auditory nerve and eventually on to the brain stem for translating and processing.
Process of Hearing
- Sound travels through the external ear and impacts on the eardrum.
- Vibrations of the eardrum cause oscillations in the three bones (malleus, incus and stapes) in the middle ear.
- Vibrations set the fluid in the cochlea in motion. Tiny hair cells in the cochlea perform the transduction of the sound waves into electrical impulses.
- The auditory system transmits these electrical impulses to the brain for processing.
Causes of Hearing Loss
- Problems of the external ear include impacted earwax and ear infection.
- Problems of the middle ear include a perforated eardrum, fluid in the middle ear (Serous otitis media), chronic otitis media, otosclerosis, and poor Eustachian tube functioning.
- However, the most common cause of hearing loss is due to problems of the inner ear.
Problems of the inner ear are caused by:
- Noise induced hearing loss
- Inner ear syphilis
- Viral infection of the inner ear or nerve
- Hearing loss that runs in the family
- Diseases such as diabetes, kidney disease, high blood cholesterol, etc.
- Accidents causing damage to the inner ear
- Tumors that develop on the balance and hearing nerves
Process of Diagnosis
1. Your doctor will obtain detailed information to help identify the possible causes of your hearing loss, including:
- Duration of hearing loss
- Symptoms that go very quickly, come and go, or become worse
- Ear problems such as sound in the ear or dizziness
- Other symptoms such as facial numbness and loss of balance
- Use of medications and underlying medical conditions
- Family history of hearing loss
- Exposure to loud noise from shooting, using fire crackers, or working in factories
2. Your doctor will perform screening tests of the ears, nose, throat, nervous system, and brain function.
3. Special tests include:
- Audiogram to see how well you hear words at various volumes to determine the type of hearing loss and frequency.
- Tympanogram to see how well the middle ear is functioning.
- Evoked auditory response to test the hearing nerve.
4. If the cause of hearing loss is not found or a tumor is suspected, a CT/MRI may be required.
5. Blood tests for diabetes, kidney disease, cholesterol, red blood cell density, syphilis, or immunity.
Treatment and Prevention
Treatment and prevention depend on the cause of the hearing loss.
- Problems of the external ear and middle ear can be treated with medications or surgery.
- Problems of the inner ear are quite complicated. Treatment depends on the cause of the hearing loss and may be ongoing. Delay in treatment may affect the results.
- You can prevent hearing loss by avoiding risk factors such as exposure to loud noise and certain medications. Seek immediate medical attention if any abnormalities are present or suspected.
Causes of Cochlear Disorder
- Bacterial or viral infection of the middle ear, inner ear or brain from meningitis, brain syphilis, herpes zoster of the ear, mumps, and rubella.
- Congenital deafness caused by maternal rubella infections or certain medications during pregnancy. An untreated hearing problem can affect a child’s ability to learn spoken language, while adults with hearing loss may have communication problems at work, which may result in them losing their jobs.
Treatment of Patients with Hearing Loss
1. Otitis or otitis media can be treated with surgical procedures.
2. Cochlear disorder:
2.1 Mild symptoms that have been present for less than 1 month can be treated with oral medications.
2.2 In the case of mild hearing loss, a hearing aid worn in the ear or inserted deeper into the ear canal can help by making sounds stronger.
2.3 In the case of severe hearing loss, in which a hearing aid cannot help, a cochlear implant may be an option. This is a complicated procedure: performing a cochlear implant requires the cooperation of a team of surgeons and speech therapists.
Streams are a flexible and object-oriented approach to I/O. In this chapter, we will see how to use streams for data output and input. C++ has support for both input and output with files through the following classes: ofstream, the file class for writing operations (derived from ostream); ifstream, the file class for reading operations (derived from istream); and fstream, the file class for both reading and writing. File I/O in C++ works very similarly to normal I/O (with a few minor added complexities); these are the 3 basic file I/O classes in C++.
The ifstream class derives from the istream class, and enables users to access files and read data from them. The ofstream class derives from the ostream class, and enables users to access files and write data to them. The fstream class is derived from both the ifstream and ofstream classes, and enables users to access files for both data input and output. These classes are defined in the fstream header file. The open function is a member of the ifstream or ofstream class; the function in its most basic form takes a single argument, the path to the desired file.
A second argument that can be sent to the open member function is the mode, which is based on a set of predefined constants. We can use the seekg, seekp, tellg and tellp functions to enable random access to files. The seekg and tellg functions are used with ifstream objects, and the seekp and tellp functions are used with ofstream objects. Input and output is oriented around three classes; as of the current standard, istream and ostream are made available through the iostream header.
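As an illustration of the seek and tell functions mentioned above, here is a minimal sketch (the filename example.txt is just a placeholder):

#include <fstream>
#include <iostream>

int main() {
    std::ifstream in("example.txt");      // assumed to exist
    if (!in) return 1;

    std::streampos start = in.tellg();    // current get position (the beginning)
    in.seekg(0, std::ios::end);           // jump to the end of the file
    std::cout << "File size: " << in.tellg() << " bytes\n";
    in.seekg(start);                      // jump back to the beginning
    return 0;
}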
This provides us with three predefined variables: cin, for console input; cout, for console output; and cerr, for standard error. The file versions of the stream classes are included in the fstream header file. The input file class is ifstream and the output file class is ofstream, meaning that ifstream is used to read data from a file and ofstream is used to write data to a file. We use the member function open to open a file for reading or writing.
We pass the open function the path of the file to open. We can write to and read from a file using the same statements we have been using to read from and write to cin and cout. Reading from a file is a bit trickier: we need to read the data from the file into a storage unit, such as a variable.
If we know the type of data that is stored in the file this is easy to do. We can also read from the file into a char array that acts as a buffer for the data. We do this using the read member function, which accepts two arguments: a pointer to the char array buffer and the number of characters to be read into the buffer.
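A minimal sketch of reading into a char buffer with the read member function (data.txt is a placeholder filename):

#include <fstream>
#include <iostream>

int main() {
    std::ifstream in("data.txt");
    if (!in) return 1;

    char buffer[64];                       // storage for the raw characters
    in.read(buffer, sizeof(buffer));       // attempt to read 64 characters
    std::cout << "Read " << in.gcount()    // gcount() reports how many were
              << " characters\n";          // actually read
    return 0;
}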
We can use the fail member function to see if an ifstream or ofstream object is in the fail state. As always, please take a gander at my book on C++ at http: Files afford persistent storing of data so that the data can later be used. We have already seen how the built-in streams cin and cout are used for the movement of data into and out of programs.
18.6 — Basic file I/O
Now, we will look at how to use file stream variables to move data between an external file and a program. A file stream variable is an object. The class of an input file stream variable is ifstream and the class of an output file stream variable is ofstream.
To call a member function of either an ifstream or ofstream object we use dot notation, which consists of the object name followed by a period followed by the name of the function we wish to call.
C++ tutorial: Input/Output with files
As we have just seen in the program above, an output file stream is an object of the data type ofstream. We attach the ofstream object to a particular file by passing the filename to the ofstream object via the open member function. If the file does not exist, the open member function creates it and positions the output stream pointer at the beginning of the file.
If the file already exists, the open function causes the file to be overwritten, unless we use an append mode flag such as ofstream::app. Note that after a call to the open member function the file pointer is set to the first item of the file.
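A minimal sketch of opening a file in append mode so that existing contents are kept (log.txt is a placeholder):

#include <fstream>

int main() {
    std::ofstream out;
    out.open("log.txt", std::ofstream::app);   // append instead of overwrite
    out << "another line\n";
    return 0;
}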
We can use the eof member function to check whether the end of an input file stream has been reached. The value returned by the eof member function is 1 when we have read past the end of the file; otherwise, the value returned is 0.
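A common pattern is to loop on the extraction itself and only consult eof afterwards, as in this sketch (numbers.txt is a placeholder):

#include <fstream>
#include <iostream>

int main() {
    std::ifstream in("numbers.txt");
    int value;
    while (in >> value) {                  // extraction fails past the end
        std::cout << value << '\n';
    }
    if (in.eof()) {
        std::cout << "Reached end of file\n";   // eof() now returns true (1)
    }
    return 0;
}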
Take a look at my Amazon page at http: We include the standard header file fstream whenever we use functions involving output files. The file created by the program will be located within the directory under which our program executes. Again, the output path uses the current working directory as the location to create the file.
A file is simply a secondary storage area used to hold information. Note that we must declare file stream variables of type ifstream in order to receive input from a file, and of type ofstream in order to send output to a file.
Remember, the ifstream represents a stream of characters coming from an input file. We use ofstream to represent a stream of characters going to an output file. Both ifstream variables and ofstream variables use the open function call to open a file.
Each function call accepts a string argument that contains the path to the file to be opened. Our final program copies the contents from one file to another using an extremely primitive system of reading from ifstream into a string and then sending the string to ofstream.
Unfortunately, this does not preserve whitespace! Declaring the input and output objects is simple.
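The copy program itself is not shown in this excerpt; a sketch of what such a primitive word-by-word copy might look like (source.txt and copy.txt are placeholders) is:

#include <fstream>
#include <string>

int main() {
    std::ifstream in("source.txt");
    std::ofstream out("copy.txt");

    std::string word;
    // operator>> skips whitespace, which is exactly why this copy
    // does not preserve the original spacing or line breaks
    while (in >> word) {
        out << word << ' ';
    }
    return 0;
}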
1. Chapter 9, Section 6: Circles (March 11, 2009)
2. Find Circumference
- A Circle is the set of all points that are the same distance from a given point.
- The given point is the CENTER of the circle.
- Circumference: the distance around the circle.

3. Inside the Circle
- Radius: a line segment that starts on the circle and ends at the center point.
- Chord: a segment whose end points are on the circle.
- Diameter: a chord that passes through the center of the circle.
4. Pi or π
- Pi, or π (pronounced "pie"), is the ratio of the Circumference (or C) to the Diameter (or d).
- C = πd
- C/d = π
- Pi is a constant: π = 3.14159265358979323846...
- Pi as a fraction is approximately 22/7.
- Because every circle is similar to every other, the ratio is always the same, which is why pi is a constant.

5.
- C = πd
- If d, the diameter, equals 1, then what is the Circumference (or C)?
- C = π(1), so C = π
6. Find the circumference of the circle with a diameter of 6 ft.
- C = πd (the formula)
- C ≈ 3.14(6 ft) (substitute)
- C ≈ 18.84 ft (simplify!)

7. Find the Circumference of Each Circle
- Diameter = 200 miles: about 628 miles
- Radius = 30 millimeters: about 188.4 mm
- Diameter = 2.8 inches: about 8.8 inches

8. Making Circle (Pie) Graphs
- A CENTRAL ANGLE is an angle whose vertex is the center of a circle.
- There are 360° in a circle.
- To make a Pie Graph, find the measure of each central angle by setting up a proportion.
9. Use proportions to find the measures of the central angles.
- Juan's Weekly Budget:
- Lunch (l) = 25%
- Recreation (r) = 20%
- Clothes (c) = 15%
- Savings (s) = 40%

Find each percentage of 360° to get the degree measurement of the central angles: L = 90°, R = 72°, C = 54°, S = 144°.
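For example, the lunch angle follows from the proportion:

$$\frac{25}{100} = \frac{x}{360^\circ} \quad\Rightarrow\quad x = 0.25 \times 360^\circ = 90^\circ$$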
10. AT HOME
- Use a cup opening or cap to make your circles if you don't have a compass.
11. Blood Types of Population
- Tell me the degree measurements of the central angles if you were to make a Pie graph with this information.

|Blood type||Share||Central angle|
|Type O||43%||155°|
|Type AB||5%||18°|
|Type B||12%||43°|
|Type A||40%||144°|
12. Students at Western High School: Find Central Angles
- Students at Western High School work in the following places: restaurants, 140; library, 15; auto shop, 60; retail stores, 75; and other places, 30. Round the measures of the central angles to the nearest degree.

Answers: Restaurant: 158°, Auto Shop: 68°, Retail: 84°, Library: 17°, Other: 34°.
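With counts instead of percentages, the same proportion idea applies; for the restaurant workers (140 of the 320 students in total):

$$\frac{140}{320} \times 360^\circ = 157.5^\circ \approx 158^\circ$$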
13. Assignment #16:
- Pages 472-473: 1-24. Skip #20.
- Remember, if you need a circle and have no compass, then use a cup from home and trace the outside.
Definition and Examples of Binomials in Algebra
By Deb Russell (updated January 19, 2019)

A polynomial with two terms, usually joined by a plus or minus sign, is called a binomial. Binomials are used in algebra. A polynomial with one term is called a monomial and could look like 7x. A polynomial with two terms is called a binomial; it could look like 3x + 9. It is easy to remember binomials, as "bi" means 2 and a binomial has 2 terms. A classic example is the following: 3x + 4 is a binomial and is also a polynomial, and 2a(a + b)² is also a binomial (a and b are the binomial factors).

When multiplying binomials, you'll come across a term called the FOIL method, which is often just the method used to multiply binomials. For instance, to find the product of 2 binomials, you add the products of the First terms, the Outer terms, the Inner terms, and the Last terms. When you're asked to square a binomial, it simply means to multiply it by itself. The square of a binomial will be a trinomial, and the product of two binomials will also be a trinomial.

Example of Multiplying Binomials (here using the imaginary unit i, where i² = -1):
(5 + 4i) × (3 + 2i)
= (5)(3) + (5)(2i) + (4i)(3) + (4i)(2i)
= 15 + 10i + 12i + 8i²
= 15 + 22i + 8(-1)
= 15 + 22i - 8
= (15 - 8) + 22i
= 7 + 22i

Once you begin taking algebra in school, you'll be doing a great many computations that require binomials and polynomials.
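The general FOIL pattern, and the squaring rule mentioned above, can be written compactly as:

$$(a+b)(c+d) = \underbrace{ac}_{\text{First}} + \underbrace{ad}_{\text{Outer}} + \underbrace{bc}_{\text{Inner}} + \underbrace{bd}_{\text{Last}}, \qquad (a+b)^2 = a^2 + 2ab + b^2$$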
Germanic, Goth and Norman Invasions
European invasions and migrations
The first Germanic tribes are believed to have come from Scandinavia. They migrated into north-western Europe, which was sparsely populated by small farming communities that had settled there during the Bronze and Iron Ages. At the time of the initial migration (between 500 and 300 BC), the region had come under the influence of the Celts, who had moved in from the east somewhere between 600 and 300 BC. It is believed that the Germanic tribes that evolved from here mixed with the early farmers and with the newly arrived Celts.

This was also the case in later migrations and invasions. Within the tribal societies there was a very strong sense of local power, and the various migrating tribes and sub-tribes immediately established their tribal authority in the areas they conquered. While there certainly was a lot of violence involved, evidence indicates that in most situations the newcomers (often less than 10% of the total local population, sometimes as low as 2%) rapidly integrated with the local population. This in particular was the case where the newcomers settled in Celtic lands; they very rapidly replaced the local culture and language. Frankenland in modern Germany is an example of a rather rapid transformation; it also marks the furthest corner to which the Franks migrated.
At the time the Romans arrived, the Germanic tribes of the Cimbri and the Teutones had reached southern France and during their annual raiding campaigns even ventured into Italy and Spain. By that time the Romans had conquered the Mediterranean and started to move north. This is when the two groups met, and it was the Roman general Gaius Marius who, with his superior army, stopped them in 102 BC (Battles of Aquae Sextiae and Vercellae).

Around 100 AD, in what is now north-east Germany and Poland, there were dozens of Germanic tribes, representing as many as 120 different groups.

After the Roman conquest of Gaul and the unsuccessful expeditions into Germania, the border region between the 'Barbarians' and the Roman Empire was established along the rivers Rhine and Danube; it never was a watertight border. Nevertheless, because of Rome's strong military power, there was, for several hundred years, no further mass migration into the Roman Empire.
Trade and other opportunities, however, did attract people to the border region. People from tribes living further away also moved towards the Roman borders, which brought these tribes into closer contact with each other. They started to form confederations, whereby they both fought each other and together fought the Romans.
In order to manage the continuous pressure of these people on their borders, the Romans allowed some of them to settle in foederati within the Empire. Bound by treaties, they had to provide military services to the Empire.
In the 3rd and 4th centuries there was further pressure from these Germanic tribes to enter Roman territory. The Third Century Crisis provided increased opportunities to exploit the weaknesses in the Roman border regions.
The collapse of the Roman Empire was partly due to the fact that a weakening empire could no longer control its borders, and this led to more serious invasions; eventually these tribes took over all of the Roman Empire with the exception of the eastern part.

The Salii moved from the River IJssel into what is now modern Brabant, Flanders, northern France and the Rhineland in Germany. Combined with other tribes they conquered or dominated, they became known as the Franks, and under Charlemagne they were able to occupy most of the rest of the previous Western Roman Empire.
From the late 4th century onward there was further pressure from these Germanic tribes to enter Roman territory, this time driven by invading tribes from the Eastern Steppes, the most notorious of them all being the Huns. These invaders took two routes into Europe from the steppes: along the Lower Danube through the Moravian Gap into northern Greece and from there further into Europe, or continuing north of the Alps towards the River Rhine.
The Eastern Germanic tribes formed the confederation of the Goths; they were pushed along towards the river Danube and eventually settled in Italy and Spain. The Vandals arrived via the northern route and settled in North Africa.

They were joined by the Alemanni, who gave Germany its name in French ('Allemagne'), Spanish ('Alemania') and Portuguese ('Alemanha'); it translates to 'all men', most likely referring to a combined confederation. They attacked the Romans from their home base in what are now the Alsace and Swiss regions in 357 and 366, but were on both occasions defeated, only to be successful in the great migration, in which they combined forces with the Vandals, starting on the last day of the year 406.

The Saxons (combined eastern Germanic tribes) remained largely independent in north and east Germany, from where they, together with the Scandinavian Jutes and Angles, also moved into Roman Britain, where they became the rulers after the collapse of the Empire. The Frisii also joined in, but they stayed largely in their homeland, the coastal regions in the north of the Low Countries. Together with them was the rather small tribe of the Tubanti (Twente), who stayed more or less in their original lands, possibly for over a thousand years.
The Lombards were relatively late comers and started to take over several parts of northern Italy around 600 AD. Others who also made a dash into Europe included the Suevi (Spain), the Burgundians (France) and the Rugians (Austria/Adriatic); most of them were conquered and/or integrated by the dominating tribes. The Huns and Avars also created havoc but eventually became part of the Goths, Slavs and Hungarian peoples.

Another migration wave started from the late 8th century onward, again originating in Scandinavia, by people known as Norsemen, Vikings and the Rus; they settled in Russia, Britain, Normandy and Sicily. In other parts, such as the Low Countries and France, their activities were mainly limited to raids and short occupations.
In general, all of the invaders and migrants mixed with the local population, and the two rather quickly assimilated; most of the invaders also kept the Roman systems and infrastructures in place (with the exception of the Anglo-Saxons).
Another interesting theory is that tribal communities rely heavily on their internal cohesion; their culture and traditions are aimed at keeping the tribe together. In many languages the name of the tribe simply translates to 'the people'. Once tribes start to intermingle, for example through migration, that cohesion collapses. Christianity provided a higher level of cohesion; it offered a bigger world view.
Reasons for migration
The reasons for migration are not well understood. In particular, the rather rapid movement of the Celts from eastern Europe remains a mystery.
Reasons for the increase in the invasions that have been mentioned include:
- the need for more agricultural land;
- better (more pleasant) weather conditions further south;
- climate changes that were occurring at the time in Scandinavia; and
- as in modern times, the fact that the border regions of prosperous economies attract people who want to share in those spoils.
- Increasingly leaky borders allowed these people to move into the 'Promised Lands'.

The Germanic migration from Scandinavia might also have been forced by climate changes and a deterioration of the limited agricultural lands in southern Scandinavia.
Later on, the decline of the Roman Empire was a key reason for migrations and invasions. The border was already severely weakened after 260 AD when the Salii started to move south; from that time onward there was a slow but ongoing movement southwards.

There certainly was an increase of tribal population north of the border looking for economic opportunities, and this of course has always been a reason for migration. The 'grass on the other side' looked greener, especially with many business opportunities along the border with the Roman Empire, with its large military settlements and growing cities and farming communities.

Again, possible environmental changes in the Low Countries, which led to high ground water levels along the southern parts of the main river system and the subsequent degradation of the arable lands, saw new waves of smaller-scale migration; it is most likely that this was at least part of the reason why the Romans shifted their border fortifications to the lower end of the river system here. There is also an indication that settlements along the North Sea coast were abandoned.
After the final departure of the Romans in 406 and also the departure of the large landowners there was a certain level of ’emptiness’ especially along the old border regions and in Britain. The following migration process both into France and the British Isles lasted for 200 years, so a rather gradual process. The eastern migrations of the Goths went much faster and population pressure and food shortages might have influenced these more rapid ‘invasions’.
There was also a difference in nature between the tribes that might have influenced migration patterns; while for example the Saxons proved to be formidable warriors, the Frisii, as we saw above, put their energies into their superior seamanship and became the traders of the north.

Invasions occurred when opportunities arose for raids and plunder in more affluent areas with little or no protection.

During the Roman Gallic wars many of the Germanic and Celtic tribes were conquered; those who didn't voluntarily accept Roman rule were brutally slain, and the Romans didn't shy away from using tribal feuds to invite enemy tribes to plunder the lands of those who resisted them, as for example happened with the Eburones in Brabant.

Outside Roman civilisation, on the other side of its northern borders, lived the Frisii, Saxons, Jutes and Angles. They were great raiders and travelled the seas and rivers, but, at least in the early days, never had any inclination to migrate.

However, despite these reasonably stable situations, we also see regular revolts occurring, especially in the border regions. This started immediately after the conquest of Gaul (56 BC). These guerrilla battles were often (in the short term) quite effective. Especially the war in 9 CE stopped the Roman expansion north, forever. This battle has left long-lasting effects that are still important today, such as country borders and cultural imprints on the people. The tribes used the leaky border region to escape and also to get reinforcements from the free Germanic people.
Part of the border between these free German tribes and the Romans was what is now the northern border of Brabant.
At the same time the border region attracted a lot of trade, and this led to economic boom times, especially around the places where the Romans had their military bases and posts.
As is always the case, more affluent societies function as a magnet to those who are less well off, especially in times of war or of invasions of their lands, for example by the Huns and the Avars.

All of this put significant pressure on the Roman Empire; in order to alleviate some of that pressure, resettling programs were launched. However, they seldom worked as planned and often resulted in dissatisfaction on the side of the newcomers and in resentment from the local population, both from the native people and the Roman settlers. Especially the Goths, who in 376 were permitted to settle in the Danube region, were victims of these often bungled Roman resettling projects. This eventually led to the split of the Goths into those who went east (the Ostrogoths) and those who went west, the Visigoths ('brave Goths').
The early migrations and the consequent problems forced the Romans to find better solutions. They started to create 'foederati' (autonomous regions). This allowed barbarian tribes to cross Roman borders and to settle within Roman territory. In exchange, the new settlers provided defense assistance to the Roman border regions and paid taxes. They initially did not receive Roman citizenship.
The first of such regions was set aside for the Salii (together with other tribes, since then named the Salian Franks); in 358 they received Toxandria, the region west of the Rhine and the original homeland of the Eburones. Those living on the Roman border, south and west of the Rhine, were known by the Romans as 'Germani cisrhenani' (cisrhenani is Latin for 'this side of the Rhine'). Their major centre was in Tongeren (now Brabant).
With a change in lifestyle from nomads to more or less permanent settlers, the leadership structure also started to change. While on the move, leaders were elected on an as-needed basis, depending on their skills. Leaders now became more permanent, and soon leadership became hereditary; such leaders are known in German as the Heerkönige. Their line of descent became important, and myth and folklore rapidly created links back to the gods. This became the foundation of kingship.

In order to be able to fulfil their military obligations, the local leaders of these regions were given the right to payments. In order to obtain such payments, they tried to obtain a senior Roman military rank. One of the most successful local leaders in this respect was Childeric, the father of Clovis; he called himself 'magister' (general).

Other foedera followed soon after the one granted to the Salii (Saliers). However, over time the meaning of foedera changed: a 'foedus' simply became a mercenary contract. After 358 these Salian Franks basically protected, in the name of the Romans, the northern border of the Empire.

There was fierce opposition in Rome regarding the foedus (treaty) policy, and from a central government position rightly so. As we will see below, the Salii soon started to further colonise what we now call Brabant and Flanders. This in turn led to a position where other barbarian mercenaries were called in to tackle each other. Rome often switched alliances between the various groups, which allowed it to hang on to its position for a little while longer, but slowly and steadily this led to an undermining of Roman control.

This development also allowed the local leaders to build up military strength and military organisation. Wherever Roman military functions existed (comes civitatis), they were incorporated in the new Germanic systems. The same applied to the large Roman agricultural estates (latifundia). Furthermore, over time the function of kingship also became administrative. Officials of the king involved in the administration were provided with privileges and soon started to form a group of service aristocrats; we also come across such developments later on, where they are called ministeriales (gentry).

This new form of nobility established itself next to the ancient nobility, mostly linked to the early Germanic leaders who started to settle the land. For many, such as the Salii, the Burgundians and the Goths, long hair was their status symbol. They are also known as the long-haired kings. Death was often preferred over the cutting of their hair.
All of this led to a growing upper class in the newly emerging chiefdom societies, such as that of the Merovingians, who emerged from the Salian Franks.
Another massive migration, pushed along by the advancing Huns, took place on December 31st, 406, when a huge barbarian confederation of Vandals, Suevi and Alans crossed the frozen Rhine near the military town Castrum Mogontiacum (Mainz). Reports from St Jerome written two years later indicate that they destroyed the city and that many people were massacred; the bishop, Aureus, was put to death by the Alamannian Crocus. After this event they sacked Trier, and over the next three years, with their wagons with women and children and all of their belongings, workshops and loot, they slowly moved further south into Gaul. Jerome lists the cities now known as Mainz, Worms, Rheims, Amiens, Arras, Thérouanne, Tournai, Speyer and Strasbourg as having been pillaged by the invaders. After this invasion Rome lost control over northern Gaul.
Also important for our region was the foedus established in 418 for the Visigoths in parts of Aquitaine, which had Toulouse as the capital.
After the collapse of the Roman Empire the Franks were fully accustomed to Roman structures and when they took over the leadership they simply incorporated many elements of Roman life in what would become the Merovingian and Carolingian periods.
We now also start to see the arrival of the first ‘native’ scholars entering the scene writing about the history of their lands from their perspective. The key early historians are:
- Jordanes for the Goths (around 554)
- Gregory, bishop of Tours for the Franks (593/4)
- Bede for the English (793)
- Paul the Deacon for the Lombards (799)
In north-western Europe it was the Franks who put the biggest stamp on the map, especially under the Merovingian and Carolingian kings. They provided the political and linguistic foundation for the whole region (except Britain). Their legacy is still very visible, notably in the country of France.

While the Franks were able to slightly extend their territory northwards, the Saxons stayed put east of the river IJssel, and while the Frisians remained unconquered to the north, they were able to also occupy Holland and Zeeland. There is, however, archaeological evidence of Roman influences in these areas, but it is unclear what the exact nature of this was. At that time these lands were very sparsely populated because of the high ground water levels in an already 'wet' environment during the so-called Duinkerke transgression period.

After Charlemagne conquered the Saxons shortly after 800, the Frankish influence was pushed further north and east. However, the collapse of the Carolingian Empire saw the northern part of the old Frankish Empire largely left to its own devices, which left many local rulers in charge, only slowly merging into regional powers; further consolidation started with the Burgundians and was finally completed by Emperor Charles V in the 1540s.

The Franks, the Goths and the Saxons were perhaps the largest confederations of tribes, but several others also emerged during the migration period. An interesting aspect here is known as ethnogenesis. The merging of peoples saw individual tribal kinships merging into a super kinship, and as legends merged a new culture emerged whereby the people in the super kinship saw themselves as one kinship. Over time these confederations created their own new common tradition and self-identity, whereby the original tribal myths and traditions were suppressed, morphed and overtaken by the common traditions and myths of the super kinship.
Tribal village life
During Roman times we see some of the Germanic and Celtic Iron Age settlements starting to grow from a dozen or so people into villages, the largest ones to around 300 people. They did not develop an urbanised society. They were mainly cattle farmers but also cultivated their lands, and they were well skilled in iron making, which was also exported, be it in a limited way, throughout Europe.

Also important to recognise here is that these communities saw themselves as including both the living and the dead; the latter were an essential part of daily life, and ancestor worship was an integral part of their society. Kinship ancestry was a very important part of the structure of the tribe, marriage arrangements and kingship rules. Elders played a key role in these arrangements and were also the people who represented the living in relation to the dead members.

Without central control, these tribes had an elaborate system of feuds, protection (gift giving), vengeance and compensation, wergeld (also spelled weregild, wergild, weregeld, etc.), that regulated their social affairs. But this also led to a rather violent society with lots of internal conflicts (leaders were chosen in battle) and war. The annual raids into neighbouring territories also contributed to ongoing brutality. All free men could be summoned by the king for military services, which were executed under the command of a count or dux.
These structures were not unique to the tribes in north-western Europe; throughout the world we see that tribal structures evolved and sustained themselves along these lines. Thanks to the early 13th-century Icelandic sagas of Snorri Sturluson we get great insights into tribal life of that time. Important structures such as gift giving and friendships are characteristic of the Germanic tribal systems throughout north-western Europe and indeed beyond.
The new tribal societies that started to emerge, combinations of various clans and communities, could have a few thousand people, growing even into the tens of thousands during Roman times, when different tribes started to form militias as part of their 'foedus' with the Roman army. There could have been half a dozen more or less permanent tribes that in one way or another roamed the area, and others might have (occasionally) wandered in from elsewhere, especially in Late Roman times. During the Roman occupation many of these tribes mixed. Through forced migration, invitations to plunder other tribes, and pressure from tribes moving into the area from further north and east of the Roman Empire, a whole new society emerged.

Also in 'free Germania' the people were influenced by the new society, and there is evidence that, during the 2nd and 3rd centuries, a higher level of organisation started to occur here as well, especially in the region within around 200 kilometres of the border. Perhaps some sort of tribal alliance started to form; perhaps this is what the Romans referred to when they started to talk about the Franks, which had become more like a confederation of what in previous times had been individual independent tribes.

The Franks were organised in settlements (weiks; wijk in Dutch, -wick in English). These settlements were further divided into households (domus). The larger settlements also had a mead hall, a long house, where the chief and his warriors met and feasted after victories. This drinking hall became the centre of the new civilisation, where leaders met and decisions were made. Later on, in or next to it, the chief had his farm. In the high middle ages these became the first fortified enclosures (motte and bailey castles).

The tribal system had over the previous millennia evolved into a tripartite system and consisted of chiefs, warriors and farmers. It was not until much later, when confederations started to form, that these chiefs grew into kings. There are also indications that prior to the arrival of the Romans there was some sort of governance in place; based on hoards unearthed in what was northern Gaul, it has been estimated that in the years before the Gallic wars some 220,000 gold and silver coins were minted here. This indicates that there was some use for them and that there was some sort of governance and trade. In general, of course, these coins formed part of the wealth of the ruling class.
The organisation of the tribe was rather flat. The tradition was to divide the land amongst sons, thus keeping the tradition of chieftains alive, rather than consolidating property and power (and thus growing into countries with political power ruled by kings). Only from Merovingian times do we see the top warriors starting to form a ‘nobility’ class (mimicking the Roman nobility structure with dukes and counts). Initially the tribal chief might also have had religious (sacral) powers. That concept could well date back to their Scandinavian origins. This tradition might also be the origin of the sacred powers of the Merovingian and Carolingian rulers.
Germanic leadership based on Germanic law is known as ‘mund’. This was based on blood relationships, whereby the head of the family was responsible for the family group. The power was initially mostly disciplinary. Important duties included watching over the women’s chastity and faithfulness to prevent the family honour from being harmed: in the first case if a bride was not a virgin at the time of her departure from the family, in the second if sons were born who were not of the common blood. The head of the family also had to control the male family members who might cast shame on the family honour, who might not serve the family, or who might endanger the whole family by their imprudence (for example by drawing the family into a feud).
Eventually these ‘powers’ involved the extended families and the tribe as a whole. Tribes with such leaders also became known as ‘munds’. During Roman times related ‘munds’ started to form larger groups such as the Salii. During this and the subsequent Merovingian periods the mund became enshrined in the first written laws, such as the Lex Burgundionum.
The legitimacy of later chieftains and kings in the Germanic countries, all the way up to the Holy Roman Emperors, can be traced back to the ‘mund’. The principle of chivalry can also be traced back to it.
Early Germanic Law
The old Germanic laws were based on oral tradition and tribal customs. They were memorised by designated individuals who acted as judges in confrontations and meted out justice according to customary rule. They were also able to carefully memorise precedents. Among the Franks these ‘living libraries’ were called rachimburgs.
These laws started to be written down in Latin in the early Middle Ages (they are also known as leges barbarorum, ‘laws of the barbarians’). This happened between the 5th and 9th centuries. They provided authority to the king, as Roman Law had given authority to the emperors. At the time they were written down, the laws of the more southern tribes were influenced by Roman Law; they were also influenced by Christianity.
All these laws may be described in general as codes of governmental procedure and tariffs of compensation (wergild). They all present features somewhat similar to the Salic law – the best-known example – but often differ from it in the date of compilation, the amounts of fines, the number and nature of the crimes, the number, rank, duties and titles of the officers, etc.
Early Germanic laws and their tribes
|Legislation||Tribes||Date of legislation|
|Leges Visigothorum||Visigoths||5th-7th century|
|Lex Burgundionum / Lex Romana Burgundionum||Burgundians||483-532|
|Edictum Theoderici||Ostrogoths||ca. 520|
|Leges Langobardorum||Lombards||643-866 (earliest: Edictum Rothari, 643)|
|Pactus Legis Salicae / Lex Salica||Salii||6th-9th century (earliest ca. 500 – Clovis)|
|Lex Ribuaria||Ripuarian Franks||623-639 (Dagobert I)|
|Pactus Legis Alamannorum / Lex Alamannorum||Alamanni||7th century|
|Lex Baiuvariorum (Lex Baiwariorum)||Bavarians||740-748|
|Lex Francorum Chamavorum||Chamavi||9th century (Charlemagne)|
|Lex Saxonum||Saxons||803 (Charlemagne)|
|Lex Thuringorum||Thuringi||9th century (Charlemagne)|
|Lex Frisionum||Frisians||785 (Charlemagne)|
All free members of the tribe could participate in the assembly (thing/ting/ding), in Carolingian times known as ‘campus’. These were meetings in open spaces (near the holy oak or linden tree, a sacred spring or a river). This could also be the ‘malberg’ (mallum), a Frankish word indicating a hill or open field with a tree, stone or pole where the ding took place. There has been a long-running discussion regarding a reference in the Salic Laws to ‘Mallobergium Ohseno’, which has been interpreted as a possible reference to Oss. It could be translated as the malberg of those who herd oxen. There are more references to the importance of cattle in this area (Roman references), and the hill in Oss (Heuvel) could perhaps have been such a malberg. It has also been argued that, especially in the more remote areas of this region, some of these old Frankish traditions lingered on while elsewhere new developments had replaced them 1.
During the ding, tribal matters were discussed in a democratic way, disputes were settled, (Salic) laws declared and chieftains and kings elected. The most important ‘Thing’ took place in spring (campus martii – March), where the upcoming raiding season was discussed. Justice very much revolved around strict tribal family and boundary lines and included blood feud and weregild (which included valuations of human life or of human body parts). This was also the case elsewhere, as the Bible talks about ‘an eye for an eye’. Interestingly, there is even evidence that revenge also occurs in the social groups of primates.
The individual tribal feud law operated within strict rules and conditions. It was one of the last remaining original, vertically based (tribal bloodlines) justice elements, and the various different tribal laws finally became the basis of the criminal law that only started to emerge in the Late Middle Ages. Within the nobility, the importance of the vertically based blood relation system remained in place well into modern times.
Under Charlemagne the annual campus meetings were moved to May (campus maii). In an increasingly Christianised society these events were also used to issue new canons and laws.
The Frisian tribal assembly was called the Fimelthingh. It took place over a period of three days, during which the Ding Peace applied: no fighting was allowed during these days. We later see traditions such as the Market Peace, the Peace of the King and the Peace of God, all dating back to this earlier tradition. In Friesland, where elements of tribal law survived well into the 16th century, remnants of weregild and revenge were still embedded in the legal code.
All good things come in 3’s
At the Thing, verdicts were only given after the case had been dealt with three times.
As mentioned, this system of communal policy- and lawmaking lasted until well into the Middle Ages. All full tribal male members – through inheritance from their parents (dual descent) – made the local political decisions and spoke law in relation to local disputes and criminal offences, under the holy tree or tree of justice. The linden tree in the arms of Oss most probably dates back to these times. These trees could also be related to the Irminsul (wooden pillars which they believed supported the earth); both Charlemagne and missionaries such as Willibrord put a lot of effort into pulling these trees down. Trees have since time immemorial been linked to bridging the earthly realm with the spiritual realm.
It was these Germanic traditions that started to form the basis for our modern democratic societies, not the Greek and Roman political systems. While for many of our political institutions we use classical names and even classically designed buildings, the reality is that our democracy has more to do with the Germanic Thing than with the Greek and Roman senates.
In medieval England the tithing (ti = 10) was a self-policing body of ten men between the ages of 12 and 60 who had to swear that they would uphold the law in their community. They assisted the sheriff and had to report anybody breaking the law. Obviously such a body also protected its own people and sometimes led to intimidation and ‘silence’.
In Brabant it wasn’t until Burgundian times that the Dukes started to unify these local legal systems, imposing more and more of their ‘state’ laws upon the local population. It took several centuries to fully implement such a centralised system, and remnants of the old system lingered on. In Oss the annual ‘campus’ (jaargeding) of the local bailiffs (schepenen – scabini (Lt) – lawmakers) took place, according to old Germanic tribal tradition, on the Wednesday after Epiphany. Well into modern times ‘silence’ was used by the community to protect the people within it.
The tribal inheritance laws were based on double descent, which allowed descent to be traced through both the maternal and the paternal lines. This structure allowed for a large network of interpersonal relationships and also provided women with political power (although they, by rule, were not admitted to the royal office). This system was governed by collective norms and rules but also provided a large network of support in cases of hardship, warfare and a generally harsh and hostile environment.
The Church, initially through the missionaries, tried to break these tribal and kinship rules, as they were incompatible with canon law and hampered the Church in taking control over social and religious life. For centuries the tribal elders were able to maintain control over their own kin groups, but eventually the Church was able to replace these rules.
The Church introduced different kinship rules that influenced marriage and determined who was entitled to inherit kin rights and property (mainly by excluding inheritance through the maternal line). This significantly reduced the previously more prominent role of women in society. The Church declared any generational degree above seven as mythical, in order to break the relationship between the dead and the living (ancestor worship). This reduced the importance of the elders, who had an important function in relation to the ancestors. The Church now became the arbiter in these affairs, rather than the elders, who had for centuries been the keepers of the kinship genealogy knowledge of their tribes.
Limiting the number of potential contenders for tribal leadership was supported by the incumbent ruling class of the day, and this assisted the Church in transforming the tribal society. Eventually we see that these groups consisted of only three generations – grandparents, parents and children – a typical kinship combination that occupied the many castles in Europe between the 9th and the 12th century.
Nevertheless, kinship norms and rules remained strong, and we continue to see kin groups holding important secular and ecclesiastical positions; we recognise them by their dithematic kinship names (a Germanic naming tradition based on two words/themes). By the 10th century, however, this group starts to disappear, and many of the new church, secular and military functions and positions were taken over by secular clerks, ordained monks, episcopal administrators (ministeriales) and the newly emerging knights, often from humble beginnings. The new secular elite started to establish their own traditions, with courtly life and descent by noble birth.
By that time the various kinship rules of all the different tribes had been replaced by uniform norms and rules set by the Church. We also see that by now the authority of the ruler no longer depended on his relationships with the kin groups but was decided by the support of the Church. We see this reflected in the table arrangements at important events: the king and his direct family no longer sat at the table with the rest of the nobility, but at a separate head table. 2
- Ptolemy Map Germania
- Major tribes Low Countries
The name Germani was given to these people by the Romans and means ‘related people’ (germ, seed). In their writings they recognised a large number of different – but related – tribes. It was the Gallic Wars under Caesar that put Germania more clearly on the map.
It was not until after 600 BCE that, because of climatic changes, some of these regions became more habitable and the Celts, coming from the east, started to settle the area and mingled with the Bronze and Iron Age populations that had settled here as the early farmers of the region. People lived from cattle and agriculture on the lands just behind the dunes, on the higher grounds in Drenthe, Twente, the Veluwe and Brabant and, since the introduction of the mouldboard plough, also on the fertile clay grounds along the rivers. During the summer they ventured into the peat lands, where they went hunting and fishing and collected edible plants and herbs.
After 100BCE groups from the Germanic tribes arrived from Scandinavia and mingled with the Celts and other native people, and this population as a whole took over the Germanic culture. It is still unclear what exactly happened during this period; however, the end result is known. At that time there were three major cultural groupings that were going to form the basis for all Germanic people. Their names are linked to their mythology.
The mythical ancestor of all Germanic people was Tuisto (Tuisco). He had a son, Mannus, who became the ancestor of the Hermiones (Irminones); Mannus in turn had two more sons: Ing (Ingaevones) and Istaev (Istvaeones). The Frisii had their own mythical ancestor, Folcwald, and his son Finn (Frisian). The Scandinavians never left their ancestral home.
Interestingly, linguists see a link with the Indic Manu, the ancestor of the Indic people, pointing in the direction of shared Indo-European beliefs.
|Tribe||Tribal area||Main Centre||Comments||Movements|
|Hermiones||Elbe river system||Included: Suebi, Hermunduri, Chatti and Cherusci||Later also formed: Goths, Gepidi, Burgundians and Lombards|
|Ingaevones||North Sea coast (Jutland, Holstein, Frisia)||Major group that moved to Britain/England (Inglings?)||In Roman times made up of: Cimbri, Teutons (both Jutland) and Chauci (between the Frisii and the Elbe)||Mixed with Frisii, Saxons and Jutes|
|Istvaeones||Atlantic coast: Netherlands, Belgium, northern France, as well as the Rhine and Weser rivers||Mixed with the Ingaevones||Became the Franks and Alamanni|
These proto-tribes split into the many tribes that we know from Roman documents (perhaps as many as 70 different tribes). While the three groups also mixed amongst themselves, it looks like the Istvaeones had the most impact on what would become the Brabantine area, while the Ingaevones would be the ancestors of the Saxons and the Tubanti in Twente and northern Germany.
With the Ingaevones coming from the north-west, on the other side of the Rhine-delta ‘wilderness’, the people on the Belgian coast were amongst the last to be replaced by the Germanic tribes. This final part of the Germanic migration coincided with the arrival of the Romans. The Roman limes (border) prevented the Germanic tribes from moving further south. When these people were incorporated in the Roman Empire they became known, as mentioned above, as ‘Germani cisrhenani’.
Further south the Celtic culture and language remained. It was the Salii who, from the area of the river IJssel, had moved further west into the Low Countries, mainly into what is now Brabant. Here they became known as the Franks who, after 400AD, conquered all of Gaul.
Brabant was a border region between the Germanic and Roman cultures and even up to modern times maintains aspects of both peoples. In the north the newer Germanic culture (and language) had the larger impact, but the south (the area in Gaul) maintained the Celtic culture, which turned into a Celtic/Gallo-Roman culture (and adopted the language from this melting pot).
Once peace was established and the Roman Limes ran along the rivers Rhine and Danube, the old Iron Age culture was driven back some 100 to 200 km from the border, and in this ‘independent’ region a market zone established itself, highly influenced by Roman culture. The Romans heavily depended on produce from this region. Roman coins, often in hoards, are found throughout this zone.
Beyond that, the Iron Age cultures continued in the ‘Rich Burial Zone’, reaching well into modern Scandinavia and Poland, and beyond this, into modern Russia, the ‘Warrior Burial Zone’. The migrating Germanic tribes brought many aspects of this culture with them when they moved into the area that before them had seen a very strong Celtic influence.
The various Germanic and Celtic Tribes
The table below shows the various tribes that we come across in Roman texts. They all originate from the proto-tribes mentioned above. However, not all of these tribes lived in this area at the same time, and there are certainly tribes (or sub-tribes) that were missed by the Romans. The opposite might also be true: some of the tribes mentioned in their documents, relating to certain events or observations, might have been overstated, and some of these tribes might perhaps simply have been sub-groups.
Key tribes in the area we now call Brabant were:
- Eburones (their lands were called Toxandria), perhaps as far north as north-eastern Brabant,
- Nervii (Brabant and Rien – Antwerp area), and
- Tungrii (Masau); these latter migrated into these regions after the extermination of the Eburones by Caesar.
- Salii (Salian Franks), who were resettled in Toxandria through Roman resettlement programs.
In Twente – including Ootmarsum and Wietmarschen – lived the Tubanti.
Early Germanic tribes in and around Brabant
|Tribe||Tribal area||Main Centre||Comments||Movements|
|Nervii||East of the Scheldt (civitates: Famars, Liberchies, Doornik, Kortrijk)||Bagacum/Bavay||Their culture was Celtic. ‘Spartan’ warriors. Joined the Eburones in 57BCE||Near annihilation in 53BCE. Conquered by the Franks in 275 and finally fully taken over by them in 432|
|Batavii||River island between Waal and Rhine||Nijmegen||Warriors were part of the Roman army. Battle for independence 69CE||Originally part of the Chatti (Hesse/Saxony)|
|Chauci||Between Frisii and the Elbe||Tiberius led a campaign in 5CE. Loyal to Rome||Merged with the Saxons|
|Chamavi||North-west Germany||Hamburg/Hamm?||To Gueldres (Hamaland – Gelderland)|
|Salii||East of IJssel/IJsselmeer||Deventer?||Pirates, seafarers||To the Tungrii and further south|
|Tubanti||Twente/Westphalia||Oldenzaal?||Fought the Romans in 14CE and 308CE||Moved into Twente proper; part of the Saxon alliance|
|Frisii||Most coastal areas from Holland to Helgoland||Utrecht (Trajectum)||Traders||All of the northern Netherlands|
|Tungrii||Ardennes and Meuse Valley||Tongeren (Atuatuca)||Still mentioned in the 5th century||Moved to Toxandria. Lands occupied by the Salii|
|Caninifatii||Coastal area north of the Rhine delta||Joined the Batavii Revolt in 69|
|Atrebates (Gaul)||Around Artois, northern France||Arras||Participated in the revolt of 57BCE|
|Viromandui (Gaul)||Diocese of Noyon, Picardy, France||Vermand/St Quentin||Joined the revolt of 57BCE. Practiced human sacrifice|
|Eburones||Between Rhine and Meuse (Limburg)||Tongeren||Ambiorix led the revolt of 54BCE||Plundered by the Sicambri after their defeat by Caesar. Integrated into the Tungri|
|Ubii||Right bank of the Rhine||Oppidum Ubiorum (Cologne)||Alliance with the Romans in 55BCE||Assisted the Romans in fighting the Batavii in 70|
|Condrusi and Segni||County of Liège/Ardennes||Namen?||A group was left behind after a raid from the west||Came from western Germany. Integrated into the Tungri|
|Sicambri||Lower Rhine, next to the Menapii||Joined the revolt of Arminius in 9CE. Clovis was called a Sicamber (an honourable name)||Forcibly moved south in 11BCE; merged with the Salii|
|Morini (Gaul)||South of the Nervii/coastal wetlands||Terouanne/Terwaan||Reclaimed land through polders||Conquered by Rome in 33BCE|
|Menapii||From the mouth of the Rhine along the Scheldt, north of the Nervii||Cassel (northern France), later Doornik||Mont de Cassel (Roman) is still a strategic point. Joined the revolt of 57BCE and the Eburones revolt of 54BCE||Caesar put the Atrebates in control of the Menapii|
|Remi (Gaul)||Northern Champagne plain, Ardennes, Meuse Valley||Reims (Durocortum)||Renowned for their horses and cavalry. Most pro-Roman tribe in Gaul|
|Saxons||Lower Elbe (Holstein, Drenthe, Groningen, Twente)||Perhaps the land of the Chauci?||Confederation of Germanic tribes. Last to be converted||Anglo-Saxon expansion|
|Tencteri and Usipetes||Eastern bank of the Lower Rhine||Took over some lands of the Menapii. Defeated by Caesar in 55BCE||Forced to migrate by the Suebi (Swabia)|
|Treviri||Lower valley of the Moselle||Trier||Joined the revolts of 57BCE, 54BCE and 69CE||Celto-Germanic origin|
|Cherusci||Northern Rhine Valley||Osnabrück/Hanover||Battle of the Teutoburg Forest 9CE||Possible Celtic origin|
|Chatti||Upper Weser||Hessen (named after them). Thor’s sacred oak near Fritzlar||Joined the revolt of Arminius in 9CE. The place name Hatten (Gelderland) is named after them||Some moved west, becoming known as the Batavii. The Chatti merged into the Franks|
|Leuci||What is now Lorraine||Toul (Fr)||Supplied wheat to Caesar in 58BCE|
|Mediomatrici (Mettis)||Current diocese of Metz||Divodorum (Metz)||Part of the Belgica tribes||Were Celtic, but the Treviri occupied the area before Roman times|
The Eburones, Condrusi, Caeros, Segni and Paemani formed the Germani cisrhenani. They merged into the tribe later known as the Tungrii. The Nervii, Atrebates and Morini appear to be of Germanic origin; their tribes settled in northern Gaul and adopted the Celtic language and culture.
Since the Hallstatt period the Celtic culture had been adopted by the local population, but as indicated before the effects of this were marginal, as most people basically continued the lifestyle they had been following for the previous 2,000 years.
As we will see with the Eburones, in the battles with the Romans the local Celtic population was decimated, either killed or taken into slavery. This allowed Germanic tribes to penetrate deeper into the areas previously held by the Celts.
Germanic society, similar to Celtic society 500 years earlier, was based on war; peace meant stagnation and destabilisation. The society could not be kept together without violence and war, and prestige needed a continuous state of conflict.
The Salii and the Franks
- Frankish Empire
The most successful of what would become the Frankish tribe were the Salii. They came from the Low Countries (near the river IJssel) and moved to Brabant and what is now Belgium, including northern France, and later merged with their compatriots (the Ripuarian Franks) on the other side of the river Rhine (centred around Cologne).
In the meantime other land- and booty-hungry tribes tried to emulate the success of the Franks; they included the Bavarians, Langobards and Alamanni. By the 5th and 6th centuries, however, it would be the Franks who came to dominate their territories. Not far behind these Germanic tribes were the Slavs, who occupied the lands left vacant by the Goths, and to a large extent the border between these two large peoples has remained where it was since that time. Eastern Franconia – where the Franks replaced the Bavarii – became the border region.
For an overview of their history see: Merovingians.
- Batavii Island
The Batavii were a Germanic (sub-)tribe, originally part of the Chatti who lived further east in what is now Hessen, in central Germany. Between 49 and 15BCE they arrived – as a rather small group – perhaps on a raid or a campaign, and for whatever reason they settled on the river island known as the Betuwe (named after them). The tribal name Batavii comes from bat (‘excellent’) and avjo (‘land’) and refers to the region’s fertility; the area is today known as the fruit basket of the Netherlands (De Betuwe). They rapidly took control of the area and dominated the local population. The Romans called all of the population – natives and migrants – Batavii. It has been argued that from here they moved further west along the rivers into the Rhine delta.
Interestingly, coins have been found in the river area that were based on coins used in their homeland. Many of these coins have been found at Rossum, Lith and Kessel. The temple that was built on the Eburon (?) sanctuary in Empel is distinctively Batavian. The temple was dedicated to Hercules Magusanus, linking the Roman god Hercules to the Celtic-Germanic god Magusanus and the Germanic god Donar. However, their archaeological presence in Brabant doesn’t last long; there are no further traces of them after the 1st century AD.
They were loyal to the Romans, and one of their leaders, the first Batavian commander we know of, named Chariovalda, led a charge across the Weser against the Cherusci – led by Arminius – during the campaigns of Germanicus. The Batavii delivered comparatively far more auxiliary troops to the Romans than the other tribes; perhaps at least one son from every household served in the army.
It came as a surprise that in 69 they revolted against the Romans.
It is uncertain what exactly happened with the tribe immediately after the uprising. There will certainly have been assimilation, most likely with the Frisii, but it looks like the original Frisii as mentioned by Tacitus were replaced by, or at least mingled with, migrating Angles and Saxons, the same people who also migrated to the British Isles. As we will see below, the Batavii mingled with the Salii.
Large army settlements such as Ulpia Noviomagus Batavorum (Nijmegen) required large amounts of food and materials, and the Batavii and Frisii were certainly attracted to such places. The Batavii already had their own settlement here, but after the revolt it was taken over by the Romans, and they now had to build their own city on the outskirts of the camp, which became known as Oppidum Batavorum. (Oppidum is the Latin term for a fortified settlement where local goods were also stored; it is suggested that these trading places were indigenous in origin.) A century later a rather large city was established 2 km further on, known as Ulpia Noviomagus Batavorum (New Market – Nijmegen). It became the capital of the Civitas Batavorum. Apart from the Betuwe, a large part of Brabant was also part of this military district.
All indications are that they again became loyal Roman citizens, but the life and culture of the local population remained largely untouched by the Roman occupation. There is no mention or evidence of Roman or Romanised settlements outside the few main Roman centres in the area. Typical Batavian villages within their tribal land consisted of 6 to 12 farms.
- The Saxons
The word ‘Saxon’ is believed to be derived from the word seax, a type of single-edged knife.
The Saxons as we know them were, similar to the Franks, a confederation of Germanic tribes, in this case the Chauci, Angrivarii, Cherusci and Tubanti. As such they start to appear in the 3rd century. Their earliest known area of settlement is Northern Albingia (north of the river Elbe), an area approximately that of modern Holstein. As a confederation they occupied the region between the IJsselmeer and the rivers Eider and Elbe to the east.
A few centuries later they could be divided into three groups; however, all of the individual groups remained largely autonomous. They were:
- A central group – also called the Angarians, who lived along the river Weser
- Eastphalians to the east, bordering the river Elbe
- The Westphalians to the west, bordering the river Rhine
The current region of Twente is most probably very similar to the tribal area of the Tubanti, a Germanic tribe that later became incorporated in the supra-tribe of the Saxons. The Tubanti also lived in an area north of modern Twente, now in Germany, known as Grafschaft Bentheim, which also includes Wietmarschen and Nordhorn.
Interestingly, in Roman times the tribes here were described by the Romans as Frisii, indicating a strong influence of, and possibly an alliance with, the Frisii.
Ootmarsum and Nordhorn are situated on the important trading route Brussels – Amsterdam – Bremen – Hamburg. There are indications that this route could even have its origin in Roman times and that parts of it were used during the campaigns of Drusus, Tiberius, Germanicus and Varus into the Germanic lands.
During the Saxon migration, the Tubanti were pushed further west into Twente.
In 797 the Tubanti farms of Mander and Hezinge (both near Ootmarsum) are mentioned in official documents 3 .
According to the few historic records there are, the Saxons didn’t have a king but were governed by several ealdormen (or satrapa) who, during war, cast lots for leadership but who, in time of peace, were equal in power. Once the Saxons became more established their area was divided into three provinces – Westphalia, Eastphalia and Angria – which comprised about one hundred pagi or Gaue. Each Gau had its own satrap with enough military power to level whole villages that opposed him.
The caste structure was rigid; in the Saxon language the three castes were called the edhilingui (nobles – edelingen in Dutch), frilingi (free men) and lazzi (slaves/serfs). The edhilingui could have been the original descendants of the Saxons who led the tribe out of Holstein during the migrations of the sixth century. They were a conquering, warrior elite. The frilingi represented the descendants of the amicii, auxiliarii and manumissi of that caste, while the lazzi represented the descendants of the original inhabitants of the conquered territories, who were forced to make oaths of submission and pay tribute to the edhilingui.
The Lex Saxonum regulated the Saxons’ unusual society. Intermarriage between the castes was forbidden, and wergilds were set based upon caste membership. This was also known as ‘zoenrecht’, a form of reparation payment expressed in a person’s value in monetary terms. The edhilingui were worth 1,440 solidi, or about 700 head of cattle, the highest wergild on the continent; the price of a bride was also very high. This was six times as much as that of the frilingi and eight times as much as that of the lazzi. The gulf between noble and ignoble was very large, but the difference between a freeman and an indentured labourer was small.
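Taken at face value, these ratios imply a wergild of 1,440 / 6 = 240 solidi for a frilingi and 1,440 / 8 = 180 solidi for a lazzi; the sources give only the ratios, so these figures are inferred.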
Interestingly, wergeld also had to be paid if women were touched inappropriately.
Even after the Middle Ages ‘zoenrecht’ was still practised, often mediated by the church. There are numerous ‘zoen’ letters in church archives (including those in Oss), whereby, for example, the family of a murder victim agrees to certain compensation arrangements; these could include a financial contribution, a pilgrimage and/or a certain period of exile. Praying on the grave of the victim – undressed or just in underwear – was another frequent element of the punishment.
The Saxons held their annual council (Thing) at Marklo, where they “confirmed their laws, gave judgement on outstanding cases, and determined by common counsel whether they would go to war or be in peace that year”. All three castes participated in the general council; twelve representatives from each caste were sent from each Gau.
The Saxons’ homeland was finally conquered by Charlemagne during the bloody Saxon wars from 772 to 804. Interestingly, 200 years later they became the rulers of the Holy Roman Empire.
There are indications that the Frankish invasion created an internal border between the earlier conquered Frisii and the Saxons. Most probably this border lay further west than the current border between the Netherlands and Germany. It was not until Charles V that the border moved further east, to where it currently is.
The river Vechte could have been an important part of the border Charlemagne established. During his reign the missionary Liudger was, in 796, sent to this region to convert the Saxons. In Nordhorn, at the age-old river crossing, a church was built, and the farmers in the region were ordered to bury their dead in graves; this was against tribal tradition, whereby people were either cremated or buried in tumuli. Next to the church a Schultenhof was established for the local official (Schulte, Schout, Bailiff), and certain income rights (land rights) were added to this position. He was in charge of executing the order to watch against any cremation. He was also in charge of guarding the river crossing (on the old Roman road?). This position also allowed him to check the carts on which the farmers had to bring their dead to the church, as they were ordered to pass the Schultenhof on their way to a burial. Tradition has it that until approximately 1920 farmers from the neighbourhoods Altendorf and Deegfeld still used the ancient dirt road to carry their dead to the church rather than the paved way.
Anglo-Saxons in Britain
During the dying days of the Roman Empire, Emperor Honorius told the British cities to start looking after their own defence needs. Legend has it that one of the local warlords, Vortigern, invited the Saxons from the other side of the Channel to settle in what is now Kent in exchange for their military services in fighting the Picts and the Scots. However, the newcomers turned the tables on Vortigern and established their own kingdoms of Sussex and Essex. From then onwards (approx. 430) more Saxons, together with the Angles and, to a lesser extent, the Jutes and Frisii, started to arrive in and invade Britain.
The Romanisation of England had also not been as thorough as, for example, in Gaul. In the preceding centuries, before the Romans arrived, the land was occupied by the Britons, the collective name for the Celtic peoples. By the time the Romans arrived, the Picts (Scotland) had already separated from this group to follow their own distinct Celtic traditions. The Romans never conquered the north (Scotland), the west (Wales), the south-west (Cornwall) or Ireland. The Celts in the occupied regions, however, never adopted the Roman culture in the way that happened on the continent. Some historians believe that certain elements of the Arthur legend might refer to the resistance of the Celts against the Romans.
Unlike their Germanic ‘brothers’ who conquered continental Europe, the Saxons (as the combined invaders of Britain were labelled) had no concept of the Roman Empire or its traditions, and – unlike anywhere else in Europe – they shattered the whole Roman structure in Britain. Cities, roads and buildings all fell into decay. Bishops disappeared, as did the Latin language; Christianity was replaced by their pagan religion, and some of the Celtic Catholics fled to Wales, Ireland or Gaul (Brittany).
A century or so later missionaries came from Ireland back to England to start the conversion all over again; they introduced the Celtic variation of Christianity.
The last Romano-British territory, in the south-western part, fell to the Saxons in 577. Over time seven Anglo-Saxon kingdoms emerged: Northumbria, Mercia, East Anglia, Essex, Wessex, Kent and Sussex, all organised according to old Germanic culture and traditions.
- Angles and Saxons in Britain
One king became the wide-ruler (bretwalda) of all the Saxon kingdoms. In 560 Ethelbert, king of Kent, became the bretwalda and married a Frankish princess. Later on she brought him in contact with (Roman) Christianity, and he was consequently converted by the Roman bishop Augustine (of Canterbury), who became the first Archbishop of Britain. Legend has it that the pope, at the Roman slave market, came across people who called themselves Angles, to which the pope replied that he wanted to make angels of the Angles; subsequently Augustine was sent, in 596, to the land of the Angles. Soon after that, missionaries from the British shores started to arrive in continental Europe.
Christianity also stimulated learning and scholarship again. The 8th bishop of Canterbury, Theodore of Tarsus (Syria), established the Canterbury school soon after his arrival in England in 669. He also started a thorough reorganisation of the Church in the country.
If it hadn’t been for these changes, the monk the Venerable Bede and the scholar Alcuin (who in 782 became the key adviser to Charlemagne) would not have become the most knowledgeable Europeans of the 8th century.
In 757 King Offa II became the leading strongman and was able to subdue all of the seven states. He built a 170-mile-long defensive wall to stop the constant plundering by the Welsh; this wall is still in existence and is known as Offa’s Dyke. There were strong relations between Charlemagne and Offa: his eldest son married one of Offa’s daughters.
As indicated below, the Vikings created havoc in the 9th century, resulting in the establishment of permanent settlements on the island. While King Alfred was able to win back some territory, a permanent Danish occupation was now established. He was able to strengthen the ties between Wessex and Mercia in order to build a more powerful resistance against their common enemy. Alfred also built a navy to fight the Danes at sea.
In his campaign against the Danes he also obtained the support of his daughter Ethelfelda, who was married to King Ethelred of Mercia. Together they launched a number of hit-and-run raids. After the death of her husband she was elected queen and signed treaties with the Scots. This further strengthened the fight, and while the Scots were fighting the Danes in the north, she was able to reclaim Derby and Leicester from the Danes. In 925 Alfred’s grandson Athelstan came to the throne; he was able to subdue Wales, Scotland and Cornwall and became the first king of all of Britain.
Alfred followed Offa and also reached out to the continent. His sister Edgiva was married to Louis d’Aquitaine and his other sister Elgiva to Charles III (the Simple) of France. Athelstan followed in his footsteps and arranged for his sister Ethelda to be married to Hugh, Count of Paris (the father of Hugh Capet, the founder of the French dynasty that lasted for nearly a thousand years); his other sister Edith was married to Otto I of Germany.
After Athelstan’s death rivalry started to flare up again, and towards the end of that century the Vikings had regained much of their strength and were able to extract Danegeld from the British.
King Aethelred ‘the Unready’ tried to pay the Vikings off with 10,000 Roman pounds (3,300 kg) of silver. Of course, after this the Vikings never gave up, and the extortion went on for nearly 200 years. It has been estimated that over that period this amounted to some sixty million pence. England looked much like a disorganised state, more like a range of competing fiefdoms, unable and unwilling to cooperate with each other. Regular raids from the Danes secured an ongoing payment of Danegeld. This disunity was also used by competing groups of Vikings to make alliances with the quarrelling factions, only adding to the misery.
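As a rough check of these figures: a Roman pound (libra) weighed about 329 grams, so 10,000 Roman pounds works out at roughly 3,290 kg, consistent with the 3,300 kg quoted above.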
A remarkable woman of that time was Queen Emma of Normandy (985-1052), who saw seven kings rule during her lifetime; two of them were her husbands and two her sons. In 1015 she was instrumental in the defence of London. Her son Edward the Confessor was the last Anglo-Saxon king; when he died in 1066 he was succeeded by Harold Godwinson, who was defeated and killed in the same year by the Normans under William the Conqueror at the Battle of Hastings.
The Frisii and the Frisians
The Frisii were mentioned by Tacitus and were ‘conquered’ by Drusus in 12BCE, but the Romans never had a good grip on them; they remained a problem for the Romans (and later for the Merovingians and Carolingians).
The heartland of the Frisians remained the current Dutch provinces of Friesland and Groningen. Most likely after the depopulation of the Rhine river delta, during most of the first millennium, Holland also became part of Frisia, perhaps as far south as Antwerp, as could be concluded from the very early relationships surrounding the foundation of the first abbey of Holland, in Egmond.
Occupation of most of these lands was limited by the changes in sea levels. Sometimes that meant that they were only able to occupy the dune strip along the North Sea coast and the sand and clay deposits of the rivers.
For a full overview see: Frisia and Utrecht
The Burgundians
While there is no definitive proof, the origin of this tribe is most likely linked to the Danish island of Bornholm in the Baltic Sea – Old Norse: Burgundarholmr, ‘the island of the Burgundians’. Similar to the Goths, they might have crossed to mainland Europe in the first century AD and travelled via the rivers Vistula, Oder and Elbe to arrive, somewhere around the start of the 5th century, at the river Main. By that time their leader was Gibicar (Gibicca), who led his people, together with other tribes, during the great migration of 406/407 over the river Main into the Roman Empire. The Roman Limes here followed the river Main, and the Burgundians settled some 100 km further to the west, roughly in the area of what are now the cities of Worms and Speyer. They supported the puppet emperor Jovinus and forced this anti-emperor to declare them imperial allies; this allowed Gibicar’s son Gundagar (Gunther) to proclaim the Kingdom of the Burgundians.
However, as soon as proper Roman authority was re-established, the Romans called in the Huns to assist them in fighting the Burgundians, and in 436 Gundagar was defeated, with perhaps as many as 20,000 killed. The scale of the massacre was such that it found its way into Norse and Germanic sagas such as the Nibelungen. What happened with the remaining Burgundians is unclear; there are indications they settled some 400 kilometres further to the south. Their warriors are reported in 451 to have fought on the side of the Romans in the victorious Battle of the Catalaunian Fields against the Huns. Most likely because of this, the Burgundians this time did get proper Roman approval of their new area. This was known as Sequania, later as Sapaudia (old Savoy) and now as the County of Burgundy, which is currently largely Franche-Comté, including parts of Switzerland such as Geneva; this latter city became their first centre.
Their warrior nature became even more apparent when, within a decade of receiving their new status, they had already extended their territory to the west, including Lyon, Dijon and Vienne. Eventually they extended their territory all the way to the Mediterranean, occupying all of the Provence; this territory is also called Lower Burgundy.
To maintain the right relationships with their neighbours, marriage arrangements were used. King Gundioc married his sister to Ricimer, the de facto Roman ruler in the area during the dying days of the Empire. The next king, Chilperic, married his daughter to Clovis.
Internal feuds after the fall of Rome saw the Burgundian lands split into three regions; this severely weakened their position and had severe consequences for their future development.
Under King Gundobad (480-516) and Sigismund (516-523) the Lex Romana Burgundionum was written and extended. What made this different from other codes so far was that it was an extension to Roman Law specifically dealing with the customary law of the Burgundians.
After Clovis’ successful campaign against the Visigoths, he moved his attention to Burgundy, and the next king, Gundimar (523-534), found himself trapped between the Merovingian forces entering his lands from both the north and the south. While Burgundy remained its own entity, its kings from now on were subject to their Merovingian overlords.
We continue the story of the Burgundians in a separate chapter of the Merovingian Empire.
Fifth Century Migration
Towards the collapse of the Roman Empire, in the late 4th and early 5th centuries, further large-scale migrations of these Germanic peoples occurred throughout Europe. Some of them were driven on by Asiatic steppe tribes and the Huns from even further east.
The Huns most probably originated on the Mongolian steppe, sometime around the middle of the 8th century BCE, and are linked to the people known as the Xiongnu (Hiung-nu). Climate changes (lack of water) might have driven them towards China, where they relentlessly attacked the empire. At least in part, the Great Wall of China was built and upgraded to keep the Huns out of China. We visited the Great Wall of China in 1987. They were finally defeated by the Chinese in 151BCE and driven westwards onto the steppes. However, whenever weaknesses occurred in the Chinese Empire the Huns were back, and every time they were pushed further and further westwards. On these steppes they became the most remarkable horsemen the world had ever seen. They literally lived on their horses and could travel as much as 2,400 km a week. It was this military innovation that allowed them to create havoc across Asia and Europe.
In the 3rd century the Han Dynasty had collapsed and Chinese power started to diminish rapidly. In 304 the eastern branch of the Huns (Xianbei), together with other northern barbarian tribes, were invited to become allies – a big mistake. By 316 they had taken full control over northern China. This situation released the pressure on the Huns and allowed for a more settled period. However, this didn’t last very long. One of the allies of the Huns, the northern tribe of the Taw Ba, was able to wrest power from their other allies; this led to internal revolts, and as a result one offshoot of the Taw Ba, the Geougen, launched their own war campaign in northern China, where they defeated the Huns living north of the Caspian Sea. This created a whole new wave of Hun migration westwards. The domino effect extended across the lands of the Mongols, the Kazakhs, the Black Sea and steppe peoples, and finally ended in Europe. It is this particular situation that can truly be called mass migration. From here on the raids of the Huns were more of an invasion.
While their effects had already been felt many centuries earlier, by 370AD they finally arrived on the European scene. The reputation of their ruthless violence was widespread and caused massive panic; that reputation still lingers on in modern times. They crossed the frozen river Danube in 394/395, the ice providing a perfect ‘bridge’ for their cavalry.
Under their rulers Rua, Bleda and Attila the raiding nomad culture was transformed into a more semi-sedentary barbarian empire. Recent excavations have revealed rich, palace-like structures. They also manipulated conquered tribes in Central Europe to pledge allegiance to them and provide military support in their wars against both the western and eastern Roman emperors.
Initially their threat was rather easy to neutralise: a relatively small annual sum of money (350 pounds of gold a year) was negotiated in 425 as ‘protection money’ that stopped them raiding Constantinople. A decade later diplomatic relations were established between the Emperor and the Hun king Rua. After his death in 435 the Huns elected a new leader: Attila. The situation started to change under his regime. The extortion money saw significant increases, to 700 pounds and later even 2,100 pounds. In 447 he again crossed the Danube and defeated the Roman forces that were sent against him.
He now also started to send exploratory parties further west, most likely to see whether more treasure could be raided there. Intriguingly, one of the parties brought back a letter from Princess Honoria, the sister of the western Emperor Valentinian, complaining about her distaste for her husband. She included her ring, which Attila saw as an invitation to invade the west and claim her as his bride.
For the next three years the Huns terrorised the west with numerous raiding parties. They were continuously confronted by Roman armies. Both sides’ armies contained large numbers of other peoples, such as the Alans, Goths, Suevi and Vandals.
In 451 the Huns crossed the Rhine and destroyed Metz. From here Attila’s campaign of terror and destruction included Cologne, Trier, Amiens, Paris, Cambrai and Tournai. They were finally stopped during the Battle of the Catalaunian Plains near Chalons in Austrasia (modern-day France). The battle brought together one of the largest groupings of European political entities ever, under the Roman military leader Flavius Aëtius. One of his allies was Merovech, king of the Salii, from whom the Merovingian and Carolingian dynasties evolved. The Visigoths – who had one of the largest armies of that time – played a key role here as well, under the leadership of Theodoric I, who perished in the battle.
After Attila’s sudden death in 453 – during the wedding night with the last of a string of wives – his empire disintegrated very rapidly; it lacked the integrity of a secure homeland and an economic base to support it.
As a consequence of the rather abrupt end of the Huns, the other tribes got a taste of what would be possible if they pursued their own interests. The Alans, Vandals, Visigoths and Ostrogoths, as well as smaller groups such as the Suevi and the Burgundians, all started to flex their muscles and soon attacked the empire both from within and from the outside.
On the positive side, the Huns also introduced a great innovation to Europe: the stirrup, which gave horsemen a much better chance of staying on their horses.
The Huns, 10,000 virgins and Ravenstein
There is an interesting legend that links Attila with Ravenstein (near Oss). It relates to the destruction of Cologne; while the historic facts are not correct, it is likely that the core of the story has some historical value. Part of the destruction of the city was the massacre of the ‘10,000 virgins’, including Saint Ursula. One of her followers, Princess Cunera of York, was saved by the Frisian King Radboud, who brought her to his castle in Rhenen; here she became very popular amongst the poor people. This drew the envy of the Frisian Queen Aldegonda, who had her strangled. Three centuries later, bishop Willibrord buried Cunera’s remains in the church of Rhenen. During the Reformation these remains were transported for safekeeping to Bedaf near Uden, which at that time was part of the independent (Catholic) Land of Ravenstein.
Interestingly, it was the Roman Emperor Theodosius (based in Constantinople) who, after the collapse of the Huns, built a relationship with the remaining Huns, and they became one of the most feared parts of his army.
Most of the Germanic and other eastern invaders, while moving into the new lands previously occupied by the Romans, left most of the Roman institutions such as law, administration and the Catholic Church alone, and Roman law was accepted and integrated in the new societies. They also in general avoided clashes with the local aristocracy. Intimidation, through a system of homage, was also used to get the support of the Gallo-Roman upper class.
In archaeological excavations of medieval churches and castles in places conquered by these invaders, Roman remnants of temples, villas and bath houses are often uncovered.
Originally many of these places were owned by the Roman senatorial class, and during the decline of the empire they assumed military-style protection: their houses became fortresses, and often free smallholders put themselves under their protection, thus creating small settlements.
In the depopulated region of Brabant very little ‘conquering’ actually took place. Nevertheless, the tribal wars and raids as they had existed for centuries continued unabated during this new period.
- The Alans
As discussed above, the migration avalanche that was unleashed on Europe by the Huns started further east. One of the first peoples they encountered on their way to Europe were the Alans – nomadic pastoral steppe people of the Iranian high plateau. They were a numerous people, with branches living along the river Don, in the Urals, around the Black Sea, the Caspian Sea, in the Caucasus and even along the Ganges. They were conquered by the Huns in 371 and many of them became their vassals, which further increased the firepower of the Huns.
Either the direct confrontation with the Huns or the sheer fear that they evoked created massive migration amongst many of the various branches of the Alans. Panic saw possibly hundreds of thousands of people on the move. As we will see below, in 375 they invaded the lands of the Ostrogoths and the Visigoths, pushing them deeper into western Europe. There they were joined by the Vandals, who had been living further to the north-east; their migration was most likely also a – perhaps indirect – result of the invasion of the Huns.
The Suebi
The Suebi or Suevi at a certain stage occupied more than half of current Germany and were divided into a number of distinct tribes under distinct names.
Compared to other Germanic tribes they were very mobile and not reliant upon agriculture. Various Suebic groups moved from the direction of the Baltic Sea and the river Elbe, becoming a periodic threat to the Roman Empire on its Rhine and Danube frontiers. Toward the end of the empire, the Alamanni, referred to as Suebi, first settled in the Agri Decumates and then crossed the Rhine and occupied Alsace. A pocket remained in the region now still called Swabia, an area in south-west Germany whose modern name derives from the Suebi. Others moved as far as Gallaecia (modern Galicia in Spain, and northern Portugal) and established in 410 the Suebic Kingdom of Galicia. It reached its maximum power under King Rechila (438-448), who extended its territory into the provinces of Lusitania and Baetica, plundering cities such as Mérida and Seville.
Its independence lasted for 170 years until its integration in 585 into the Visigothic Kingdom.
The Goths
The homeland of the Goths is most likely southern Sweden (Gotland). Some of them were led by a legendary King Berig across the Baltic, where they settled in what is now Lithuania. From here, over the next hundred years, they travelled south along the Elbe and Danube river systems. They appear in Roman chronicles regarding settlements and raids along the rivers Elbe and Danube, the Black Sea and the Aegean Sea, and they were involved in battles against the Persian (245) and Roman (269) armies.
The Goths used the Third-Century Crisis in the Roman Empire to build up their political power. Once Emperor Aurelian finally re-established Roman authority around 275, he gave up the province of Dacia, where he allowed the Goths to settle under self-governance. This resulted in a peaceful period that lasted for a century. During this period the Goths became known as two distinct groups. These names were given later in history; the Goths didn’t name themselves as such. Most likely they would have called themselves ‘the people’ or ‘the army’. The two groups are:
- Visigoths (also Tervingi or Vesi) – in the Roman province of Dacia (modern Romania), between the rivers Danube and Dniester. Sometimes referred to as ‘the other Goths’, in German ‘Westgoten’. Their ruling family was the Balthi.
- Ostrogoths (Greuthungi) – further to the east. They called themselves the ‘real Goths’. Their ruling family were the Amali.
During the reign of Emperor Constantine, the Goths followed him in accepting Christianity (the Arian version).
It was the Gothic bishop Ulfilas who in the 4th century translated the Bible from Greek into Gothic, and as such Gothic became the only Germanic language of that era of which written records exist.
- Kingdom of the Visigoths
In order to protect them from the Huns, in 376 Emperor Valens allowed the Goths to cross the Danube and settle in Thrace, as long as they handed over their weapons. An estimated 75,000 crossed the river in whatever watercraft they could make or lay their hands on. There was not enough food to feed these new migrants, and soon this led to starvation, which in turn led to rebellions against the Romans. The unbelievable happened in 378 near Adrianople: massive strategic mistakes on the Roman side saw them totally and utterly defeated by the Goths. This led to a geographic split between the eastern and western Roman Empire, as the Goths now controlled the Balkans. It was the worst defeat the Romans had suffered since their defeat by the Carthaginian Hannibal 600 years earlier.
The division of the Empire was of severe strategic consequence. The Romans heavily depended on their large army, and troop transport by sea was extremely costly and could require a thousand boats, as two sailors were needed for every soldier.
While the Visigoths had a formidable army, they did not have the structures needed to run a state, nor are there any indications that at any stage this was something they wanted to do. In 382 the Visigoths signed a treaty with the Emperor Theodosius the Great that granted them autonomy (foedera). Under the treaty they agreed to provide troops for the Roman Empire. However, after the death of the emperor the empire was split again, and his 18- and 11-year-old sons, Arcadius (in the east) and Honorius (in the west – Milan), were unable to take effective control; the Gothic troops were basically in charge of the capital, Constantinople.
For reasons unknown, the Goths – under the leadership of their newly elect chief Alaric which means ‘the ruler of all’ (from the Balthi family)- decided to continue their migratory lifestyle and started to move west. This travel followed their age old pattern of raiding and pillaging. This way they moved through Greece and the Balkan provinces: Pannonia and Illyricum and from here to Italy.
As we also saw after the battle of Adrianople, the Goths cannot be seen as a cohesive people, let alone a nation or state. Their raiding armies were only described as Goths in more recent history; they included a whole range of tribes such as the Suevi, Alans, Franks, other Germans and even Huns.
They would roam through Europe, sometimes staying for several years in one area, only to move on again in search of new treasure, military glory or perhaps simply food.
The damage the Goths caused during these raids was devastating, and Stilicho – the partly Vandal, partly Roman general under Honorius – was able to slow down the raiding, though both sides avoided outright battles. In 402 Alaric besieged Milan, the imperial capital at the time. Stilicho was initially able to relieve the city, but Honorius was so frightened that he moved the capital to Ravenna. Bribed by Stilicho with 2 tonnes of gold, the wandering Goths were persuaded to move northwards into what is now Austria. Paranoid about any threat to his position as emperor, Honorius saw a potential rival in Stilicho and had his general beheaded in 408; the Goths seized the opportunity to move back into Italy. Legend has it that Alaric met Honorius in Ravenna, where the emperor offered him the provinces of Gaul and Spain together with 5,000 pounds of gold and other treasure, as well as a ton and a half of pepper. But this reprieve too was short-lived, and in 410 Alaric was back, this time in front of the gates of Rome. While the city by that time had lost its strategic value, the symbolic value of the first sacking of the city in 800 years was enormous.
- Visigoth treasure. Church San Roman, Toledo.
On his way from Rome to conquer Africa, Alaric died that same year at Consentia in southern Italy. His successor Ataulphus (Ataulf, 'the noble wolf') struck a deal with Rome in 414 and led his people out of Italy towards Spain. On the way, in southern Gaul, he married Honorius' sister Galla Placidia. During an internal feud in 415 he was murdered, together with his family, in his palace in Barcelona.
Kingdom of Tolosa
The next effective ruler, Vallia, negotiated a deal whereby the Visigoths were recognised as imperial allies and received a permanent home in Aquitaine (one of the three parts into which Gaul was traditionally divided), where in 418 they established the Kingdom of Tolosa (Toulouse). From here – under the leadership of their king Euric – they expanded their territory, and by 474 the whole Iberian peninsula was under their control. Euric also commissioned the 'Codex Euricianus' (471), the first attempt in the post-Roman world to write down a body of Germanic law.
- Visigoth church El Salvador Toledo. Carved pillar.
After the last western emperor was deposed in 476, the balance of power in Europe started to shift. With the whole of Italy eventually in the hands of their cousins the Ostrogoths, the two Gothic kingdoms became neighbours. At the same time their other neighbours, the Burgundians, had taken over a large part of Provence (Lower Burgundy). The next Visigoth king, Alaric II – who was married to Theodegotho, daughter of the Ostrogoth king Theodoric the Great, with the aim of eventually combining the two Gothic empires – prepared the 'Breviarium Alarici' (506), also known as the Lex Romana Visigothorum, the Roman Law of the Visigoths. It became the standard work of Roman law for all of post-Roman Gaul and remained in use until the 11th century.
However, danger arrived from the north, where the Merovingian king Clovis had started to expand his power. Through marriage arrangements he was able to subdue the Burgundians, and he began to attack the northern regions of the Visigoths. He also signed a treaty with the Byzantine emperor to attack the Visigoths jointly: Clovis from the north, the Byzantines from the south.
In the spring of 507, Clovis won the critical battle near the current village of Vouillé, close to Poitiers – the French remember this as the place 'where France started'. Alaric II was killed by Clovis himself, and before the end of the year Clovis had already sacked Tolosa; the Visigoths were pushed back over the Pyrenees. This brought virtually the whole of Gaul under Merovingian control.
Kingdom of Toledo
Alaric II's son Amalaric now only held on to the southern part of the Visigoth empire, Spain. Here he established the independent Kingdom of Toledo. In order to keep the peace he married Chrotilda, the daughter of Clovis. However, he had not counted on their religious differences: Amalaric didn't want to convert to Catholicism, and the queen didn't want to become an Arian. The queen's brother Childebert I launched a holy war against Amalaric; he won, Amalaric was killed, and Childebert took both treasure and the queen back home. The next period was one of infighting and disease: three successive Visigoth kings were assassinated, revolts became endemic, and the plague of 543 also created havoc in this region.
In 551 one of the revolting nobles invited the Roman Emperor Justinian to assist his cause. This provided the Romans with a foothold on the Iberian peninsula once again. In the meantime the other Germanic rulers here, the Suevi in what is now Galicia, converted from Arianism to Orthodox Catholicism, which further isolated the Arian Visigoths. In 555 Justinian's army conquered Cartagena. In order to end their isolated position the Visigoths also converted to 'true' Catholicism; this made it possible to reach an agreement that provided them with access to the Mediterranean.
With the Merovingian Empire in disarray during the period of Clovis's grandchildren and great-grandchildren, the Visigoths took the opportunity to extend their power: they defeated the Suevi in the north and by 625 had seized all of the remaining Byzantine territories. They even conquered the Basques, fiercely independent to this day. However, this period was followed by the ongoing internal infighting and civil war so typical of the Dark Ages. In 711, one of the warring Visigoth factions sought help from outside and invited the Count of Ceuta in North Africa. He responded positively with an army of 7,000 Berber Muslims from Mauretania (hence called the Moors) under the leadership of Tariq ibn Ziyad; they landed at a place they called Gebel at-Tarique (Gibraltar). The warring Visigoth factions were not even prepared to form a combined alliance to fight the invaders.
- Remnants of the Visigoth Basilica of San Vicente, Cordoba (now the Mezquita Cathedral)
By 714 the Moors had conquered the entire peninsula, except for the Basque country and the mountain area of Asturias. The reign of the Visigoths had ended. After their migration period and the establishment of two kingdoms, they disappeared from the European scene; their people integrated into what would become the Spanish nation.
- The Ostrogoths
Apart from the Goths who formed what became known as the Visigoths, the remaining people in the Black Sea–Dniester region stayed there under Hun rule. Part of the tribute they had to pay to their overlords was participation in the many raids and wars that the Huns unleashed over Europe. The situation ended when the Huns were defeated, and the Gothic leadership started to flex their muscles for the top position. It took them 18 years to sort that out, when a Gothic soldier of the Roman Imperial army, Theodoric Strabo, emerged in 471 as the strongman and proclaimed himself King of Thrace. He started to challenge the Eastern Emperor Leo, and under pressure the Emperor granted him authority over all of the Goths within his empire.
The strengthened Germanic foederati of Italy started to demand similar concessions as the Goths had received in Toulouse. When Orestes and his son, the Emperor Romulus Augustus, refused, the newly appointed leader of the Italian foederati, Odoacer, in 476 killed Orestes and deposed Emperor Romulus. The reign of the Roman Emperors in the west had ended. As the new king of Italy, Odoacer kept the Roman institutions intact, which secured him the support of the leading nobility. Interestingly, he acknowledged the suzerainty of the East Roman Emperor; in all but name, however, the emperors were incapable of exercising effective control over the western part of the empire.
Odoacer raised an Italic-Germanic army with which he defeated the Vandals in Sicily, conquering the whole island by 477. He made pacts with the Visigoths and Franks and joined them in battle against the Burgundians, Alemanni and Saxons.
However, the next Byzantine Emperor, Zeno, nullified the treaty with Strabo and started a campaign of bribery, playing the different Ostrogoth clans off against each other. For that purpose he also created a king amongst the Ostrogoths, recruited out of his own army and confusingly also named Theodoric. For the next five years he played his game of bribes, privileges and shifting support. After the death of Strabo in 481, the other Theodoric became the leader of all of the Ostrogoths and received the title 'the Great'.
After the invasion of Noricum, Zeno convinced his Ostrogoth vassals that Odoacer was an enemy and should be removed, promising Theodoric the Great and his Ostrogoths the Italian peninsula if they were to defeat him. In the same year, 488, Theodoric led the Ostrogoths across the Julian Alps and into Italy. With this move the Byzantines killed two birds with one stone: they removed the Ostrogoths from the Balkans on their border, and at the same time conveniently caused Odoacer to disappear from the scene, as he was defeated and personally killed by Theodoric at a banquet in 493.
In 493, Theodoric became the new king of Italy and established an Ostrogothic kingdom ruled from Ravenna. The remainder of Odoacer's foederati joined the Ostrogoths and were allowed to remain in Italy. In that same year he married (at his own request) Audofleda, the sister of the Merovingian king Clovis. They had one daughter, Amalasuntha, who married Eutharic, an Ostrogoth nobleman from Iberia (Spain).
While Theodoric’s previous overlord, the eastern emperor, tried to claim authority over the western Empire he was unable to gain control. Re-enforcing his Roman legitimacy, Theodoric recodified 154 Roman Laws from now on equally applying to Goths as Italians. Interestingly he was seen as a heretic Emperor by the Catholic Church as he was an Arian. However, he was very tolerant and let the Arian and Roman Catholics live next to each other. In Austria there is a site where an Arian and a Roman catholic church stood next to each other.
Theodoric also left some beautiful architecture behind in his capital Ravenna, among the finest of its time. Some of these buildings still exist in their full glory, such as the basilica of San Vitale and his tomb, which resembles a yurt, referring back to his ancestors who originated from the steppes.
This is another example that the so-called barbarians were far more civilised than the Romans gave them credit for.
- Basilica San Vitale Ravenna
- Emperor Justinian I, court officials, Bishop Maximian, palatinae guards and deacons.
- Tomb of Theodoric
- Arian baptism of Jesus
Theodoric’s plan was to groom his son in law Eutharic to be his successor however, when he suddenly died that plan came to an end. Rather unexpectedly his widow – Theodoric’s daughter succeeded the king after his death in 526, this lady Amalasontha, became the ruler of Italy as regent for her son Athalaric. Together with the two other leading women of that period, Empress Theodora and Antonina the wife of general Belisarius – she played a key role in the late Roman Empire.
She wanted her son to be educated as a Roman, but the Ostrogoth nobility forced her to let him go in order to receive a Gothic upbringing. She did not, however, give up her regency. She corresponded with the Emperor in order to receive his support in fighting off the Gothic intrigue at her court in Ravenna. Real control, however, was now in the hands of the Romans.
The Ostrogoths started to regroup north of the river Po under a new leader, Vitigis, succeeded by Hildebad, Eraric and finally Totila. Totila defeated the Romans at Verona, marched into Tuscany, bypassed Rome and captured Apulia (Italy's boot). He captured Naples a year later, and after negotiating an agreement with the Roman nobles he was able to take Rome without any battle in 546. However, very little glamour was left in the Eternal City: the 3,000-man garrison had fled a city occupied by only some 500 people.
The invasion of the Slavs delayed Justinian's response, but finally a sea battle was launched to relieve the Roman troops in Ancona on the Adriatic Sea. The Romans won this battle because of superior naval tactics. From here the Roman army moved north, and a final battle was fought in June and July 552 near Taginae (northeast Umbria), where the Ostrogoth cavalry was decisively defeated by Roman archers. The Romans took back Rome, and a year later the final battle against the Goths was fought at the foot of Mt Vesuvius.
After this the Goths no longer played a significant role in European history.
They did, however, leave some striking architecture behind in Ravenna: the tomb of Theodoric, the Church of San Vitale and in particular the baptistery, with images that show the Arian belief that Jesus received his divinity only at the moment of baptism. The Basilica in Aachen, built by Charlemagne in 800, was modelled on San Vitale.
Another northern tribe (probably from Norway), that of the Rugians, had settled in Rugiland (Pannonia – Austria/Hungary). In 487 Odoacer led his army to victory against the Rugians in Noricum; however, he could not provide the region with a permanent defence and did not incorporate it into his own kingdom. The remaining Rugians fled and took refuge with the Ostrogoths in Italy.
The kingdom of Rugiland was now left open and by 493 was settled by the Lombards.
- Kingdom of the Vandals
The wealth of Hispania also attracted yet another migrating tribe, the Vandals. There were two major groups, the Silings and the Hasdings, and the federation also included Alans and perhaps even Roman provincials. They too were most probably pushed along by the Huns from their ancestral lands around the Sea of Azov (northern part of the Black Sea). They had established themselves around 400 in southern Gaul, and together with the Alans they crossed the Pyrenees into Hispania in 409. The Romans were keen to drive the Vandals back and recruited the Visigoths to do that job. They did such a thorough job that, with the exception of Andalusia (Vandalusia), by 427 there were no Vandals left in Spain.
The Vandals ended up at Gibraltar, but unlike the Goths they were skilled sailors, and in 429 all 80,000 of them crossed over to Africa, where they conquered what is now Tunisia and eastern Algeria. This province had been in Roman hands for 700 years. Ten years later – under the leadership of Gaiseric – they had taken Carthage. In 442 they signed a peace treaty with the Romans. However, true to their raiding nature (and their name), they created havoc all around the Mediterranean, pillaging as far away as Sicily and Apulia (SE Italy). In 455 they conquered Sardinia and stationed a force on the island, thus taking control over the territory.
Roman Africa was the food bowl of the empire, and the loss of this critical province greatly contributed to the demise of the empire; it led to famine and uprisings in Rome, and huge ransoms were paid to the Vandals in order to receive the all-important grain shipments.
Increasingly Gaiseric attracted other Germanic tribes into his federation, notably the Alans and the Suevi.
From their new homeland in North Africa, the Vandals increased their raids into Italy – where disarray reigned between rival child emperors and their supporters – and in 455 they sacked Rome and took the loot back to Africa. The Vandals rapidly transformed themselves and accepted Roman culture; they certainly shed their barbarian image. At the time of Gaiseric their military and strategic skills equalled those of the Romans.
The Romans didn’t want to let all of this unpunished and in 470 Emperor Leo brought a formidable force together. The campaign saw a range of strategic mistakes that led to the Vandals defeating the Romans.
After Gaiseric’s death in 477, Vandal inheritance law saw the eldest son taking his place. This law also secured the succession of less powerful rulers and a steady decline of the Vandal empire started to set in. Disunity allowed the deposed pro-Roman Vandal king Hilderic to strike a deal with Emperor Justinian that saw, in 533, a new campaign launched aimed at overthrowing his usurper Gelimer.
This campaign was led by the famous general Belisarius. He was accompanied by his wife and friend Antonina, a formidable person in her own right. She was also a friend of another giant of these times, Empress Theodora; both women were ex-prostitutes, which says something about Roman tolerance and equality, liberty and anti-discrimination (we find this reflected in the Codex Justinianus, and as such these also became elements of modern European law).
Within four weeks of landing, Belisarius had already conquered Carthage. Gelimer asked for assistance from his brother Tzazon, who led the large occupation force on Sardinia. It is estimated that the Vandal troops outnumbered the Romans at least 3:1, perhaps even more. However, in the three battles that followed – where Antonina led the Roman infantry – the Vandals were decisively defeated, and North Africa was once again in the hands of the Romans.
Gelimer escaped to Numidia; the Romans pursued him and in the end he surrendered. In triumph Belisarius brought back much of the treasure that Gaiseric had taken during the sack of Rome.
- Kingdom of the Lombards
This Germanic tribe, too, originated in southern Scandinavia and crossed the Baltic to start its own travels south soon after the start of the Christian era. By the middle of the 2nd century they had reached the Rhineland. On their way south there were various interactions and alliances with other Germanic tribes. As happened with other migrating Germanic tribes, parts of the Lombards sometimes intermingled with other tribes, and a significant part might have become part of the Saxon federation.
As mentioned above, the Rugians – another northern tribe, probably from Norway – had settled in Rugiland (Pannonia – Austria/Hungary), but after their defeat by Odoacer they fled with the Ostrogoths, and the Lombards took possession of this vacant land. In 540 they crossed the Danube, where they received imperial permission to settle in Pannonia.
Under their ambitious king Audoin they fought the Gepids, mainly in the Serbian/Hungarian region, and they assisted the East Romans in their fights with the Goths. After the Goths were defeated in 553, the Lombards were dismissed. General Narses stayed in Rome as the military ruler of the region. Legend has it that after the death of Emperor Justinian he was insulted by Sophia, the wife of the new emperor Justin (the mad), and that in 567 he invited the Lombards, under the leadership of their king Alboin, to enter Italy and take what they wanted. This became a very destructive invasion for Italy. They made their capital in Pavia. Italy now had two rulers: the Lombard kingdom (one strip of land north of Rome and another south of Rome, headed by the ancient city of Benevento) and the East Roman Imperial Exarchate, ruled from Ravenna, which included the boot of the country plus Sicily, Sardinia and Corsica, as well as a strip across the centre of the peninsula from Rome to Venice.
The Lombard kingdom was rather loosely governed, and the rulers in Pavia and Benevento – and a third one ruling from Spoleto in central Italy – operated rather independently. A further division between the two main ruling entities was religious: the Lombards followed Arianism, while the Byzantines were Catholic. The dukes of Spoleto were among the most active and continuously attacked the Byzantine dominions. With Byzantine rule weakened in Italy, the Pope in Rome was able slowly to increase his independence from Constantinople. Among the more prominent popes of these times were Leo I (who confronted the Huns) and Gregory the Great, who implemented significant reforms; among other things he introduced celibacy among the clergy.
Gregory was also able to convert the Lombard king Agilulf and thus made himself even more independent of the Byzantines. From now on there was a third power in Italy: the Papacy.
In 758 the Lombard king Desiderius captured Spoleto and later also Benevento. He was also a good intriguer: he tried to set Carloman up against his brother Charlemagne and married one of his daughters to him. In order to avoid war between the two brothers, their mother Bertrada persuaded Charlemagne to marry another of Desiderius' daughters. However, Carloman died before that happened, leaving a wife and two young sons behind, and Charlemagne promptly took over his lands without marrying Desiderius' daughter. The king was not happy and tried to force the Pope to recognise the rights of Carloman's sons, invading Papal territory in the process. This gave Charlemagne the perfect opportunity to invade the country: the kingdom collapsed in 773, Desiderius was exiled to Francia, and Charlemagne took the title King of the Lombards. While the Lombards faded into history, the title remained hotly pursued for many centuries to come, as it was to grow into the title of King of Italy.
By the tenth century, the once feared and respected Lombards had lost most of their importance and started to disappear from the European records. In the following century, the invading Normans had little problem conquering large parts of their territory.
The Normans (Vikings)
Who are they?
The name 'Vikings' is a rather recent one, and it is unknown where it comes from. There are several theories; one is that it is linked to the name of a bay near Oslo called Vik. During the reign of their chieftain Halfdan (ca 880) this area expanded rapidly into what is now Oslo.
Their lands were outside the Roman Empire and therefore their culture remained tribal. Especially in the northern areas there was very little fertile ground. Fishing was their main activity, which made them excellent seafarers. They developed one of the most innovative and effective boat types, the longboat, ideally suited for raiding.
The first images of such boats date back to 1000 BC on rock engravings on the island of Gotland. The first known ship burial site is also found here.
In general, the name Vikings came to be used for the Scandinavian raiders who started to create havoc from the late 8th century onward. During the 9th century there are several reports of infighting between tribal rulers on the various Danish islands; this could lead to exile, and those who were too violent even in their eyes were also exiled (some of them became the founders of Iceland). Another important element is that piracy was part of their internal warfare, and this made them among the most skilled seafarers of their time. So it could well be that their raiding culture was already well established before they ventured overseas. As mentioned above, this raiding culture was also well established in other Germanic and Celtic societies. An interesting element of all of these tribes was that the ruler did not have any heavenly mandate: rulers were elected from among the tribe – mostly the strongest – and had to rule with their tribal assemblies. Trondheim, which remained a rather independent region within Norway, kept its own assembly system until well into the 12th century. The assembly established in Iceland – the Althingi (note 'thing', as mentioned above) – is the oldest still-functioning parliamentary institution in the world.
The Althingi started as a general outdoor assembly around 930 AD. It was here that the country's most powerful leaders met to decide on legislation and dispense justice. All free men could attend the assemblies, which were usually the main social event of the year and drew large crowds of farmers and their families, parties involved in legal disputes, traders, craftsmen, storytellers and travellers. The centre of the gathering was the Lögberg, or Law Rock, a rocky outcrop on which the Lawspeaker took his seat as the presiding official of the assembly. His responsibilities included reciting aloud the laws in effect at the time, and it was his duty to proclaim the procedural law of the Althingi to those attending the assembly each year. The place where the original Althingi was held is situated approximately 45 km east of what later became the country's capital, Reykjavík.
They were largely restricted in their migration and raiding by strong powers who could stop them. First the Romans – especially in Britain – had built strong fortifications along the coast to protect themselves against invaders (especially against the Saxons). The Franks under Charlemagne were also able to contain these raiders. But during the 'Dark Ages' there were no longer strong nations in this part of the world, and the 'Vikings' could more or less do what they pleased. They were assisted by the fact that, because the former empires had been so strong, there were very few defensive structures in place around monasteries and towns; once the border protection had collapsed there was nothing stopping them from following the coasts and looking for easy raiding targets. During this period we see, for example, that monasteries were moved inland, less vulnerable to attacks from the sea. Once the Vikings encountered stronger and better organised empires such as Byzantium and the Caliphate, they changed from raiders to traders. They had an interesting trading organisation. Their ships – the knorr – were collective property: the owners selected the captain, and he could rent out space on the ship, with payment possible in the form of labour on board. All participants received an equal part of the ship for the loading of their goods, except the captain, who received a larger part. At the destination each went his own way and was in charge of his own trade.
But they are better known for their raiding, and for some 200 years they were a very powerful force in European history. The key to their success was their violent nature – violence was seen as a sport, and bravery was highly valued in their society. They were a savage people who, for reasons of climate and population growth, burst out of their countries, and once given the opportunity they started to raid their weakened neighbours in Europe.
They were not all that different from the Goths, Saxons, Mongols and Huns, but at this point in history they were the masters of the sea – not unlike the Arabs, who were the masters of the desert. Both used these advantages to raid their neighbours and rapidly withdrew to the sea or the desert when their 'more civilised' opponents put forces in front of them. Both also avoided fortified places, as they had not mastered siege techniques; in the end this blunted their advantage and allowed their opponents to fight back.
Also remarkable is that they were able to settle rather quickly – this started with overwintering – and to establish governance and administrative systems that allowed them to become nation builders. This was financed by the ongoing extortion (Danegeld) that they were able to extract from the Anglo-Saxon, Byzantine and Frankish rulers. As mentioned above, the Vikings established the parliamentary democracy in Iceland, the world's oldest lasting one. They are also the founders of the Russian Empire; they forced the Anglo-Saxon kingdoms to unite under King Alfred; later, from Normandy – the territory they established in France – they completely subdued England; and they are also the founders of the County of Holland.
Pre-Vikings: Cimbri, Suiones and Chauci
Scandinavian people have been active migrants since well before the time of the Vikings. A key reason why these Scandinavian tribes became such important migrants most likely has to do with the climate realities of the north. It was impossible to expand northwards, where only the Sámi ventured, following the reindeer migration patterns. For the Germanic farmers only parts of Denmark and Sweden had arable land that they could settle. Their cultural centre was in Uppsala (north of the current town, in Gamla (old) Uppsala), just north of modern Stockholm. There are indications that significant landholdings were established here in the 9th century, possibly with regional kings and queens. It was also the most sacred place within the lands of the Norsemen, with a temple dedicated to the Norse gods; the temple was destroyed in the late 11th century, and the current church is believed to have been built on top of its remnants. Next to it was a sacred grove where sacrificed male animals (including humans) hung from what were seen as the most sacred trees in the land. Archaeological evidence has shown that the site was already in use around 500 BC.
In times of overpopulation there was no other way for them but to move southwards or eastwards across the Baltic and the North Sea. Under population pressure, violence becomes an important survival element: the strongest won the leader's position in the homeland, while others had to seek their luck elsewhere.
People originating from Germanic tribes in Denmark, the Cimbri, were among those from northern Europe who were defeated in southern France – as we saw above – by Caesar before he launched his campaign into Gaul. The Roman historian Tacitus mentioned in the 1st century the Suiones, who lived beyond the Baltic Sea and were rich in arms, ships and men. The Goths also came from Scandinavia and ventured into both Eastern and Southern Europe, as we saw above.
The Chauci, a tribe that lived in northern Germany/southern Denmark, led by Gannascus – a leader from the Canninefates of the Rhine/Maas delta who had Roman military experience – ventured along the coast of north-western Europe. They raided what is now the Belgian coast in 41 and by 48 had reached the Rhine, where they were defeated by the Romans.
In the early 6th century a small fleet of them was spotted in the river Meuse in the Low Countries. Many of these travellers were most likely involved in trade.
- The Viking Raids
Initially these raids and pirate activities only took place in the Scandinavian and Baltic regions. It was not until the late 8th century that the Vikings started to arrive further south in larger numbers. There were several distinct groups of Vikings/Norsemen/Normans involved, from Norway, Sweden and Denmark. The ones that started to raid Ireland, England, France and the Netherlands were descended from Danish Vikings. They first arrived at the coasts of England in 787. The first recorded raid was in 793, when the monastery of Lindisfarne, situated on a tidal island on the north-east coast of Northumberland, was sacked.
Charlemagne knew of this raid as his adviser Alcuin, a Northumbrian scholar in Charlemagne’s court at the time, wrote:
Never before has such terror appeared in Britain as we have now suffered from a pagan race… The heathens poured out the blood of saints around the altar, and trampled on the bodies of saints in the temple of God, like dung in the streets.
From then onwards they certainly created an enormous amount of damage and fear (the name still carries a particular connotation with it), but they also became active participants in political and economic affairs.
What is often forgotten is that they were also among the early European traders that started to emerge in the early Middle Ages. Kaupang/Skiringssal (Norway), Hedeby (Denmark), Reric (Baltic) and Birka and Gotland (Sweden) were, together with Dorestad (River Rhine), the most important trading towns of northern Europe.
From a European perspective, one of the major (positive) contributions of the Vikings was that they connected large parts of the world: from their homeland west to Iceland, Greenland and Newfoundland; east to Russia and from there into Asia (Byzantium, the Black Sea and the Caliphate); and south to England, the Frankish Empire and all the way to Sicily.
Iceland, Greenland, Newfoundland
The heyday of the Vikings coincided with the Medieval Warm Period, which provided favourable ice conditions that allowed them to cross the northern seas. Interestingly, these were no areas for raiding – as there was nobody to raid, they had to settle. On one of those trips, in the 860s, Gardar the Swede was blown off course and arrived in Iceland; he found it so barren and unpleasant that he gave it the name it still has. The sea trip to Iceland was 1,300 km and could take – depending on the weather – between one week and one month. However, on following expeditions they did find rich meadows and settled the island.
From here, Eric the Red, exiled from Norway, also discovered Greenland by accident. This land was settled from 986 to 1400; it even had a bishop's seat. Because of a cooling of the climate, the Norse were unable to hold on to Greenland and had to leave it to the Inuit, who were better equipped to face the cold.
From Greenland the Vikings even reached North America. Around 1000 they settled Markland and Vinland in the Newfoundland region, but twenty years later the native Americans forced them to abandon these settlements and they travelled back to Greenland.
Their initial raids into the Frankish Empire could well have been exacerbated by the violent way Charlemagne persecuted pagans, but even before that time, when trade started to flourish under the protection of the Frankish Empire, Vikings launched pirate attacks on Frisian ships in the North and Baltic Seas – despite the fact that they saw each other as related peoples and could communicate with each other in their own languages. After the conquest of the Saxons and Abodrites in Nordalbingia (near the Elbe), the Frankish frontier – known as the Danish March – was brought into contact with Scandinavia. Charlemagne was aware of these people, as the Saxon leader Widukind had fled to these lands and later told him stories about them.
In 808, the king of the Danes, Gudfred, extended the vast Danevirke across the isthmus of Schleswig, south of the River Eider. The earliest date for the construction of this defence is 737, and it was last employed in the Danish-Prussian War of 1864. Gudfred extended it into a 30 km long earthen rampart. The Danevirke protected Danish land and gave Gudfred the opportunity to harass Frisia and Flanders with pirate raids. From here he launched his first campaign against Charlemagne in 808. The old Slavic/Scandinavian trading city of Reric, on the Baltic coast near Wismar, asked Charlemagne for protection and as a result was totally sacked by Gudfred. He then established a new trading city near the Danevirke, Hedeby, where he also resettled some of the Reric merchants.
In 820 Gudfred sent a large fleet to ravage the Frisian islands.
To protect the northern coast of Frisia and Saxony, Charlemagne used Ghent as one of his fleet bases for counter-attacks; he visited his fleet there in 811. He also built fortifications in the 'Danish March' near Hamburg and used an expelled Viking prince to assist him in protecting the Frisian coastline. Towards the end of Charlemagne's reign most of his military efforts were aimed at stopping the Viking raids; tellingly, the last of his many campaigns, in 810, was against the Vikings.
Louis the Pious
His successor Louis the Pious never had the authority or the power to carry this military regime through. This led to an increase in Viking invasions, which in turn further undermined any form of central power. The defence of the land was basically left to the local people and their strongmen (dukes and counts), who started to take control over the land.
In 834 Danish Vikings raided the by now largely unprotected Low Countries; according to the Dutch school history books it was in this year that they ransacked Dorestad for the first time. This Frisian city was the most important trading city in northern Europe in the early Middle Ages. Subsequent raids on the city took place in 835, 844, 857 and 873.
Vikings ruling parts of the Low Countries
The death of Charlemagne created severe political instability, and in 833 Lothar – who had been promised the emperorship of what would become Lotharingia – revolted against his father Louis the Pious, because his half-brother Charles the Bald had received a significantly larger share of the lands. During the period of turmoil Lothar invited the exiled Danish (Jutland) Viking warlord Harald (Haraldr) to create havoc in the Frankish Empire. Between 834 and 839 Harald, together with his brother Rorik (Hroerekr), plundered the coast and rivers of Frisia. Louis the Pious did nothing to stop this, which might indicate his disinterest in this far corner of his empire. After father and son reconciled, the raids stopped, but in exchange for their services Harald and Rorik received Dorestad in fief. This basically comprised the river lands, starting from the coastal areas of Zeeland, all the way along the Scheldt to Antwerp and from there to Leuven, and inland following the rivers to Xanten. In exchange, their main obligation was to protect the northern region of Lotharingia and stop other Vikings from raiding.
Rorik operated from Wieringen, while Harald ruled from the island of Walcheren in Zeeland; together they ruled Dorestad. After the death of Harald (around 844), Rorik became the sole ruler of the region, and with the assistance of his nephew Godfry (Gottrik/Godofrid) Haraldson ('the Norseman') he reneged on the earlier feudal arrangements with the Emperor. There was very little the Emperor could do, as the Carolingian Empire had by now weakened considerably. From then on the Vikings ruled Frisia as independent kings.
There was not always Viking unity. Different clans clashed with each other, and this could, for example, be the reason why places like Utrecht and Dorestad were attacked so often. Nevertheless, Rorik was rather successful in stopping other Vikings from plundering Dorestad, and consequently those raids moved further south (see below).
In the run-up to the Treaty of Meersen – which would split Lotharingia between East and West Francia – Charles the Bald of West Francia signed a treaty in Nijmegen with Rorik. While most of Frisia would end up in East Francia, Charles arranged that all of Frisia would be given in fief to Rorik. While the relationship between Charles and Rorik was amicable, this wasn't the case with Rorik's new overlord Louis the German. While Utrecht and Dorestad were more or less under the West Francian sphere of influence, the Church properties were largely under East Francian control. As a result the Bishop of Utrecht didn't feel safe and moved to Deventer, which was fully under the control of Louis the German. As we will see further on, Tiel also fell more securely under the control of East Francia.
The north of Lotharingia was now fully under the control of the Vikings. They even established here the capital of the Viking Kingdom of Dorestad (850-885). Because of the political problems this caused, the economy of the city rapidly declined, and it was of little importance after 863. Soon after, there was an influx of traders from Dorestad moving to the neighbouring city of Tiel (a distance of 10 km). Tiel was outside the control of the Vikings and was supported by the rulers of East Francia; the city started to take over the trading tradition of Dorestad, a transition that was largely complete by the end of the 9th century. While much later Tiel too was ransacked by the Vikings – in 1005 and again in 1006 – it was able to recover from these attacks because of its far more stable political situation. There are indications that Balderik of Hamaland played an important role in the defeat of the Vikings in 1006, when he came to the assistance of his uncle, the imperial prefect of the region.
One can draw parallels between the decline of Dorestad and the decline of Lotharingia.
During a battle near the Viking stronghold of Asselt (near Roermond) with Emperor Charles the Fat – who had gathered a large army of Langobards, Bavarians, Alemanni, Thuringians, Saxons and Frisians – Godofrid was surrounded. However, he was still strong enough to force the Emperor to sign a treaty whereby Godofrid, in 882, was made Count of Frisia, following the death of his uncle Rorik in that same year. The only condition was that he had to convert to Christianity. To forge closer ties, and thus indirectly control him, a marriage was arranged between him and Gisela, the daughter of Lothar II, the king of Lotharingia. However, Godofrid didn't stop plundering, and Gisela was called back to Worms, never to return to her husband.
In 885 Godofrid was killed by the Frisian count Gerulf, who had strongholds on the North Sea coast of Frisia. In 889 Arnulf granted Gerulf, as a reward for the assassination of Godofrid, lands in that area (Kennemerland). He is the ancestor of what would later become the counts of Holland.
The Vikings started to establish strongholds on higher-lying places they now controlled, such as Walcheren, Wieringen and Elterberg. Several settlements here were protected by 'ringwalburgen'; the English translation is hillfort, but this doesn't properly describe these fortresses. They were circular earthen walls surrounded by a ditch, around 200 metres in diameter, with four entrances and ramparts. While the general theory is that they were built to protect the local population from Viking attack, others argue that some of them were perhaps built by the Vikings or at least occupied by them. There were several of these 'burgen' in Zeeland: Burg, Domburg, Middelburg, Souburg and Oostburg. There is good evidence that the one in Oostburg was built by Boudewijn II (d. 918), Count of Flanders. With the exception of Middelburg, the fortresses in Walcheren were abandoned towards the end of the 10th century, when many of the Vikings settled on the continent joined the raids on Britain.
The Vikings were suddenly and totally unexpectedly back during the reign of the powerful King Bluetooth, and in 1002 they plundered Tiel. They returned with a large fleet of 90 ships and perhaps as many as 3,000 men in 1009, but left the city alone on the promise that they would be allowed to pass through. Most likely these were fleets of Vikings on their way to or from England, using the safer river system rather than the more treacherous sea to travel to and from Denmark. By making a slight detour via Tiel they could raid the supplies stored in the warehouses at the port. This was the last time that Viking raiders visited the Low Countries.
Elsewhere, there is also evidence of the plundering of Antwerp, which took place in 836. From then on the Rupel region around Antwerp, Ghent, Kortrijk (Courtrai), Doornik (Tournai), Leuven and the region along the river Maas followed. By 844 the various ravaging groups had already reached Toulouse; in 845 they sacked Paris and Hamburg, Bordeaux in 848 and Orleans in 853.
Already in 799 a group of Danish Vikings had ravaged the coast of Aquitaine, and in 814 another group of Vikings had sailed all the way to the Mediterranean. They were defeated there by the Muslim Moors, but that did not stop them from raiding the coast of Spain on their way back. In 844 they were back and sacked Seville.
Now well established in the Low Countries and northern France, the Vikings also started to use horses for their raiding campaigns. They occupied Ghent in 879 and Courtrai (Kortrijk) the following year. Further north, Deventer and Zutphen fell victim to the raiders, though that might date from later, 880-890. It is around this time that Rollo (the later Duke of Normandy) is mentioned as travelling through Walcheren.
In 885 they were back in Paris, where the Parisians refused to allow them passage up the Seine; consequently they besieged the city, and Charles the Fat had to pay a ransom of 700 pounds to lift the siege. From then on the Vikings used this lucrative 'Danegeld' principle to extract vast amounts of money from many of the western rulers.
At the Treaty of Saint-Clair-sur-Epte (911) with King Charles the Simple, Rollo the Norman pledged feudal allegiance to the king of France, changed his name to the Frankish version (Robert), and converted to Christianity. In return, King Charles granted Rollo the lower Seine area (today's upper Normandy – the old northern portion of the Merovingian province of Neustria) and the titular rulership of Normandy, centred on the city of Rouen. Here, as in Frisia, the key arrangement was that Rollo had to protect his part of West Francia. Such arrangements were a good solution for exiled Viking warlords, as their only other alternative was to keep on raiding.
The Viking legacy
There is an interesting Viking legacy. By the 10th century raiding had largely ended and the Vikings had become settlers; as we have seen above, they controlled, or at least partly controlled, significant parts of the northwestern European coast, from Normandy and Flanders via Zeeland all the way to the Frisian Islands.
There was little Frankish control left, and this made the descendants of the Vikings important local rulers, at a time when local rulers were starting to emerge elsewhere in the remnants of the Frankish Empire. Flanders and Zeeland, for example, were partly under the control of these Viking descendants. Holland, like Normandy, was basically rather independent, and the fact that the foundation of the first abbey in Holland took place from Ghent indicates a special relationship between these two regions – one which, because of their shared Viking occupation of close to a century, could have been closer than has generally been considered.
However, this soon ended with the counts of Flanders taking control over the region. Holland carved out its own territory towards the north and as such carried the Viking legacy forwards into what would become the Netherlands.
Vikings adopting Christianity
Similar to what happened under the Franks, the Viking rulers too saw that joining Christianity would have many advantages: it gave them legitimacy and allowed them to partner with other rulers in the region. Under Harold Bluetooth his kingdom accepted the new religion, at least officially. He was also able – towards the end of the 10th century – to unite a large number of diverse tribal rulers in a centralised Danish kingdom. The Danes then used Christianity as an excuse to ramp up their raids on pagan Norway. After the death of the Holy Roman Emperor Otto I, Bluetooth tried his luck on the north coast of Germany, in the land of the Wends; however, he was defeated, had to retreat behind the Danevirke and couldn't avoid the German occupation of Hedeby.
Christianity also made sense to those Vikings who settled and became farmers. The violent pagan religion around Odin was good for raiders and warriors; farmers, however, required a more peaceful setting.
Italy, Sicily and Malta
On their way back from a pilgrimage to the Holy Land, a group of Normans (from Normandy) assisted Prince Guaimar III of Salerno (SE Italy) in 999 in fighting the Arabs who had come to collect their tribute. The Normans were asked to come back, and so they did. As in so many similar situations, they didn't leave. This created a lot of resentment, and Bishop Alfanus of Salerno (1020-1085) established churches in Cassano, Martirano and Castra in order to stop the Normans.
Cathedral of Salerno – Bishop Alfanus on the top right side
During the 11th and 12th centuries, in a process involving many battles and many independent counts and princes, they conquered all of the lands south of the Papal States.
Driven by its wealth, the Normans also conquered Arab Sicily. One of the Norman nobles, Tancred of Hauteville, most probably married a daughter (or perhaps even two) of Rollo's grandson Duke Richard I of Normandy. Their son Roger conquered southern Italy and crossed over to Sicily, where he ousted the Arabs and established the Norman County of Sicily. The Holy Roman Emperor Henry VI and his son Frederick are related to Tancred. In 2010 we visited the island, where there is still archaeological evidence of the Normans, for example in Erice and in Segesta (video clips).
The County of Sicily was created by Robert Guiscard in 1071; he had received the title Duke of Sicily in 1059 from Pope Nicholas II as encouragement to conquer the island from the Muslims. In 1061 the first permanent Norman conquest (Messina) was made, and in 1071, after the fall of Palermo – the capital of the emirate and future capital of the county – Guiscard invested his brother Roger with the title of count and gave him full jurisdiction over the island, save for half the city of Palermo, Messina, and the Val Demone, which he retained for himself. In February 1091 the conquest of Sicily was completed when Noto fell. The conquest of Malta began later that year; it was completed in 1127, when the Arab administration of the island was expelled.
The so-called cross of Robert Guiscard, a reliquary from the end of the 11th century containing teeth of Saints Matthew and James the Less and a piece of the Holy Cross. Legend has it that Robert took this with him wherever he went. (Diocesan Museum of St. Matthew, Salerno)
Robert Guiscard left Roger in an ambiguous relationship with his successors in the Duchy of Apulia and Calabria. During the reigns of Roger II of Sicily and William II of Apulia, conflict broke out between the two Norman principalities, whose rulers were first cousins through Roger and Robert respectively. Through the mediation of Pope Calixtus II, and in return for aid against a rebellion led by Jordan of Ariano in 1121, the childless William ceded all his Sicilian territories to Roger and named him his heir. When William died in 1127, Roger inherited the mainland duchy; three years later he merged his holdings to form the Kingdom of Sicily.
The Norman conquest of the strategically critical territories of Sicily and southern Italy is much more remarkable than the far larger and better resourced conquest of the strategically less important Britain half a century later. It started with some 40 mercenaries and was followed up by still-small groups of 60 and later 270 Normans. Their rather easy conquest was certainly assisted by the three quarrelling Arab emirs who ruled the island and refused to cooperate with each other. This, however, doesn't diminish the incredible effect that these small groups had on the political and military situation in this part of the world.
Britain received the brunt
As we already saw above, Britain received the brunt of the Viking raids; East Anglia and Kent were attacked in 838, and a year later a fleet of 350 Viking vessels moored in the Thames, pillaging London and Canterbury. The raids now started to turn into invasion: by 867 – under the command of Ivar the Boneless – Northumbria had fallen to the Vikings, and three years later all of England north of the Thames was under their control. However, the Wessex king Alfred (the Great) was able to fight back. He built a line of defensible fortresses to protect the population; the Vikings depended on plunder and food during their raids, and as this was no longer available to them they simply had to retreat. In 878 a line was drawn, with Saxon law in the south and west and Danelaw in the north and east.
They also founded Dublin in Ireland.
Vikings from Norway annexed Shetland and Orkney in 875 and kept them until 1472, when Scotland annexed the archipelago. The Faroes were already reached around 850 and saw both Danish and Norwegian rulers, but in the end the Danes became the dominant power.
The raiding Normans were able to expand their area rapidly and continued their raids, a tradition that carried on into the following century. Rollo was the great-great-great-grandfather of William the Conqueror, who invaded England in 1066.
Russia and the East
So far the Swedish Vikings have not been mentioned. Their interest lay more in the eastern part of the Baltic. Here they encountered the Baltic tribes, who inhabited most of the coastal lands, all of the Lake Land (running parallel to the coast further inland – a remnant of the last Ice Age) and, again further inland, large parts of the 'Land of the Headwaters' (the area where two sets of rivers emerge: those flowing into the Baltic Sea and those flowing into the Black Sea). The forest zone to the west (which would become Muscovy's base) was inhabited by Finnic tribes, called the Siisdai.
From the 9th to the 12th century, the Vikings raided the Baltic coast and Finland, but soon found that more wealth could be created from the enormous amounts of fish in the Finnish lakes. They also traded with the locals in pelts.
Through lake and river hopping, the Vikings also reached – around 850 – what is now Russia. It is disputed where the name 'Rus' comes from; some argue it derives from 'rothr', which means 'a band of rowers'. The Swedes are still known in Finnish as Rootsi. They established fortresses – Holmgard (Novgorod), Alaborg, Murom, Sambat (near Kiev/Kyiv) and Patesjka (Polatsk) – to protect their trading activities. From the Baltic Sea they used the rivers Vistula, Nieman and Dvina; then, at the watershed between the rivers flowing into the Baltic Sea and those flowing into the Black Sea, they carried their boats over a distance of 20-25 km to the rivers Pripet, Dnieper, Volga and Berezina.
Others link the name Rus to 'ruddy', meaning 'redheads'.
In the north, Novgorod ('new fort') was established by Hroerekh (Rorik) in 860; it became the leading merchant city republic of the North. In the south, the Slav settlement of Kyiv fell into their hands later in the 9th century.
From here they made a lasting impact on the Byzantine Empire. They attacked Miklagard ('the great city', their name for Constantinople); while they were not able to conquer the city, they were able to extort large sums of money in exchange for not raiding the Empire. They then changed tactics and started to provide the Emperor with mercenaries known as the Varangians, the name given to the Vikings by the Byzantines.
They also encountered the Muslim world. Here the accounts mention trading rather than raiding; in particular, the silver from these lands (Afghanistan) was highly prized by the Rus and brought all the way back to Sweden (some 80,000 Arab coins have been found in Sweden). In exchange they provided Byzantium with slaves (the word derives from 'Slavs') and pelts from the northern wilderness.
When the silver started to run out and trade was drying up, the Rus tried raiding, but were defeated. Attention then (940s) moved back to Constantinople. The first serious defeat of the Rus took place at Arcadiopolis in 972. After this they kept providing warriors for the Emperor's Varangian Guard, and after a successful campaign in 980 many stayed in Constantinople, where they became a threat to the power of Emperor Basil II. Part of the deal that followed included a marriage between the Rus Prince of Kiev, Vladimir (the Vikings had by now well and truly interbred with the Slav population), and Anna, the sister of Emperor Basil. Another part of the deal was that Vladimir would be christened. It was this marriage that linked the faith of the Rus with that of Constantinople. Under Jaroslav the Wise all of Rus became united and reached its zenith.
During the 10th and 11th centuries, infighting between Rorik's descendants saw Rus splinter, concentrating around three cities: Novgorod, Kyiv and Polatsk. All of these fortresses (grod/grad) attracted more and more Slavs, and within a few generations the original Viking culture and language had been integrated into the Slav culture. The interaction between the Vikings, Balts and Slavs led to a new language known as 'Ruski'.
A few centuries later these three Slav cities would become the starting points of three different Slav states: Kiev (Ukraine), Great Rus (Novgorod) and Polatsk (Belarus/Lithuania). The lands to the northwest had still not been settled at that time, beyond the Finnic tribes that roamed the area. Moscow was founded in 1147.
The end of the Vikings
According to the Atlas of World Population, between the 8th and 11th centuries some 20,000 people – mainly men – from Scandinavia settled elsewhere; however, fewer than half of them grew old enough to pass their Scandinavian stories and traditions on to the next generation. At the end of the period, the rest of Europe had finally been able to build strong enough defences to protect themselves from the Vikings; in other parts, the Vikings had been fully integrated into the countries they had raided and conquered. Back in Scandinavia much was still the same: a land of farmers and fishers, still raiding and fighting amongst themselves.
There were a few attempts to regain control over their 'overseas' territories, and some claims were made regarding the English crown, but William had laid waste to northern England in such a way that it could never regain the strength to support its overseas counterparts. The Byzantine Empire still used Varangians, but in the meantime their Scandinavian character had been diluted, with many Englishmen – fleeing William the Conqueror – now joining the mercenary army. Christianity united the Scandinavian Christian rulers with others in Europe, creating common goals such as the Northern Crusades. The Swedish possessions in North America were handed over to the Dutch in 1655.
The three Scandinavian countries became one kingdom and split again; Norway only became fully independent in 1905. Iceland was twice offered to England in exchange for money or other lands in the Caribbean. Iceland was also briefly considered as an alternative convict colony to Australia.
Also see the video clip on the Viking treasure of Wieringen and the artifacts from Dorestad.
Originally from the Ural region, a distinct language group (Ugrian) was already established around 2000 BCE. There are some indications that the Magyars – during the 4th and 5th centuries – joined the Huns in their migration westwards. However, the Magyars didn't arrive in what is now Hungary until very late in the 9th century; from here they started their looting raids further west. These raids were fast and devastating. The Franks used the Magyars' savageness to help them keep control over the Slavs in Moravia, which they flattened in 906.
The following year, however, they turned against the Eastern Franks, destroyed a Bavarian army in the Battle of Pressburg, and laid the territories of present-day Austria, Germany, France and Italy open to Hungarian occupation and raids. They defeated Louis the Child's Imperial Army near Augsburg in 910. Between 917 and 925, they raided Basel, Alsace, Burgundy, Saxony, and Provence. Magyar expansion to the west was finally checked by Otto the Great at the Battle of Lechfeld (near Augsburg) in 955. However, their raids on the Balkan Peninsula continued until 970.
Medieval Hungary controlled more territory than medieval France, and the population of medieval Hungary was the third largest in Europe. Their settlement in the area was approved by the Pope when their leaders accepted Christianity, and Stephen I (Szent István) was crowned King of Hungary in 1001.
The Slavs (Slovenians) only start to appear in the history books from the 6th century onward. There are several theories regarding their origins, but the most likely one is that they originated from what is now Ukraine and, following the valley of the Dnieper, extended west and north-west as far as the island of Rügen, perhaps the last area ruled by the original pagan religion. Their sacred site at Arkona, where they worshipped their god Swantewit, was finally destroyed in 1168 by a combined force of Danish, Saxon and Pommern forces.
Their personalities and way of life do not seem to have differed much from those of the people they replaced. They were tough people: hospitable, hard-working farmers, and also pagan. Their 'democratic' tribal structure was another element of commonality with their Germanic brothers.
They followed the Goth migration and started to take over the lands that the Goths, and a bit further to the north the migrating Germanic tribes, had left behind. Interestingly, this happened at the time that the important role the Goths had played started to wane. The Slav legacy is therefore far more significant for the future development of Europe; over half of Europe's territory is now occupied by Slav people.
The southern Slavs (Yugo-Slavs) crossed the Lower Danube – and perhaps sailed along the coast of the Black Sea – in 550 and entered the Roman Empire, forcing Emperor Justinian to rethink his military strategies and delaying the defence of Rome (see above). In 581 they were reported to have moved into Greece as well.
With the collapse of the Avar Empire in 791, Slavic people quietly extended their rule into this region and created the state of Great Moravia. While initially the Franks were able to impose their rule on this emerging development at the far end of their Empire, in the end it was the Slav people who were able to wrest more and more territory from the Franks. The March (Morava) River became the boundary. They extended their realm into the Czech lands, Bohemia, Slovakia, parts of Austria, Hungary, Poland and parts of Germany.
The first known Moravian (Slav) king, Mojmir, accepted Christianity, but under pressure from the Franks the pope didn't agree with this conversion. In 862 his successor Rostislav asked the Byzantine Emperor to send missionaries. He received Cyril and Methodius, which led to the translation of the Bible into the Slavic language; the Cyrillic alphabet was specially developed for that purpose. The Pope tried to undo the damage and reduce the Byzantine influence, with the result that at least the West Slavs became absorbed into the Roman Catholic Church.
After waging war in 869, the Franks were able to force over-lordship on Moravia and were thus able to maintain control over the western strip of the region; this lasted for nearly 1,000 years. However, most of the eastern part came under the influence of the emerging Bulgars, who formed the earliest and most successful Slavic state of the Middle Ages. This grew into a true empire from the late 7th century onwards.
The Viking influence in the Slavic areas was rather unique. While agriculture remained at a very low level in these lands, which in general were sparsely populated, the Viking (Rus) trading posts developed into the first pre-feudal cities. It wasn't until the German 'Drang nach Osten' – led by the Teutonic Crusades – in the 12th century that these regions started to become more developed.
See also: Pommern and Rugen
- Tussen Vlaanderen en Saksen, 1990, A. C. F. Koch, Jaap Kruisheer, J. C. Bedaux, p92 ↩
- Understanding the Middle Ages, Harald Kleinschmidt, 2000, p92-100 ↩
- Charter from Oodhelm dated 29 June 797, whereby he donated the farms to the Church of Wichmond near Zutphen ↩
- Justinian's Flea, William Rosen, 2007, p134-142 ↩
- The Vikings, Jonathan Clements, 2005 ↩
- Noormannen in het Rivierland, Luit van der Tuuk, Omniboek, 2009, p101-116 ↩
After this chapter, you should be able to:
- Describe Piaget’s formal operational stage and the characteristics of formal operational thought
- Compare Theories – Lawrence Kohlberg's Moral Development and Carol Gilligan's Morality of Care
- Explain the Information Processing Theory
- Describe the strategies for memory storage
- Explain the areas of transition for adolescence
During adolescence more complex thinking abilities emerge. Researchers suggest this is due to increases in processing speed and efficiency rather than as the result of an increase in mental capacity—in other words, due to improvements in existing skills rather than development of new ones (Bjorkland, 1987; Case, 1985). Let’s explore these improvements.
During adolescence, teenagers move beyond concrete thinking and become capable of abstract thought. Teen thinking is also characterized by the ability to consider multiple points of view, imagine hypothetical situations, debate ideas and opinions (e.g., politics, religion, and justice), and form new ideas. In addition, it’s not uncommon for adolescents to question authority or challenge established societal norms.
Cognitive empathy, also known as theory of mind, relates to the ability to take the perspective of others and feel concern for them (Shamay-Tsoory, Tomer, & Aharon-Peretz, 2005). Cognitive empathy begins to increase in adolescence and is an important component of social problem solving and conflict avoidance. According to one longitudinal study, levels of cognitive empathy begin rising in girls around 13 years old, and around 15 years old in boys (Van der Graaff et al., 2013).1
Early in adolescence, changes in dopamine, a neurotransmitter that produces feelings of pleasure, can contribute to increases in adolescents' sensation-seeking and reward motivation. During adolescence, people tend to do whatever activities produce the most dopamine, without fully considering the consequences of such actions. Later in adolescence, the prefrontal cortex, the area of the brain responsible for considering outcomes, forming judgments, and controlling impulses and emotions, continues to develop (Goldberg, 2001). The difference in the timing of the development of these regions contributes to more risk taking during middle adolescence, because adolescents are motivated to seek thrills (Steinberg, 2008). One of the world's leading experts on adolescent development, Laurence Steinberg, likens this to engaging a powerful engine before the braking system is in place. The result is that adolescents are prone to risky behaviors more often than children or adults.
Figure 14.2 – A simulation of the risky behavior of drinking and driving.2
Although the most rapid cognitive changes occur during childhood, the brain continues to develop throughout adolescence, and even into the 20s (Weinberger, Elvevåg, & Giedd, 2005). The brain continues to form new neural connections and becomes faster and more efficient because it prunes, or casts off, unused neurons and connections (Blakemore, 2008), and produces myelin, the fatty tissue that forms around axons and neurons and helps speed transmissions between different regions of the brain (Rapoport et al., 1999). This time of rapid cognitive growth makes teens more aware of their potential and capabilities, but it also causes a great amount of disequilibrium for them. Theorists have researched these cognitive changes and functions and have formed theories based on this developmental period.3
Cognition refers to thinking and memory processes, and cognitive development refers to long-term changes in these processes. One of the most widely known perspectives on cognitive development is the cognitive stage theory of the Swiss psychologist Jean Piaget. Piaget created and studied an account of how children and youth gradually become able to think logically and scientifically. Because his theory is especially popular among educators, we focus on it in this chapter.
Piaget was a constructivist: in his view, learning proceeded by the interplay of assimilation (adjusting new experiences to fit prior concepts) and accommodation (adjusting concepts to fit new experiences). The to-and-fro of these two processes leads not only to short-term learning, but also to long-term developmental change. These long-term developments are really the main focus of Piaget's cognitive theory.
As you might remember, Piaget proposed that cognition developed through distinct stages from birth through the end of adolescence. By stages he meant a sequence of thinking patterns with four key features:
- They always happen in the same order.
- No stage is ever skipped.
- Each stage is a significant transformation of the stage before it.
- Each later stage incorporates the earlier stages into itself.
Basically this is the “staircase” model of development. Piaget proposed four major stages of cognitive development, and called them (1) sensorimotor intelligence, (2) preoperational thinking, (3) concrete operational thinking, and (4) formal operational thinking. Each stage is correlated with an age period of childhood, but only approximately. Formal operational thinking appears in adolescence.4
During the formal operational stage, adolescents are able to understand abstract principles. They are no longer limited by what can be directly seen or heard, and are able to contemplate such constructs as beauty, love, freedom, and morality. Additionally, while younger children solve problems through trial and error, adolescents demonstrate hypothetical-deductive reasoning, which is developing hypotheses based on what might logically occur. They are able to think about all the possibilities in a situation beforehand, and then test them systematically (Crain, 2005), because they are able to engage in true scientific thinking.
Figure 14.1 – Teenage thinking is characterized by the ability to reason logically and solve hypothetical problems such as how to design, plan, and build a structure.5
Figure 14.3 – Piaget proposed that formal operational thinking is the last stage in cognitive development.6
According to Piaget, most people attain some degree of formal operational thinking, but use formal operations primarily in the areas of their strongest interest (Crain, 2005). In fact, most adults do not regularly demonstrate formal operational thought. A possible explanation is that an individual’s thinking has not been sufficiently challenged to demonstrate formal operational thought in all areas.
Once adolescents can understand abstract thoughts, they enter a world of hypothetical possibilities and demonstrate egocentrism, a heightened self-focus. The egocentricity comes from attributing unlimited power to their own thoughts (Crain, 2005). Piaget believed it was not until adolescents took on adult roles that they would be able to learn the limits to their own thoughts.
David Elkind (1967) expanded on the concept of Piaget’s adolescent egocentricity. Elkind theorized that the physiological changes that occur during adolescence result in adolescents being primarily concerned with themselves. Additionally, since adolescents fail to differentiate between what others are thinking and their own thoughts, they believe that others are just as fascinated with their behavior and appearance. This belief results in the adolescent anticipating the reactions of others, and consequently constructing an imaginary audience. The imaginary audience is the adolescent’s belief that those around them are as concerned and focused on their appearance as they themselves are (Schwartz, Maynard, & Uzelac, 2008, p. 441). Elkind thought that the imaginary audience contributed to the self-consciousness that occurs during early adolescence. The desire for privacy and the reluctance to share personal information may be a further reaction to feeling under constant observation by others.
Figure 14.4 – This teen is likely thinking, “they must be whispering about me.”7
Another important consequence of adolescent egocentrism is the personal fable, or belief that one is unique, special, and invulnerable to harm. Elkind (1967) explains that because adolescents feel so important to others (imaginary audience), they regard themselves and their feelings as being special and unique. Adolescents believe that only they have experienced strong and diverse emotions, and therefore others could never understand how they feel. This uniqueness in one's emotional experiences reinforces the adolescent's belief of invulnerability, especially to death. Adolescents will engage in risky behaviors, such as drinking and driving or unprotected sex, and feel they will not suffer any negative consequences. Elkind believed that adolescent egocentricity emerged in early adolescence and declined in middle adolescence; however, recent research has also identified egocentricity in late adolescence (Schwartz et al., 2008).
As adolescents are now able to think abstractly and hypothetically, they exhibit many new ways of reflecting on information (Dolgin, 2011). For example, they demonstrate greater introspection, or thinking about one's thoughts and feelings. They begin to imagine how the world could be, which leads them to become idealistic, or insisting upon high standards of behavior. Because of their idealism, they may become critical of others, especially adults in their life. Additionally, adolescents can demonstrate hypocrisy, or pretending to be what they are not. Since they are able to recognize what others expect of them, they will conform to those expectations for their emotions and behavior, even when it seems hypocritical to themselves. Lastly, adolescents can exhibit pseudostupidity, approaching problems at a level that is too complex and failing because the tasks are simpler than they assume. Their new ability to consider alternatives is not completely under control, and they appear "stupid" when they are in fact bright, just inexperienced.8
Kohlberg (1963) built on the work of Piaget and was interested in finding out how our moral reasoning changes as we get older. He wanted to find out how people decide what is right and what is wrong (moral justice). Just as Piaget believed that children's cognitive development follows specific patterns, Kohlberg argued that we learn our moral values through active thinking and reasoning, and that moral development follows a series of stages. Kohlberg's six stages are generally organized into three levels of moral reasoning. To study moral development, Kohlberg posed moral dilemmas to children, teenagers, and adults. You may remember one such dilemma, the Heinz dilemma, that was introduced in Chapter 12:9
A woman was on her deathbed. There was one drug that the doctors thought might save her. It was a form of radium that a druggist in the same town had recently discovered. The drug was expensive to make, but the druggist was charging ten times what the drug cost him to produce. He paid $200 for the radium and charged $2,000 for a small dose of the drug. The sick woman’s husband, Heinz, went to everyone he knew to borrow the money, but he could only get together about $1,000 which is half of what it cost. He told the druggist that his wife was dying and asked him to sell it cheaper or let him pay later. But the druggist said: “No, I discovered the drug and I’m going to make money from it.” So Heinz got desperate and broke into the man’s laboratory to steal the drug for his wife. Should Heinz have broken into the laboratory to steal the drug for his wife? Why or why not?10
Based on their reasoning behind their responses (not whether they thought Heinz made the right choice or not), Kohlberg placed each person in one of the stages as described in the image on the following page:
Figure 14.5 – Kohlberg’s six stages of moral development.11
Although research has supported Kohlberg's idea that moral reasoning changes from an early emphasis on punishment and social rules and regulations to an emphasis on more general ethical principles, as with Piaget's approach, Kohlberg's stage model is probably too simple. For one, children may use higher levels of reasoning for some types of problems, but revert to lower levels in situations where doing so is more consistent with their goals or beliefs (Rest, 1979). Second, it has been argued that this stage model is particularly appropriate for Western countries rather than for non-Western samples, in which allegiance to social norms (such as respect for authority) may be particularly important (Haidt, 2001). In addition, there is little correlation between how children score on the moral stages and how they behave in real life.
Perhaps the most important critique of Kohlberg’s theory is that it may describe the moral development of boys better than it describes that of girls. Carol Gilligan has argued that, because of differences in their socialization, males tend to value principles of justice and rights, whereas females value caring for and helping others. Although there is little evidence that boys and girls score differently on Kohlberg’s stages of moral development (Turiel, 1998), it is true that girls and women tend to focus more on issues of caring, helping, and connecting with others than do boys and men (Jaffee & Hyde, 2000).12
Carol Gilligan, whose ideas center on a morality of care, or system of beliefs about human responsibilities, care, and consideration for others, proposed three moral positions that represent different extents or breadth of ethical care. Unlike Kohlberg, or Piaget, she does not claim that the positions form a strictly developmental sequence, but only that they can be ranked hierarchically according to their depth or subtlety. In this respect her theory is “semi-developmental” in a way similar to Maslow’s theory of motivation (Brown & Gilligan, 1992; Taylor, Gilligan, & Sullivan, 1995). The following table summarizes the three moral positions from Gilligan’s theory:
Table 14.1 – Positions of Moral Development According to Gilligan
Definition of What is Morally Good
Position 1: Survival Orientation
Action that considers one’s personal needs only
Position 2: Conventional Care
Action that considers others' needs or preferences but not one's own
Position 3: Integrated Care
Action that attempts to coordinate one’s own personal needs with those of others
The most basic kind of caring is a survival orientation, in which a person is concerned primarily with his or her own welfare. As a moral position, a survival orientation is obviously not satisfactory for classrooms on a widespread scale. If every student only looked out for himself or herself alone, classroom life might become rather unpleasant. Nonetheless, there are situations in which caring primarily about yourself is both a sign of good mental health and also relevant to teachers. For a child who has been bullied at school or sexually abused at home, for example, it is both healthy and morally desirable to speak out about the bullying or abuse—essentially looking out for the victim’s own needs at the expense of others’, including the bully’s or abuser’s. Speaking out requires a survival orientation and is healthy because in this case, the child is at least caring about herself.
A more subtle moral position is caring for others, in which a person is concerned about others’ happiness and welfare, and about reconciling or integrating others’ needs where they conflict with each other. In classrooms, students who operate from Position 2 can be very desirable in some ways; they can be kind, considerate, and good at fitting in and at working cooperatively with others. Because these qualities are very welcome in a busy classroom, it can be tempting for teachers to reward students for developing and using them for much of their school careers. The problem with rewarding Position 2 ethics, however, is that doing so neglects the student’s identity—his or her own academic and personal goals or values. Sooner or later, personal goals, values and identity need attention, and educators have a responsibility for assisting students to discover and clarify them. Unfortunately for teachers, students who know what they want may sometimes be more assertive and less automatically compliant than those who do not.
The most developed form of moral caring in Gilligan’s model is integrated caring, the coordination of personal needs and values with those of others. Now the morally good choice takes account of everyone including yourself, not everyone except yourself.
In classrooms, integrated caring is most likely to surface whenever teachers give students wide, sustained freedom to make choices. If students have little flexibility about their actions, there is little room for considering anyone’s needs or values, whether their own or others’. If the teacher says simply, “Do the homework on page 50 and turn it in tomorrow morning,” then compliance becomes the main issue, not moral choice. But suppose instead that she says something like this: “Over the next two months, figure out an inquiry project about the use of water resources in our town. Organize it any way you want—talk to people, read widely about it, and share it with the class in a way that all of us, including yourself, will find meaningful.” Although an assignment this general or abstract may not suit some teachers or students, it does pose moral challenges for those who do use it. Why? For one thing, students cannot simply carry out specific instructions, but must decide what aspect of the topic really matters to them. The choice is partly a matter of personal values. For another thing, students have to consider how the topic might be meaningful or important to others in the class. Third, because the time line for completion is relatively far in the future, students may have to weigh personal priorities (like spending time with family on the weekend) against educational priorities (working on the assignment a bit more on the weekend). Some students might have trouble making good choices when given this sort of freedom—and their teachers might therefore be cautious about giving such an assignment. But in a way these hesitations are part of Gilligan’s point: integrated caring is indeed more demanding than the caring based on survival or orientation to others, and not all students may be ready for it. 13
We've learned that major changes in the structure and functioning of the brain occur during adolescence, and these changes inform theories about cognitive and behavioral development (Steinberg, 2008). These cognitive changes include how information is processed, and are fostered by improvements in cognitive function during early adolescence, such as in memory, encoding, and storage, as well as in the ability to think about thinking; adolescents therefore become better at information processing functions.14
Figure 14.6 – The brain’s developments during adolescence allow for greater information processing functions.15
Memory is an information processing system that we often compare to a computer. Memory is the set of processes used to encode, store, and retrieve information over different periods of time.
Figure 14.7 – The memory process.16
Encoding involves the input of information into the memory system. Storage is the retention of the encoded information. Retrieval, or getting the information out of memory and back into awareness, is the third function.
We get information into our brains through a process called encoding, which is the input of information into the memory system. Once we receive sensory information from the environment, our brains label or code it. We organize the information with other similar information and connect new concepts to existing concepts. Encoding information occurs through both automatic processing and effortful processing. For example, if someone asks you what you ate for lunch today, more than likely you could recall this information quite easily. This is known as automatic processing, or the encoding of details like time, space, frequency, and the meaning of words. Automatic processing is usually done without any conscious awareness.
Recalling the last time you studied for a test is another example of automatic processing. But what about the actual test material you studied? It probably required a lot of work and attention on your part to encode that information; this is known as effortful processing. When you first learn new skills such as driving a car, you have to put forth effort and attention to encode information about how to start a car, how to brake, how to handle a turn, and so on. Once you know how to drive, you can encode additional information about this skill automatically.
Once the information has been encoded, we have to retain it. Our brains take the encoded information and place it in storage. Storage is the creation of a permanent record of information. In order for a memory to go into storage (i.e., long-term memory), it has to pass through three distinct stages: Sensory Memory, Short-Term Memory, and finally Long-Term Memory. These stages were first proposed by Richard Atkinson and Richard Shiffrin (1968). Their model of human memory, called Atkinson-Shiffrin (A-S), is based on the belief that we process memories in the same way that a computer processes information.
Figure 14.8 – According to the Atkinson-Shiffrin model of memory, information passes through three distinct stages in order for it to be stored in long-term memory.17
Sensory Memory (First Stage of Storage)
In the Atkinson-Shiffrin model, stimuli from the environment are processed first in sensory memory, storage of brief sensory events, such as sights, sounds, and tastes. It is very brief storage—up to a couple of seconds. We are constantly bombarded with sensory information. We cannot absorb all of it, or even most of it. And most of it has no impact on our lives. For example, what was your professor wearing the last class period? As long as the professor was dressed appropriately, it does not really matter what they were wearing. Sensory information about sights, sounds, smells, and even textures, which we do not view as valuable information, we discard. If we view something as valuable, the information will move into our short-term memory system.
One study of sensory memory researched the significance of valuable information on short-term memory storage. J. R. Stroop discovered a memory phenomenon in the 1930s: you will name a color more easily if it appears printed in that color, which is called the Stroop effect.
The Stroop Effect describes why it is difficult for us to name a color when the word and the color of the word are different. To test this out a person is instructed not to read the words below, but to say the color the word is printed in. For example, upon seeing the word “yellow” in green print, they should say “green,” not “yellow.” This experiment is fun, but it’s not as easy as it seems.
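Since the original list of colored words is not reproduced here, you can generate your own test items. The sketch below is illustrative only (not part of the original experiment); it uses ANSI escape codes, which work in most terminals, to print each color word in a deliberately mismatched ink:

```python
import random

# ANSI escape codes for a few terminal foreground colors.
COLORS = {"red": "31", "green": "32", "yellow": "33", "blue": "34"}

def stroop_items(n, seed=42):
    """Yield (word, ink) pairs in which the word never matches its ink color."""
    rng = random.Random(seed)
    names = list(COLORS)
    for _ in range(n):
        word = rng.choice(names)
        ink = rng.choice([c for c in names if c != word])
        yield word, ink

for word, ink in stroop_items(8):
    # The task: say the ink color out loud, not the printed word.
    print(f"\033[{COLORS[ink]}m{word.upper()}\033[0m")
```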
Short-Term Memory or Working Memory (Second Stage of Storage)
Short-term memory is a temporary storage system that processes incoming sensory memory; sometimes it is called working memory. Short-term memory takes information from sensory memory and sometimes connects that memory to something already in long-term memory. Short-term memory storage lasts about 20 seconds. Think of short-term memory as the information you have displayed on your computer screen—a document, a spreadsheet, or a web page. Information in short-term memory either goes to long-term memory (when you save it to your hard drive) or it is discarded (when you delete a document or close a web browser).
George Miller (1956), in his research on the capacity of memory, found that most people can retain about seven items in short-term memory. Some remember five, some nine, so he called the capacity of short-term memory the range of seven items plus or minus two.
To explore the capacity and duration of short-term memory, two people can try this activity. One person reads the strings of random numbers below out loud to the other, beginning each string by saying, “Ready?” and ending each by saying, “Recall.” Then the second person should try to write down the string of numbers from memory.
This can be used to determine the longest string of digits that you can store. For most people, this will be close to seven, Miller’s famous seven plus or minus two. Recall is somewhat better for random numbers than for random letters (Jacobs, 1887) and is also often slightly better for information we hear (acoustic encoding, which is the encoding of sounds) rather than what we see (visual encoding, which is the encoding of images and words in particular) (Anderson, 1969).
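Because the number strings themselves are not reproduced here, the activity is also easy to run as a self-test. The script below is a rough sketch of the procedure described above, not the original materials: it shows each random digit string briefly, clears the screen, asks for recall, and lengthens the string until recall fails:

```python
import random
import time

def digit_span_trial(length):
    """Show a random digit string briefly, then check recall; True if correct."""
    digits = "".join(str(random.randrange(10)) for _ in range(length))
    print(f"Ready? {digits}")
    time.sleep(2)                    # brief presentation, as in the activity
    print("\033[2J\033[H", end="")   # ANSI codes: clear screen, move cursor home
    return input("Recall: ").strip() == digits

length = 4
while digit_span_trial(length):
    length += 1
# The longest correctly recalled string estimates your digit span,
# typically somewhere near Miller's seven plus or minus two.
print(f"Estimated digit span: about {length - 1} digits")
```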
Long-Term Memory (Third and Final Stage of Storage)
Long-term memory is the continuous storage of information. Unlike short-term memory, the storage capacity of long-term memory has no limits. It encompasses all the things you can remember that happened more than just a few minutes ago to all of the things that you can remember that happened days, weeks, and years ago. In keeping with the computer analogy, the information in your long-term memory would be like the information you have saved on the hard drive. It isn’t there on your desktop (your short-term memory), but you can pull up this information when you want it, at least most of the time. Not all long-term memories are strong memories. Some memories can only be recalled through prompts. For example, you might easily recall a fact— “What is the capital of the United States?”—or a procedure—“How do you ride a bike?”—but you might struggle to recall the name of the restaurant you had dinner at when you were on vacation in France last summer. A prompt, such as that the restaurant was named after its owner, who spoke to you about your shared interest in soccer, may help you recall (retrieve) the name of the restaurant.
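The computer analogy in the Atkinson-Shiffrin model can be made concrete with a deliberately simplified sketch. Everything here (the capacity of exactly 7 items, the 20-second cutoff, a set for long-term storage) is a toy assumption built from the round numbers in this section, not a claim about how memory is actually implemented:

```python
import time
from collections import deque

class ToyMemory:
    """A toy sketch of the three Atkinson-Shiffrin storage stages."""
    STM_CAPACITY = 7       # Miller's "seven plus or minus two"
    STM_DURATION = 20.0    # seconds, the rough figure given above

    def __init__(self):
        self.short_term = deque()   # (item, arrival time) pairs
        self.long_term = set()      # effectively unlimited storage

    def perceive(self, item, attended):
        """Sensory memory: unattended input is discarded almost immediately."""
        if not attended:
            return
        self._expire()
        if len(self.short_term) >= self.STM_CAPACITY:
            self.short_term.popleft()   # displaced before ever being stored
        self.short_term.append((item, time.monotonic()))

    def rehearse(self, item):
        """Effortful processing moves an item from short-term into long-term memory."""
        self._expire()
        if any(stored == item for stored, _ in self.short_term):
            self.long_term.add(item)
            return True
        return False

    def _expire(self):
        """Items that sit in short-term memory past ~20 seconds are lost."""
        now = time.monotonic()
        while self.short_term and now - self.short_term[0][1] > self.STM_DURATION:
            self.short_term.popleft()
```

In this sketch, only attended input reaches short-term memory, and only rehearsed items reach long-term memory, mirroring the flow described above.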
So you have worked hard to encode via effortful processing (a lot of work and attention on your part in order to encode that information) and store some important information for your upcoming final exam. How do you get that information back out of storage when you need it? The act of getting information out of memory storage and back into conscious awareness is known as retrieval. This would be similar to finding and opening a paper you had previously saved on your computer’s hard drive. Now it’s back on your desktop, and you can work with it again. Our ability to retrieve information from long-term memory is vital to our everyday functioning. You must be able to retrieve information from memory in order to do everything from knowing how to brush your hair and teeth, to driving to work, to knowing how to perform your job once you get there.
There are three ways you can retrieve information out of your long-term memory storage system: recall, recognition, and relearning. Recall is what we most often think about when we talk about memory retrieval: it means you can access information without cues. For example, you would use recall for an essay test. Recognition happens when you identify information that you have previously learned after encountering it again. It involves a process of comparison. When you take a multiple-choice test, you are relying on recognition to help you choose the correct answer. The third form of retrieval is relearning, and it’s just what it sounds like, it involves learning information that you previously learned. Whitney took Spanish in high school, but after high school she did not have the opportunity to speak Spanish. Whitney is now 31, and her company has offered her an opportunity to work in their Mexico City office. In order to prepare herself, she enrolls in a Spanish course at the local community college. She’s surprised at how quickly she’s able to pick up the language after not speaking it for 13 years; this is an example of relearning.
As we just learned, your brain must do some work (effortful processing) to encode information and move it into short-term, and ultimately long-term memory. This has strong implications for a student, as it can impact their learning – if one doesn’t work to encode and store information, it will likely be forgotten. Research indicates that people forget 80 percent of what they learn only a day later. This statistic may not sound very encouraging, given all that you’re expected to learn and remember as a college student. Really, though, it points to the importance of a study strategy other than waiting until the night before a final exam to review a semester’s worth of readings and notes. When you learn something new, the goal is to “lock it in” sooner rather than later, and move it from short-term memory to long-term memory, where it can be accessed when you need it (like at the end of the semester for your final exam or maybe years from now). The next section will explore a variety of strategies that can be used to process information more deeply and help improve retrieval.18
Knowing What to Know
How can you decide what to study and what you need to know? The answer is to prioritize what you’re trying to learn and memorize, rather than trying to tackle all of it. Below are some strategies to help you do this:
- Think about concepts rather than facts: Most of the time instructors are concerned about you learning about the key concepts in a subject or course rather than specific facts.
- Take cues from your instructor: Pay attention to what your instructor writes on the board, mentions repeatedly in class, or includes in study guides and handouts; these are likely core concepts that you'll want to focus on.
- Look for key terms: Textbooks will often put key terms in bold or italics.
- Use summaries: Read end of chapter summaries, or write your own, to check your understanding of the main elements of the reading.
Transferring Information from Short-Term Memory to Long-Term Memory
In the previous discussion of how memory works, the importance of making intentional efforts to transfer information from short-term to long-term memory was noted. Below are some strategies to facilitate this process:
- Start reviewing new material immediately: Remember that people typically forget a significant amount of new information within 24 hours of learning it.
- Study frequently for shorter periods of time: If you want to improve the odds of recalling course material by the time of an exam or in a future class, try reviewing it a little bit every day.
Strengthening your Memory
How can you work to strengthen your overall memory? Some people have stronger memories than others but memorizing new information takes work for anyone. Below are some strategies that can aid memory:
- Rehearsal: One strategy is rehearsal, or the conscious repetition of information to be remembered (Craik & Watkins, 1973). Academic learning comes with time and practice, and at some point the skills become second nature.
- Incorporate visuals: Visual aids like note cards, concept maps, and highlighted text are ways of making information stand out. These aids make the information to be memorized seem more manageable and less daunting.
- Create mnemonics: Memory devices known as mnemonics can help you retain information while only needing to remember a unique phrase or letter pattern that stands out. They are especially useful when we want to recall larger bits of information such as steps, stages, phases, and parts of a system (Bellezza, 1981). There are different types of mnemonic devices:
- Acronym: An acronym is a word formed by the first letter of each of the words you want to remember, such as HOMES for the Great Lakes (Huron, Ontario, Michigan, Erie, and Superior).
- Acrostic: In an acrostic, you make a phrase of all the first letters of the words. For example, if you need to remember the order of mathematical operations, recalling the sentence “Please Excuse My Dear Aunt Sally” will help you, because the order of mathematical operations is Parentheses, Exponents, Multiplication, Division, Addition, Subtraction.
- Jingles: Rhyming tunes that contain key words related to the concept, such as “i before e, except after c” are jingles.
- Visual: Using a visual to help you remember is also useful, such as the knuckle mnemonic shown in the image below to help you remember the number of days in each month. Months with 31 days are represented by the protruding knuckles, and shorter months fall in the spots between knuckles.
Figure 14.9 – You might use a mnemonic device to help you remember someone’s name, a mathematical formula, or the six levels of Bloom’s taxonomy.20
- Chunking: Another strategy is chunking, where you organize information into manageable bits or chunks, such as turning a phone number you remember into groups of digits (see the sketch after this list).
- Connect new information to old information: It’s easier to remember new information if you can connect it to old information, a familiar frame of reference, or a personal experience.
- Get quality sleep: Although some people require more or less sleep than the recommended amount, most people should aim for six to eight hours every night.
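As promised in the chunking item above, here is a minimal illustration. The grouping sizes are arbitrary choices for the example, not a rule:

```python
def chunk(digits, sizes=(3, 3, 4)):
    """Split a digit string into groups, the way phone numbers are chunked."""
    groups, start = [], 0
    for size in sizes:
        groups.append(digits[start:start + size])
        start += size
    return groups

# Ten digits become three chunks, which is far easier to hold in short-term memory.
print(chunk("8005551234"))   # ['800', '555', '1234']
```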
Cognitive growth and a newfound sense of freedom and independence make it both easier and more difficult for teens to make choices and cope with upcoming transitions and life decisions.
As adolescents grow older, they encounter age-related transition points that require them to progress into a new role, such as going to college, taking a year off (a gap year), or starting to work towards a career. Educational expectations vary not only from culture to culture, but also from class to class. While middle- or upper-class families may expect their daughter or son to attend a four-year university after graduating from high school, other families may expect their child to immediately begin working full-time, as many within their families have done before.21
Adolescents spend more waking time in school than in any other context (Eccles & Roeser, 2011). Academic achievement during adolescence is predicted by interpersonal (e.g., parental engagement in adolescents’ education), intrapersonal (e.g., intrinsic motivation), and institutional (e.g., school quality) factors. Academic achievement is important in its own right as a marker of positive adjustment during adolescence but also because academic achievement sets the stage for future educational and occupational opportunities. The most serious consequence of school failure, particularly dropping out of school, is the high risk of unemployment or underemployment in adulthood that follows. High achievement can set the stage for college or future vocational training and opportunities.22
The status dropout rate refers to the percentage of 16 to 24 year-olds who are not enrolled in school and do not have high school credentials (either a diploma or an equivalency credential such as a General Educational Development [GED] certificate). The dropout rate is based on sample surveys of the civilian, non-institutionalized population, which excludes persons in prisons, persons in the military, and other persons not living in households.23 The dropout rate among high school students declined from 9.7% in 2006 to 5.4% in 2017.24
Age transition points require socialization into new roles that can vary widely between societies. For example, in the United Kingdom, when teens finish their secondary schooling (aka high school in the United States), they often take a year “off” before entering college. Frequently, they might take a job, travel, or find other ways to experience another culture. Prince William, the Duke of Cambridge, spent his gap year practicing survival skills in Belize, teaching English in Chile, and working on a dairy farm in the United Kingdom (Prince of Wales 2012a). His brother, Prince Harry, advocated for AIDS orphans in Africa and worked as a jackeroo (a novice ranch hand) in Australia (Prince of Wales 2012b).
In the United States, this life transition point is socialized quite differently, and taking a year off is generally frowned upon. Instead, U.S. youth are encouraged to pick career paths by their mid-teens, to select a college and a major by their late teens, and to have completed all collegiate schooling or technical training for their career by their early twenties.
In other nations, this phase of the life course is tied into conscription, a term that describes compulsory military service. Egypt, Switzerland, Turkey, and Singapore all have this system in place. Youth in these nations (often only the males) are expected to undergo a number of months or years of military training and service.27
Many adolescents work summer jobs, work during the school year, or work in lieu of college. Holding a job may offer teenagers extra funds, provide the opportunity to learn new skills, foster ideas about future careers, and perhaps shed light on the true value of money. However, there are numerous concerns about teenagers working, especially during the school year. Several studies have found that working more than 20 hours per week can lead to declines in grades, a general disengagement from school (Staff, Schulenberg, & Bachman, 2010; Lee & Staff, 2007; Marsh & Kleitman, 2005), an increase in substance abuse (Longest & Shanahan, 2007), earlier sexual behavior, and pregnancy (Staff et al., 2011). Like many employee groups, teens have seen a drop in the number of jobs; the summer jobs of previous generations have been on a steady decline, according to the United States Department of Labor, Bureau of Labor Statistics (2016).
Figure 14.12 – How many hours and the reasons why this teen works, will influence the effects of her job.28
A major concern in the United States is the rising number of young people who choose to work rather than continue their education and who are growing up, or continuing to grow up, in poverty. Growing up poor or entering the workforce too soon can cut off access to the education and services people need to move out of poverty and into stable employment. Research indicates that education is often a key to stability, and those raised in poverty are the ones least able to find well-paying work, perpetuating a cycle. Those who work only part time, whether teens or adults, are more likely to be classified as working poor than those with full-time employment; higher levels of education lead to a lower likelihood of being among the working poor.29 In 2017, the working poor included 6.9 million Americans, down from 7.6 million in 2011 (U.S. Bureau of Labor Statistics, 2019).30
Driving gives teens a sense of freedom and independence from their parents. It can also free up time for parents as they are not shuttling teens to and from school, activities, or work. The National Highway Traffic Safety Administration (NHTSA) reports that in 2014 young drivers (15 to 20 year-olds) accounted for 5.5% (11.7 million) of the total number of drivers (214 million) in the US (National Center for Statistics and Analysis (NCSA), 2016). However, almost 9% of all drivers involved in fatal crashes that year were young drivers (NCSA, 2016), and according to the National Center for Health Statistics (2014), motor vehicle accidents are the leading cause of death for 15 to 20 year-olds. “In all motorized jurisdictions around the world, young, inexperienced drivers have much higher crash rates than older, more experienced drivers” (NCSA, 2016, p. 1).
The rate of fatal crashes is higher for young males than for young females, although for both genders the rate was highest for the 15-20 year-old age group. For young males, the rate of fatal crashes was approximately 46 per 100,000 drivers, compared to 20 per 100,000 drivers for young females. The NHTSA (NCSA, 2016) reported that of the young drivers who were killed and who had alcohol in their system, 81% had a blood alcohol content above the legal limit. Fatal crashes involving alcohol use were higher among young men than young women. The NHTSA also found that teens were less likely to use seat belt restraints if they were driving under the influence of alcohol, and that restraint use decreased as the level of alcohol intoxication increased.
AAA completed a study in 2014 that showed that the following are risk factors for accidents for teen drivers:
- Following cars too closely
- Driving too fast for weather and road conditions
- Distraction from fellow passengers
- Distraction from cell phones
According to the NHTSA, 10% of drivers aged 15 to 19 years involved in fatal crashes were reported to be distracted at the time of the crash; the highest figure for any age group (NCSA, 2016). Distraction coupled with inexperience has been found to greatly increase the risk of an accident (Klauer et al., 2014).
The NHTSA did find that the number of accidents has been on a decline since 2005. They attribute this to greater driver training, more social awareness to the challenges of driving for teenagers, and to changes in laws restricting the drinking age. The NHTSA estimates that the raising of the legal drinking age to 21 in all 50 states and the District of Columbia has saved 30,323 lives since 1975.31
Figure 14.13 – This teen needs to have solid driver training and awareness of driving challenges.32
Whether it is a heightened sense of ability (we've learned about egocentrism, the personal fable, the imaginary audience, and the still-developing prefrontal cortex) or just poor decision making, many teens tend to take unnecessary risks. Wisdom, or the capacity for insight and judgment that is developed through experience, increases between the ages of 14 and 25, and grows with maturity, life experience, and cognitive development. Wisdom increases gradually and is not the same as intelligence; adolescents do not improve substantially on IQ tests,33 since their scores are relative to others in their age group and everyone matures at approximately the same rate. Adolescents must be monitored because they are more likely to take risks than adults. The behavioral decision-making theory proposes that adolescents and adults both weigh the potential rewards and consequences of an action; however, adolescents seem to give more weight to rewards, particularly social rewards, than do adults. Scaffolding adolescents until they show consistent and appropriate judgment will likely allow for fewer negative consequences.
In this chapter we looked at:
- Piaget’s formal operational stage
- Moral Development and Morality of Care theories
- Memories in the Information Processing Theory
- Adolescent transitions and independence
In the next chapter, we will examine adolescent social-emotional development.
Sampling and Data
The science of statistics deals with the collection, analysis, interpretation, and presentation of data. We see and use data in our everyday lives.
In this course, you will learn how to organize and summarize data. Organizing and summarizing data is called descriptive statistics. Two ways to summarize data are by graphing and by using numbers (for example, finding an average). After you have studied probability and probability distributions, you will use formal methods for drawing conclusions from “good” data. The formal methods are called inferential statistics. Statistical inference uses probability to determine how confident we can be that our conclusions are correct.
Effective interpretation of data (inference) is based on good procedures for producing data and thoughtful examination of the data. You will encounter what will seem to be too many mathematical formulas for interpreting data. The goal of statistics is not to perform numerous calculations using the formulas, but to gain an understanding of your data. The calculations can be done using a calculator or a computer. The understanding must come from you. If you can thoroughly grasp the basics of statistics, you can be more confident in the decisions you make in life.
Probability is a mathematical tool used to study randomness. It deals with the chance (the likelihood) of an event occurring. For example, if you toss a fair coin four times, the outcomes may not be two heads and two tails. However, if you toss the same coin 4,000 times, the outcomes will be close to half heads and half tails. The expected theoretical probability of heads in any one toss is 1/2 or 0.5. Even though the outcomes of a few repetitions are uncertain, there is a regular pattern of outcomes when there are many repetitions. After reading about the English statistician Karl Pearson, who tossed a coin 24,000 times with a result of 12,012 heads, one of the authors tossed a coin 2,000 times. The results were 996 heads. The fraction 996/2000 is equal to 0.498, which is very close to 0.5, the expected probability.
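The long-run pattern described above is easy to see in a quick simulation. The sketch below is illustrative (the exact counts depend on the random seed, not on any real coin): it tosses a simulated fair coin a growing number of times and prints the fraction of heads, which settles near 0.5:

```python
import random

def fraction_of_heads(n_tosses, seed=0):
    """Toss a simulated fair coin n_tosses times; return the fraction of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# A few repetitions can stray far from 0.5, but many repetitions settle near it.
for n in (4, 100, 2000, 24000):
    print(f"{n:>6} tosses: fraction of heads = {fraction_of_heads(n):.3f}")
```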
The theory of probability began with the study of games of chance such as poker. Predictions take the form of probabilities. To predict the likelihood of an earthquake, of rain, or whether you will get an A in this course, we use probabilities. Doctors use probability to determine the chance of a vaccination causing the disease the vaccination is supposed to prevent. A stockbroker uses probability to determine the rate of return on a client’s investments. You might use probability to decide to buy a lottery ticket or not. In your study of statistics, you will use the power of mathematics through probability calculations to analyze and interpret your data.
In statistics, we generally want to study a population. You can think of a population as a collection of persons, things, or objects under study. To study the population, we select a sample. The idea of sampling is to select a portion (or subset) of the larger population and study that portion (the sample) to gain information about the population. Data are the result of sampling from a population.
Because it takes a lot of time and money to examine an entire population, sampling is a very practical technique. If you wished to compute the overall grade point average at your school, it would make sense to select a sample of students who attend the school. The data collected from the sample would be the students’ grade point averages. In presidential elections, opinion poll samples of 1,000–2,000 people are taken. The opinion poll is supposed to represent the views of the people in the entire country. Manufacturers of canned carbonated drinks take samples to determine if a 16 ounce can contains 16 ounces of carbonated drink.
From the sample data, we can calculate a statistic. A statistic is a number that represents a property of the sample. For example, if we consider one math class to be a sample of the population of all math classes, then the average number of points earned by students in that one math class at the end of the term is an example of a statistic. The statistic is an estimate of a population parameter, in this case the mean. A parameter is a numerical characteristic of the whole population that can be estimated by a statistic. Since we considered all math classes to be the population, then the average number of points earned per student over all the math classes is an example of a parameter.
One of the main concerns in the field of statistics is how accurately a statistic estimates a parameter. The accuracy really depends on how well the sample represents the population. The sample must contain the characteristics of the population in order to be a representative sample. We are interested in both the sample statistic and the population parameter in inferential statistics. In a later chapter, we will use the sample statistic to test the validity of the established population parameter.
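To make the distinction between a statistic and a parameter concrete, here is a minimal sketch with an invented population of exam points; the numbers are illustrative, not from the text. The population mean is the parameter, and the mean of a random sample is the statistic that estimates it:

```python
import random

random.seed(1)

# Invented population: end-of-term points for every student in all math classes.
population = [random.gauss(75, 10) for _ in range(5000)]
parameter = sum(population) / len(population)   # population mean (a parameter)

# One math class treated as a random sample of 30 students.
sample = random.sample(population, 30)
statistic = sum(sample) / len(sample)           # sample mean (a statistic)

print(f"parameter (population mean): {parameter:.2f}")
print(f"statistic (sample mean):     {statistic:.2f}")
```

Drawing several different samples would give slightly different statistics, which is exactly the accuracy question raised above.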
A variable, or random variable, usually notated by capital letters such as X and Y, is a characteristic or measurement that can be determined for each member of a population. Variables may be numerical or categorical. Numerical variables take on values with equal units such as weight in pounds and time in hours. Categorical variables place the person or thing into a category. If we let X equal the number of points earned by one math student at the end of a term, then X is a numerical variable. If we let Y be a person’s party affiliation, then some examples of Y include Republican, Democrat, and Independent. Y is a categorical variable. We could do some math with values of X (calculate the average number of points earned, for example), but it makes no sense to do math with values of Y (calculating an average party affiliation makes no sense).
Data are the actual values of the variable. They may be numbers or they may be words. Datum is a single value.
Two words that come up often in statistics are mean and proportion. If you were to take three exams in your math classes and obtain scores of 86, 75, and 92, you would calculate your mean score by adding the three exam scores and dividing by three (your mean score would be 84.3 to one decimal place). If, in your math class, there are 40 students and 22 are men and 18 are women, then the proportion of men students is 22/40 and the proportion of women students is 18/40. Mean and proportion are discussed in more detail in later chapters.
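Both calculations are simple enough to check with a few lines of code. The sketch below reproduces the numbers in this paragraph and also shows why a mean suits a numerical variable while a proportion suits a categorical one:

```python
# Numerical variable: three exam scores, so a mean makes sense.
scores = [86, 75, 92]
print(f"mean score: {sum(scores) / len(scores):.1f}")   # 84.3

# Categorical variable: class composition, so proportions make sense.
men, women = 22, 18
total = men + women
print(f"proportion of men:   {men / total:.2f}")        # 0.55
print(f"proportion of women: {women / total:.2f}")      # 0.45
```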
The words “mean” and “average” are often used interchangeably. The substitution of one word for the other is common practice. The technical term is “arithmetic mean,” and “average” is technically a center location. However, in practice among non-statisticians, “average” is commonly accepted for “arithmetic mean.”
Determine what the key terms refer to in the following study. We want to know the average (mean) amount of money first year college students spend at ABC College on school supplies that do not include books. We randomly surveyed 100 first year students at the college. Three of those students spent $150, $200, and $225, respectively.
The population is all first year students attending ABC College this term.
The sample could be all students enrolled in one section of a beginning statistics course at ABC College (although this sample may not represent the entire population).
The parameter is the average (mean) amount of money spent (excluding books) by first year college students at ABC College this term: the population mean.
The statistic is the average (mean) amount of money spent (excluding books) by first year college students in the sample.
The variable could be the amount of money spent (excluding books) by one first year student. Let X = the amount of money spent (excluding books) by one first year student attending ABC College.
The data are the dollar amounts spent by the first year students. Examples of the data are $150, $200, and $225.
Determine what the key terms refer to in the following study. We want to know the average (mean) amount of money spent on school uniforms each year by families with children at Knoll Academy. We randomly survey 100 families with children in the school. Three of the families spent $65, $75, and $95, respectively.
The population is all families with children attending Knoll Academy.
The sample is a random selection of 100 families with children attending Knoll Academy.
The parameter is the average (mean) amount of money spent on school uniforms by families with children at Knoll Academy.
The statistic is the average (mean) amount of money spent on school uniforms by families in the sample.
The variable is the amount of money spent by one family. Let X = the amount of money spent on school uniforms by one family with children attending Knoll Academy.
The data are the dollar amounts spent by the families. Examples of the data are $65, $75, and $95.
Determine what the key terms refer to in the following study.
A study was conducted at a local college to analyze the average cumulative GPA’s of students who graduated last year. Fill in the letter of the phrase that best describes each of the items below.
1. Population ____
2. Statistic ____
3. Parameter ____
4. Sample ____
5. Variable ____
6. Data ____
- all students who attended the college last year
- the cumulative GPA of one student who graduated from the college last year
- 3.65, 2.80, 1.50, 3.90
- a group of students who graduated from the college last year, randomly selected
- the average cumulative GPA of students who graduated from the college last year
- all students who graduated from the college last year
- the average cumulative GPA of students in the study who graduated from the college last year
1. f; 2. g; 3. e; 4. d; 5. b; 6. c
Determine what the key terms refer to in the following study.
As part of a study designed to test the safety of automobiles, the National Transportation Safety Board collected and reviewed data about the effects of an automobile crash on test dummies. Here is the criterion they used:
| Speed at which cars crashed | Location of “driver” (i.e., dummies) |
|---|---|
| 35 miles/hour | Front Seat |
Cars with dummies in the front seats were crashed into a wall at a speed of 35 miles per hour. We want to know the proportion of dummies in the driver’s seat that would have had head injuries, if they had been actual drivers. We start with a simple random sample of 75 cars.
The population is all cars containing dummies in the front seat.
The sample is the 75 cars, selected by a simple random sample.
The parameter is the proportion of driver dummies (if they had been real people) who would have suffered head injuries in the population.
The statistic is the proportion of driver dummies (if they had been real people) who would have suffered head injuries in the sample.
The variable X = the number of driver dummies (if they had been real people) who would have suffered head injuries.
The data are either: yes, had head injury, or no, did not.
Determine what the key terms refer to in the following study.
An insurance company would like to determine the proportion of all medical doctors who have been involved in one or more malpractice lawsuits. The company selects 500 doctors at random from a professional directory and determines the number in the sample who have been involved in a malpractice lawsuit.
The population is all medical doctors listed in the professional directory.
The parameter is the proportion of medical doctors who have been involved in one or more malpractice suits in the population.
The sample is the 500 doctors selected at random from the professional directory.
The statistic is the proportion of medical doctors who have been involved in one or more malpractice suits in the sample.
The variable X = the number of medical doctors who have been involved in one or more malpractice suits.
The data are either: yes, was involved in one or more malpractice lawsuits, or no, was not.
The Data and Story Library, http://lib.stat.cmu.edu/DASL/Stories/CrashTestDummies.html (accessed May 1, 2013).
The mathematical theory of statistics is easier to learn when you know the language. This module presents important terms that will be used throughout the text.
For each of the following eight exercises, identify: a. the population, b. the sample, c. the parameter, d. the statistic, e. the variable, and f. the data. Give examples where appropriate.
A fitness center is interested in the mean amount of time a client exercises in the center each week.
- the population is all of the clients of the fitness center
- a sample of the clients that use the fitness center for a given week
- the average amount of time that all clients exercise in one week
- the average amount of time that a sample of clients exercises in one week
- the amount of time that a client exercises in one week
- examples are: 2 hours, 5 hours, and 7.5 hours
Ski resorts are interested in the mean age that children take their first ski and snowboard lessons. They need this information to plan their ski classes optimally.
- all children who take ski or snowboard lessons
- a group of these children
- the population mean age of children who take their first snowboard lesson
- the sample mean age of children who take their first snowboard lesson
- X = the age of one child who takes his or her first ski or snowboard lesson
- values for X, such as 3, 7, and so on
A cardiologist is interested in the mean recovery period of her patients who have had heart attacks.
- the cardiologist’s patients
- a group of the cardiologist’s patients
- the mean recovery period of all of the cardiologist’s patients
- the mean recovery period of the group of the cardiologist’s patients
- X = the recovery period of one patient
- values for X, such as 10 days, 14 days, 20 days, and so on
Insurance companies are interested in the mean health costs each year of their clients, so that they can determine the costs of health insurance.
- the clients of the insurance companies
- a group of the clients
- the mean health costs of the clients
- the mean health costs of the sample
- X = the health costs of one client
- values for X, such as 34, 9, 82, and so on
A politician is interested in the proportion of voters in his district who think he is doing a good job.
- all voters in the politician’s district
- a random selection of voters in the politician’s district
- the proportion of voters in this district who think this politician is doing a good job
- the proportion of voters in this district who think this politician is doing a good job in the sample
- X = the number of voters in the district who think this politician is doing a good job
- Yes, he is doing a good job. No, he is not doing a good job.
A marriage counselor is interested in the proportion of clients she counsels who stay married.
- all the clients of this counselor
- a group of clients of this marriage counselor
- the proportion of all her clients who stay married
- the proportion of the sample of the counselor’s clients who stay married
- X = the number of couples who stay married
- yes, no
Political pollsters may be interested in the proportion of people who will vote for a particular cause.
- all voters (in a certain geographic area)
- a random selection of all the voters
- the proportion of voters who are interested in this particular cause
- the proportion of voters who are interested in this particular cause in the sample
- X = the number of voters who are interested in this particular cause
- yes, no
A marketing company is interested in the proportion of people who will buy a particular product.
- all people (maybe in a certain geographic area, such as the United States)
- a group of the people
- the proportion of all people who will buy the product
- the proportion of the sample who will buy the product
- X = the number of people who will buy it
- buy, not buy
Use the following information to answer the next three exercises: A Lake Tahoe Community College instructor is interested in the mean number of days Lake Tahoe Community College math students are absent from class during a quarter.
What is the population she is interested in?
- all Lake Tahoe Community College students
- all Lake Tahoe Community College English students
- all Lake Tahoe Community College students in her classes
- all Lake Tahoe Community College math students
Answer: d
Consider the following:
X = the number of days a Lake Tahoe Community College math student is absent
In this case, X is an example of a:
The instructor’s sample produces a mean number of days absent of 3.5 days. This value is an example of a:
Answer: c
- Average: also called mean or arithmetic mean; a number that describes the central tendency of the data
- Categorical Variable: variables that take on values that are names or labels
- Data: a set of observations (a set of possible outcomes); most data can be put into two groups: qualitative (an attribute whose value is indicated by a label) or quantitative (an attribute whose value is indicated by a number). Quantitative data can be separated into two subgroups: discrete and continuous. Data is discrete if it is the result of counting (such as the number of students of a given ethnic group in a class or the number of books on a shelf). Data is continuous if it is the result of measuring (such as distance traveled or weight of luggage)
- Mathematical Models: a description of a phenomenon using mathematical concepts, such as equations, inequalities, distributions, etc.
- Numerical Variable: variables that take on values that are indicated by numbers
- Observational Study: a study in which the independent variable is not manipulated by the researcher
- Parameter: a number that is used to represent a population characteristic and that generally cannot be determined easily
- Population: all individuals, objects, or measurements whose properties are being studied
- Probability: a number between zero and one, inclusive, that gives the likelihood that a specific event will occur
- Proportion: the number of successes divided by the total number in the sample
- Representative Sample: a subset of the population that has the same characteristics as the population
- Sample: a subset of the population studied
- Statistic: a numerical characteristic of the sample; a statistic estimates the corresponding population parameter
- Statistical Models: a description of a phenomenon using probability distributions that describe the expected behavior of the phenomenon and the variability in the expected observations
- Survey: a study in which data is collected as reported by individuals
- Variable: a characteristic of interest for each person or object in a population
Rendering (computer graphics)
Rendering or image synthesis is the automatic process of generating a photorealistic or non-photorealistic image from a 2D or 3D model (or models in what collectively could be called a scene file) by means of computer programs. Also, the results of displaying such a model can be called a rendering. A scene file contains objects in a strictly defined language or data structure; it would contain geometry, viewpoint, texture, lighting, and shading information as a description of the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" may be by analogy with an "artist's rendering" of a scene.
Though the technical details of rendering methods vary, the general challenges to overcome in producing a 2D image from a 3D representation stored in a scene file are outlined as the graphics pipeline along a rendering device, such as a GPU. A GPU is a purpose-built device able to assist a CPU in performing complex rendering calculations. If a scene is to look relatively realistic and predictable under virtual lighting, the rendering software should solve the rendering equation. The rendering equation doesn't account for all lighting phenomena, but is a general lighting model for computer-generated imagery. 'Rendering' is also used to describe the process of calculating effects in a video editing program to produce final video output.
Rendering is one of the major sub-topics of 3D computer graphics, and in practice is always connected to the others. In the graphics pipeline, it is the last major step, giving the final appearance to the models and animation. With the increasing sophistication of computer graphics since the 1970s, it has become a more distinct subject.
Rendering has uses in architecture, video games, simulators, movie or TV visual effects, and design visualization, each employing a different balance of features and techniques. As a product, a wide variety of renderers are available. Some are integrated into larger modeling and animation packages, some are stand-alone, some are free open-source projects. On the inside, a renderer is a carefully engineered program, based on a selective mixture of disciplines related to: light physics, visual perception, mathematics, and software development.
In the case of 3D graphics, rendering may be done slowly, as in pre-rendering, or in realtime. Pre-rendering is a computationally intensive process that is typically used for movie creation, while real-time rendering is often done for 3D video games which rely on the use of graphics cards with 3D hardware accelerators.
- 1 Usage
- 2 Features
- 3 Techniques
- 4 Radiosity
- 5 Sampling and filtering
- 6 Optimization
- 7 Academic core
- 8 Chronology of important published ideas
- 9 See also
- 10 References
- 11 Further reading
- 12 External links
Usage

When the pre-image (usually a wireframe sketch) is complete, rendering is used, which adds in bitmap textures or procedural textures, lights, bump mapping, and relative position to other objects. The result is a completed image the consumer or intended viewer sees.
For movie animations, several images (frames) must be rendered, and stitched together in a program capable of making an animation of this sort. Most 3D image editing programs can do this.
Features

A rendered image can be understood in terms of a number of visible features. Rendering research and development has been largely motivated by finding ways to simulate these efficiently. Some relate directly to particular algorithms and techniques, while others are produced together.
- Shading – how the color and brightness of a surface varies with lighting
- Texture-mapping – a method of applying detail to surfaces
- Bump-mapping – a method of simulating small-scale bumpiness on surfaces
- Fogging/participating medium – how light dims when passing through non-clear atmosphere or air
- Shadows – the effect of obstructing light
- Soft shadows – varying darkness caused by partially obscured light sources
- Reflection – mirror-like or highly glossy reflection
- Transparency (optics), transparency (graphic) or opacity – sharp transmission of light through solid objects
- Translucency – highly scattered transmission of light through solid objects
- Refraction – bending of light associated with transparency
- Diffraction – bending, spreading, and interference of light passing by an object or aperture that disrupts the ray
- Indirect illumination – surfaces illuminated by light reflected off other surfaces, rather than directly from a light source (also known as global illumination)
- Caustics (a form of indirect illumination) – reflection of light off a shiny object, or focusing of light through a transparent object, to produce bright highlights on another object
- Depth of field – objects appear blurry or out of focus when too far in front of or behind the object in focus
- Motion blur – objects appear blurry due to high-speed motion, or the motion of the camera
- Non-photorealistic rendering – rendering of scenes in an artistic style, intended to look like a painting or drawing
Techniques

Many rendering algorithms have been researched, and software used for rendering may employ a number of different techniques to obtain a final image.
Tracing every particle of light in a scene is nearly always completely impractical and would take a stupendous amount of time. Even tracing a portion large enough to produce an image takes an inordinate amount of time if the sampling is not intelligently restricted.
Therefore, a few loose families of more-efficient light transport modelling techniques have emerged:
- rasterization, including scanline rendering, geometrically projects objects in the scene to an image plane, without advanced optical effects;
- ray casting considers the scene as observed from a specific point of view, calculating the observed image based only on geometry and very basic optical laws of reflection intensity, and perhaps using Monte Carlo techniques to reduce artifacts;
- ray tracing is similar to ray casting, but employs more advanced optical simulation, and usually uses Monte Carlo techniques to obtain more realistic results at a speed that is often orders of magnitude slower.
The fourth type of light transport technique, radiosity, is not usually implemented as a rendering technique, but instead calculates the passage of light as it leaves the light source and illuminates surfaces. These surfaces are usually rendered to the display using one of the other three techniques.
Most advanced software combines two or more of the techniques to obtain good-enough results at reasonable cost.
Another distinction is between image order algorithms, which iterate over pixels of the image plane, and object order algorithms, which iterate over objects in the scene. Generally object order is more efficient, as there are usually fewer objects in a scene than pixels.
Scanline rendering and rasterisation
A high-level representation of an image necessarily contains elements in a different domain from pixels. These elements are referred to as primitives. In a schematic drawing, for instance, line segments and curves might be primitives. In a graphical user interface, windows and buttons might be the primitives. In rendering of 3D models, triangles and polygons in space might be primitives.
If a pixel-by-pixel (image order) approach to rendering is impractical or too slow for some task, then a primitive-by-primitive (object order) approach to rendering may prove useful. Here, one loops through each of the primitives, determines which pixels in the image it affects, and modifies those pixels accordingly. This is called rasterization, and is the rendering method used by all current graphics cards.
Rasterization is frequently faster than pixel-by-pixel rendering. First, large areas of the image may be empty of primitives; rasterization will ignore these areas, but pixel-by-pixel rendering must pass through them. Second, rasterization can improve cache coherency and reduce redundant work by taking advantage of the fact that the pixels occupied by a single primitive tend to be contiguous in the image. For these reasons, rasterization is usually the approach of choice when interactive rendering is required; however, the pixel-by-pixel approach can often produce higher-quality images and is more versatile because it does not depend on as many assumptions about the image as rasterization.
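To make the primitive-by-primitive loop concrete, here is a minimal rasterizer sketch in Python. It assumes triangles with already-projected 2D vertex coordinates and flat (single-color) shading; the edge-function inside test and the toy scene at the bottom are our own illustrative choices, not a description of any particular graphics card's pipeline.

```python
# Object-order rasterization sketch: loop over primitives (here, flat-shaded
# 2D triangles), find the pixels each one covers, and write those pixels.

def edge(ax, ay, bx, by, px, py):
    # Signed area of triangle (a, b, p); its sign says on which side of
    # the edge a->b the point p lies.
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax)

def rasterize(triangles, width, height):
    image = [[(0, 0, 0)] * width for _ in range(height)]
    for (v0, v1, v2), color in triangles:          # one pass per primitive
        xs, ys = [v0[0], v1[0], v2[0]], [v0[1], v1[1], v2[1]]
        # Visit only the pixels inside the triangle's bounding box.
        for y in range(max(0, min(ys)), min(height, max(ys) + 1)):
            for x in range(max(0, min(xs)), min(width, max(xs) + 1)):
                w0 = edge(*v1, *v2, x + 0.5, y + 0.5)
                w1 = edge(*v2, *v0, x + 0.5, y + 0.5)
                w2 = edge(*v0, *v1, x + 0.5, y + 0.5)
                # Pixel center is inside if all edge tests agree in sign.
                if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
                   (w0 <= 0 and w1 <= 0 and w2 <= 0):
                    image[y][x] = color
    return image

# One red triangle on a 64 x 64 canvas.
img = rasterize([(((5, 5), (60, 20), (20, 60)), (255, 0, 0))], 64, 64)
```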
The older form of rasterization is characterized by rendering an entire face (primitive) as a single color. Alternatively, rasterization can be done in a more complicated manner by first rendering the vertices of a face and then rendering the pixels of that face as a blending of the vertex colors. This version of rasterization has overtaken the old method as it allows the graphics to flow without complicated textures (a rasterized image when used face by face tends to have a very block-like effect if not covered in complex textures; the faces are not smooth because there is no gradual color change from one primitive to the next). This newer method of rasterization utilizes the graphics card's more taxing shading functions and still achieves better performance because the simpler textures stored in memory use less space. Sometimes designers will use one rasterization method on some faces and the other method on others based on the angle at which that face meets other joined faces, thus increasing speed and not hurting the overall effect.
Ray casting
In ray casting the geometry which has been modeled is parsed pixel by pixel, line by line, from the point of view outward, as if casting rays out from the point of view. Where an object is intersected, the color value at the point may be evaluated using several methods. In the simplest, the color value of the object at the point of intersection becomes the value of that pixel. The color may be determined from a texture-map. A more sophisticated method is to modify the colour value by an illumination factor, but without calculating the relationship to a simulated light source. To reduce artifacts, a number of rays in slightly different directions may be averaged.
Rough simulations of optical properties may be additionally employed: a simple calculation of the ray from the object to the point of view is made. Another calculation is made of the angle of incidence of light rays from the light source(s), and from these as well as the specified intensities of the light sources, the value of the pixel is calculated. Another simulation uses illumination plotted from a radiosity algorithm, or a combination of these two.
Raycasting is primarily used for realtime simulations, such as those used in 3D computer games and cartoon animations, where detail is not important, or where it is more efficient to manually fake the details in order to obtain better performance in the computational stage. This is usually the case when a large number of frames need to be animated. The resulting surfaces have a characteristic 'flat' appearance when no additional tricks are used, as if objects in the scene were all painted with matte finish.
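The following Python sketch shows this idea end to end: one ray per pixel, the nearest sphere intersection wins, and the pixel color is the sphere's own color scaled by a simple illumination factor (surface normal against light direction), with no shadows or bounces. The pinhole camera, scene contents, and light direction are all illustrative assumptions.

```python
import math

def hit_sphere(direction, center, radius):
    # Nearest t > 0 with |t*direction - center|^2 = radius^2 (camera at the
    # origin; direction is unit length, so the quadratic's leading term is 1).
    b = -2 * sum(d * c for d, c in zip(direction, center))
    c = sum(ci * ci for ci in center) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def cast(width, height, spheres, light_dir):
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Pinhole camera at the origin looking down -z.
            d = [(x - width / 2) / width, (height / 2 - y) / height, -1.0]
            norm = math.sqrt(sum(v * v for v in d))
            d = [v / norm for v in d]
            nearest = None
            for center, radius, color in spheres:
                t = hit_sphere(d, center, radius)
                if t is not None and (nearest is None or t < nearest[0]):
                    nearest = (t, center, radius, color)
            if nearest is None:
                row.append((0, 0, 0))          # background
                continue
            t, center, radius, color = nearest
            point = [t * v for v in d]
            normal = [(p - c) / radius for p, c in zip(point, center)]
            # Illumination factor: how directly the surface faces the light.
            shade = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
            row.append(tuple(int(ch * shade) for ch in color))
        image.append(row)
    return image

img = cast(80, 60, [((0.0, 0.0, -3.0), 1.0, (255, 80, 80))], (0.577, 0.577, 0.577))
```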
Ray tracing

Ray tracing aims to simulate the natural flow of light, interpreted as particles. Often, ray tracing methods are utilized to approximate the solution to the rendering equation by applying Monte Carlo methods to it. Some of the most used methods are path tracing, bidirectional path tracing, or Metropolis light transport, but semirealistic methods are also in use, such as Whitted-style ray tracing, or hybrids. While most implementations let light propagate on straight lines, applications exist to simulate relativistic spacetime effects.
In a final, production quality rendering of a ray traced work, multiple rays are generally shot for each pixel, and traced not just to the first object of intersection, but rather, through a number of sequential 'bounces', using the known laws of optics such as "angle of incidence equals angle of reflection" and more advanced laws that deal with refraction and surface roughness.
Once the ray either encounters a light source, or more probably once a set limiting number of bounces has been evaluated, then the surface illumination at that final point is evaluated using techniques described above, and the changes along the way through the various bounces evaluated to estimate a value observed at the point of view. This is all repeated for each sample, for each pixel.
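Here is a structural sketch of that bounce loop in Python. The helpers `nearest_hit` and `direct_light` are hypothetical stand-ins for scene intersection and the local shading step described above; only the recursion pattern and the mirror-reflection law are the point.

```python
MAX_BOUNCES = 4   # the "set limiting number of bounces"

def reflect(d, n):
    # Mirror law: angle of incidence equals angle of reflection.
    k = 2 * sum(di * ni for di, ni in zip(d, n))
    return [di - k * ni for di, ni in zip(d, n)]

def trace(origin, direction, scene, depth=0):
    hit = nearest_hit(origin, direction, scene)   # hypothetical scene query
    if hit is None:
        return (0.0, 0.0, 0.0)                    # ray escaped the scene
    local = direct_light(hit, scene)              # hypothetical local shading
    if depth >= MAX_BOUNCES or hit.reflectivity == 0:
        return local
    # Spawn one mirror bounce and blend its estimate with the local shading.
    bounced = trace(hit.point, reflect(direction, hit.normal), scene, depth + 1)
    r = hit.reflectivity
    return tuple((1 - r) * l + r * b for l, b in zip(local, bounced))
```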
In distribution ray tracing, at each point of intersection, multiple rays may be spawned. In path tracing, however, only a single ray or none is fired at each intersection, utilizing the statistical nature of Monte Carlo experiments.
As a brute-force method, ray tracing has been too slow to consider for real-time, and until recently too slow even to consider for short films of any degree of quality, although it has been used for special effects sequences, and in advertising, where a short portion of high quality (perhaps even photorealistic) footage is required.
However, efforts at optimizing to reduce the number of calculations needed in portions of a work where detail is not high or does not depend on ray tracing features have led to a realistic possibility of wider use of ray tracing. There is now some hardware accelerated ray tracing equipment, at least in prototype phase, and some game demos which show use of real-time software or hardware ray tracing.
Radiosity

Radiosity is a method which attempts to simulate the way in which directly illuminated surfaces act as indirect light sources that illuminate other surfaces. This produces more realistic shading and seems to better capture the 'ambience' of an indoor scene. A classic example is the way that shadows 'hug' the corners of rooms.
The optical basis of the simulation is that some diffused light from a given point on a given surface is reflected in a large spectrum of directions and illuminates the area around it.
The simulation technique may vary in complexity. Many renderings have a very rough estimate of radiosity, simply illuminating an entire scene very slightly with a factor known as ambiance. However, when advanced radiosity estimation is coupled with a high quality ray tracing algorithm, images may exhibit convincing realism, particularly for indoor scenes.
In advanced radiosity simulation, recursive, finite-element algorithms 'bounce' light back and forth between surfaces in the model, until some recursion limit is reached. The colouring of one surface in this way influences the colouring of a neighbouring surface, and vice versa. The resulting values of illumination throughout the model (sometimes including for empty spaces) are stored and used as additional inputs when performing calculations in a ray-casting or ray-tracing model.
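In matrix form, this bounce-until-settled process amounts to solving B = E + ρ(F·B), where B holds the radiosity of each patch, E its emission, ρ its reflectivity, and F the form-factor matrix. A short iterative sketch in Python with NumPy follows; the three-patch scene and all of its numbers are made up for illustration.

```python
import numpy as np

def solve_radiosity(E, rho, F, bounces=50):
    # Repeatedly bounce light between patches: B <- E + rho * (F @ B).
    B = E.copy()
    for _ in range(bounces):
        B = E + rho * (F @ B)
    return B

E   = np.array([1.0, 0.0, 0.0])      # only patch 0 emits light
rho = np.array([0.5, 0.8, 0.3])      # per-patch reflectivities
F   = np.array([[0.0, 0.4, 0.2],     # made-up form factors: how much of one
                [0.4, 0.0, 0.3],     # patch's light reaches another
                [0.2, 0.3, 0.0]])
print(solve_radiosity(E, rho, F))    # converged patch radiosities
```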
Due to the iterative/recursive nature of the technique, complex objects are particularly slow to emulate. Prior to the standardization of rapid radiosity calculation, some digital artists used a technique referred to loosely as false radiosity by darkening areas of texture maps corresponding to corners, joints and recesses, and applying them via self-illumination or diffuse mapping for scanline rendering. Even now, advanced radiosity calculations may be reserved for calculating the ambiance of the room, from the light reflecting off walls, floor and ceiling, without examining the contribution that complex objects make to the radiosity—or complex objects may be replaced in the radiosity calculation with simpler objects of similar size and texture.
Radiosity calculations are viewpoint independent which increases the computations involved, but makes them useful for all viewpoints. If there is little rearrangement of radiosity objects in the scene, the same radiosity data may be reused for a number of frames, making radiosity an effective way to improve on the flatness of ray casting, without seriously impacting the overall rendering time-per-frame.
Because of this, radiosity is a prime component of leading real-time rendering methods, and has been used from beginning-to-end to create a large number of well-known recent feature-length animated 3D-cartoon films.
Sampling and filtering
One problem that any rendering system must deal with, no matter which approach it takes, is the sampling problem. Essentially, the rendering process tries to depict a continuous function from image space to colors by using a finite number of pixels. As a consequence of the Nyquist–Shannon sampling theorem (or Kotelnikov theorem), any spatial waveform that can be displayed must consist of at least two pixels, which is proportional to image resolution. In simpler terms, this expresses the idea that an image cannot display details, peaks or troughs in color or intensity, that are smaller than one pixel.
If a naive rendering algorithm is used without any filtering, high frequencies in the image function will cause ugly aliasing to be present in the final image. Aliasing typically manifests itself as jaggies, or jagged edges on objects where the pixel grid is visible. In order to remove aliasing, all rendering algorithms (if they are to produce good-looking images) must use some kind of low-pass filter on the image function to remove high frequencies, a process called antialiasing.
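The simplest such filter to sketch is supersampling: shade an n × n grid of subsamples inside each pixel and average them, which acts as a box low-pass filter on the image function. In this Python sketch, `shade(x, y)` is a hypothetical stand-in for any of the renderers above, returning an (r, g, b) color for a point in image space.

```python
def render_antialiased(width, height, shade, n=4):
    # Average an n x n grid of subsamples per pixel (a box low-pass filter).
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            acc = [0.0, 0.0, 0.0]
            for sy in range(n):
                for sx in range(n):
                    c = shade(x + (sx + 0.5) / n, y + (sy + 0.5) / n)
                    acc = [a + ci for a, ci in zip(acc, c)]
            row.append(tuple(a / (n * n) for a in acc))
        image.append(row)
    return image
```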
Optimization

Optimizations used by an artist when a scene is being developed
Due to the large number of calculations, a work in progress is usually only rendered in detail appropriate to the portion of the work being developed at a given time, so in the initial stages of modeling, wireframe and ray casting may be used, even where the target output is ray tracing with radiosity. It is also common to render only parts of the scene at high detail, and to remove objects that are not important to what is currently being developed.
Common optimizations for real time rendering
For real-time rendering, it is appropriate to simplify one or more common approximations and tune the renderer to the exact parameters of the scenery in question to get the most 'bang for the buck'.
Academic core

The implementation of a realistic renderer always has some basic element of physical simulation or emulation — some computation which resembles or abstracts a real physical process.
The term "physically based" indicates the use of physical models and approximations that are more general and widely accepted outside rendering. A particular set of related techniques have gradually become established in the rendering community.
The basic concepts are moderately straightforward, but intractable to calculate; and a single elegant algorithm or approach has been elusive for more general purpose renderers. In order to meet demands of robustness, accuracy and practicality, an implementation will be a complex combination of different techniques.
Rendering research is concerned with both the adaptation of scientific models and their efficient application.
The rendering equation
This is the key academic/theoretical concept in rendering. It serves as the most abstract formal expression of the non-perceptual aspect of rendering. All more complete algorithms can be seen as solutions to particular formulations of this equation.
In one common form, the equation reads:

$$L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} L_i(x, \omega_i) \, f_r(x, \omega_i, \omega_o) \, (\omega_i \cdot n) \, \mathrm{d}\omega_i$$

Meaning: at a particular position and direction, the outgoing light (Lo) is the sum of the emitted light (Le) and the reflected light. The reflected light is the sum of the incoming light (Li) from all directions, multiplied by the surface reflection (fr) and the incoming angle term. By connecting outward light to inward light, via an interaction point, this equation stands for the whole 'light transport' — all the movement of light — in a scene.
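Since the integral cannot be evaluated in closed form for interesting scenes, renderers typically estimate it. A minimal Monte Carlo sketch in Python: average Li · fr · cos θ over random hemisphere directions and divide by the sampling density. Here radiance is treated as a single scalar channel for brevity, and `incoming_light` and `brdf` are hypothetical stand-ins for scene queries.

```python
import math, random

def sample_hemisphere(normal):
    # Uniform direction on the sphere (rejection sampling), flipped into the
    # hemisphere around the surface normal.
    while True:
        v = [random.uniform(-1.0, 1.0) for _ in range(3)]
        n2 = sum(x * x for x in v)
        if 0.0 < n2 <= 1.0:
            v = [x / math.sqrt(n2) for x in v]
            return v if sum(a * b for a, b in zip(v, normal)) > 0 else [-x for x in v]

def reflected_radiance(point, normal, outgoing, samples=256):
    pdf = 1.0 / (2.0 * math.pi)        # density of uniform hemisphere sampling
    total = 0.0
    for _ in range(samples):
        w = sample_hemisphere(normal)  # random incoming direction
        cos_theta = sum(wi * ni for wi, ni in zip(w, normal))
        total += incoming_light(point, w) * brdf(point, w, outgoing) * cos_theta / pdf
    return total / samples             # Monte Carlo estimate of the integral
```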
The bidirectional reflectance distribution function
The bidirectional reflectance distribution function (BRDF) expresses a simple model of light interaction with a surface as follows:

$$f_r(x, \omega_i, \omega_o) = \frac{\mathrm{d}L_o(x, \omega_o)}{L_i(x, \omega_i) \, (\omega_i \cdot n) \, \mathrm{d}\omega_i}$$

Light interaction is often approximated by the even simpler models of diffuse reflection and specular reflection, although both of these can also be expressed as BRDFs.
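As a concrete illustration, here is a Python sketch of those two simple models folded into one BRDF-style function: a Lambertian (diffuse) term plus a Phong-style specular lobe. The coefficients and the Phong lobe itself are illustrative choices, not the article's prescription.

```python
import math

def simple_brdf(normal, to_light, to_viewer, kd=0.7, ks=0.3, shininess=32):
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    diffuse = kd / math.pi                 # Lambertian: the same in all directions
    # Phong lobe: strongest near the mirror direction of the light.
    k = 2 * dot(to_light, normal)
    mirror = [k * n - l for n, l in zip(normal, to_light)]
    specular = ks * max(0.0, dot(mirror, to_viewer)) ** shininess
    return diffuse + specular
```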
Rendering is practically exclusively concerned with the particle aspect of light physics — known as geometrical optics. Treating light, at its basic level, as particles bouncing around is a simplification, but appropriate: the wave aspects of light are negligible in most scenes, and are significantly more difficult to simulate. Notable wave aspect phenomena include diffraction (as seen in the colours of CDs and DVDs) and polarisation (as seen in LCDs). Both types of effect, if needed, are made by appearance-oriented adjustment of the reflection model.
Though it receives less attention, an understanding of human visual perception is valuable to rendering. This is mainly because image displays and human perception have restricted ranges. A renderer can simulate an almost infinite range of light brightness and color, but current displays — movie screen, computer monitor, etc. — cannot handle so much, and something must be discarded or compressed. Human perception also has limits, and so does not need to be given large-range images to create realism. This can help solve the problem of fitting images into displays, and, furthermore, suggest what short-cuts could be used in the rendering simulation, since certain subtleties won't be noticeable. This related subject is tone mapping.
Rendering for movies often takes place on a network of tightly connected computers known as a render farm.
The current state of the art in 3-D image description for movie creation is the mental ray scene description language designed at mental images and the RenderMan Shading Language designed at Pixar (compare with simpler 3D file formats such as VRML, or APIs such as OpenGL and DirectX tailored for 3D hardware accelerators).
Other renderers (including proprietary ones) can be and sometimes are used, but most other renderers tend to miss one or more of the often-needed features such as good texture filtering, texture caching, programmable shaders, high-end geometry types like hair, subdivision or NURBS surfaces with tessellation on demand, geometry caching, ray tracing with geometry caching, high-quality shadow mapping, speed, or patent-free implementations. Other highly sought features these days may include interactive photorealistic rendering (IPR) and hardware rendering/shading.
Chronology of important published ideas
- 1968 Ray casting
- 1970 Scanline rendering
- 1971 Gouraud shading
- 1973 Phong shading
- 1973 Phong reflection
- 1973 Diffuse reflection
- 1973 Specular highlight
- 1973 Specular reflection
- 1974 Sprites
- 1974 Scrolling
- 1974 Texture mapping
- 1974 Z-buffering
- 1976 Environment mapping
- 1977 Blinn shading
- 1977 Side-scrolling
- 1977 Shadow volumes
- 1978 Shadow mapping
- 1978 Bump mapping
- 1979 Tile map
- 1980 BSP trees
- 1980 Ray tracing
- 1981 Parallax scrolling
- 1981 Sprite zooming
- 1981 Cook shader
- 1983 MIP maps
- 1984 Octree ray tracing
- 1984 Alpha compositing
- 1984 Distributed ray tracing
- 1984 Radiosity
- 1985 Row/column scrolling
- 1985 Hemicube radiosity
- 1986 Light source tracing
- 1986 Rendering equation
- 1987 Reyes rendering
- 1988 Depth cue
- 1988 Distance fog
- 1988 Tiled rendering
- 1991 Xiaolin Wu line anti-aliasing
- 1991 Hierarchical radiosity
- 1993 Texture filtering
- 1993 Perspective correction
- 1993 Transform, clipping, and lighting
- 1993 Directional lighting
- 1993 Trilinear interpolation
- 1993 Z-culling
- 1993 Oren–Nayar reflectance
- 1993 Tone mapping
- 1993 Subsurface scattering
- 1994 Ambient Occlusion
- 1995 Hidden surface determination
- 1995 Photon mapping
- 1996 Multisample anti-aliasing
- 1997 Metropolis light transport
- 1997 Instant Radiosity
- 1998 Hidden surface removal
- 2000 Pose space deformation
- 2002 Precomputed Radiance Transfer
See also

- 2D computer graphics
- 3D computer graphics
- 3D rendering
- Architectural rendering
- Chromatic aberration
- Displacement mapping
- Global illumination
- Graphics pipeline
- High dynamic range rendering
- Image-based modeling and rendering
- Motion blur
- Non-photorealistic rendering
- Normal mapping
- Painter's algorithm
- Physically based rendering
- Raster image processor
- Ray tracing
- Real-time computer graphics
- Scanline rendering/Scanline algorithm
- Software rendering
- Sprite (computer graphics)
- Unbiased rendering
- Vector graphics
- Virtual model
- Virtual studio
- Volume rendering
- Z-buffer algorithms
- "Relativistic Ray-Tracing: Simulating the Visual Appearance of Rapidly Moving Objects". CiteSeerX .
- A brief introduction to RenderMan
- Appel, A. (1968). "Some techniques for shading machine renderings of solids" (PDF). Proceedings of the Spring Joint Computer Conference. 32. pp. 37–49.
- Bouknight, W. J. (1970). "A procedure for generation of three-dimensional half-tone computer graphics presentations". Communications of the ACM. 13 (9): 527–536. doi:10.1145/362736.362739.
- Gouraud, H. (1971). "Continuous shading of curved surfaces" (PDF). IEEE Transactions on Computers. 20 (6): 623–629. doi:10.1109/t-c.1971.223313.
- University of Utah School of Computing, http://www.cs.utah.edu/school/history/#phong-ref
- Phong, B-T (1975). "Illumination for computer generated pictures" (PDF). Communications of the ACM. 18 (6): 311–316. doi:10.1145/360825.360839.
- Bui Tuong Phong, Illumination for computer generated pictures, Communications of ACM 18 (1975), no. 6, 311–317.
- Catmull, E. (1974). A subdivision algorithm for computer display of curved surfaces (PDF) (PhD thesis). University of Utah.
- Blinn, J.F.; Newell, M.E. (1976). "Texture and reflection in computer generated images". Communications of the ACM. 19: 542–546. CiteSeerX . doi:10.1145/360349.360353.
- Crow, F.C. (1977). "Shadow algorithms for computer graphics" (PDF). Computer Graphics (Proceedings of SIGGRAPH 1977). 11. pp. 242–248.
- Williams, L. (1978). "Casting curved shadows on curved surfaces". Computer Graphics (Proceedings of SIGGRAPH 1978). 12. pp. 270–274. CiteSeerX .
- Blinn, J.F. (1978). Simulation of wrinkled surfaces (PDF). Computer Graphics (Proceedings of SIGGRAPH 1978). 12. pp. 286–292.
- Fuchs, H.; Kedem, Z.M.; Naylor, B.F. (1980). On visible surface generation by a priori tree structures. Computer Graphics (Proceedings of SIGGRAPH 1980). 14. pp. 124–133. CiteSeerX .
- Whitted, T. (1980). "An improved illumination model for shaded display". Communications of the ACM. 23 (6): 343–349. CiteSeerX . doi:10.1145/358876.358882.
- Cook, R.L.; Torrance, K.E. (1981). A reflectance model for computer graphics. Computer Graphics (Proceedings of SIGGRAPH 1981). 15. pp. 307–316. CiteSeerX .
- Williams, L. (1983). Pyramidal parametrics. Computer Graphics (Proceedings of SIGGRAPH 1983). 17. pp. 1–11. CiteSeerX .
- Glassner, A.S. (1984). "Space subdivision for fast ray tracing". IEEE Computer Graphics & Applications. 4 (10): 15–22. doi:10.1109/mcg.1984.6429331.
- Porter, T.; Duff, T. (1984). Compositing digital images (PDF). Computer Graphics (Proceedings of SIGGRAPH 1984). 18. pp. 253–259.
- Cook, R.L.; Porter, T.; Carpenter, L. (1984). Distributed ray tracing (PDF). Computer Graphics (Proceedings of SIGGRAPH 1984). 18. pp. 137–145.
- Goral, C.; Torrance, K.E.; Greenberg, D.P.; Battaile, B. (1984). Modeling the interaction of light between diffuse surfaces. Computer Graphics (Proceedings of SIGGRAPH 1984). 18. pp. 213–222. CiteSeerX .
- Cohen, M.F.; Greenberg, D.P. (1985). The hemi-cube: a radiosity solution for complex environments (PDF). Computer Graphics (Proceedings of SIGGRAPH 1985). 19. pp. 31–40. doi:10.1145/325165.325171.
- Arvo, J. (1986). Backward ray tracing. SIGGRAPH 1986 Developments in Ray Tracing course notes. CiteSeerX .
- Kajiya, J. (1986). The rendering equation. Computer Graphics (Proceedings of SIGGRAPH 1986). 20. pp. 143–150. CiteSeerX .
- Cook, R.L.; Carpenter, L.; Catmull, E. (1987). The Reyes image rendering architecture (PDF). Computer Graphics (Proceedings of SIGGRAPH 1987). 21. pp. 95–102.
- Wu, Xiaolin (July 1991). "An efficient antialiasing technique". Computer Graphics. 25 (4): 143–152. ISBN 0-89791-436-8. doi:10.1145/127719.122734.
- Wu, Xiaolin (1991). "Fast Anti-Aliased Circle Generation". In James Arvo (Ed.). Graphics Gems II. San Francisco: Morgan Kaufmann. pp. 446–450. ISBN 0-12-064480-0.
- Hanrahan, P.; Salzman, D.; Aupperle, L. (1991). A rapid hierarchical radiosity algorithm. Computer Graphics (Proceedings of SIGGRAPH 1991). 25. pp. 197–206. CiteSeerX .
- M. Oren and S.K. Nayar, "Generalization of Lambert's Reflectance Model". SIGGRAPH. pp.239-246, Jul, 1994
- Tumblin, J.; Rushmeier, H.E. (1993). "Tone reproduction for realistic computer generated images" (PDF). IEEE Computer Graphics & Applications. 13 (6): 42–48. doi:10.1109/38.252554.
- Hanrahan, P.; Krueger, W. (1993). Reflection from layered surfaces due to subsurface scattering. Computer Graphics (Proceedings of SIGGRAPH 1993). 27. pp. 165–174. CiteSeerX .
- Jensen, H.W.; Christensen, N.J. (1995). "Photon maps in bidirectional monte carlo ray tracing of complex objects". Computers & Graphics. 19 (2): 215–224. CiteSeerX . doi:10.1016/0097-8493(94)00145-o.
- Veach, E.; Guibas, L. (1997). Metropolis light transport. Computer Graphics (Proceedings of SIGGRAPH 1997). 16. pp. 65–76. CiteSeerX .
- Keller, A. (1997). Instant Radiosity. Computer Graphics (Proceedings of SIGGRAPH 1997). 24. pp. 49–56. CiteSeerX .
- Sloan, P.; Kautz, J.; Snyder, J. (2002). Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low Frequency Lighting Environments (PDF). Computer Graphics (Proceedings of SIGGRAPH 2002). 29. pp. 527–536.
Further reading

- Pharr, Matt; Humphreys, Greg (2004). Physically based rendering from theory to implementation. Amsterdam: Elsevier/Morgan Kaufmann. ISBN 0-12-553180-X.
- Shirley, Peter; Morley, R. Keith (2003). Realistic ray tracing (2 ed.). Natick, Mass.: AK Peters. ISBN 1-56881-198-5.
- Philip Dutré; Bekaert, Philippe; Bala, Kavita (2003). Advanced global illumination ([Online-Ausg.] ed.). Natick, Mass.: A K Peters. ISBN 1-56881-177-2.
- Akenine-Möller, Tomas; Haines, Eric (2004). Real-time rendering (2 ed.). Natick, Mass.: AK Peters. ISBN 1-56881-182-9.
- Strothotte, Thomas; Schlechtweg, Stefan (2002). Non-photorealistic computer graphics modeling, rendering, and animation (2 ed.). San Francisco, CA: Morgan Kaufmann. ISBN 1-55860-787-0.
- Gooch, Bruce; Gooch, Amy (2001). Non-photorealistic rendering. Natick, Mass.: A K Peters. ISBN 1-56881-133-0.
- Jensen, Henrik Wann (2001). Realistic image synthesis using photon mapping ([Nachdr.] ed.). Natick, Mass.: AK Peters. ISBN 1-56881-147-0.
- Blinn, Jim (1996). Jim Blinn's corner : a trip down the graphics pipeline. San Francisco, Calif.: Morgan Kaufmann Publishers. ISBN 1-55860-387-5.
- Glassner, Andrew S. (2004). Principles of digital image synthesis (2 ed.). San Francisco, Calif.: Kaufmann. ISBN 1-55860-276-3.
- Cohen, Michael F.; Wallace, John R. (1998). Radiosity and realistic image synthesis (3 ed.). Boston, Mass. [u.a.]: Academic Press Professional. ISBN 0-12-178270-0.
- Foley, James D.; Van Dam; Feiner; Hughes (1990). Computer graphics : principles and practice (2 ed.). Reading, Mass.: Addison-Wesley. ISBN 0-201-12110-7.
- Andrew S. Glassner, ed. (1989). An introduction to ray tracing (3 ed.). London [u.a.]: Acad. Press. ISBN 0-12-286160-4.
- Ward, Gregory J. (July 1994). "The RADIANCE Lighting Simulation and Rendering System". SIGGRAPH 94: 459–72.
The prokaryotic cell is simpler than the eukaryotic cell at every level, with one exception: The cell envelope is more complex.
Prokaryotes have no true nuclei; instead they package their DNA in a structure known as the nucleoid. The negatively charged DNA is at least partially neutralized by small polyamines and magnesium ions, but histone-like proteins exist in bacteria and presumably play a role similar to that of histones in eukaryotic chromatin.
Electron micrographs of a typical prokaryotic cell reveal the absence of a nuclear membrane and a mitotic apparatus. The exception to this rule is the planctomycetes, a divergent group of aquatic bacteria, which have a nucleoid surrounded by a nuclear envelope consisting of two membranes. The distinction between prokaryotes and eukaryotes that still holds is that prokaryotes have no eukaryotic-type mitotic apparatus. The nuclear region (Figure 2-6) is filled with DNA fibrils. The nucleoid of most bacterial cells consists of a single continuous circular molecule ranging in size from 0.58 to almost 10 million base pairs. However, a few bacteria have been shown to have two, three, or even four dissimilar chromosomes. For example, Vibrio cholerae and Brucella melitensis have two dissimilar chromosomes. There are exceptions to this rule of circularity because some prokaryotes (eg, Borrelia burgdorferi and Streptomyces coelicolor) have been shown to have a linear chromosome.
The nucleoid. A: Color-enhanced transmission electron micrograph of Escherichia coli with the DNA shown in red. (© CNRI/SPL/Photo Researchers, Inc.) B: Chromosome released from a gently lysed cell of E coli. Note how tightly packaged the DNA must be inside the bacterium. (© Dr. Gopal Murti/SPL/Photo Researchers.)
In bacteria, the number of nucleoids, and therefore the number of chromosomes, depends on the growth conditions. Rapidly growing bacteria have more nucleoids per cell than slowly growing ones; however, when multiple copies are present, they are all the same (ie, prokaryotic cells are haploid).
Prokaryotic cells lack autonomous plastids, such as mitochondria and chloroplasts; the electron transport enzymes are localized instead in the cytoplasmic membrane. The photosynthetic pigments (carotenoids, bacteriochlorophyll) of photosynthetic bacteria are contained in intracytoplasmic membrane systems of various morphologies. Membrane vesicles (chromatophores) or lamellae are commonly observed membrane types. Some photosynthetic bacteria have specialized nonunit membrane-enclosed structures called chlorosomes. In some Cyanobacteria (formerly known as blue-green algae), the photosynthetic membranes often form multilayered structures known as thylakoids (Figure 2-7). The major accessory pigments used for light harvesting are the phycobilins found on the outer surface of the thylakoid membranes.
Thin section of Synechocystis during division. Many structures are visible. (Reproduced from Stanier RY: The position of cyanobacteria in the world of phototrophs. Carlsberg Res Commun 42:77-98, 1977. With kind permission of Springer + Business Media.)
Bacteria often store reserve materials in the form of insoluble granules, which appear as refractile bodies in the cytoplasm when viewed by phase contrast microscopy. These so-called inclusion bodies almost always function in the storage of energy or as a reservoir of structural building blocks. Most cellular inclusions are bounded by a thin nonunit membrane consisting of lipid, which serves to separate the inclusion from the cytoplasm proper. One of the most common inclusion bodies consists of poly-β-hydroxybutyric acid (PHB), a lipid-like compound consisting of chains of β-hydroxybutyric acid units connected through ester linkages. PHB is produced when the source of nitrogen, sulfur, or phosphorus is limited and there is excess carbon in the medium (Figure 2-8A). Another storage product formed by prokaryotes when carbon is in excess is glycogen, which is a polymer of glucose. PHB and glycogen are used as carbon sources when protein and nucleic acid synthesis are resumed. A variety of prokaryotes are capable of oxidizing reduced sulfur compounds such as hydrogen sulfide and thiosulfate, producing intracellular granules of elemental sulfur (Figure 2-8B). As the reduced sulfur source becomes limiting, the sulfur in the granules is oxidized, usually to sulfate, and the granules slowly disappear. Many bacteria accumulate large reserves of inorganic phosphate in the form of granules of polyphosphate. These granules can be degraded and used as sources of phosphate for nucleic acid and phospholipid synthesis to support growth. These granules are sometimes termed volutin granules or metachromatic granules because they stain red with a blue dye. They are characteristic features of the corynebacteria (see Chapter 13).
Inclusion bodies in bacteria. A: Electron micrograph of Bacillus megaterium (30,500×) showing poly-β-hydroxybutyric acid inclusion body, PHB; cell wall, CW; nucleoid, N; plasma membrane, PM; “mesosome,” M; and ribosomes, R. (Reproduced with permission. © Ralph A. Slepecky/Visuals Unlimited.) B: Chromatium vinosum, a purple sulfur bacterium, with intracellular sulfur granules, bright field microscopy (2000×). (Reproduced with permission from Holt J (editor): The Shorter Bergey’s Manual of Determinative Bacteriology, 8th ed. Williams & Wilkins, 1977. Copyright Bergey’s Manual Trust.)
Certain groups of autotrophic bacteria that fix carbon dioxide to make their biochemical building blocks contain polyhedral bodies surrounded by a protein shell (carboxysomes) containing the key enzyme of CO2 fixation, ribulose bisphosphate carboxylase (see Figure 2-7). Magnetosomes are intracellular crystal particles of the iron mineral magnetite (Fe3O4) that allow certain aquatic bacteria to exhibit magnetotaxis (ie, migration or orientation of the cell with respect to the earth’s magnetic field). Magnetosomes are surrounded by a nonunit membrane containing phospholipids, proteins, and glycoproteins. Gas vesicles are found almost exclusively in microorganisms from aquatic habitats, where they provide buoyancy. The gas vesicle membrane is a 2-nm-thick layer of protein, impermeable to water and solutes but permeable to gases; thus, gas vesicles exist as gas-filled structures surrounded by the constituents of the cytoplasm (Figure 2-9).
Transverse section of a dividing cell of the cyanobacterium Microcystis species showing hexagonal stacking of the cylindric gas vesicles (31,500×). (Micrograph by HS Pankratz. Reproduced with permission from Walsby AE: Gas vesicles. Microbiol Rev 1994;58:94.)
Bacteria contain proteins resembling both the actin and nonactin cytoskeletal proteins of eukaryotic cells, as well as additional proteins that play cytoskeletal roles (Figure 2-10). Actin homologs (eg, MreB, Mbl) perform a variety of functions, helping to determine cell shape, segregate chromosomes, and localize proteins within the cell. Nonactin homologs (eg, FtsZ) and unique bacterial cytoskeletal proteins (eg, SecY, MinD) are involved in determining cell shape and in regulation of cell division and chromosome segregation.
The prokaryotic cytoskeleton. Visualization of the MreB-like cytoskeletal protein (Mbl) of Bacillus subtilis. The Mbl protein has been fused with green fluorescent protein, and live cells have been examined by fluorescence microscopy. A: Arrows point to the helical cytoskeleton cables that extend the length of the cells. B: Three of the cells from A are shown at a higher magnification. (Courtesy of Rut Carballido-Lopez and Jeff Errington.)
Prokaryotic cells are surrounded by complex envelope layers that differ in composition among the major groups. These structures protect the organisms from hostile environments, such as extreme osmolarity, harsh chemicals, and even antibiotics.
The bacterial cell membrane, also called the cytoplasmic membrane, is visible in electron micrographs of thin sections (see Figure 2-15). It is a typical “unit membrane” composed of phospholipids and upward of 200 different kinds of proteins. Proteins account for approximately 70% of the mass of the membrane, which is a considerably higher proportion than that of mammalian cell membranes. Figure 2-11 illustrates a model of membrane organization. The membranes of prokaryotes are distinguished from those of eukaryotic cells by the absence of sterols, the only exception being mycoplasmas that incorporate sterols, such as cholesterol, into their membranes when growing in sterol-containing media.
Bacterial plasma membrane structure. This diagram of the fluid mosaic model of bacterial membrane structure shows the integral proteins (green and red) floating in a lipid bilayer. Peripheral proteins (yellow) are associated loosely with the inner membrane surface. Small spheres represent the hydrophilic ends of membrane phospholipids and wiggly tails, the hydrophobic fatty acid chains. Other membrane lipids such as hopanoids (purple) may be present. For the sake of clarity, phospholipids are shown at a proportionately much larger size than in real membranes. (Reproduced with permission from Willey JM, Sherwood LM, Woolverton CJ [editors]: Prescott, Harley, and Klein’s Microbiology, 7th ed. McGraw-Hill; 2008. © The McGraw-Hill Companies, Inc.)
The cell membranes of the Archaea (see Chapter 1) differ from those of the Bacteria. Some Archaeal cell membranes contain unique lipids in which isoprenoid chains, rather than fatty acids, are linked to glycerol by an ether rather than an ester linkage. Some of these lipids have no phosphate groups, and therefore, they are not phospholipids. In other species, the cell membrane is made up of a lipid monolayer consisting of long lipids (about twice as long as a phospholipid) with glycerol ethers at both ends (diglycerol tetraethers). The molecules orient themselves with the polar glycerol groups on the surfaces and the nonpolar hydrocarbon chains in the interior. These unusual lipids contribute to the ability of many Archaea to grow under extreme environmental conditions such as high salt, low pH, or very high temperature.
The major functions of the cytoplasmic membrane are (1) selective permeability and transport of solutes; (2) electron transport and oxidative phosphorylation in aerobic species; (3) excretion of hydrolytic exoenzymes; (4) bearing the enzymes and carrier molecules that function in the biosynthesis of DNA, cell wall polymers, and membrane lipids; and (5) bearing the receptors and other proteins of the chemotactic and other sensory transduction systems.
At least 50% of the cytoplasmic membrane must be in the semifluid state for cell growth to occur. At low temperatures, this is achieved by greatly increased synthesis and incorporation of unsaturated fatty acids into the phospholipids of the cell membrane.
1. Permeability and transport—The cytoplasmic membrane forms a hydrophobic barrier impermeable to most hydrophilic molecules. However, several mechanisms (transport systems) exist that enable the cell to transport nutrients into and waste products out of the cell. These transport systems work against a concentration gradient to increase the concentration of nutrients inside the cell, a function that requires energy in some form. There are three general transport mechanisms involved in membrane transport: passive transport, active transport, and group translocation.
a. Passive transport—This mechanism relies on diffusion, uses no energy, and operates only when the solute is at higher concentration outside than inside the cell. Simple diffusion accounts for the entry of very few nutrients, including dissolved oxygen, carbon dioxide, and water itself. Simple diffusion provides neither speed nor selectivity. Facilitated diffusion also uses no energy so the solute never achieves an internal concentration greater than what exists outside the cell. However, facilitated diffusion is selective. Channel proteins form selective channels that facilitate the passage of specific molecules. Facilitated diffusion is common in eukaryotic microorganisms (eg, yeast) but is rare in prokaryotes. Glycerol is one of the few compounds that enters prokaryotic cells by facilitated diffusion.
b. Active transport—Many nutrients are concentrated more than a thousand-fold as a result of active transport. There are two types of active transport mechanisms depending on the source of energy used: ion-coupled transport and ATP-binding cassette (ABC) transport.
1) Ion-coupled transport—These systems move a molecule across the cell membrane at the expense of a previously established ion gradient such as proton-motive or sodium-motive force. There are three basic types: uniport, symport, and antiport (Figure 2-12). Ion-coupled transport is particularly common in aerobic organisms, which have an easier time generating an ion-motive force than do anaerobes. Uniporters catalyze the transport of a substrate independent of any coupled ion. Symporters catalyze the simultaneous transport of two substrates in the same direction by a single carrier; for example, an H+ gradient can permit symport of an oppositely charged ion (eg, glycine) or a neutral molecule (eg, galactose). Antiporters catalyze the simultaneous transport of two like-charged compounds in opposite directions by a common carrier (eg, H+:Na+). Approximately 40% of the substrates transported by E coli use this mechanism.
2) ABC transport—This mechanism uses ATP directly to transport solutes into the cell. In gram-negative bacteria, the transport of many nutrients is facilitated by specific binding proteins located in the periplasmic space; in gram-positive cells, the binding proteins are attached to the outer surface of the cell membrane. These proteins function by transferring the bound substrate to a membrane-bound protein complex. Hydrolysis of ATP is then triggered, and the energy is used to open the membrane pore and allow the unidirectional movement of the substrate into the cell. Approximately 40% of the substrates transported by E coli use this mechanism.
c. Group translocation—In addition to true transport, in which a solute is moved across the membrane without change in structure, bacteria use a process called group translocation (vectorial metabolism) to effect the net uptake of certain sugars (eg, glucose and mannose), the substrate becoming phosphorylated during the transport process. In a strict sense, group translocation is not active transport because no concentration gradient is involved. This process allows bacteria to use their energy resources efficiently by coupling transport with metabolism. In this process, a membrane carrier protein is first phosphorylated in the cytoplasm at the expense of phosphoenolpyruvate; the phosphorylated carrier protein then binds the free sugar at the exterior membrane face and transports it into the cytoplasm, releasing it as sugar phosphate. Such systems of sugar transport are called phosphotransferase systems. Phosphotransferase systems are also involved in movement toward these carbon sources (chemotaxis) and in the regulation of several other metabolic pathways (catabolite repression).
d. Special transport processes—Iron (Fe) is an essential nutrient for the growth of almost all bacteria. Under anaerobic conditions, Fe is generally in the +2 oxidation state and soluble. However, under aerobic conditions, Fe is generally in the +3 oxidation state and insoluble. The internal compartments of animals contain virtually no free Fe; it is sequestered in complexes with such proteins as transferrin and lactoferrin. Some bacteria solve this problem by secreting siderophores—compounds that chelate Fe and promote its transport as a soluble complex. One major group of siderophores consists of derivatives of hydroxamic acid (−CONHOH), which chelate Fe3+ very strongly. The iron–hydroxamate complex is actively transported into the cell by the cooperative action of a group of proteins that span the outer membrane, periplasm, and inner membrane. The iron is released, and the hydroxamate can exit the cell and be used again for iron transport.
Some pathogenic bacteria use a fundamentally different mechanism involving specific receptors that bind host transferrin and lactoferrin (as well as other iron-containing host proteins). The Fe is removed and transported into the cell by an energy-dependent process.
2. Electron transport and oxidative phosphorylation—The cytochromes and other enzymes and components of the respiratory chain, including certain dehydrogenases, are located in the cell membrane. The bacterial cell membrane is thus a functional analog of the mitochondrial membrane—a relationship which has been taken by many biologists to support the theory that mitochondria have evolved from symbiotic bacteria. The mechanism by which ATP generation is coupled to electron transport is discussed in Chapter 6.
3. Excretion of hydrolytic exoenzymes and pathogenicity proteins—All organisms that rely on macromolecular organic polymers as a source of nutrients (eg, proteins, polysaccharides, lipids) excrete hydrolytic enzymes that degrade the polymers to subunits small enough to penetrate the cell membrane. Higher animals secrete such enzymes into the lumen of the digestive tract; bacteria (both gram positive and gram negative) secrete them directly into the external medium or into the periplasmic space between the peptidoglycan layer and the outer membrane of the cell wall in the case of gram-negative bacteria (see The Cell Wall, later).
In gram-positive bacteria, proteins are secreted directly, but proteins secreted by gram-negative bacteria must traverse the outer membrane as well. Six pathways of protein secretion have been described in bacteria: the type I, type II, type III, type IV, type V, and type VI secretion systems. A schematic overview of the type I to V systems is presented in Figure 2-13. The type I and IV secretion systems have been described in both gram-negative and gram-positive bacteria, but the type II, III, V, and VI secretion systems have been found only in gram-negative bacteria. Proteins secreted by the type I and III pathways traverse the inner membrane (IM) and outer membrane (OM) in one step, but proteins secreted by the type II and V pathways cross the IM and OM in separate steps. Proteins secreted by the type II and V pathways are synthesized on cytoplasmic ribosomes as preproteins containing an extra leader or signal sequence of 15–40 amino acids—most commonly about 30 amino acids—at the amino terminal and require the sec system for transport across the IM. In E coli, the sec pathway comprises a number of IM proteins (SecD to SecF, SecY), a cell membrane–associated ATPase (SecA) that provides energy for export, a chaperone (SecB) that binds to the preprotein, and the periplasmic signal peptidase. After translocation, the leader sequence is cleaved off by the membrane-bound signal peptidase, and the mature protein is released into the periplasmic space. In contrast, proteins secreted by the type I and III systems do not have a leader sequence and are exported intact.
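The sec-dependent export logic just described, synthesis as a preprotein, translocation, then cleavage of the leader by signal peptidase, can be caricatured as string handling. Everything in the sketch below is a toy: the sequences, the fixed cleavage rule, and the omission of the actual SecB/SecA/SecY machinery, which appears only in comments.

```python
# Toy sketch of sec-dependent export: a preprotein carries an
# amino-terminal leader of 15-40 residues that is removed by signal
# peptidase after translocation. Sequences and the cleavage rule are
# illustrative assumptions, not real signal-peptide recognition.

def export_via_sec(preprotein: str, leader_length: int) -> str:
    """Return the mature protein released into the periplasmic space."""
    if not 15 <= leader_length <= 40:
        raise ValueError("leader sequences are typically 15-40 residues")
    # SecB chaperones the preprotein to the membrane; SecA hydrolyzes ATP
    # to push it through the SecY channel; signal peptidase then cleaves.
    return preprotein[leader_length:]

toy_preprotein = "M" + "A" * 29 + "MATUREDOMAIN"          # 30-residue toy leader
print(export_via_sec(toy_preprotein, leader_length=30))   # MATUREDOMAIN
```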
In both gram-negative and gram-positive bacteria, another translocation system, called the tat pathway, can move proteins across the plasma membrane. In gram-negative bacteria, these proteins are then delivered to the type II system (Figure 2-13). The tat pathway is distinct from the sec system in that it translocates already folded proteins.
Although proteins secreted by the type II and V systems are similar in the mechanism by which they cross the IM, differences exist in how they traverse the OM. Proteins secreted by the type II system are transported across the OM by a multiprotein complex (see Figure 2-13). This is the primary pathway for the secretion of extracellular degradative enzymes by gram-negative bacteria. Elastase, phospholipase C, and exotoxin A are secreted by this system in Pseudomonas aeruginosa. However, proteins secreted by the type V system autotransport across the outer membrane by virtue of a carboxyl terminal sequence, which is enzymatically removed upon release of the protein from the OM. Some extracellular proteins—eg, the IgA protease of Neisseria gonorrhoeae and the vacuolating cytotoxin of Helicobacter pylori—are secreted by this system.
The type I and III secretion pathways are sec independent and thus do not involve amino terminal processing of the secreted proteins. Protein secretion by these pathways occurs in a continuous process without the presence of a cytoplasmic intermediate. Type I secretion is exemplified by the α-hemolysin of E coli and the adenylyl cyclase of Bordetella pertussis. Type I secretion requires three secretory proteins: an IM ATP-binding cassette (ABC transporter), which provides energy for protein secretion; an OM protein; and a membrane fusion protein, which is anchored in the inner membrane and spans the periplasmic space (see Figure 2-13). Instead of a signal peptide, the information is located within the carboxyl terminal 60 amino acids of the secreted protein.
The type III secretion pathway is a contact-dependent system: it is activated by contact with a host cell and then injects toxin proteins directly into the host cell. The type III secretion apparatus is composed of approximately 20 proteins, most of which are located in the IM. Most of these IM components are homologous to the flagellar biosynthesis apparatus of both gram-negative and gram-positive bacteria. As in type I secretion, the proteins secreted via the type III pathway are not subject to amino terminal processing during secretion.
Type IV pathways secrete either polypeptide toxins (directed against eukaryotic cells) or protein–DNA complexes, which are transferred either between two bacterial cells or from a bacterial to a eukaryotic cell. Type IV secretion is exemplified by the protein–DNA complex delivered by Agrobacterium tumefaciens into a plant cell. Additionally, B pertussis and H pylori possess type IV secretion systems that mediate secretion of pertussis toxin and interleukin-8–inducing factor, respectively. The sec-independent type VI secretion system was recently described in P aeruginosa, where it contributes to pathogenicity in patients with cystic fibrosis. This secretion system is composed of 15–20 proteins whose biochemical functions are not well understood. However, recent studies suggest that some of these proteins share homology with bacteriophage tail proteins.
The characteristics of the protein secretion systems of bacteria are summarized in Table 9-5.
4. Biosynthetic functions—The cell membrane is the site of the carrier lipids on which the subunits of the cell wall are assembled (see the discussion of synthesis of cell wall substances in Chapter 6) as well as of the enzymes of cell wall biosynthesis. The enzymes of phospholipid synthesis are also localized in the cell membrane.
5. Chemotactic systems—Attractants and repellents bind to specific receptors in the bacterial membrane (see Flagella, later). There are at least 20 different chemoreceptors in the membrane of E coli, some of which also function as a first step in the transport process.
Three types of porters: A: uniporters, B: symporters, and C: antiporters. Uniporters catalyze the transport of a single species independently of any other, symporters catalyze the cotransport of two dissimilar species (usually a solute and a positively charged ion, H+) in the same direction, and antiporters catalyze the exchange transport of two similar solutes in opposite directions. A single transport protein may catalyze just one, two, or even all three of these processes, depending on conditions. Uniporters, symporters, and antiporters have been found to be structurally similar and evolutionarily related, and they function by similar mechanisms. (Reproduced with permission from Saier MH Jr: Peter Mitchell and his chemiosmotic theories. ASM News 1997;63:13.)
The protein secretion systems of gram-negative bacteria. Five secretion systems of gram-negative bacteria are shown. The Sec-dependent and Tat pathways deliver proteins from the cytoplasm to the periplasmic space. The type II, type V, and sometimes type IV systems complete the secretion process begun by the Sec-dependent pathway. The Tat system appears to deliver proteins only to the type II pathway. The type I and III systems bypass the Sec-dependent and Tat pathways, moving proteins directly from the cytoplasm, through the outer membrane, to the extracellular space. The type IV system can work either with the Sec-dependent pathway or alone to transport proteins to the extracellular space. Proteins translocated by the Sec-dependent pathway and the type III pathway are delivered to those systems by chaperone proteins. ADP, adenosine diphosphate; ATP, adenosine triphosphate. (Reproduced with permission from Willey JM, Sherwood LM, Woolverton CJ [editors]: Prescott, Harley, and Klein’s Microbiology, 7th ed. McGraw-Hill; 2008. © The McGraw-Hill Companies, Inc.)
The Cell Wall
The internal osmotic pressure of most bacteria ranges from 5 to 20 atm as a result of solute concentration via active transport. In most environments, this pressure would be sufficient to burst the cell were it not for the presence of a high-tensile-strength cell wall (Figure 2-14). The bacterial cell wall owes its strength to a layer composed of a substance variously referred to as murein, mucopeptide, or peptidoglycan (all are synonyms). The structure of peptidoglycan is discussed below.
The rigid cell wall determines the shape of the bacterium. Even though the cell has split apart, the cell wall maintains its original shape. (Courtesy of Dale C. Birdsell.)
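Returning to the osmotic pressure just mentioned: the 5–20 atm figure implies substantial internal solute concentrations, which can be estimated with the van 't Hoff relation Π = MRT (assuming an ideal dilute solution at 37°C, a rough approximation for crowded cytoplasm).

```python
# Total solute molarity implied by a 5-20 atm internal osmotic pressure,
# via the van 't Hoff relation pi = M*R*T. Assumes an ideal dilute
# solution at 37 C; a back-of-the-envelope estimate only.

R = 0.08206   # gas constant, L*atm/(mol*K)
T = 310.0     # kelvin

for pressure_atm in (5.0, 20.0):
    molarity = pressure_atm / (R * T)
    print(f"{pressure_atm:4.0f} atm -> total solutes ~ {molarity:.2f} mol/L")
```

This works out to roughly 0.2–0.8 mol/L of osmotically active solutes, concentrations that only active transport can sustain against the environment.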
Most bacteria are classified as gram positive or gram negative according to their response to the Gram-staining procedure. This procedure was named for the histologist Hans Christian Gram, who developed this differential staining procedure in an attempt to stain bacteria in infected tissues. The Gram stain depends on the ability of certain bacteria (the gram-positive bacteria) to retain a complex of crystal violet (a purple dye) and iodine after a brief wash with alcohol or acetone. Gram-negative bacteria do not retain the dye–iodine complex and become translucent, but they can then be counterstained with safranin (a red dye). Thus, gram-positive bacteria look purple under the microscope, and gram-negative bacteria look red. The distinction between these two groups turns out to reflect fundamental differences in their cell envelopes (Table 2-1).
In addition to giving osmotic protection, the cell wall plays an essential role in cell division as well as serving as a primer for its own biosynthesis. Various layers of the wall are the sites of major antigenic determinants of the cell surface, and one component—the lipopolysaccharide of gram-negative cell walls—is responsible for the nonspecific endotoxin activity of gram-negative bacteria. The cell wall is, in general, nonselectively permeable; one layer of the gram-negative wall, however—the outer membrane—hinders the passage of relatively large molecules (see below).
The biosynthesis of the cell wall and the antibiotics that interfere with this process are discussed in Chapter 6.
A. The Peptidoglycan Layer
Peptidoglycan is a complex polymer consisting, for the purposes of description, of three parts: a backbone, composed of alternating N-acetylglucosamine and N-acetylmuramic acid connected by β1→4 linkages; a set of identical tetrapeptide side chains attached to N-acetylmuramic acid; and a set of identical peptide cross-bridges (Figure 2-15). The backbone is the same in all bacterial species; the tetrapeptide side chains and the peptide cross-bridges vary from species to species. In many gram-negative cell walls, the cross-bridge consists of a direct peptide linkage between the diaminopimelic acid (DAP) amino group of one side chain and the carboxyl group of the terminal d-alanine of a second side chain.
Components and structure of peptidoglycan. A: Chemical structure of N-acetylglucosamine (NAG) and N-acetylmuramic acid (NAM); the ring structures of the two molecules are glucose. Glycan chains are composed of alternating subunits of NAG and NAM joined by covalent bonds. Adjacent glycan chains are cross-linked via their tetrapeptide chains to create peptidoglycan. B: Interconnected glycan chains form a very large three-dimensional molecule of peptidoglycan. The β1→4 linkages in the backbone are cleaved by lysozyme. (Reproduced with permission from Nester EW, Anderson DG, Roberts CE, Nester MT: Microbiology: A Human Perspective, 6th ed. McGraw-Hill; 2009.)
The tetrapeptide side chains of all species, however, have certain important features in common. Most have l-alanine at position 1 (attached to N-acetylmuramic acid), d-glutamate or substituted d-glutamate at position 2, and d-alanine at position 4. Position 3 is the most variable one: Most gram-negative bacteria have diaminopimelic acid at this position, to which is linked the lipoprotein cell wall component discussed below. Gram-positive bacteria usually have l-lysine at position 3; however, some may have diaminopimelic acid or another amino acid at this position.
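These positional rules are easy to lose track of, so here they are restated as a small lookup table; a sketch summarizing the text, with the usual caveat that position 3 in particular varies among species.

```python
# Tetrapeptide side-chain positions in peptidoglycan, as stated in the
# text. "DAP" = diaminopimelic acid. A summary, not a universal rule:
# position 3 especially varies among species.

TETRAPEPTIDE_POSITIONS = {
    1: "L-alanine (attached to N-acetylmuramic acid)",
    2: "D-glutamate or substituted D-glutamate",
    3: "DAP in most gram-negatives; L-lysine in most gram-positives",
    4: "D-alanine",
}

for position, residue in sorted(TETRAPEPTIDE_POSITIONS.items()):
    print(f"position {position}: {residue}")
```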
Diaminopimelic acid is a unique element of bacterial cell walls. It is never found in the cell walls of Archaea or eukaryotes. Diaminopimelic acid is the immediate precursor of lysine in the bacterial biosynthesis of that amino acid (see Figure 6-19). Bacterial mutants that are blocked before diaminopimelic acid in the biosynthetic pathway grow normally when provided with diaminopimelic acid in the medium; when given l-lysine alone, however, they lyse, because they continue to grow but are specifically unable to make new cell wall peptidoglycan.
The fact that all peptidoglycan chains are cross-linked means that each peptidoglycan layer is a single giant molecule. In gram-positive bacteria, there are as many as 40 sheets of peptidoglycan, comprising up to 50% of the cell wall material; in gram-negative bacteria, there appear to be only one or two sheets, comprising 5–10% of the wall material. Bacteria owe their shapes, which are characteristic of particular species, to their cell wall structure.
B. Special Components of Gram-Positive Cell Walls
Most gram-positive cell walls contain considerable amounts of teichoic and teichuronic acids, which may account for up to 50% of the dry weight of the wall and 10% of the dry weight of the total cell. In addition, some gram-positive walls may contain polysaccharide molecules.
1. Teichoic and teichuronic acids—The term teichoic acids encompasses all wall, membrane, or capsular polymers containing glycerophosphate or ribitol phosphate residues. These polyalcohols are connected by phosphodiester linkages and usually have other sugars and d-alanine attached (Figure 2-16A). Because they are negatively charged, teichoic acids are partially responsible for the negative charge of the cell surface as a whole. There are two types of teichoic acids: wall teichoic acid (WTA), covalently linked to peptidoglycan; and membrane teichoic acid, covalently linked to membrane glycolipid. Because the latter are intimately associated with lipids, they have been called lipoteichoic acids (LTA). Together with peptidoglycan, WTA and LTA make up a polyanionic network or matrix that provides functions relating to the elasticity, porosity, tensile strength, and electrostatic properties of the envelope. Although not all gram-positive bacteria have conventional LTA and WTA, those that lack these polymers generally have functionally similar ones.
Most teichoic acids contain large amounts of d-alanine, usually attached to position 2 or 3 of glycerol or position 3 or 4 of ribitol. In some of the more complex teichoic acids, however, d-alanine is attached to one of the sugar residues. In addition to d-alanine, other substituents may be attached to the free hydroxyl groups of glycerol and ribitol (eg, glucose, galactose, N-acetylglucosamine, N-acetylgalactosamine, or succinate). A given species may have more than one type of sugar substituent in addition to d-alanine; in such cases, it is not certain whether the different sugars occur on the same or on separate teichoic acid molecules. The composition of the teichoic acid formed by a given bacterial species can vary with the composition of the growth medium.
The teichoic acids constitute major surface antigens of those gram-positive species that possess them, and their accessibility to antibodies has been taken as evidence that they lie on the outside surface of the peptidoglycan. Their activity is often increased, however, by partial digestion of the peptidoglycan; thus, much of the teichoic acid may lie between the cytoplasmic membrane and the peptidoglycan layer, possibly extending upward through pores in the latter (Figure 2-16B). In the pneumococcus (Streptococcus pneumoniae), the teichoic acids bear the antigenic determinants called Forssman antigen. In Streptococcus pyogenes, LTA is associated with the M protein that protrudes from the cell membrane through the peptidoglycan layer. The long M protein molecules together with the LTA form microfibrils that facilitate the attachment of S pyogenes to animal cells (see Chapter 14).
The teichuronic acids are similar polymers, but the repeat units include sugar acids (eg, N-acetylmannosuronic or d-glucuronic acid) instead of phosphoric acids. They are synthesized in place of teichoic acids when phosphate is limiting.
2. Polysaccharides—The hydrolysis of gram-positive walls has yielded, from certain species, neutral sugars such as mannose, arabinose, rhamnose, and glucosamine and acidic sugars such as glucuronic acid and mannuronic acid. It has been proposed that these sugars exist as subunits of polysaccharides in the cell wall; the discovery, however, that teichoic and teichuronic acids may contain a variety of sugars (see Figure 2-16A) leaves the true origin of these sugars uncertain.
A: Teichoic acid structure. The segment of a teichoic acid made of phosphate, glycerol, and a side chain, R. R may represent d-alanine, glucose, or other molecules. B: Teichoic and lipoteichoic acids of the gram-positive envelope. (Reproduced with permission from Willey JM, Sherwood LM, Woolverton CJ [editors]: Prescott, Harley, and Klein’s Microbiology, 7th ed. McGraw-Hill; 2008.)
C. Special Components of Gram-Negative Cell Walls
Gram-negative cell walls contain three components that lie outside of the peptidoglycan layer: lipoprotein, outer membrane, and lipopolysaccharide (Figure 2-17).
Molecular representation of the envelope of a gram-negative bacterium. Ovals and rectangles represent sugar residues, and circles depict the polar head groups of the glycerophospholipids (phosphatidylethanolamine and phosphatidylglycerol). The core region shown is that of Escherichia coli K-12, a strain that does not normally contain an O-antigen repeat unless transformed with an appropriate plasmid. MDO, membrane-derived oligosaccharides. (Reproduced with permission from Raetz CRH: Bacterial endotoxins: Extraordinary lipids that activate eucaryotic signal transduction. J Bacteriol 1993;175:5745.)
1. Outer membrane—The outer membrane is chemically distinct from all other biological membranes. It is a bilayered structure; its inner leaflet resembles in composition that of the cell membrane, and its outer leaflet contains a distinctive component, a lipopolysaccharide (LPS) (see below). As a result, the leaflets of this membrane are asymmetrical, and the properties of this bilayer differ considerably from those of a symmetrical biologic membrane such as the cell membrane.
The ability of the outer membrane to exclude hydrophobic molecules is an unusual feature among biologic membranes and serves to protect the cell (in the case of enteric bacteria) from deleterious substances such as bile salts. Because of its lipid nature, the outer membrane would be expected to exclude hydrophilic molecules as well. However, the outer membrane has special channels, consisting of protein molecules called porins that permit the passive diffusion of low-molecular-weight hydrophilic compounds such as sugars, amino acids, and certain ions. Large antibiotic molecules penetrate the outer membrane relatively slowly, which accounts for the relatively high antibiotic resistance of gram-negative bacteria. The permeability of the outer membrane varies widely from one gram-negative species to another; in P aeruginosa, for example, which is extremely resistant to antibacterial agents, the outer membrane is 100 times less permeable than that of E coli.
The major proteins of the outer membrane, named according to the genes that code for them, have been placed into several functional categories on the basis of mutants in which they are lacking and on the basis of experiments in which purified proteins have been reconstituted into artificial membranes. Porins, exemplified by OmpC, D, and F and PhoE of E coli and Salmonella typhimurium, are trimeric proteins that penetrate both faces of the outer membrane (Figure 2-18). They form relatively nonspecific pores that permit the free diffusion of small hydrophilic solutes across the membrane. The porins of different species have different exclusion limits, ranging from molecular weights of about 600 in E coli and S typhimurium to more than 3000 in P aeruginosa.
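Functionally, a general porin acts like a molecular-weight filter, so the species-specific exclusion limits just quoted can be expressed as a simple cutoff check. The cutoffs below come from the text; the filter function and the approximate solute weights are illustrative assumptions.

```python
# Porin exclusion limits treated as simple molecular-weight cutoffs
# (limits from the text; solute weights approximate and illustrative).

EXCLUSION_LIMITS = {
    "E. coli / S. typhimurium": 600,
    "P. aeruginosa": 3000,
}

SOLUTES = {
    "glucose": 180,
    "sucrose": 342,
    "vitamin B12": 1355,   # excluded by classic E. coli porins; imported
                           # instead via a specific TonB-energized receptor
}

for organism, limit in EXCLUSION_LIMITS.items():
    passing = [name for name, mw in SOLUTES.items() if mw <= limit]
    print(f"{organism} (cutoff ~{limit}): passes {passing}")
```

The vitamin B12 entry anticipates the TonB-dependent receptors described below: solutes above the porin cutoff need dedicated, energized transport across the outer membrane.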
Members of a second group of outer membrane proteins, which resemble porins in many ways, are exemplified by LamB and Tsx. LamB, an inducible porin that is also the receptor for lambda bacteriophage, is responsible for most of the transmembrane diffusion of maltose and maltodextrins; Tsx, the receptor for T6 bacteriophage, is responsible for the transmembrane diffusion of nucleosides and some amino acids. LamB allows some passage of other solutes; however, its relative specificity may reflect weak interactions of solutes with configuration-specific sites within the channel.
The OmpA protein, an abundant outer membrane protein, participates in anchoring the outer membrane to the peptidoglycan layer; it is also the sex pilus receptor in F-mediated bacterial conjugation (see Chapter 7).
The outer membrane also contains a set of less abundant proteins that are involved in the transport of specific molecules such as vitamin B12 and iron-siderophore complexes. They show high affinity for their substrates and probably function like the classic carrier transport systems of the cytoplasmic membrane. The proper function of these proteins requires energy coupled through a protein called TonB. Additional minor proteins include a limited number of enzymes, among them phospholipases and proteases.
The topology of the major proteins of the outer membrane, based on cross-linking studies and analyses of functional relationships, is shown in Figure 2-17. The outer membrane is connected to both the peptidoglycan layer and the cytoplasmic membrane. The connection with the peptidoglycan layer is primarily mediated by the outer membrane lipoprotein (see below). About one-third of the lipoprotein molecules are covalently linked to peptidoglycan and help hold the two structures together. A noncovalent association of some of the porins with the peptidoglycan layer plays a lesser role in connecting the outer membrane with this structure. Outer membrane proteins are synthesized on ribosomes bound to the cytoplasmic surface of the cell membrane; how they are transferred to the outer membrane is still uncertain, but one hypothesis suggests that transfer occurs at zones of adhesion between the cytoplasmic and outer membranes, which are visible in the electron microscope. Unfortunately, firm evidence for such areas of adhesion has proven hard to come by.
2. Lipopolysaccharide (LPS)—The LPS of gram-negative cell walls consists of a complex glycolipid, called lipid A, to which is attached a polysaccharide made up of a core and a terminal series of repeat units (Figure 2-19A). The lipid A component is embedded in the outer leaflet of the membrane anchoring the LPS. LPS is synthesized on the cytoplasmic membrane and transported to its final exterior position. The presence of LPS is required for the function of many outer membrane proteins.
Lipid A consists of phosphorylated glucosamine disaccharide units to which are attached a number of long-chain fatty acids (Figure 2-19). β-Hydroxymyristic acid, a C14 fatty acid, is always present and is unique to this lipid; the other fatty acids, along with substituent groups on the phosphates, vary according to the bacterial species.
The polysaccharide core, shown in Figure 2-19A and B, is similar in all gram-negative species that have LPS and includes two characteristic sugars, 2-keto-3-deoxyoctonate (KDO) and a heptose. Each species, however, contains a unique repeat unit, that of Salmonella being shown in Figure 2-19A. The repeat units are usually linear trisaccharides or branched tetra- or pentasaccharides. The repeat unit is referred to as the O antigen. The hydrophilic carbohydrate chains of the O antigen cover the bacterial surface and exclude hydrophobic compounds.
The negatively charged LPS molecules are noncovalently cross-bridged by divalent cations (ie, Ca2+ and Mg2+); this stabilizes the membrane and provides a barrier to hydrophobic molecules. Removal of the divalent cations with chelating agents or their displacement by polycationic antibiotics such as polymyxins and aminoglycosides renders the outer membrane permeable to large hydrophobic molecules.
Lipopolysaccharide, which is extremely toxic to animals, has been called the endotoxin of gram-negative bacteria because it is firmly bound to the cell surface and is released only when the cells are lysed. When LPS is split into lipid A and polysaccharide, all of the toxicity is associated with the former. The O antigen is highly immunogenic in a vertebrate animal. Antigenic specificity is conferred by the O antigen because this antigen is highly variable among species and even in strains within a species. The number of possible antigenic types is very great: Over 1000 have been recognized in Salmonella alone.
Not all gram-negative bacteria have outer membrane LPS composed of a variable number of repeated oligosaccharide units (see Figure 2-19); the outer membrane glycolipids of bacteria that colonize mucosal surfaces (eg, Neisseria meningitidis, N gonorrhoeae, Haemophilus influenzae, and Haemophilus ducreyi) have relatively short, multiantennary (ie, branched) glycans. These smaller glycolipids have been compared with the “R-type” truncated LPS structures, which lack O antigens and are produced by rough mutants of enteric bacteria such as E coli. However, their structures more closely resemble those of the glycosphingolipids of mammalian cell membranes, and they are more properly termed lipooligosaccharides (LOS). These molecules exhibit extensive antigenic and structural diversity even within a single strain. LOS is an important virulence factor. Epitopes have been identified on LOS that mimic host structures and may enable these organisms to evade the immune response of the host. Some LOS (eg, those from N gonorrhoeae, N meningitidis, and H ducreyi) have a terminal N-acetyllactosamine (Galβ-1→4-GlcNAc) residue that is immunochemically similar to the precursor of the human erythrocyte i antigen. In the presence of a bacterial enzyme called sialyltransferase and a host or bacterial substrate (cytidine monophospho-N-acetylneuraminic acid, CMP-NANA), the N-acetyllactosamine residue is sialylated. This sialylation, which occurs in vivo, provides the organism with the environmental advantages of molecular mimicry of a host antigen and the biologic masking thought to be provided by sialic acids.
3. Lipoprotein—Molecules of an unusual lipoprotein cross-link the outer membrane and peptidoglycan layers (see Figure 2-17). The lipoprotein contains 57 amino acids, representing repeats of a 15-amino-acid sequence; it is peptide-linked to DAP residues of the peptidoglycan tetrapeptide side chains. The lipid component, consisting of a diglyceride thioether linked to a terminal cysteine, is noncovalently inserted in the outer membrane. Lipoprotein is numerically the most abundant protein of gram-negative cells (ca 700,000 molecules per cell). Its function (inferred from the behavior of mutants that lack it) is to stabilize the outer membrane and anchor it to the peptidoglycan layer.
4. The periplasmic space—The space between the inner and outer membranes, called the periplasmic space, contains the peptidoglycan layer and a gel-like solution of proteins. The periplasmic space is approximately 20–40% of the cell volume, which is far from insignificant. The periplasmic proteins include binding proteins for specific substrates (eg, amino acids, sugars, vitamins, and ions), hydrolytic enzymes (eg, alkaline phosphatase and 5′-nucleotidase) that break down nontransportable substrates into transportable ones, and detoxifying enzymes (eg, β-lactamase and aminoglycoside-phosphorylase) that inactivate certain antibiotics. The periplasm also contains high concentrations of highly branched polymers of d-glucose, 8 to 10 residues long, which are variously substituted with glycerol phosphate and phosphatidylethanolamine residues; some contain O-succinyl esters. These so-called membrane-derived oligosaccharides appear to play a role in osmoregulation because cells grown in media of low osmolarity increase their synthesis of these compounds 16-fold.
A: General fold of a porin monomer (OmpF porin from Escherichia coli). The large hollow β-barrel structure is formed by antiparallel arrangement of 16 β-strands. The strands are connected by short loops or regular turns on the periplasmic rim (bottom), and long irregular loops face the cell exterior (top). The internal loop, which connects β-strands 5 and 6 and extends inside the barrel, is highlighted in dark. The chain terminals are marked. The surface closest to the viewer is involved in subunit contacts. B: Schematic representation of the OmpF trimer. The view is from the extracellular space along the molecular threefold symmetry axis. (Reproduced with permission from Schirmer T: General and specific porins from bacterial outer membranes. J Struct Biol 1998;121:101.)
Lipopolysaccharide structure. A: The lipopolysaccharide from Salmonella. This slightly simplified diagram illustrates one form of the LPS. Abe, abequose; Gal, galactose; GlcN, glucosamine; Hep, heptulose; KDO, 2-keto-3-deoxyoctonate; Man, mannose; NAG, N-acetylglucosamine; P, phosphate; Rha, l-rhamnose. Lipid A is buried in the outer membrane. B: Molecular model of an Escherichia coli lipopolysaccharide. The lipid A and core polysaccharide are straight; the O side chain is bent at an angle in this model. (Reproduced with permission from Willey JM, Sherwood LM, Woolverton CJ: Prescott, Harley, and Klein’s Microbiology, 7th ed. McGraw-Hill; 2008. © The McGraw-Hill Companies, Inc.)
D. The Acid-Fast Cell Wall
Some bacteria, notably the tubercle bacillus (M tuberculosis) and its relatives, have cell walls that contain large amounts of waxes, complex branched hydrocarbons (70–90 carbons long) known as mycolic acids. The cell wall is composed of peptidoglycan and an external asymmetric lipid bilayer; the inner leaflet contains mycolic acids linked to an arabinogalactan, and the outer leaflet contains other extractable lipids. This is a highly ordered lipid bilayer in which proteins are embedded, forming water-filled pores through which nutrients and certain drugs can pass slowly. Some compounds can also penetrate the lipid domains of the cell wall, albeit slowly. This hydrophobic structure renders these bacteria resistant to many harsh chemicals, including detergents and strong acids. If a dye is introduced into these cells by brief heating or treatment with detergents, it cannot be removed by dilute hydrochloric acid, as it would be in other bacteria. These organisms are therefore called acid fast. The permeability of the cell wall to hydrophilic molecules is 100- to 1000-fold lower than that of E coli and may be responsible for the slow growth rate of mycobacteria.
E. Cell Walls of the Archaea
The Archaea do not have cell walls like the Bacteria. Some have a simple S-layer (see below), often composed of glycoproteins. Some Archaea have a rigid cell wall composed of polysaccharides or a peptidoglycan called pseudomurein. The pseudomurein differs from the peptidoglycan of bacteria by having l-amino acids rather than d-amino acids and disaccharide units with an α1→3 rather than a β1→4 linkage. Archaea that have a pseudomurein cell wall are gram positive.
F. Crystalline Surface Layers
Many bacteria, both gram-positive and gram-negative, as well as Archaea, possess a two-dimensional crystalline, subunit-type lattice of protein or glycoprotein molecules (S-layer) as the outermost component of the cell envelope. In both gram-positive and gram-negative bacteria, this structure is sometimes several molecules thick. In some Archaea, it is the only layer external to the cell membrane.
S-layers are generally composed of a single kind of protein molecule, sometimes with carbohydrates attached. The isolated molecules are capable of self-assembly (ie, they make sheets similar or identical to those present on the cells). S-layer proteins are resistant to proteolytic enzymes and protein-denaturing agents. The function of the S-layer is uncertain but is probably protective. In some cases, it has been shown to protect the cell from wall-degrading enzymes, from invasion by Bdellovibrio bacteriovorus (a predatory bacterium), and from bacteriophages. It also plays a role in the maintenance of cell shape in some species of Archaea, and it may be involved in cell adhesion to host epidermal surfaces.
G. Enzymes That Attack Cell Walls
The β1→4 linkage of the peptidoglycan backbone is hydrolyzed by the enzyme lysozyme (see Figure 2-15), which is found in animal secretions (tears, saliva, nasal secretions) as well as in egg white. Gram-positive bacteria treated with lysozyme in low-osmotic-strength media lyse; if the osmotic strength of the medium is raised to balance the internal osmotic pressure of the cell, free spherical bodies called protoplasts are liberated. The outer membrane of the gram-negative cell wall prevents access of lysozyme unless disrupted by an agent such as ethylenediaminetetraacetic acid (EDTA), a compound that chelates divalent cations; in osmotically protected media, cells treated with EDTA-lysozyme form spheroplasts that still possess remnants of the complex gram-negative wall, including the outer membrane.
Bacteria themselves possess a number of autolysins, hydrolytic enzymes that attack peptidoglycan, including muramidases, glucosaminidases, endopeptidases, and carboxypeptidases. These enzymes catalyze the turnover and degradation of peptidoglycan and presumably participate in cell wall growth, turnover, and cell separation, but their activity is most apparent during the dissolution of dead cells (autolysis).
Enzymes that degrade bacterial cell walls are also found in cells that digest whole bacteria (eg, protozoa and the phagocytic cells of higher animals).
H. Cell Wall Growth
Cell wall synthesis is necessary for cell division; however, the incorporation of new cell wall material varies with the shape of the bacterium. Rod-shaped bacteria (eg, E coli, Bacillus subtilis) have two modes of cell wall synthesis: new peptidoglycan is inserted along a helical path, leading to elongation of the cell, and is inserted in a closing ring around the future division site, leading to the formation of the division septum. Coccoid cells such as S aureus do not seem to have an elongation mode of cell wall synthesis. Instead, new peptidoglycan is inserted only at the division site. A third form of cell wall growth is exemplified by S pneumoniae, which is not a true coccus because its shape is not completely round but resembles a rugby ball. S pneumoniae synthesizes cell wall not only at the septum but also at the so-called equatorial rings (Figure 2-20).
Incorporation of new cell wall in differently shaped bacteria. Rod-shaped bacteria such as Bacillus subtilis or Escherichia coli have two modes of cell wall synthesis: New peptidoglycan is inserted along a helical path (A), leading to elongation of the lateral wall and is inserted in a closing ring around the future division site, leading to the formation of the division septum (B). Streptococcus pneumoniae cells have the shape of a rugby ball and elongate by inserting new cell wall material at the so-called equatorial rings (A), which correspond to an outgrowth of the cell wall that encircles the cell. An initial ring is duplicated, and the two resultant rings are progressively separated, marking the future division sites of the daughter cells. The division septum is then synthesized in the middle of the cell (B). Round cells such as Staphylococcus aureus do not seem to have an elongation mode of cell wall synthesis. Instead, new peptidoglycan is inserted only at the division septum (B). (Reproduced with permission from Scheffers DJ, Pinho MG: Bacterial cell wall synthesis: new insights from localization studies. Microbiol Mol Biol Rev 2005;69:585.)
I. Protoplasts, Spheroplasts, and L Forms
Removal of the bacterial wall may be accomplished by hydrolysis with lysozyme or by blocking peptidoglycan synthesis with an antibiotic such as penicillin. In osmotically protective media, such treatments liberate protoplasts from gram-positive cells and spheroplasts (which retain outer membrane and entrapped peptidoglycan) from gram-negative cells.
If such cells are able to grow and divide, they are called L forms. L forms are difficult to cultivate and usually require a medium that is solidified with agar and that has the right osmotic strength. L forms are produced more readily with penicillin than with lysozyme, suggesting the need for residual peptidoglycan.
Some L forms can revert to the normal bacillary form upon removal of the inducing stimulus. Thus, they are able to resume normal cell wall synthesis. Others are stable and never revert. The factor that determines their capacity to revert may again be the presence of residual peptidoglycan, which normally acts as a primer in its own biosynthesis.
Some bacterial species produce L forms spontaneously. The spontaneous or antibiotic-induced formation of L forms in the host may produce chronic infections, the organisms persisting by becoming sequestered in protective regions of the body. Because L-form infections are relatively resistant to antibiotic treatment, they present special problems in chemotherapy. Their reversion to the bacillary form can produce relapses of the overt infection.
The mycoplasmas are cell wall–lacking bacteria containing no peptidoglycan (see Figure 25-1). There are also wall-less Archaea, but they have been less well studied. Genomic analysis places the mycoplasmas close to the gram-positive bacteria from which they may have been derived. Mycoplasmas lack a target for cell wall–inhibiting antimicrobial agents (eg, penicillins and cephalosporins) and are therefore resistant to these drugs. Some, such as Mycoplasma pneumoniae, an agent of pneumonia, contain sterols in their membranes. The difference between L forms and mycoplasmas is that when the murein is allowed to reform, L forms revert to their original bacterial shape, but mycoplasmas never do.
Many bacteria synthesize large amounts of extracellular polymer when growing in their natural environments. With one known exception (the poly-d-glutamic acid capsules of Bacillus anthracis and Bacillus licheniformis), the extracellular material is polysaccharide (Table 2-2). The terms capsule and slime layer are frequently used to describe polysaccharide layers; the more inclusive term glycocalyx is also used. Glycocalyx is defined as the polysaccharide-containing material lying outside the cell. A condensed, well-defined layer closely surrounding the cell that excludes particles, such as India ink, is referred to as a capsule (Figure 2-21). If the glycocalyx is loosely associated with the cell and does not exclude particles, it is referred to as a slime layer. Extracellular polymer is synthesized by enzymes located at the surface of the bacterial cell. Streptococcus mutans, for example, uses two enzymes—glucosyl transferase and fructosyl transferase—to synthesize long-chain dextrans (poly-d-glucose) and levans (poly-d-fructose) from sucrose. These polymers are called homopolymers. Polymers containing more than one kind of monosaccharide are called heteropolymers.
TABLE 2-2 Chemical Composition of the Extracellular Polymer in Selected Bacteria
|Organism ||Polymer ||Chemical Subunits |
|Bacillus anthracis ||Polypeptide ||d-Glutamic acid |
|Enterobacter aerogenes ||Complex polysaccharide ||Glucose, fucose, glucuronic acid |
|Haemophilus influenzae ||Serogroup b ||Ribose, ribitol, phosphate |
|Neisseria meningitidis ||Homopolymers and heteropolymers, eg, || |
| ||Serogroup A ||Partially O-acetylated N-acetylmannosaminephosphate |
| ||Serogroup B ||N-Acetylneuraminic acid (sialic acid) |
| ||Serogroup C ||Acetylated sialic acid |
| ||Serogroup W-135 ||Galactose, sialic acid |
|Pseudomonas aeruginosa ||Alginate ||d-Mannuronic acid, l-guluronic acid |
|Streptococcus pneumoniae (pneumococcus) ||Complex polysaccharide (many types), eg, || |
| ||Type II ||Rhamnose, glucose, glucuronic acid |
| ||Type III ||Glucose, glucuronic acid |
| ||Type VI ||Galactose, glucose, rhamnose |
| ||Type XIV ||Galactose, glucose, N-acetylglucosamine |
| ||Type XVIII ||Rhamnose, glucose |
|Streptococcus pyogenes (group A) ||Hyaluronic acid ||N-Acetylglucosamine, glucuronic acid |
|Streptococcus salivarius ||Levan ||Fructose |
Bacterial capsules. A: Bacillus anthracis M’Fadyean capsule stain, grown at 35°C, in defibrinated horse blood. B: Demonstration of the presence of a capsule in B anthracis by negative staining with India ink. This method is useful for improving visualization of encapsulated bacteria in clinical samples such as blood, blood culture bottles, or cerebrospinal fluid. (CDC, courtesy of Larry Stauffer, Oregon State Public Health Laboratory.)
The capsule contributes to the invasiveness of pathogenic bacteria—encapsulated cells are protected from phagocytosis unless they are coated with anticapsular antibody. The glycocalyx plays a role in the adherence of bacteria to surfaces in their environment, including the cells of plant and animal hosts. S mutans, for example, owes its capacity to adhere tightly to tooth enamel to its glycocalyx. Bacterial cells of the same or different species become entrapped in the glycocalyx, which forms the layer known as plaque on the tooth surface; acidic products excreted by these bacteria cause dental caries (see Chapter 10). The essential role of the glycocalyx in this process—and its formation from sucrose—explains the correlation of dental caries with sucrose consumption by the human population. Because outer polysaccharide layers bind a significant amount of water, the glycocalyx layer may also play a role in resistance to desiccation.
Flagella
Bacterial flagella are thread-like appendages composed entirely of protein, 12–30 nm in diameter. They are the organs of locomotion for the forms that possess them. Three types of arrangement are known: monotrichous (single polar flagellum), lophotrichous (multiple polar flagella), and peritrichous (flagella distributed over the entire cell). The three types are illustrated in Figure 2-22.
Bacterial flagellation. A: Vibrio metchnikovii, a monotrichous bacterium (7500×). (Reproduced with permission from van Iterson W: Biochim Biophys Acta 1947;1:527.) B: Electron micrograph of Spirillum serpens, showing lophotrichous flagellation (9000×). (Reproduced with permission from van Iterson W: Biochim Biophys Acta 1947;1:527.) C: Electron micrograph of Proteus vulgaris, showing peritrichous flagellation (9000×). Note basal granules. (Reproduced with permission from Houwink A, van Iterson W: Electron microscopical observations on bacterial cytology; a study on flagellation. Biochim Biophys Acta 1950;5:10.)
A bacterial flagellum is made up of several thousand molecules of a protein subunit called flagellin. In a few organisms (eg, Caulobacter species), flagella are composed of two types of flagellin, but in most, only a single type is found. The flagellum is formed by the aggregation of subunits to form a helical structure. If flagella are removed by mechanically agitating a suspension of bacteria, new flagella are rapidly formed by the synthesis, aggregation, and extrusion of flagellin subunits; motility is restored within 3–6 minutes. The flagellins of different bacterial species presumably differ from one another in primary structure. They are highly antigenic (H antigens), and some of the immune responses to infection are directed against these proteins.
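The 3–6 minute regrowth time implies a brisk assembly rate. A rough, assumption-laden estimate: taking "several thousand" subunits to mean 5,000–20,000 (actual counts depend on species and filament length), regrowth requires tens of flagellin monomers added per second.

```python
# Back-of-the-envelope flagellin assembly rate during regrowth.
# Subunit counts bracket the text's "several thousand" (an assumption);
# the 3-6 minute regrowth window is from the text.

for subunits in (5_000, 20_000):
    for minutes in (3, 6):
        rate = subunits / (minutes * 60)
        print(f"{subunits:6d} subunits in {minutes} min -> "
              f"~{rate:4.0f} monomers/s")
```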
The flagellum is attached to the bacterial cell body by a complex structure consisting of a hook and a basal body. The hook is a short curved structure that appears to act as the universal joint between the motor in the basal structure and the flagellum. The basal body bears a set of rings, one pair in gram-positive bacteria and two pairs in gram-negative bacteria. An interpretative diagram of the gram-negative structure is shown in Figure 2-23; the rings labeled L and P are absent in gram-positive cells. The complexity of the bacterial flagellum is revealed by genetic studies, which show that over 40 gene products are involved in its assembly and function.
A: General structure of the flagellum of a gram-negative bacterium, such as Escherichia coli or Salmonella typhimurium. The filament-hook-basal body complex has been isolated and extensively characterized. The location of the export apparatus has not been demonstrated. B: An exploded diagram of the flagellum showing the substructures and the proteins from which they are constructed. The FliF protein is responsible for the M-ring feature, S-ring feature, and collar feature of the substructure shown, which is collectively termed the MS ring. The location of FliE with respect to the MS ring and the rod—and the order of the FlgB, FlgC, and FlgF proteins within the proximal rod—is not known. (From Macnab RM: Genetics and biogenesis of bacterial flagella. Annu Rev Genet 1992;26:131. Reproduced with permission from Annual Review of Genetics, Volume 26, © 1992 by Annual Reviews.)
Flagella are made stepwise (see Figure 2-23). First, the basal body is assembled and inserted into the cell envelope. Then the hook is added, and finally, the filament is assembled progressively by the addition of flagellin subunits to its growing tip. The flagellin subunits are extruded through a hollow central channel in the flagellum; when a subunit reaches the tip, it condenses with its predecessors, and the filament elongates.
Bacterial flagella are semirigid helical rotors to which the cell imparts a spinning movement. Rotation is driven by the flow of protons into the cell down the gradient produced by the primary proton pump (see earlier discussion); in the absence of a metabolic energy source, it can be driven by a proton motive force generated by ionophores. Bacteria living in alkaline environments (alkalophiles) use the energy of the sodium ion gradient—rather than the proton gradient—to drive the flagellar motor (Figure 2-24).
Structural components within the basal body of the flagellum allow the inner portion of this structure, the rods of the basal body, and the attached hook–filament complex to rotate. The outer rings remain statically in contact with the inner and outer cell membranes and cell wall (murein), anchoring the flagellum complex to the bacterial cell envelope. Rotation is driven by the flow of protons through the motor from the periplasmic space, outside the cell membrane, into the cytoplasm in response to the electric field and proton gradient across the membrane, which together constitute the proton motive force. A switch determines the direction of rotation, which in turn determines whether the bacteria swim forward (by counterclockwise rotation of the flagellum) or tumble (caused by clockwise rotation of the flagellum). (Reproduced with permission from Saier MH Jr: Peter Mitchell and his chemiosmotic theories. ASM News 1997;63:13.)
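The proton motive force referred to here has two components, the membrane potential and the pH gradient: Δp = Δψ − (2.303RT/F)ΔpH, with ΔpH = pHin − pHout. A quick calculation with typical assumed values (inside negative and slightly alkaline) shows how the terms combine.

```python
# Proton motive force: delta_p = delta_psi - (2.303*R*T/F) * delta_pH,
# where delta_pH = pH_in - pH_out. The input values below are typical
# assumed magnitudes for a neutrophile, not measurements.

R, F, T = 8.314, 96485.0, 310.0
Z = 2.303 * R * T / F * 1000.0   # ~61.5 mV per pH unit at 37 C

delta_psi_mV = -120.0            # membrane potential, inside negative
delta_pH = 0.75                  # interior more alkaline than exterior

delta_p = delta_psi_mV - Z * delta_pH
print(f"Z ~ {Z:.1f} mV per pH unit")
print(f"proton motive force ~ {delta_p:.0f} mV (inside negative)")
```

With these inputs the result is about −166 mV, the same order of magnitude as the 180 mV value assumed in the symporter sketch earlier in this section.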
All the components of the flagellar motor are located in the cell envelope. Flagella attached to isolated, sealed cell envelopes rotate normally when the medium contains a suitable substrate for respiration or when a proton gradient is artificially established.
When a peritrichous bacterium swims, its flagella associate to form a posterior bundle that drives the cell forward in a straight line by counterclockwise rotation. At intervals, the flagella reverse their direction of rotation and momentarily dissociate, causing the cell to tumble until swimming resumes in a new, randomly determined direction. This behavior makes possible the property of chemotaxis: A cell that is moving away from the source of a chemical attractant tumbles and reorients itself more frequently than one that is moving toward the attractant, the result being the net movement of the cell toward the source. The presence of a chemical attractant (eg, a sugar or an amino acid) is sensed by specific receptors located in the cell membrane (in many cases, the same receptor also participates in membrane transport of that molecule). The bacterial cell is too small to be able to detect the existence of a spatial chemical gradient (ie, a gradient between its two poles); rather, experiments show that it detects temporal gradients, that is, concentrations that decrease with time during which the cell is moving away from the attractant source and increase with time during which the cell is moving toward it.
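Temporal sensing with biased tumbling is easy to simulate, and doing so makes the logic vivid: a cell that merely tumbles less when its situation is improving drifts toward the attractant with no knowledge of direction. The one-dimensional world, concentration profile, and tumbling probabilities below are all illustrative assumptions.

```python
import random

# Minimal 1-D run-and-tumble sketch of temporal gradient sensing.
# The attractant source sits at x = 0 and concentration falls linearly
# with distance; all rates and the profile are illustrative assumptions.

def concentration(x: float) -> float:
    return max(0.0, 100.0 - abs(x))

def simulate(steps: int = 2000, seed: int = 1) -> float:
    rng = random.Random(seed)
    x, direction = 50.0, 1            # start 50 units away, heading away
    previous = concentration(x)
    for _ in range(steps):
        x += direction                # "run" one unit per step
        current = concentration(x)
        # Temporal comparison: is the concentration rising over time?
        improving = current > previous
        previous = current
        # Tumble rarely when moving up-gradient, at baseline otherwise.
        if rng.random() < (0.02 if improving else 0.30):
            direction = rng.choice((-1, 1))   # tumble: new random heading
    return x

print(f"final distance from source: ~{abs(simulate()):.0f} units")
```

Despite starting out headed the wrong way, the simulated cell typically ends up far closer to the source than where it began, reproducing the net drift described above purely from temporal comparisons.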
Some compounds act as repellents rather than attractants. One mechanism by which cells respond to attractants and repellents involves the methylation and demethylation of specific proteins in the membrane. Whereas attractants cause a transient inhibition of demethylation of these proteins, repellents stimulate their demethylation.
The mechanism by which a change in cell behavior is brought about in response to a change in the environment is called sensory transduction. Sensory transduction is responsible not only for chemotaxis but also for aerotaxis (movement toward the optimal oxygen concentration), phototaxis (movement of photosynthetic bacteria toward the light), and electron acceptor taxis (movement of respiratory bacteria toward alternative electron acceptors, such as nitrate and fumarate). In these three responses, as in chemotaxis, net movement is determined by regulation of the tumbling response.
Many gram-negative bacteria possess rigid surface appendages called pili (L “hairs”) or fimbriae (L “fringes”). They are shorter and finer than flagella; similar to flagella, they are composed of structural protein subunits termed pilins. Some pili contain a single type of pilin, others more than one. Minor proteins termed adhesins are located at the tips of pili and are responsible for the attachment properties. Two classes can be distinguished: ordinary pili, which play a role in the adherence of symbiotic and pathogenic bacteria to host cells; and sex pili, which are responsible for the attachment of donor and recipient cells in bacterial conjugation (see Chapter 7). Pili are illustrated in Figure 2-25, in which the sex pili have been coated with phage particles for which they serve as specific receptors.
Pili. Pili on an Escherichia coli cell. The short pili (fimbriae) mediate adherence; the sex pilus is involved in DNA transfer. (Courtesy of Dr. Charles Brinton, Jr.)
Motility via pili is completely different from flagellar motion. Pilin molecules are arranged helically to form a straight cylinder that does not rotate and lacks a complete basal body. The pilus tip adheres strongly to surfaces at a distance from the cell. The pilus then depolymerizes from the inner end, retracting into the cell, and the bacterium moves in the direction of the adhering tip. This kind of surface motility is called twitching and is widespread among piliated bacteria. Unlike flagella, which elongate at their tips, pili grow from the inside of the cell outward, with new subunits added at the base.
The virulence of certain pathogenic bacteria depends on the production not only of toxins but also of “colonization antigens,” which are ordinary pili that provide the cells with adherent properties. In enteropathogenic E coli strains, both the enterotoxins and the colonization antigens (pili) are genetically determined by transmissible plasmids, as discussed in Chapter 7.
In one group of gram-positive cocci, the streptococci, fimbriae are the site of the main surface antigen, the M protein. Lipoteichoic acid, associated with these fimbriae, is responsible for the adherence of group A streptococci to epithelial cells of their hosts.
Pili of different bacteria are antigenically distinct and elicit the formation of antibodies by the host. Antibodies against the pili of one bacterial species will not prevent the attachment of another species. Some bacteria (see Chapter 21), such as N gonorrhoeae, are able to make pili of different antigenic types (antigenic variation) and thus can still adhere to cells in the presence of antibodies to their original type of pili. Similar to capsules, pili inhibit the phagocytic ability of leukocytes.
Members of several bacterial genera are capable of forming endospores (Figure 2-26). The two most common are gram-positive rods: the obligately aerobic genus Bacillus and the obligately anaerobic genus Clostridium. The other bacteria known to form endospores are Thermoactinomyces, Sporolactobacillus, Sporosarcina, Sporotomaculum, Sporomusa, and Sporohalobacter spp. These organisms undergo a cycle of differentiation in response to environmental conditions: The process, sporulation, is triggered by near depletion of any of several nutrients (carbon, nitrogen, or phosphorus). Each cell forms a single internal spore that is liberated when the mother cell undergoes autolysis. The spore is a resting cell, highly resistant to desiccation, heat, and chemical agents; when returned to favorable nutritional conditions and activated (see below), the spore germinates to produce a single vegetative cell.
Sporulating cells of Bacillus species. A: Unidentified bacillus from soil. B: Bacillus cereus. C: Bacillus megaterium. (Reproduced with permission from Robinow CF: Structure. In Gunsalus IC, Stanier RY [editors]. The Bacteria: A Treatise on Structure and Function, Vol 1. Academic Press, 1960.)
A. Sporulation
The sporulation process begins when nutritional conditions become unfavorable, near depletion of the nitrogen or carbon source (or both) being the most significant factor. Sporulation occurs massively in cultures that have terminated exponential growth as a result of this near depletion.
Sporulation involves the production of many new structures, enzymes, and metabolites along with the disappearance of many vegetative cell components. These changes represent a true process of differentiation: A series of genes whose products determine the formation and final composition of the spore are activated. These changes involve alterations in the transcriptional specificity of RNA polymerase, which is determined by the association of the polymerase core protein with one or another promoter-specific protein called a sigma factor. During vegetative growth, a sigma factor designated σA predominates. Then, during sporulation, five other sigma factors are formed that cause various spore genes to be expressed at various times in specific locations.
The sequence of events in sporulation is highly complex: Differentiation of a vegetative cell of B subtilis into an endospore takes about 7 hours under laboratory conditions. Different morphologic and chemical events occur at sequential stages of the process. Seven different stages have been identified.
Morphologically, sporulation begins with the formation of an axial filament (Figure 2-27). The process continues with an infolding of the membrane so as to produce a double-membrane structure whose facing surfaces correspond to the cell wall–synthesizing surface of the cell envelope. The growing points move progressively toward the pole of the cell so as to engulf the developing spore.
The stages of endospore formation. (Reproduced with permission from Merrick MJ: Streptomyces. In: Parish JH [editor]. Developmental Biology of Procaryotes. Univ California Press, 1979.)
The two spore membranes now engage in the active synthesis of special layers that will form the cell envelope: the spore wall and the cortex, lying outside the facing membranes. In the newly isolated cytoplasm, or core, many vegetative cell enzymes are degraded and are replaced by a set of unique spore constituents.
B. Properties of Endospores
1. Core—The core is the spore protoplast. It contains a complete nucleus (chromosome), all of the components of the protein-synthesizing apparatus, and an energy-generating system based on glycolysis. Cytochromes are lacking even in aerobic species, the spores of which rely on a shortened electron transport pathway involving flavoproteins. A number of vegetative cell enzymes are increased in amount (eg, alanine racemase), and a number of unique enzymes are formed (eg, dipicolinic acid synthetase). Spores contain no reduced pyridine nucleotides or ATP. The energy for germination is stored as 3-phosphoglycerate rather than as ATP.
The heat resistance of spores is partly attributable to their dehydrated state and in part to the presence in the core of large amounts (5–15% of the spore dry weight) of calcium dipicolinate, which is formed from an intermediate of the lysine biosynthetic pathway (see Figure 6-19). In some way not yet understood, these properties result in the stabilization of the spore enzymes, most of which exhibit normal heat lability when isolated in soluble form.
2. Spore wall—The innermost layer surrounding the inner spore membrane is called the spore wall. It contains normal peptidoglycan and becomes the cell wall of the germinating vegetative cell.
3. Cortex—The cortex is the thickest layer of the spore envelope. It contains an unusual type of peptidoglycan, with many fewer cross-links than are found in cell wall peptidoglycan. Cortex peptidoglycan is extremely sensitive to lysozyme, and its autolysis plays a role in spore germination.
4. Coat—The coat is composed of a keratin-like protein containing many intramolecular disulfide bonds. The impermeability of this layer confers on spores their relative resistance to antibacterial chemical agents.
5. Exosporium—The exosporium is composed of proteins, lipids, and carbohydrates. It consists of a paracrystalline basal layer and a hairlike outer region. The function of the exosporium is unclear. Spores of some Bacillus species (eg, B anthracis and B cereus) possess an exosporium, but other species (eg, B atrophaeus) have spores that lack this structure.
The germination process occurs in three stages: activation, initiation, and outgrowth.
1. Activation—Most endospores cannot germinate immediately after they have formed. But they can germinate after they have rested for several days or are first activated in a nutritionally rich medium by one or another agent that damages the spore coat. Among the agents that can overcome spore dormancy are heat, abrasion, acidity, and compounds containing free sulfhydryl groups.
2. Initiation—After activation, a spore will initiate germination if the environmental conditions are favorable. Different species have evolved receptors that recognize different effectors as signaling a rich medium: Thus, initiation is triggered by L-alanine in one species and by adenosine in another. Binding of the effector activates an autolysin that rapidly degrades the cortex peptidoglycan. Water is taken up, calcium dipicolinate is released, and a variety of spore constituents are degraded by hydrolytic enzymes.
3. Outgrowth—Degradation of the cortex and outer layers results in the emergence of a new vegetative cell consisting of the spore protoplast with its surrounding wall. A period of active biosynthesis follows; this period, which terminates in cell division, is called outgrowth. Outgrowth requires a supply of all nutrients essential for cell growth. |
New research findings from the Centre for Permafrost (CENPERM) at the Department of Geosciences and Natural Resource Management, University of Copenhagen, document that permafrost during thawing may result in a substantial release of carbon dioxide into the atmosphere and that the future water content in the soil is crucial to predict the effect of permafrost thawing. The findings may lead to more accurate climate models in the future.
The permafrost is thawing and thus contributes to the release of carbon dioxide and other greenhouse gases into the atmosphere. But the rate at which carbon dioxide is released from permafrost is poorly documented and is one of the most important uncertainties of the current climate models.
The knowledge available so far has primarily been based on measurements of the release of carbon dioxide in short-term studies of up to 3-4 months. The new findings are based on measurements carried out over a 12-year period. Studies with different water content have also been conducted. Professor Bo Elberling, Director of CENPERM (Centre for Permafrost) at the University of Copenhagen, is the person behind the novel research findings which are now being published in the internationally renowned scientific journal Nature Climate Change.
"From a climate change perspective, it makes a huge difference whether it takes 10 or 100 years to release, e.g., half the permafrost carbon pool. We have demonstrated that the supply of oxygen in connection with drainage or drying is essential for a rapid release of carbon dioxide into the atmosphere," says Bo Elberling.
Water content in the soil crucial to predict effect of permafrost thawing
The new findings also show that the future water content in the soil is a decisive factor for correctly predicting the effect of permafrost thawing. If the permafrost remains water-saturated after thawing, the carbon decomposition rate will be very low, and the release of carbon dioxide will take place over several hundred years; in addition, methane is produced under waterlogged conditions. The findings can be used directly to improve existing climate models.
The new studies are mainly conducted at the Zackenberg research station in North-East Greenland, but permafrost samples from four other locations in Svalbard and in Canada have also been included and they show a surprising similarity in the loss of carbon over time.
"It is thought-provoking that microorganisms are behind the entire problem – microorganisms which break down the carbon pool and which are apparently already present in the permafrost. One of the critical decisive factors – the water content – is in the same way linked to the original high content of ice in most permafrost samples. Yes, the temperature is increasing, and the permafrost is thawing, but it is, still, the characteristics of the permafrost which determine the long-term release of carbon dioxide," Bo Elberling concludes.
Professor Bo Elberling, Director of CENPERM, Centre for Permafrost, Department of Geosciences and Natural Resource Management, University of Copenhagen, Øster Voldgade 10, DK-1350 Copenhagen K. Mobile: + 45 2363 8453.
The core funding for the Centre for Permafrost for the 2012-2018 period is a Centre of Excellence grant from the Danish National Research Foundation. CENPERM is an interdisciplinary project studying the biological, geographical and physical effects of permafrost thawing in Greenland.
The studies combine field studies in Greenland under extreme conditions with laboratory experiments under controlled conditions. The studies are intended to decode the complex interaction between microorganisms, plants and soil during permafrost thawing.
Permafrost and carbon
Permafrost is layers of soil and sediments which remain frozen for more than two consecutive years, while the active layer is the top layer of soil which thaws during the summer.
In Arctic areas with so-called continuous permafrost, the permafrost may be several hundred metres deep. The permafrost contains large amounts of organic matter, because the pool is built up over several thousand years. The pool can be extremely large and includes old top layers containing organic material which have been buried by wind or water-deposited sediments.
This means that near-surface layers, over time, will become a part of the permafrost. In addition, the decomposition rate of the pool of organic matter is slow during the generally cold conditions in the Arctic. It is well-documented that carbon in organic matter can be decomposed when permafrost layers thaw, and that these decomposition processes can contribute to a significant release of both carbon dioxide and methane – two well-known and problematic greenhouse gases.
How rapidly the permafrost thaws
Observations from Greenland may provide the answer to the question of how rapidly the permafrost thaws. The depth of the active layer in Zackenberg in North-East Greenland has been measured at the end of the growth season since 1996.
The measurements show that the depth of the active layer increases by more than 1 cm per year, which means that, as a minimum, more than 1 cm of permafrost thaws every year. This is the minimum figure, because permafrost, due to its content of ice, will typically decrease in size after thawing and becoming a part of the active layer.
The Danish Meteorological Institute has climate models for the period up until 2100 that cover all of Greenland. The model results predict a future climate with an annual summer mean temperature that is 2-3 degrees higher than today.
All things being equal, this translates into an increase in permafrost thawing on the order of 10-30 cm over the next 70 years. The reason for not stating a more precise figure is that the increase in thawing depends on soil type, in particular the water content. The maximum thawing depth is expected in the dry soil types.
Bo Elberling | EurekAlert! |
This is a lithograph about NASA's Magnetospheric Multiscale Mission, or MMS. Learners will cut out and assemble a colorful 3D model of an MMS spacecraft. Web links, additional facts, and QR codes are included for audiences to access more information.
This is an activity about visual analysis. Learners will create art inspired by planetary images while learning to recognize the geology on planetary surfaces. This presentation and accompanying activity use the elements of art - shape, line, color, texture, value - to make sense of features in NASA images, while honing observation skills and inspiring questions.
This is a set of three activities about how scientists study other worlds. Learners will explore and compare the features of Mars and Earth, discuss what the features suggest about the history of Mars, and create a model to help them understand how scientists view other worlds. The activities help to show why scientists are interested in exploring Mars for evidence of past life, and address the question: "Why are we searching for life on Mars?" It also includes specific tips within each activity for effectively engaging girls in STEM. This is activity 4 in Explore: Life on Mars? that was developed specifically for use in libraries.
This module focuses on ultraviolet radiation on Earth and in space and how it affects life. Learners will construct their own "martian" using craft materials and UV beads. They will explore how UV radiation from the Sun can affect living things, comparing conditions on Earth and Mars, and then discuss ways in which organisms may protect themselves from UV radiation. They will then take part in a Mars Creature Challenge, where they will change their creature to help it survive harsh UV conditions — like on Mars. They will then test their Mars creatures by subjecting them to different environmental conditions to see how well they "survive" in a martian environment. This investigation will explore shelter and protection as one of life’s requirements and how Earth’s atmosphere protects life from harmful UV radiation. It also includes specific tips for effectively engaging girls in STEM. This is activity 5 in Explore: Life on Mars? that was developed specifically for use in libraries.
This is an activity about asteroids. Learners will shape mashed potatoes into their own odd-shaped asteroids. They can then bake them in the oven to turn them (more or less) asteroid color, and eat them for dinner.
This is a story about space exploration. Learners will read about missions to asteroids and comets, consider the measurements and math required for the robotic spacecraft to visit these objects, and are invited to finish the story themselves. The provided extension explains how to use a K-W-L chart with the story and provides a glossary of terms.
This 28-page coloring and activity book includes general information on X-ray astronomy, Chandra and the STS-93 mission. It also looks at black holes, supernovas, galaxy clusters and more. Each image is accompanied by a summary of information. Activities include mazes, word searches, connect-the-dots, crossword, code break, and word jumble.
This is a wallsheet that contains 11 activities relating to Mars. Learners could investigate: how far away is Mars, why does Mars have craters, water on Mars, Mars' minerals, how high the mountains are on Mars, and are invited to create a martian calendar and travel guide. |
Nuclear pulse propulsion or external pulsed plasma propulsion is a hypothetical method of spacecraft propulsion that uses nuclear explosions for thrust. It originated as Project Orion with support from DARPA, after a suggestion by Stanislaw Ulam in 1947. Newer designs using inertial confinement fusion have been the baseline for most later designs, including Project Daedalus and Project Longshot.
Calculations for a potential use of this technology were made at the laboratory from the late 1940s to the mid-1950s.
Project Orion was the first serious attempt to design a nuclear pulse rocket. A design was formed at General Atomics during the late 1950s and early 1960s, with the idea of reacting small directional nuclear explosives utilizing a variant of the Teller–Ulam two-stage bomb design against a large steel pusher plate attached to the spacecraft with shock absorbers. Efficient directional explosives maximized the momentum transfer, leading to specific impulses in the range of 6,000 seconds, or about thirteen times that of the Space Shuttle main engine. With refinements a theoretical maximum of 100,000 seconds (1 MN·s/kg) might be possible. Thrusts were in the millions of tons, allowing spacecraft larger than 8×10⁶ tons to be built with 1958 materials.
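These figures are easy to sanity-check. The short sketch below is my own illustration, not from the Orion literature; the only outside number it uses is the Space Shuttle main engine's vacuum specific impulse of roughly 452 s. It converts specific impulse to effective exhaust velocity, which is numerically the impulse delivered per kilogram of propellant:

```python
# Back-of-the-envelope check of the specific-impulse figures quoted above.
# Effective exhaust velocity is v_e = Isp * g0, and the impulse per
# kilogram of propellant equals v_e (in N*s/kg).
G0 = 9.80665  # standard gravity, m/s^2

def exhaust_velocity(isp_seconds: float) -> float:
    """Effective exhaust velocity in m/s for a given specific impulse."""
    return isp_seconds * G0

SSME_ISP = 452.3         # Space Shuttle main engine, vacuum Isp in seconds
ORION_ISP = 6_000        # Orion design figure quoted above
ORION_MAX_ISP = 100_000  # theoretical refined maximum quoted above

print(ORION_ISP / SSME_ISP)                   # ~13.3, i.e. "about thirteen times"
print(exhaust_velocity(ORION_MAX_ISP) / 1e6)  # ~0.98, i.e. roughly 1 MN*s/kg
```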
The reference design was to be constructed of steel using submarine-style construction with a crew of more than 200 and a vehicle takeoff weight of several thousand tons. This single-stage reference design would reach Mars and return in four weeks from the Earth's surface (compared to 12 months for NASA's current chemically powered reference mission). The same craft could visit Saturn's moons in a seven-month mission (compared to chemically powered missions of about nine years). Notable engineering problems that occurred were related to crew shielding and pusher-plate lifetime.
Although the system appeared to be workable, the project was shut down in 1965, primarily because the Partial Test Ban Treaty made it illegal; in fact, before the treaty, the US and Soviet Union had already separately detonated a combined number of at least nine nuclear bombs, including thermonuclear, in space, i.e., at altitudes of over 100 km (see high-altitude nuclear explosions). Ethical issues complicated the launch of such a vehicle within the Earth's magnetosphere: calculations using the (disputed) linear no-threshold model of radiation damage showed that the fallout from each takeoff would cause the death of approximately 1 to 10 individuals. In a threshold model, such extremely low levels of thinly distributed radiation would have no associated ill-effects, while under hormesis models, such tiny doses would be negligibly beneficial. The use of less efficient clean nuclear bombs for achieving orbit and then more efficient, higher yield dirtier bombs for travel would significantly reduce the amount of fallout caused from an Earth-based launch.
One useful mission would be to deflect an asteroid or comet on collision course with the Earth, depicted dramatically in the 1998 film Deep Impact. The high performance would permit even a late launch to succeed, and the vehicle could effectively transfer a large amount of kinetic energy to the asteroid by simple impact. The prospect of an imminent asteroid impact would obviate concerns over the few predicted deaths from fallout. An automated mission would remove the challenge of designing a shock absorber that would protect the crew.
Orion is one of very few interstellar space drives that could theoretically be constructed with available technology, as discussed in a 1968 paper, "Interstellar Transport" by Freeman Dyson.
Project Daedalus was a study conducted between 1973 and 1978 by the British Interplanetary Society (BIS) to design an interstellar unmanned spacecraft that could reach a nearby star within about 50 years. A dozen scientists and engineers led by Alan Bond worked on the project. At the time fusion research appeared to be making great strides, and in particular, inertial confinement fusion (ICF) appeared to be adaptable as a rocket engine.
ICF uses small pellets of fusion fuel, typically lithium deuteride (⁶Li²H) with a small deuterium/tritium trigger at the center. The pellets are thrown into a reaction chamber where they are hit on all sides by lasers or another form of beamed energy. The heat generated by the beams explosively compresses the pellet to the point where fusion takes place. The result is a hot plasma, and a very small "explosion" compared to the minimum size bomb that would be required to instead create the necessary amount of fission.
For Daedalus, this process was to be run within a large electromagnet that formed the rocket engine. After the reaction, ignited by electron beams, the magnet funnelled the hot gas to the rear for thrust. Some of the energy was diverted to run the ship's systems and engine. In order to make the system safe and energy efficient, Daedalus was to be powered by a helium-3 fuel collected from Jupiter.
The Medusa design has more in common with solar sails than with conventional rockets. It was envisioned by Johndale Solem in the 1990s and published in the Journal of the British Interplanetary Society (JBIS).
A Medusa spacecraft would deploy a large sail ahead of it, attached by independent cables, and then launch nuclear explosives forward to detonate between itself and its sail. The sail would be accelerated by the plasma and photonic impulse, running out the tethers as a hooked fish runs out an angler's line, generating electricity at the "reel". The spacecraft would use some of the generated electricity to reel itself up towards the sail, constantly smoothly accelerating as it goes.
In the original design, multiple tethers connected to multiple motor generators. The advantage over the single tether is to increase the distance between the explosion and the tethers, thus reducing damage to the tethers.
For heavy payloads, performance could be improved by taking advantage of lunar materials, for example, wrapping the explosive with lunar rock or water, stored previously at a stable Lagrange point.
Medusa performs better than the classical Orion design because its sail intercepts more of the explosive impulse, its shock-absorber stroke is much longer, and its major structures are in tension and hence can be quite lightweight. Medusa-type ships would be capable of a specific impulse between 50,000 and 100,000 seconds (500 to 1000 kN·s/kg).
Medusa became widely known to the public in the BBC documentary film To Mars By A-Bomb: The Secret History of Project Orion. A short film shows an artist's conception of how the Medusa spacecraft works "by throwing bombs into a sail that's ahead of it".
Project Longshot was a NASA-sponsored research project carried out in conjunction with the US Naval Academy in the late 1980s. Longshot was in some ways a development of the basic Daedalus concept, in that it used magnetically funneled ICF. The key difference was that they felt that the reaction could not power both the rocket and the other systems, and instead included a 300 kW conventional nuclear reactor for running the ship. The added weight of the reactor reduced performance somewhat, but even using LiD fuel it would be able to reach neighboring star Alpha Centauri in 100 years (approx. velocity of 13,411 km/s, at a distance of 4.5 light years, equivalent to 4.5% of light speed).
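As a quick plausibility check of those numbers (my own arithmetic, not a figure from the Longshot study), a velocity of 13,411 km/s does indeed cover 4.5 light-years in roughly a century:

```python
# Does 13,411 km/s cover 4.5 light-years in about 100 years?
LIGHT_YEAR_KM = 9.4607e12  # kilometres per light-year
SECONDS_PER_YEAR = 3.156e7

distance_km = 4.5 * LIGHT_YEAR_KM
velocity_km_per_s = 13_411

travel_years = distance_km / velocity_km_per_s / SECONDS_PER_YEAR
print(round(travel_years, 1))  # ~100.6 years, consistent with the text
```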
In the mid-1990s, research at Pennsylvania State University led to the concept of using antimatter to catalyze nuclear reactions. Antiprotons would react inside the nucleus of uranium, releasing energy that breaks the nucleus apart as in conventional nuclear reactions. Even a small number of such reactions can start the chain reaction that would otherwise require a much larger volume of fuel to sustain. Whereas the "normal" critical mass for plutonium is about 11.8 kilograms (for a sphere at standard density), with antimatter catalyzed reactions this could be well under one gram.
Several rocket designs using this reaction were proposed, some of which would use all-fission reactions for interplanetary missions, and others using fission-fusion (effectively a very small version of Orion's bombs) for interstellar missions.
|Parameter|Value|
|---|---|
|Specific impulse|1,606 s to 5,722 s (depending on fusion gain)|
|Burn time|1 day to 90 days (10 days optimal with gain of 40)|
This rocket concept uses a form of magneto-inertial fusion to produce a direct thrust fusion rocket. Magnetic fields cause large metal rings to collapse around the deuterium-tritium plasma, triggering fusion. The energy heats and ionizes the shell of metal formed by the crushed rings. The hot, ionized metal is shot out of a magnetic rocket nozzle at a high speed (up to 30 km/s). Repeating this process roughly every minute would propel the spacecraft. The fusion reaction is not self-sustaining and requires electrical energy to explode each pulse. With electrical requirements estimated to be between 100 kW and 1,000 kW (300 kW average), designs incorporate solar panels to produce the required energy.
Foil Liner Compression creates fusion at the proper energy scale. The proof of concept experiment in Redmond, Washington, was to use aluminum liners for compression. However, the ultimate design was to use lithium liners.
Performance characteristics are dependent on the fusion energy gain factor achieved by the reactor. Gains were expected to be between 20 and 200, with an estimated average of 40. Higher gains produce higher exhaust velocity, higher specific impulse and lower electrical power requirements. The table below summarizes different performance characteristics for a theoretical 90-day Mars transfer at gains of 20, 40 and 200.
|Total gain|Gain of 20|Gain of 40|Gain of 200|
|---|---|---|---|
|Liner mass (kg)|0.365|0.365|0.365|
|Specific impulse (s)|1,606|2,435|5,722|
|Specific mass (kg/kW)|0.8|0.53|0.23|
|Mass propellant (kg)|110,000|59,000|20,000|
|Mass initial (kg)|184,000|130,000|90,000|
|Electrical power required (kW)|1,019|546|188|
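To see how the gain-of-40 column hangs together, one can run it through the ideal (Tsiolkovsky) rocket equation. The snippet below is my own rough consistency check, assuming ideal-rocket behaviour with no gravity or steering losses; it is not a figure from the study:

```python
# Ideal delta-v implied by the gain-of-40 column above:
# delta_v = Isp * g0 * ln(m_initial / m_final)
import math

G0 = 9.80665           # m/s^2
ISP = 2_435            # s, specific impulse at a gain of 40
M_INITIAL = 130_000    # kg, initial mass at a gain of 40
M_PROPELLANT = 59_000  # kg, propellant mass at a gain of 40

delta_v = ISP * G0 * math.log(M_INITIAL / (M_INITIAL - M_PROPELLANT))
print(round(delta_v / 1000, 1))  # ~14.4 km/s, a plausible budget for a 90-day Mars transfer
```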
By April 2013, MSNW had demonstrated subcomponents of the systems: heating deuterium plasma up to fusion temperatures and concentrating the magnetic fields needed to create fusion. They planned to put the two technologies together for a test before the end of 2013.
Pulsed Fission-Fusion (PuFF) propulsion relies on principles similar to magneto-inertial fusion. It aims to solve the problem of the extreme stress induced on containment by an Orion-like motor by ejecting the plasma obtained from small fuel pellets that undergo autocatalytic fission and fusion reactions initiated by a Z-pinch. It is a theoretical propulsion system researched through the NIAC Program by the University of Alabama in Huntsville. It is in essence a fusion rocket that uses a Z-pinch configuration, but coupled with a fission reaction to boost the fusion process.
A PuFF fuel pellet, around 1 cm in diameter, consists of two components: a deuterium-tritium (D-T) cylinder of plasma, called the target, which undergoes fusion, and a surrounding U-235 sheath that undergoes fission, enveloped by a lithium liner. Liquid lithium, serving as a moderator, fills the space between the D-T cylinder and the uranium sheath. When current is run through the liquid lithium, a Lorentz force is generated that compresses the D-T plasma by a factor of 10 in what is known as a Z-pinch. The compressed plasma reaches criticality and undergoes fusion reactions. However, the fusion energy gain (Q) of these reactions is far below breakeven (Q < 1), meaning that the reaction consumes more energy than it produces.
In a PuFF design, the fast neutrons released by the initial fusion reaction induce fission in the U-235 sheath. The resultant heat causes the sheath to expand, increasing its implosion velocity onto the D-T core and compressing it further, releasing more fast neutrons. Those again amplify the fission rate in the sheath, rendering the process autocatalytic. It is hoped that this results in a complete burn up of both the fission and fusion fuels, making PuFF more efficient than other nuclear pulse concepts. Much like in a magneto-inertial fusion rocket, the performance of the engine is dependent on the degree to which the fusion gain of the D-T target is increased.
One "pulse" consist of the injection of a fuel pellet into the combustion chamber, its consumption through a series of fission-fusion reactions, and finally the ejection of the released plasma through a magnetic nozzle, thus generating thrust. A single pulse is expected to take only a fraction of a second to complete.
Common Core Standards: Math
Expressions and Equations 6.EE.A.3
3. Apply the properties of operations to generate equivalent expressions. For example, apply the distributive property to the expression 3(2 + x) to produce the equivalent expression 6 + 3x; apply the distributive property to the expression 24x + 18y to produce the equivalent expression 6(4x + 3y); apply the properties of operations to y + y + y to produce the equivalent expression 3y.
This standard is for all those students who feel that math doesn't give them enough freedom of expression. You know, those kids who complain they can't use their artist's palette in the classroom, and that algebra "imprisons their creative spirit." Well, here's their chance to shine.
For this standard, we expect students to play (and we mean really play) with the properties of mathematical operations. It's almost like playing with lasers and anti-gravity boots, but with fewer potential injuries. Not none, but certainly fewer.
In order to apply the properties of operations, students need to be familiar with them. If students don't know the associative, commutative, and distributive properties yet, now's the time to teach 'em. Combining like terms is also, you know, kind of important. Only after they can recognize and identify these properties will they be able to play around with them freely.
As students use these properties to create equivalent expressions, stress that these expressions are equivalent because no matter how we evaluate them, they'll always end up equaling each other. This is true of numerical expressions and variable expressions; if students don't believe you, have them plug in values for the variable and they'll see that the two expressions should evaluate to give the same result every single time. (We'll get more into that in the next standard.)
The beauty of generating equivalent expressions is that there is literally an infinite number of ways to write the same expression. There might be two or three obvious choices, like rewriting y + y + y as 3y, but we can also write the same expression as y(1 + 1 + 1) or 3y + 0 or even ½y + ½y + ½y + ½y + ½y + ½y. As students get more and more comfortable, encourage them to explore less conventional ways of writing the same expressions and testing them for equivalence.
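For teachers who like a quick classroom demo, here is a minimal sketch (my own, in Python) of the "plug in values" test described above. Note the hedge built into the name: agreement on sampled values suggests, but does not prove, equivalence.

```python
# Spot-check whether two expressions agree on a handful of sampled values.
def probably_equivalent(f, g, samples=range(-5, 6)):
    """True if f and g agree on every sampled input."""
    return all(f(v) == g(v) for v in samples)

# y + y + y versus 3y
print(probably_equivalent(lambda y: y + y + y, lambda y: 3 * y))        # True

# 3(2 + x) versus 6 + 3x -- the distributive property at work
print(probably_equivalent(lambda x: 3 * (2 + x), lambda x: 6 + 3 * x))  # True

# 3y versus 3y + 1 -- one counterexample is enough to reject equivalence
print(probably_equivalent(lambda y: 3 * y, lambda y: 3 * y + 1))        # False
```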
So next time your students think math is too limiting, give 'em an expression and have 'em go nuts.
- ACT Math 1.2 Elementary Algebra
- ACT Math 2.1 Elementary Algebra
- ACT Math 2.2 Elementary Algebra
- ACT Math 6.1 Elementary Algebra
- CAHSEE Math 1.2 Algebra I
- CAHSEE Math 2.1 Mathematical Reasoning
- CAHSEE Math 2.1 Measurement and Geometry
- CAHSEE Math 2.1 Statistics, Data, and Probability I
- CAHSEE Math 2.2 Measurement and Geometry
- CAHSEE Math 2.2 Statistics, Data, and Probability I |
You might wonder how a laboratory 250 miles above Earth could help us study and observe our home planet, but the space station actually gives us a unique view of the blue marble we call home.
The space station is part of a fleet of Earth remote-sensing platforms to develop a scientific understanding of Earth’s systems and its response to natural or human-induced changes and to improve prediction of climate, weather and natural hazards. Unlike automated remote-sensing platforms, the space station has a human crew, a low-orbit altitude and orbital parameters that provide variable views and lighting. Crew members have the ability to collect unscheduled data of an unfolding event, like severe weather, using handheld digital cameras.
The Cupola, seen above, is one of the many ways astronauts aboard the space station are able to observe the Earth. This panoramic control tower allows crew members to view and guide operations outside the station, like the station’s robotic arm.
The space station also has an inclined, sun-asynchronous orbit, which means that it travels over 90% of the inhabited surface of the Earth, and allows for the station to pass over ground locations at different times of the day and night. This perspective is different from and complementary to that of other orbiting satellites.
The space station is also home to a few Earth-observing instruments, including:
The ISS-RapidScat monitors ocean winds for climate research, weather prediction and hurricane science. This vantage point gives scientists the first near-global direct observations of how ocean winds can vary over the course of the day, while adding extra eyes in the tropics and mid-latitudes to track the formation and movement of tropical cyclones.
CATS (Cloud-Aerosol Transport System) is a laser instrument that measures clouds and airborne particles such as pollution, mineral dust and smoke. Improving cloud data allows scientists to create more accurate climate models, which in turn, will improve air quality forecasts and health risk alerts.
Want to observe the Earth from a similar vantage point? You can, thanks to our High Definition Earth-Viewing System (HDEV). This experiment is mounted on the exterior of the space station and includes several commercial HD video cameras aimed at the Earth.
Today, three new crew members will launch to the International Space Station. NASA astronaut Jeff Williams, along with Russian cosmonauts Alexey Ovchinin and Oleg Skripochka, are scheduled to launch from the Baikonur Cosmodrome in Kazakhstan at 5:26 p.m. EDT. The three Expedition 47 crew members will travel in a Soyuz spacecraft, rendezvousing with the space station six hours after launch.
Traveling to the International Space Station is an exciting moment for any astronaut. But what if you were launching to orbit AND knew that you were going to break some awesome records while you were up there? This is exactly what’s happening for astronaut Jeff Williams.
This is a significant mission for Williams, as he will become the new American record holder for cumulative days in space (with 534) during his six months on orbit. The current record holder is astronaut Scott Kelly, who just wrapped up his one-year mission on March 1.
On June 4, Williams will take command of the station for Expedition 48. This will mark his third space station expedition…which is yet another record!
Want to Watch the Launch?
You can! Live coverage will begin at 4:30 p.m. EDT on NASA Television, with launch at 5:26 p.m.
Tune in again at 10:30 p.m. to watch as the Soyuz spacecraft docks to the space station’s Poisk module at 11:12 p.m.
Hatch opening coverage will begin at 12:30 a.m., with the crew being greeted around 12:55 a.m. |
Guide to Finding Lesson Plans
According to the Facts on File Dictionary of Education, a curriculum guide differs from a lesson plan in that it includes “one or more aspects of curriculum and instruction, such as philosophy, policies, aims, objectives, subject matter, resources and processes” (p. 138), while a lesson plan “includes the instructional objectives and methods for a particular functional unit or period of instruction” (p. 271). The Social Sciences, Health, and Education Library (SSHEL) has many materials containing lesson plans, primarily in the form of curriculum guides, in both print and microfiche formats. However, lesson plans can also be found on various websites. This guide explains how to find lesson plans in print, online, and on microfiche.
Many lesson plans are embedded within curriculum guides. If you are looking for lesson plans on a specific topic, see the Guide to the Curriculum Collection which includes a “Call Number Guide by Subject.” Find the call number that corresponds to your subject. For example, if your subject is American history, the call number will be 973. Then you can either use the Online Library Catalog and run a call number search to see titles within that call number range, or you can browse bookshelves in the Curriculum Collection in Room 112, Main Library. Curriculum guides are shelved separately from the textbooks and other materials and begin with the prefix “CURR.” Look at the Map of the School (S-) and Curriculum Room to determine where they are located.
Common Core State Standards
This free curriculum planning program allows teachers to design and develop lesson plans collaboratively. Teachers can also quickly search Common Core standards within the program to ensure their lessons align with state standards for education. In addition, lesson plans can be easily updated and altered to fit students’ needs. Individual teacher registration is required for free access. Schools can opt to pay a fee that allows an unlimited number of collaborators to work together on lesson plans.
K-5 Math Teaching Resources
Free, printable resources, activities, and games to supplement the Common Core State Standards in mathematics for grades K-5.
Learn Zillion contains video lessons and assessment tools for teaching to the Common Core State Standards in grades 3-12. The site was started at the E.L. Haynes Public Charter School in Washington, D.C., and allows teachers from around the United States to contribute lessons. Users can browse lessons by grade level, domain, or specific Standard. Many lessons include additional resources such as slides, parent letters, and discussion protocol. Users can create an account and use the site to assign video lessons to students and track student progress.
ReadWorks is a non-profit organization that provides free lesson plans and teaching units on reading comprehension for grades K-6. The site is searchable and browsable by topic, grade, and standard. Lesson plans align with the Common Core State Standards and state standards.
Share My Lesson: Common Core State Standards Information Center
Share My Lesson was developed by the American Federation of Teachers and TES Connect, an online network of educators, and provides a space for teachers to share lesson plans and teaching resources. Share My Lesson contains lesson plans aligned to the Standards and created by and for teachers. In addition, users can browse lesson plans by state to find lesson plans that align with current standards. Users must create an account to use this free site.
Teaching Channel: Confused About Common Core?
A collection of over 270 videos from the Teaching Channel with information about and suggested activities that align with the Common Core State Standards.
A searchable collection of activities, student work and instructional units aligned to the CCSS and created by educators in New York City.
ReadWriteThink is a website focused on literacy for K-12 students. It provides detailed, research-based lesson plans that can be searched by grade level as well as area of literacy practice. The site also includes a wide variety of web resources, including instructional, reference, professional development, and interactive student resources.
Story Arts Online
Story Arts Online provides lesson plans and activities to help teachers incorporate storytelling in the classroom to teach language arts. The site was created by storyteller and author Heather Forest and funded by Bell Atlantic Foundation. The site also has suggestions as to how to use storytelling to teach math, science, social studies and the arts, and includes concise folktale plots and Aesop’s fables as retold by Forest.
This website allows users to search specific books or authors and provides supplemental reading sources and activities, such as author interviews, lesson plans, audio readings, and related booklists. A great resource for K-12 teachers and school or public librarians.
MSTE Online Resource Catalog
The Office for Mathematics, Science, and Technology Education (MSTE) at the University of Illinois at Urbana-Champaign has developed a number of interactive resources for aiding K-12 students in their math and science education. These lessons range from number and operations to data analysis and probability in mathematics and from basic chemistry to physics in science.
United States Mint Lesson Plans
This site uses U.S. coins to teach basic math and counting. Each lesson plan has been contributed by teachers and includes grade level and national standards information.
This site provides lesson plans for most subject areas and includes links to other information sources that can be drawn upon to create original lesson plans. The extensive listings about multicultural holidays and current events are especially useful.
Common Sense Media
Common Sense Media is a not-for profit organization dedicated to providing information and education to help children and teens navigate media and technology. Their site provides cross-curricular lesson plans that address digital literacy and citizenship for students in grades K-12. The lesson plans are free, research-based, and aligned to the Common Core State, International Society of Technology Education, and American Association of School Librarians Standards. Topics covered are: Internet safety, privacy and security, relationships and communication, cyberbullying, digital footprint and reputation, self-image and identity, information literacy, and creative credit and copyright.
Developed by the National Endowment for the Humanities and other sources, this site contains links to 49 of the “top humanities sites” and lesson plans in the areas of history, English and language arts, foreign languages and art history. It also includes learning guides that provide tips for using sites for designing class curricula and activities. Sites are searchable.
Gooru provides an open and collaborative learning community online that makes available free K-12 educational materials. Educators can find interactive materials for instruction that are standards-aligned and can share personalized, custom collections keyed to the needs of their students. Interactive lessons are available in the disciplines of science, math, the social sciences, and language arts. Common Core Standards are available for math instruction from grades 6 to 12. Lessons are also available from a number of partner libraries, which are accessible directly through the Gooru site.
Kennedy Center Digital Resources
The Kennedy Center Digital Resources provides a wide range of teaching resources with a special focus on the arts for grades K-12. These resources include print lesson plans, audio stories, video clips, and interactive online modules and cover topics from dance, theater, and music to literary arts, media arts, and visual arts. In addition to the arts, this website also has teaching resources for other disciplines, such as English, geography, history, information education, language arts, math, physical education, science, social studies, technology, and world languages.
Lesson Plan Library
The Lesson Plans Library contains lessons for grades K-12 in common and not so common subjects. Plans range in subject from literature and math to forensic science and meteorology. Written by teachers and educators, these lesson plans are both comprehensive and easy to follow. Most plans define what national academic standards the lesson plans meet. In addition, this site is also linked to several other “teaching tools” from The Discovery Channel.
Library of Congress Lesson Plans
The Library of Congress has teacher-created, classroom-tested lesson plans on United States social studies, geography, science, sports and recreation, journalism, and literature, among other subjects. All of the lesson plans use primary sources that can be found at the Library of Congress and are provided with each lesson plan. Lesson plans can be searched by topic or by era. Grades 3-12 are targeted. Standards can be found by searching within each lesson plan for state, grade, and subject.
Peace Corps WorldWise Schools Lesson Plans
Based on lessons used by teachers in the Peace Corps, this site provides nearly 300 standards-based lesson plans. Different concepts and subjects are illustrated using examples from regions and cultures. Searchable by grade level and subject area.
Smithsonian Education Lesson Plans
The Smithsonian Institution has many resources for educators, including hundreds of lesson plans in all subject areas and grades from preK-12. Lesson plans are searchable by subject and grade level and each lesson plan includes all of the materials needed (photographs, handouts, suggested strategies, reproductions, activities, standards information, and additional online resources). Lesson plans are created around an inquiry-based learning model and make extensive use of primary sources and museum artifacts.
United States Department of Agriculture Teacher Center
This website, produced by the nation’s experts in the field of agriculture, includes nearly 200 lesson plans for grades K-12 on all aspects of agriculture and agricultural history. Most lessons focus on facets of the American agricultural system; however, there are several lessons on agriculture around the world. Lesson plans include science experiments, Web Quests, introductions to careers in agriculture, and agriculture as an aspect of the global economy.
U.S. Census Bureau: Statistics in Schools
The Statistics in Schools (SIS) program provides teachers with resources to help promote statistical literacy in K-12 classrooms. Through the combined effort of the U.S. Census Bureau and teachers from across the United States, this online resource offers free and customizable lesson plans, activities, and other resources designed to prepare students for a data-driven world. These resources are primarily organized by grade level (K-12) and subject, including math, English, history, geography, and sociology. This program also contains select resources for the Pre-K level in both English and Spanish, as well as for ELL and adult ESL learners.
USA.gov Resources for Teachers
This site provides links to a variety of free lesson plans and teacher resources, primarily designed for K-8 students. In addition to subjects like history, math, and science, some of the other topics covered include money management, health & safety, and online safety.
The Concord Consortium
The Concord Consortium is a non-profit educational research and development organization based in Massachusetts. The STEM resource finder on the Consortium’s site provides free, open source educational activities, software, and models for teaching STEM subjects to elementary school through college students. The resources are keyword-searchable and allow users to browse by subject and grade level. Also includes a tool to find activities and models that adhere to the Next Generation Science Standards.
eGFI is a website provided by the American Society for Engineering Education, and it contains engineering lesson plans and class activities for K-12 teachers. Lessons and activities incorporate engineering concepts to teach math and science skills. The site also provides a list of engineering and technology outreach programs and web resources for teachers and students.
Provides environmental games appropriate for children in grades K-12 such as game shows, crossword puzzles, word searches and matching endangered species. Also contains teacher resources and lesson plans on the environment and science.
Spanning back to the beginning of recorded history, this comprehensive website provides everything you have ever wanted to know about food including history, law and regulation, inventions, nutrition, and historic cookbooks/recipes. Also included are lesson plans, a food reference guide, and a list of libraries and museums that specialize in food.
Teaching Earth Science: Classroom Activities and Lesson Plans
This website provides a range of lesson plans based on geography, geology, astronomy, and other earth sciences using maps, satellite images, and other projections. Also provides links to current topics in the earth sciences.
Includes a collection of more than 100 engineering-related lesson plans for students ages 8-18. Each plan provides a lesson focus, synopsis, target age levels, learning objectives, learner outcomes, activities, materials, and alignments to curriculum frameworks. Lesson plan topics include mazes, search engines, sorting, decision trees, periscopes, tennis, and more.
This site, administered by the U.S. Geological Survey, provides educational resources in areas like Biology, Geography, Geology, and Water. Resources are arranged by grade level. While some links are not strictly lesson plans, many of the resources listed on this site can be incorporated in the classroom as a supplement to an existing Earth Science curriculum.
These lesson plans, developed by the National September 11 Memorial & Museum, the New York City Department of Education, and the New Jersey Commission on Holocaust Education, are written for use throughout the school year and across disciplines. Each lesson adheres to the Common Core Standards and remains grounded in the collections of the museum. These lessons do not require visits to the museum and can be used in any classroom.
Compiled by the Public Broadcasting System (PBS), this page provides lesson plans for teachers who wish to teach about issues such as war, patriotism, peace, and tolerance.
Citizens, Not Spectators; Center for Civic Education
These lesson plans introduce students to the basics of the voting process and provide hands-on opportunities for students to experience registering and casting votes. Lesson plans are available for elementary, middle, and high school students. These lesson plans can be downloaded freely as zipped files. In addition, individual documents from each lesson can be found in the Resource Bank on this site.
The Civil War Curriculum
The Civil War Curriculum page offers links to lesson plans related to the American Civil War for students from elementary to high school. Lesson plans include a complete outline, College and Career Readiness (CCR) anchor standards, National Council for the Social Studies (NCSS) standards, related resources, and downloadable content for each lesson. The entire Civil War Curriculum can also be downloaded by registering for free with the Civil War Trust.
Digital Cultural Heritage Community Curriculum Units
This extensive historical website provides resources for teachers who want to make learning about history interesting and exciting. Key features include an interactive timeline, online textbook, and history reference room. Includes resource guides, lesson plans and classroom handouts.
Federal Reserve Bank of St. Louis Education Resources
This resource contains in-depth lesson plans about various aspects of economics, including employment growth, income taxes, and supply and demand. Additionally, the website offers a podcast series called The Economic Lowdown which gives students understandable explanations and real-world examples of economic and finance principles. The site also includes resources and activities for both students and teachers.
Federal Reserve Education
This website, maintained by the United States Federal Reserve, contains numerous K-12 and college lesson plans and publications on subjects such as banking, economics, government, money, and personal finance. Resources can be found by typing keywords into the search bar. Additionally, the Resources by Audience tab allows users to search for lesson plans and activities by grade level.
Historic Maps in K-12 Classrooms
Designed specifically to support basic map and information acquisition skills at the K-12 levels, this website provides lesson plans based on 18 different maps. Divided into six different themes, each map contains several lessons for grades K-12.
The National Archives Experience: Docs Teach
Provides activities and over 8,000 primary source documents from the United States National Archives for use in the classroom. Users can find digitized primary source written documents, images, maps, charts, graphs, audio, and video that span the course of American history. Users can also find ready-to-use activities, or alter pre-existing activities to fit their needs.
National Council for the Social Studies
The National Council for the Social Studies (NCSS) supports elementary, secondary, and college teachers of history, geography, economics, political science, sociology, psychology, anthropology, and law. Their website includes K-12 lesson plans, as well as links to a host of additional resources. Lesson plans focus on current events or “teachable moments,” as well as historical events.
National History Education Clearinghouse (NHEC)
Teachinghistory.org is designed to help K–12 history teachers access resources and materials to improve U.S. history education in the classroom. With funding from the U.S. Department of Education, the Center for History and New Media (CHNM) and the Stanford University History Education Group have created the Clearinghouse with the goal of making history content, teaching strategies, resources, and research accessible.
Smithsonian’s History Explorer
Based on items at the National Museum of American History, this website brings the museum’s collections and research into your classroom. In addition to the tour guides, there are plenty of lesson plans and classroom curriculum suggestions.
Stanford History Education Group
The Stanford History Education Group site includes over 130 document-based lesson plans on US and World History, 80 history assessments featuring Library of Congress documents, and assessments on measuring student reasoning about online information. Lesson plans and assessments are free with an account. “The Stanford History Education Group is an award-winning research and development group that comprises Stanford faculty, staff, graduate students, post-docs and visiting scholars.”
Teaching with Historic Places
Contains over 100 free middle school lesson plans in the areas of history, social studies, and geography. Lessons are based on sites listed in the National Register of Historic Places and include maps, readings, photographs, questions, and activities. Each plan is linked to national standards in the relevant subject area.
United States Holocaust Memorial Museum
Prepared by the United States Holocaust Memorial Museum, this website provides helpful tips for educators that will be teaching the Holocaust to middle and high school students. There are also links to lesson plans provided on the site.
There are three microfiche collections of curriculum guides in the Social Sciences, Health, and Education Library (SSHEL) that may also be searched for lesson plans or other instructional materials. These are the Curriculum Development Library, the ERIC microfiche collection, and the American Primers collection. All microfiche are stored in Room 104, Main Library.
Curriculum Development Library
This microfiche collection of pre-K-12 curriculum guides covers a variety of subjects, including traditional areas (social sciences, mathematics, etc.) and other areas (Bilingual/English as a second language, special education, etc.). Curriculum guides covering 1978-2000 are available exclusively on microfiche in SSHEL.
ERIC Microfiche Collection
Many ERIC documents on microfiche contain lesson plans and classroom materials. When searching the ERIC database, type your subject keyword and combine it with the appropriate descriptor term(s) using the AND operator. EXAMPLE: mathematics and (lesson plans or problem sets). More descriptor terms can be found in the Thesaurus of ERIC Descriptors (025.36 U5874t). The following is a list of descriptor terms that may be helpful:
- lesson plans
- curriculum guides
- state curriculum guides
- instructional materials
- teacher developed materials
- bilingual instructional materials
- study guides
- teaching guides
- learning modules
- class activities
- educational games
- course content
American Primers Collection
To search the American Primers collection, find American Primers: a Guide to the Microfiche Collection (MFICHE428.6 Am35 index). It is located in Room 104 on top of the microfiche cabinets. Although this microfiche collection is mainly used for finding old textbooks and reading primers, it has a limited number of teaching manuals (lesson plans, teaching methods, learning games and activities, teacher’s guides to accompany primers) from the 1700s to the mid-1930s. It mostly contains introductory reading materials from that period, such as primers, spellers, and alphabet books. However, it may be a useful resource if you are searching for historical curriculum materials.
Since the 1970s, the Mathematical Association of America’s (MAA) journals Mathematics Magazine and College Mathematics Journal have published “Proofs without Words” (PWWs) (Nelsen 1993). “PWWs are pictures or diagrams that help the reader see why a particular mathematical statement may be true and how one might begin to go about proving it true” (Nelsen 2000, p. i). This article explains how I created and implemented a variation of using diagrams without words to facilitate the development of high school geometry students’ reasoning and proof skills.

The Common Core (CCSSI 2010a) lists approximately twenty geometric theorems for students to prove (e.g., standards G.CO.9, G.CO.10, and G.CO.11) as well as the more general standard of using congruence and similarity criteria for triangles to prove relationships in geometric figures (i.e., standard G.SRT.5). Furthermore, Appendix A of the Common Core (CCSSI 2010b) states that teachers should “encourage multiple ways of writing proofs, such as in narrative paragraphs, using flow diagrams, in two-column format, and using diagrams without words” (p. 29).

I used the following PWW, adapted from Nelsen (see fig. 1), when my Honors Geometry students studied right triangle trigonometry. Using the figure, students prove that sin(2θ) = 2 cos θ sin θ and cos(2θ) = 2(cos θ)² − 1, given a semicircle with center O and radius 1. (For further suggestions on how to use PWWs with students, see Bell 2011.)

Even though the Common Core’s Appendix A states that students should use diagrams without words, I was initially unsure of how I could use them for high school geometry theorems. As with figure 1, most PWWs that I was familiar with are a single diagram and seemed to fit the traditional norm of providing students with the diagram along with the “given” and the “prove” statements. After some thinking, I realized that a group of cards, each with the same base diagram but with different labels or markings, could be arranged in a flowchart to “prove” a theorem. For example, consider how you would arrange the four cards in figure 2. Is there more than one way to arrange the cards? What theorems can you prove? Four different ways to order the cards suggest meaningful reasoning.

Figure 3 shows that when two parallel lines are cut by a transversal, the alternate interior angles are congruent. The diagrams lead to an explanation of why this is true: Lines AC and BD are parallel and cut by a transversal (card d), so we have corresponding angles formed by parallel lines and ∠1 ≅ ∠2 (card b). Also, vertical angles are congruent, so ∠1 ≅ ∠3 (card a). Thus, by the transitive property, we can write ∠2 ≅ ∠3 (card c). On the other hand, figure 4 shows that a different arrangement of the cards can illustrate the converse: if two lines intersect a third line and form congruent alternate interior angles (card c), then the two lines are parallel (card d). In yet other arrangements, the cards can demonstrate the corresponding angles theorem and its converse.

After my students have arranged the cards, the diagram guides them to write the accompanying statements and reasons in paragraph, two-column, or flowchart form. In my geometry class, I use PWWs as a bridge between informal reasoning and writing the complete proof. Before the five-part proof progression begins, students make many conjectures in small-group investigations, both with and without dynamic geometry software.
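Since the published figure is not reproduced here, the following worked derivation is a standard reconstruction of the semicircle argument; the labeling (diameter AB, center O, point P on the semicircle with inscribed angle ∠PAB = θ, and F the foot of the perpendicular from P to AB) is an assumption for illustration, not taken from the article.

```latex
\begin{align*}
\angle POB &= 2\theta && \text{(inscribed-angle theorem)}\\
PF = \sin 2\theta,\quad OF &= \cos 2\theta && \text{(right triangle $OPF$; hypotenuse $OP = 1$)}\\
AP &= AB\cos\theta = 2\cos\theta && \text{($\triangle APB$ has a right angle at $P$)}\\
\sin 2\theta = PF &= AP\sin\theta = 2\cos\theta\sin\theta && \text{(right triangle $APF$)}\\
1 + \cos 2\theta = AO + OF = AF &= AP\cos\theta = 2\cos^2\theta
  && \Longrightarrow\ \cos 2\theta = 2\cos^2\theta - 1
\end{align*}
```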
During the progression, students prove most of these conjectures (a few of them multiple times, using different representations). In the first three stages of the progression, students reason informally with diagrams, put proofs in order when provided with the statements and reasons, and fill in missing details of given proofs. The fourth stage uses PWWs as scaffolding to help students write all the statements and reasons of a proof formally. In the final stage, students write complete proofs as well as critique and evaluate full proofs.

Depending upon the amount of scaffolding that students need and the complexity of the proof, I use PWWs in different ways within the fourth stage of the progression. Minimally, as with the activity associated with figure 2, I give students the set of cards and ask for arrangements that prove as many different results as possible. To add more support, I have sets of cards that also include a statement of what students are to prove. For example, figure 5 shows the nine cards, plus the card with the theorem, already arranged in a flowchart proof. This PWW illustrates that if a quadrilateral has a pair of parallel and congruent sides, then it is a parallelogram. Note that card a and card g are the same. Duplicate cards emphasize that the parallel sides are used at two different points in the proof: first, as part of the given information (card a), and then as a condition to apply the definition of a parallelogram (card g).

For some theorems listed in the Common Core that are more complicated for students to prove, I provide a flowchart arrangement of the cards, and students need only write the statements and reasons by analyzing the diagrams. I also use bold print and color in the diagram to help students focus on important parts. Figure 6 shows the PWW cards for the theorem “the medians in a triangle are concurrent.” Before students begin working, I explain that cards a and c show two of the medians intersecting at F. Then, based on card b, I explain that a segment is extended to H, and that if we can conclude that intersection point I is a midpoint, then the third segment is also a median. Thus, all three medians in the triangle are concurrent at F. It would be nearly impossible for students (and most geometry teachers, including me) to come up with the proof themselves, but by using PWWs along with the short explanation I give before students begin, my students are able to write the statements and reasons needed to complete the proof. The proof in figure 6 uses two previously proven theorems (both also found in the Common Core): namely, the midsegment in a triangle is parallel to the third side (cards f and g), and the diagonals of a parallelogram bisect each other (card i).

Finally, to increase the complexity of the task, I include more cards than necessary to form a single proof. Figure 7 shows four of fifteen different cards that students can use to prove a geometric fact. (All fifteen cards are available as more4U content at www.nctm.org/mt.) Some cards in this set are extraneous: card k, for example, shows AAA similarity, which would not be needed for proofs of congruence. I ask each student to glue cards onto paper in a flowchart that proves a result. Figure 8 shows how one student used eight cards to prove that, given two parallel segments, if B is the midpoint of one of the transversal segments joining them, then B is also the midpoint of the other. Next, students exchange papers, and they write statements and reasons corresponding to the flowchart.
I also instruct them to make corrections if they think anything is wrong with the flowchart they received in the exchange. Figure 9 shows a student’s statements and reasons for the PWW (fig. 8) created by a classmate.

As an alternative activity, I give students multiple copies of the same PWW cards and ask them to prove as many different results as possible, each time using a new set of cards. In the past, many of my students struggled to understand how the proofs for a theorem and its converse were different. However, students can use the same set of cards to create paired PWWs when I introduce a two-sided card, printed on a different sheet of colored paper to attract their attention. For example, in figures 10 and 11, different sides of card f are used to prove the two parts of this biconditional statement: A quadrilateral is a parallelogram if and only if the diagonals bisect each other. The forward implication (fig. 10) is read from the top down, whereas the converse is read from the bottom up (fig. 11). Note three differences between the two flowcharts: card f has been turned over, card e is moved, and the direction of the arrows is reversed. Also note that in the forward direction, card c uses a previously proven theorem (the opposite sides of a parallelogram are congruent).

Proving biconditionals this way has benefited students who are unsure whether the congruent alternate interior angles or the parallel lines should come first in a proof. Some students have incorrectly attempted to generalize that one must always follow the other no matter what statement they are proving. When I am implementing a set of cards that prove a biconditional, I do not tell the students what to prove; instead, I ask them to use the cards to prove a result. When they have something ready in flowchart form, I check it, offering feedback if necessary. When it is correct, students write the proof on paper, and then I check it. Once that flowchart and proof are correct, I ask students to try to reverse the proof. That is, they start with the conclusion they just proved as given information, and they try to prove that the originally given assumptions hold true. Because they created both proofs, they are able to see the two directions more clearly. Afterward, as part of a class discussion, I use a large bulletin board to display full-page versions of their PWWs. Student volunteers come up one at a time to begin arranging the cards. Once we have one direction of the proof on the bulletin board, we discuss reversing the proof. Student ownership of this PWW, and others like it that I have created, has helped students understand the differences between the proofs of a theorem and its converse.

The PWW approach to proof supports Sinclair, Pimm, and Skelin’s (2012) notion that working with diagrams is central to geometric thinking. According to Sinclair and colleagues, diagrams are artifacts that have a story to tell, but we must interpret the diagrams to learn the story. They suggest that although diagrams sit in silence, we need to get better at hearing what they are telling us. PWWs put diagrams in the foreground instead of relegating them to the background, where “marking the diagram” is only a part of the ritualized norm of “doing proof” (Herbst et al. 2009). PWWs emphasize that interpreting diagrams and reasoning are at the heart of geometric proof. The cards are a collection of individual still frames that students can organize to tell a coherent story of why a particular geometric invariance must be true.
Based upon the results from classwork, homework, and quizzes, my students have been very successful with proof when using PWWs. I believe this success is due to three reasons. First, arranging a PWW to prove a result is akin to solving a puzzle, and most people (including my students) enjoy working on puzzles and recreational mathematics with physical models. Second, focusing on diagrams initially frees students from getting bogged down in the formality of writing statements and reasons, even though they will have much of the same discussion in their groups as they complete a PWW. Third, working on PWWs is low-risk for students because, based upon discussion in their groups and feedback from me, they can continue to revise the arrangement of the pieces of the PWW before they begin the final process of writing down the statements and reasons. In summary, PWWs, such as the five examples described in this article, have made proof more accessible to more of my students while supporting them in developing their reasoning skills and in seeing the structure of how proofs are “assembled.”

ACKNOWLEDGMENTS

The author would like to thank his Troy High School colleagues, Brian Huelskamp and Samantha Potocek, for field-testing various versions of these PWWs with their students. The author also would like to thank participants in previous conference sessions and workshops for their feedback on preliminary versions of these PWWs.

REFERENCES

Bell, Carol J. 2011. “Proofs without Words: A Visual Application of Reasoning and Proof.” Mathematics Teacher 104 (9): 690–95.

Common Core State Standards Initiative (CCSSI). 2010a. Common Core State Standards for Mathematics. Washington, DC: National Governors Association Center for Best Practices and the Council of Chief State School Officers. http://www.corestandards.org/wp-content/uploads/Math_Standards1.pdf

———. 2010b. Common Core State Standards for Mathematics Appendix A. Washington, DC: National Governors Association Center for Best Practices and the Council of Chief State School Officers. http://www.corestandards.org/assets/CCSSI_Mathematics_Appendix_A.pdf

Herbst, Patricio, Chialing Chen, Michael Weiss, and Gloriana Gonzalez, with Talli Nachlieli, Maria Hamlin, and Catherine Brach. 2009. “Doing Proofs in Geometry Classrooms.” In Teaching and Learning Proof across the Grades: A K–16 Perspective, edited by Despina Stylianou, Maria L. Blanton, and Eric J. Knuth, pp. 250–68. Studies in Mathematical Thinking and Learning series. New York: Routledge.

Nelsen, Roger B. 1993. Proofs without Words: Exercises in Visual Thinking. Washington, DC: Mathematical Association of America.

———. 2000. Proofs without Words II: More Exercises in Visual Thinking. Washington, DC: Mathematical Association of America.

Sinclair, Nathalie, David Pimm, and Melanie Skelin. 2012. Developing Essential Understanding of Geometry for Teaching Mathematics in Grades 9–12. Essential Understanding series. Reston, VA: National Council of Teachers of Mathematics.

WAYNE NIRODE, Nirodeemail@example.com, teaches mathematics at Troy High School in Troy, Ohio. His interests include geometry, statistics, and technology.
Dinosaurs are a diverse group of reptiles of the clade Dinosauria. They first appeared during the Triassic period, between 243 and 233.23 million years ago, although the exact origin and timing of the evolution of dinosaurs is the subject of active research. They became the dominant terrestrial vertebrates after the Triassic–Jurassic extinction event 201.3 million years ago; their dominance continued through the Jurassic and Cretaceous periods. The fossil record demonstrates that birds are modern feathered dinosaurs, having evolved from earlier theropods during the Late Jurassic epoch. As such, birds were the only dinosaur lineage to survive the Cretaceous–Paleogene extinction event approximately 66 million years ago. Dinosaurs can therefore be divided into avian dinosaurs, or birds; and non-avian dinosaurs, which are all dinosaurs other than birds.
Temporal range: Late Triassic–present, 233.23–0 Mya (range includes birds (Aves); a possible Middle Triassic record exists)
A collection of fossil dinosaur skeletons. Clockwise from top left: Microraptor gui (a winged theropod), Apatosaurus louisae (a giant sauropod), Edmontosaurus regalis (a duck-billed ornithopod), Triceratops horridus (a horned ceratopsian), Stegosaurus stenops (a plated stegosaur), Pinacosaurus grangeri (an armored ankylosaur)
Dinosaurs are a varied group of animals from taxonomic, morphological and ecological standpoints. Birds, at over 10,000 living species, are the most diverse group of vertebrates besides perciform fish. Using fossil evidence, paleontologists have identified over 500 distinct genera and more than 1,000 different species of non-avian dinosaurs. Dinosaurs are represented on every continent by both extant species (birds) and fossil remains. Through the first half of the 20th century, before birds were recognized to be dinosaurs, most of the scientific community believed dinosaurs to have been sluggish and cold-blooded. Most research conducted since the 1970s, however, has indicated that all dinosaurs were active animals with elevated metabolisms and numerous adaptations for social interaction. Some were herbivorous, others carnivorous. Evidence suggests that all dinosaurs were egg-laying; and that nest-building was a trait shared by many dinosaurs, both avian and non-avian.
While dinosaurs were ancestrally bipedal, many extinct groups included quadrupedal species, and some were able to shift between these stances. Elaborate display structures such as horns or crests are common to all dinosaur groups, and some extinct groups developed skeletal modifications such as bony armor and spines. While the dinosaurs' modern-day surviving avian lineage (birds) are generally small due to the constraints of flight, many prehistoric dinosaurs (non-avian and avian) were large-bodied—the largest sauropod dinosaurs are estimated to have reached lengths of 39.7 meters (130 feet) and heights of 18 meters (59 feet) and were the largest land animals of all time. Still, the idea that non-avian dinosaurs were uniformly gigantic is a misconception based in part on preservation bias, as large, sturdy bones are more likely to last until they are fossilized. Many dinosaurs were quite small: Xixianykus, for example, was only about 50 cm (20 in) long.
Since the first dinosaur fossils were recognized in the early 19th century, mounted fossil dinosaur skeletons have been major attractions at museums around the world, and dinosaurs have become an enduring part of world culture. The large sizes of some dinosaur groups, as well as their seemingly monstrous and fantastic nature, have ensured dinosaurs' regular appearance in best-selling books and films, such as Jurassic Park. Persistent public enthusiasm for the animals has resulted in significant funding for dinosaur science, and new discoveries are regularly covered by the media.
The taxon 'Dinosauria' was formally named in 1842 by paleontologist Sir Richard Owen, who used it to refer to the "distinct tribe or sub-order of Saurian Reptiles" that were then being recognized in England and around the world. The term is derived from Ancient Greek δεινός (deinos), meaning 'terrible, potent or fearfully great', and σαῦρος (sauros), meaning 'lizard or reptile'. Though the taxonomic name has often been interpreted as a reference to dinosaurs' teeth, claws, and other fearsome characteristics, Owen intended it merely to evoke their size and majesty.
Other prehistoric animals, including pterosaurs, mosasaurs, ichthyosaurs, plesiosaurs, and Dimetrodon, while often popularly conceived of as dinosaurs, are not taxonomically classified as dinosaurs. Pterosaurs are distantly related to dinosaurs, being members of the clade Ornithodira. The other groups mentioned are, like dinosaurs and pterosaurs, members of Sauropsida (the reptile and bird clade), except Dimetrodon (which is a synapsid).
Under phylogenetic nomenclature, dinosaurs are usually defined as the group consisting of the most recent common ancestor (MRCA) of Triceratops and modern birds (Neornithes), and all its descendants. It has also been suggested that Dinosauria be defined with respect to the MRCA of Megalosaurus and Iguanodon, because these were two of the three genera cited by Richard Owen when he recognized the Dinosauria. Both definitions result in the same set of animals being defined as dinosaurs: "Dinosauria = Ornithischia + Saurischia", encompassing ankylosaurians (armored herbivorous quadrupeds), stegosaurians (plated herbivorous quadrupeds), ceratopsians (herbivorous quadrupeds with horns and frills), ornithopods (bipedal or quadrupedal herbivores including "duck-bills"), theropods (mostly bipedal carnivores and birds), and sauropodomorphs (mostly large herbivorous quadrupeds with long necks and tails).
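To make the node-based definition concrete, here is a minimal sketch, not from the source, of computing MRCA-based clade membership on a toy parent-pointer tree; the tree is a simplified assumption for illustration, not a published phylogeny.

```python
# Toy cladogram as child -> parent links (illustrative only).
PARENT = {
    "Dinosauria": "Archosauria",
    "Ornithischia": "Dinosauria",
    "Saurischia": "Dinosauria",
    "Theropoda": "Saurischia",
    "Neornithes": "Theropoda",
    "Triceratops": "Ornithischia",
    "Pterosauria": "Archosauria",  # outside Dinosauria in this toy tree
}

def ancestors(taxon):
    """Return the taxon followed by all of its ancestors, root-ward."""
    chain = [taxon]
    while chain[-1] in PARENT:
        chain.append(PARENT[chain[-1]])
    return chain

def mrca(a, b):
    """Most recent common ancestor of taxa a and b."""
    seen = set(ancestors(a))
    return next(t for t in ancestors(b) if t in seen)

def in_clade(taxon, spec_a, spec_b):
    """True if taxon is, or descends from, MRCA(spec_a, spec_b)."""
    return mrca(spec_a, spec_b) in ancestors(taxon)

# "Dinosauria = MRCA(Triceratops, Neornithes) plus all its descendants":
print(mrca("Triceratops", "Neornithes"))                    # Dinosauria
print(in_clade("Neornithes", "Triceratops", "Neornithes"))  # True: birds are dinosaurs
print(in_clade("Pterosauria", "Triceratops", "Neornithes")) # False: pterosaurs are not
```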
Birds are now recognized as being the sole surviving lineage of theropod dinosaurs. In traditional taxonomy, birds were considered a separate class that had evolved from dinosaurs, a distinct superorder. However, a majority of contemporary paleontologists concerned with dinosaurs reject the traditional style of classification in favor of phylogenetic taxonomy; this approach requires that, for a group to be natural, all descendants of members of the group must be included in the group as well. Birds are thus considered to be dinosaurs and dinosaurs are, therefore, not extinct. Birds are classified as belonging to the subgroup Maniraptora, which are coelurosaurs, which are theropods, which are saurischians, which are dinosaurs.
Research by Matthew G. Baron, David B. Norman, and Paul M. Barrett in 2017 suggested a radical revision of dinosaurian systematics. Phylogenetic analysis by Baron et al. recovered the Ornithischia as being closer to the Theropoda than the Sauropodomorpha, as opposed to the traditional union of theropods with sauropodomorphs. They resurrected the clade Ornithoscelida to refer to the group containing Ornithischia and Theropoda. Dinosauria itself was re-defined as the last common ancestor of Triceratops horridus, Passer domesticus and Diplodocus carnegii, and all of its descendants, to ensure that sauropods and kin remain included as dinosaurs.
Using one of the above definitions, dinosaurs can be generally described as archosaurs with hind limbs held erect beneath the body. Many prehistoric animal groups are popularly conceived of as dinosaurs, such as ichthyosaurs, mosasaurs, plesiosaurs, pterosaurs, and pelycosaurs (especially Dimetrodon), but are not classified scientifically as dinosaurs, and none had the erect hind limb posture characteristic of true dinosaurs. Dinosaurs were the dominant terrestrial vertebrates of the Mesozoic Era, especially the Jurassic and Cretaceous periods. Other groups of animals were restricted in size and niches; mammals, for example, rarely exceeded the size of a domestic cat, and were generally rodent-sized carnivores of small prey.
Dinosaurs have always been an extremely varied group of animals; according to a 2006 study, over 500 non-avian dinosaur genera have been identified with certainty so far, and the total number of genera preserved in the fossil record has been estimated at around 1850, nearly 75% of which remain to be discovered. An earlier study predicted that about 3,400 dinosaur genera existed, including many that would not have been preserved in the fossil record. By September 17, 2008, 1,047 different species of dinosaurs had been named.
In 2016, the number of dinosaur species that existed in the Mesozoic was estimated at 1,543–2,468. Some are herbivorous, others carnivorous; dinosaur diets include seed-eaters, fish-eaters, insectivores, and omnivores. While dinosaurs were ancestrally bipedal (as are all modern birds), some prehistoric species were quadrupeds, and others, such as Anchisaurus and Iguanodon, could walk just as easily on two or four legs. Cranial modifications like horns and crests are common dinosaurian traits, and some extinct species had bony armor. Although known for large size, many Mesozoic dinosaurs were human-sized or smaller, and modern birds are generally small in size. Dinosaurs today inhabit every continent, and fossils show that they had achieved global distribution by at least the Early Jurassic epoch. Modern birds inhabit most available habitats, from terrestrial to marine, and there is evidence that some non-avian dinosaurs (such as Microraptor) could fly or at least glide, and others, such as spinosaurids, had semiaquatic habits.
Distinguishing anatomical features
While recent discoveries have made it more difficult to present a universally agreed-upon list of dinosaurs' distinguishing features, nearly all dinosaurs discovered so far share certain modifications to the ancestral archosaurian skeleton, or are clear descendants of older dinosaurs showing these modifications. Although some later groups of dinosaurs featured further modified versions of these traits, they are considered typical for Dinosauria; the earliest dinosaurs had them and passed them on to their descendants. Such modifications, originating in the most recent common ancestor of a certain taxonomic group, are called the synapomorphies of such a group.
- in the skull, a supratemporal fossa (excavation) is present in front of the supratemporal fenestra, the main opening in the rear skull roof
- epipophyses, obliquely backward-pointing processes on the rear top corners, present in the anterior (front) neck vertebrae behind the atlas and axis, the first two neck vertebrae
- apex of deltopectoral crest (a projection on which the deltopectoral muscles attach) located at or more than 30% down the length of the humerus (upper arm bone)
- radius, a lower arm bone, shorter than 80% of humerus length
- fourth trochanter (projection where the caudofemoralis muscle attaches on the inner rear shaft) on the femur (thigh bone) is a sharp flange
- fourth trochanter asymmetrical, with distal, lower, margin forming a steeper angle to the shaft
- on the astragalus and calcaneum, upper ankle bones, the proximal articular facet, the top connecting surface, for the fibula occupies less than 30% of the transverse width of the element
- exoccipitals (bones at the back of the skull) do not meet along the midline on the floor of the endocranial cavity, the inner space of the braincase
- in the pelvis, the proximal articular surfaces of the ischium with the ilium and the pubis are separated by a large concave surface (on the upper side of the ischium a part of the open hip joint is located between the contacts with the pubic bone and the ilium)
- cnemial crest on the tibia (protruding part of the top surface of the shinbone) arcs anterolaterally (curves to the front and the outer side)
- distinct proximodistally oriented (vertical) ridge present on the posterior face of the distal end of the tibia (the rear surface of the lower end of the shinbone)
- concave articular surface for the fibula of the calcaneum (the top surface of the calcaneum, where it touches the fibula, has a hollow profile)
Paleontologist Sterling Nesbitt found a number of further potential synapomorphies and discounted several that had been suggested previously. Some of these are also present in silesaurids, which Nesbitt recovered as a sister group to Dinosauria, including a large anterior trochanter, metatarsals II and IV of subequal length, reduced contact between ischium and pubis, the presence of a cnemial crest on the tibia and of an ascending process on the astragalus, and many others.
A variety of other skeletal features are shared by dinosaurs. However, because they are either common to other groups of archosaurs or were not present in all early dinosaurs, these features are not considered to be synapomorphies. For example, as diapsids, dinosaurs ancestrally had two pairs of temporal fenestrae (openings in the skull behind the eyes), and as members of the diapsid group Archosauria, had additional openings in the snout and lower jaw. Additionally, several characteristics once thought to be synapomorphies are now known to have appeared before dinosaurs, or were absent in the earliest dinosaurs and independently evolved by different dinosaur groups. These include an elongated scapula, or shoulder blade; a sacrum composed of three or more fused vertebrae (three are found in some other archosaurs, but only two are found in Herrerasaurus); and a perforate acetabulum, or hip socket, with a hole at the center of its inside surface (closed in Saturnalia tupiniquim, for example). Another difficulty in determining distinctly dinosaurian features is that early dinosaurs and other archosaurs from the Late Triassic epoch are often poorly known and were similar in many ways; these animals have sometimes been misidentified in the literature.
Dinosaurs stand with their hind limbs erect in a manner similar to most modern mammals, but distinct from most other reptiles, whose limbs sprawl out to either side. This posture is due to the development of a laterally facing recess in the pelvis (usually an open socket) and a corresponding inwardly facing distinct head on the femur. Their erect posture enabled early dinosaurs to breathe easily while moving, which likely permitted stamina and activity levels that surpassed those of "sprawling" reptiles. Erect limbs probably also helped support the evolution of large size by reducing bending stresses on limbs. Some non-dinosaurian archosaurs, including rauisuchians, also had erect limbs but achieved this by a "pillar-erect" configuration of the hip joint, where instead of having a projection from the femur insert on a socket on the hip, the upper pelvic bone was rotated to form an overhanging shelf.
Origins and early evolution
Dinosaurs diverged from their archosaur ancestors during the Middle to Late Triassic epochs, roughly 20 million years after the devastating Permian–Triassic extinction event wiped out an estimated 96% of all marine species and 70% of terrestrial vertebrate species approximately 252 million years ago. Radiometric dating of the rock formation that contained fossils from the early dinosaur genus Eoraptor at 231.4 million years old establishes its presence in the fossil record at this time. Paleontologists think that Eoraptor resembles the common ancestor of all dinosaurs; if this is true, its traits suggest that the first dinosaurs were small, bipedal predators. The discovery of primitive, dinosaur-like ornithodirans such as Marasuchus and Lagerpeton in Argentinian Middle Triassic strata supports this view; analysis of recovered fossils suggests that these animals were indeed small, bipedal predators. Dinosaurs may have appeared as early as 243 million years ago, as evidenced by remains of the genus Nyasasaurus from that period, though known fossils of these animals are too fragmentary to tell if they are dinosaurs or very close dinosaurian relatives. Paleontologist Max C. Langer et al. (2018) determined that Staurikosaurus from the Santa Maria Formation dates to 233.23 million years ago, making it older in geologic age than Eoraptor.
When dinosaurs appeared, they were not the dominant terrestrial animals. The terrestrial habitats were occupied by various types of archosauromorphs and therapsids, such as cynodonts and rhynchosaurs. Their main competitors were the pseudosuchians, such as aetosaurs, ornithosuchids, and rauisuchians, which were more successful than the dinosaurs. Most of these other animals became extinct in the Triassic, in one of two events. First, about 215 million years ago, a variety of basal archosauromorphs, including the protorosaurs, became extinct. This was followed by the Triassic–Jurassic extinction event (about 201 million years ago), which saw the end of most of the other groups of early archosaurs, like aetosaurs, ornithosuchids, phytosaurs, and rauisuchians. Rhynchosaurs and dicynodonts survived (at least in some areas) at least as late as the early-mid Norian and the late Norian or earliest Rhaetian stages, respectively, and the exact date of their extinction is uncertain. These losses left behind a land fauna of crocodylomorphs, dinosaurs, mammals, pterosaurians, and turtles. The first few lines of early dinosaurs diversified through the Carnian and Norian stages of the Triassic, possibly by occupying the niches of the groups that became extinct. Also notably, there was a heightened rate of extinction during the Carnian Pluvial Event.
Evolution and paleobiogeography
Dinosaur evolution after the Triassic follows changes in vegetation and the location of continents. In the Late Triassic and Early Jurassic, the continents were connected as the single landmass Pangaea, and there was a worldwide dinosaur fauna mostly composed of coelophysoid carnivores and early sauropodomorph herbivores. Gymnosperm plants (particularly conifers), a potential food source, radiated in the Late Triassic. Early sauropodomorphs did not have sophisticated mechanisms for processing food in the mouth, and so must have employed other means of breaking down food farther along the digestive tract. The general homogeneity of dinosaurian faunas continued into the Middle and Late Jurassic, where most localities had predators consisting of ceratosaurians, spinosauroids, and carnosaurians, and herbivores consisting of stegosaurian ornithischians and large sauropods. Examples of this include the Morrison Formation of North America and Tendaguru Beds of Tanzania. Dinosaurs in China show some differences, with specialized sinraptorid theropods and unusual, long-necked sauropods like Mamenchisaurus. Ankylosaurians and ornithopods were also becoming more common, but prosauropods had become extinct. Conifers and pteridophytes were the most common plants. Sauropods, like the earlier prosauropods, were not oral processors, but ornithischians were evolving various means of dealing with food in the mouth, including potential cheek-like organs to keep food in the mouth, and jaw motions to grind food. Another notable evolutionary event of the Jurassic was the appearance of true birds, descended from maniraptoran coelurosaurians.
By the Early Cretaceous and the ongoing breakup of Pangaea, dinosaurs were becoming strongly differentiated by landmass. The earliest part of this time saw the spread of ankylosaurians, iguanodontians, and brachiosaurids through Europe, North America, and northern Africa. These were later supplemented or replaced in Africa by large spinosaurid and carcharodontosaurid theropods, and rebbachisaurid and titanosaurian sauropods, also found in South America. In Asia, maniraptoran coelurosaurians like dromaeosaurids, troodontids, and oviraptorosaurians became the common theropods, and ankylosaurids and early ceratopsians like Psittacosaurus became important herbivores. Meanwhile, Australia was home to a fauna of basal ankylosaurians, hypsilophodonts, and iguanodontians. The stegosaurians appear to have gone extinct at some point in the late Early Cretaceous or early Late Cretaceous. A major change in the Early Cretaceous, which would be amplified in the Late Cretaceous, was the evolution of flowering plants. At the same time, several groups of dinosaurian herbivores evolved more sophisticated ways to orally process food. Ceratopsians developed a method of slicing with teeth stacked on each other in batteries, and iguanodontians refined a method of grinding with dental batteries, taken to its extreme in hadrosaurids. Some sauropods also evolved tooth batteries, best exemplified by the rebbachisaurid Nigersaurus.
There were three general dinosaur faunas in the Late Cretaceous. In the northern continents of North America and Asia, the major theropods were tyrannosaurids and various types of smaller maniraptoran theropods, with a predominantly ornithischian herbivore assemblage of hadrosaurids, ceratopsians, ankylosaurids, and pachycephalosaurians. In the southern continents that had made up the now-splitting Gondwana, abelisaurids were the common theropods, and titanosaurian sauropods the common herbivores. Finally, in Europe, dromaeosaurids, rhabdodontid iguanodontians, nodosaurid ankylosaurians, and titanosaurian sauropods were prevalent. Flowering plants were greatly radiating, with the first grasses appearing by the end of the Cretaceous. Grinding hadrosaurids and shearing ceratopsians became extremely diverse across North America and Asia. Theropods were also radiating as herbivores or omnivores, with therizinosaurians and ornithomimosaurians becoming common.
The Cretaceous–Paleogene extinction event, which occurred approximately 66 million years ago at the end of the Cretaceous, caused the extinction of all dinosaur groups except for the neornithine birds. Some other diapsid groups, such as crocodilians, sebecosuchians, turtles, lizards, snakes, sphenodontians, and choristoderans, also survived the event.
The surviving lineages of neornithine birds, including the ancestors of modern ratites, ducks and chickens, and a variety of waterbirds, diversified rapidly at the beginning of the Paleogene period, entering ecological niches left vacant by the extinction of Mesozoic dinosaur groups such as the arboreal enantiornithines, aquatic hesperornithines, and even the larger terrestrial theropods (in the form of Gastornis, eogruiids, bathornithids, ratites, geranoidids, mihirungs, and "terror birds"). It is often cited that mammals out-competed the neornithines for dominance of most terrestrial niches, but many of these groups co-existed with rich mammalian faunas for most of the Cenozoic Era. Terror birds and bathornithids occupied carnivorous guilds alongside predatory mammals, and ratites are still fairly successful as mid-sized herbivores; eogruiids similarly lasted from the Eocene to the Pliocene, becoming extinct only recently after over 20 million years of co-existence with many mammal groups.
Dinosaurs belong to a group known as archosaurs, which also includes modern crocodilians. Within the archosaur group, dinosaurs are differentiated most noticeably by their gait. Dinosaur legs extend directly beneath the body, whereas the legs of lizards and crocodilians sprawl out to either side.
Collectively, dinosaurs as a clade are divided into two primary branches, Saurischia and Ornithischia. Saurischia includes those taxa sharing a more recent common ancestor with birds than with Ornithischia, while Ornithischia includes all taxa sharing a more recent common ancestor with Triceratops than with Saurischia. Anatomically, these two groups can be distinguished most noticeably by their pelvic structure. Early saurischians—"lizard-hipped", from the Greek sauros (σαῦρος) meaning "lizard" and ischion (ἰσχίον) meaning "hip joint"—retained the hip structure of their ancestors, with a pubis bone directed cranially, or forward. This basic form was modified by rotating the pubis backward to varying degrees in several groups (Herrerasaurus, therizinosauroids, dromaeosaurids, and birds). Saurischia includes the theropods (exclusively bipedal and with a wide variety of diets) and sauropodomorphs (long-necked herbivores which include advanced, quadrupedal groups).
By contrast, ornithischians—"bird-hipped", from the Greek ornitheios (ὀρνίθειος) meaning "of a bird" and ischion (ἰσχίον) meaning "hip joint"—had a pelvis that superficially resembled a bird's pelvis: the pubic bone was oriented caudally (rear-pointing). Unlike birds, the ornithischian pubis also usually had an additional forward-pointing process. Ornithischia includes a variety of species that were primarily herbivores. (NB: the terms "lizard hip" and "bird hip" are misnomers – birds evolved from dinosaurs with "lizard hips".)
- Saurischian pelvis structure (left side)
- Tyrannosaurus pelvis (showing saurischian structure – left side)
- Ornithischian pelvis structure (left side)
- Edmontosaurus pelvis (showing ornithischian structure – left side)
The following is a simplified classification of dinosaur groups based on their evolutionary relationships, and organized based on the list of Mesozoic dinosaur species provided by Holtz (2007). A more detailed version can be found at Dinosaur classification. The dagger (†) is used to signify groups with no living members.
- Saurischia ("lizard-hipped"; includes Theropoda and Sauropodomorpha)
- †Herrerasauria (early bipedal carnivores)
- Theropoda (all bipedal; most were carnivorous)
- †Coelophysoidea (small, early theropods; includes Coelophysis and close relatives)
- †Dilophosauridae (early crested and carnivorous theropods)
- †Ceratosauria (generally elaborately horned, the dominant southern carnivores of the Cretaceous)
- Tetanurae ("stiff tails"; includes most theropods)
- †Megalosauroidea (early group of large carnivores including the semiaquatic spinosaurids)
- †Carnosauria (Allosaurus and close relatives, like Carcharodontosaurus)
- Coelurosauria (feathered theropods, with a range of body sizes and niches)
- †Compsognathidae (common early coelurosaurs with reduced forelimbs)
- †Tyrannosauridae (Tyrannosaurus and close relatives; had reduced forelimbs)
- †Ornithomimosauria ("ostrich-mimics"; mostly toothless; carnivores to possible herbivores)
- †Alvarezsauroidea (small insectivores with reduced forelimbs each bearing one enlarged claw)
- Maniraptora ("hand snatchers"; had long, slender arms and fingers)
- †Therizinosauria (bipedal herbivores with large hand claws and small heads)
- †Oviraptorosauria (mostly toothless; their diet and lifestyle are uncertain)
- †Archaeopterygidae (small, winged theropods or primitive birds)
- †Deinonychosauria (small- to medium-sized; bird-like, with a distinctive toe claw)
- Avialae (modern birds and extinct relatives)
- †Scansoriopterygidae (small primitive avialans with long third fingers)
- †Omnivoropterygidae (large, early short-tailed avialans)
- †Confuciusornithidae (small toothless avialans)
- †Enantiornithes (primitive tree-dwelling, flying avialans)
- Euornithes (advanced flying birds)
- †Yanornithiformes (toothed Cretaceous Chinese birds)
- †Hesperornithes (specialized aquatic diving birds)
- Aves (modern, beaked birds and their extinct relatives)
- †Sauropodomorpha (herbivores with small heads, long necks, long tails)
- †Guaibasauridae (small, primitive, omnivorous sauropodomorphs)
- †Plateosauridae (primitive, strictly bipedal "prosauropods")
- †Riojasauridae (small, primitive sauropodomorphs)
- †Massospondylidae (small, primitive sauropodomorphs)
- †Sauropoda (very large and heavy, usually over 15 m (49 ft) long; quadrupedal)
- †Vulcanodontidae (primitive sauropods with pillar-like limbs)
- †Eusauropoda ("true sauropods")
- †Cetiosauridae ("whale reptiles")
- †Turiasauria (European group of Jurassic and Cretaceous sauropods)
- †Neosauropoda ("new sauropods")
- †Diplodocoidea (skulls and tails elongated; teeth typically narrow and pencil-like)
- †Macronaria (boxy skulls; spoon- or pencil-shaped teeth)
- †Brachiosauridae (long-necked, long-armed macronarians)
- †Titanosauria (diverse; stocky, with wide hips; most common in the Late Cretaceous of southern continents)
- †Ornithischia ("bird-hipped"; diverse bipedal and quadrupedal herbivores)
- †Heterodontosauridae (small basal ornithopod herbivores/omnivores with prominent canine-like teeth)
- †Thyreophora (armored dinosaurs; mostly quadrupeds)
- †Neornithischia ("new ornithischians")
- †Ornithopoda (various sizes; bipeds and quadrupeds; evolved a method of chewing using skull flexibility and numerous teeth)
- †Marginocephalia (characterized by a cranial growth)
- †Pachycephalosauria (bipeds with domed or knobby growth on skulls)
- †Ceratopsia (quadrupeds with frills; many also had horns)
Knowledge about dinosaurs is derived from a variety of fossil and non-fossil records, including fossilized bones, feces, trackways, gastroliths, feathers, impressions of skin, internal organs and soft tissues. Many fields of study contribute to our understanding of dinosaurs, including physics (especially biomechanics), chemistry, biology, and the Earth sciences (of which paleontology is a sub-discipline). Two topics of particular interest and study have been dinosaur size and behavior.
Current evidence suggests that dinosaur average size varied through the Triassic, Early Jurassic, Late Jurassic and Cretaceous. Predatory theropod dinosaurs, which occupied most terrestrial carnivore niches during the Mesozoic, most often fall into the 100 to 1000 kg (220 to 2200 lb) category when sorted by estimated weight into categories based on order of magnitude, whereas recent predatory carnivoran mammals peak in the 10 to 100 kg (22 to 220 lb) category. The mode of Mesozoic dinosaur body masses is between 1 to 10 tonnes (1.1 to 11.0 short tons). This contrasts sharply with the average size of Cenozoic mammals, estimated by the National Museum of Natural History as about 2 to 5 kg (4.4 to 11.0 lb).
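As an illustration of the order-of-magnitude binning described above, the sketch below sorts body-mass estimates into powers-of-ten categories and reports the modal category; the mass values are made-up placeholders, not published estimates.

```python
import math
from collections import Counter

# Hypothetical body-mass estimates in kilograms (illustrative only).
mass_estimates_kg = [8.5, 73, 95, 240, 310, 420, 560, 780, 1100, 6000]

def magnitude_bin(mass_kg):
    """Label a mass with its order-of-magnitude category, e.g. '100-1000 kg'."""
    exponent = math.floor(math.log10(mass_kg))
    return f"{10 ** exponent}-{10 ** (exponent + 1)} kg"

counts = Counter(magnitude_bin(m) for m in mass_estimates_kg)
modal_bin, n = counts.most_common(1)[0]
print(dict(counts))  # number of estimates in each category
print(modal_bin, n)  # the modal category, as used in the comparison above
```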
The sauropods were the largest and heaviest dinosaurs. For much of the dinosaur era, the smallest sauropods were larger than anything else in their habitat, and the largest were an order of magnitude more massive than anything else that has since walked the Earth. Giant prehistoric mammals such as Paraceratherium (the largest land mammal ever) were dwarfed by the giant sauropods, and only modern whales approach or surpass them in size. There are several proposed advantages for the large size of sauropods, including protection from predation, reduction of energy use, and longevity, but it may be that the most important advantage was dietary. Large animals are more efficient at digestion than small animals, because food spends more time in their digestive systems. This also permits them to subsist on food with lower nutritive value than smaller animals. Sauropod remains are mostly found in rock formations interpreted as dry or seasonally dry, and the ability to eat large quantities of low-nutrient browse would have been advantageous in such environments.
Largest and smallest
Scientists will probably never be certain of the largest and smallest dinosaurs to have ever existed. This is because only a tiny percentage of animals were ever fossilized and most of these remain buried in the earth. Few of the specimens that are recovered are complete skeletons, and impressions of skin and other soft tissues are rare. Rebuilding a complete skeleton by comparing the size and morphology of bones to those of similar, better-known species is an inexact art, and reconstructing the muscles and other organs of the living animal is, at best, a process of educated guesswork.
The tallest and heaviest dinosaur known from good skeletons is Giraffatitan brancai (previously classified as a species of Brachiosaurus). Its remains were discovered in Tanzania between 1907 and 1912. Bones from several similar-sized individuals were incorporated into the skeleton now mounted and on display at the Museum für Naturkunde in Berlin; this mount is 12 meters (39 ft) tall and 21.8–22.5 meters (72–74 ft) long, and would have belonged to an animal that weighed between 30,000 and 60,000 kilograms (70,000 and 130,000 lb). The longest complete dinosaur is the 27 meters (89 ft) long Diplodocus, which was discovered in Wyoming in the United States and displayed in Pittsburgh's Carnegie Museum of Natural History in 1907. The longest dinosaur known from good fossil material is Patagotitan: the skeleton mount in the American Museum of Natural History in New York is 37 meters (121 ft) long. The Museo Municipal Carmen Funes in Plaza Huincul, Argentina, has a reconstructed Argentinosaurus skeleton mount 39.7 meters (130 ft) long.
There were larger dinosaurs, but knowledge of them is based entirely on a small number of fragmentary fossils. Most of the largest herbivorous specimens on record were discovered in the 1970s or later, and include the massive Argentinosaurus, which may have weighed 80,000 to 100,000 kilograms (90 to 110 short tons) and reached lengths of 30–40 meters (98–131 ft); some of the longest were the 33.5 meters (110 ft) long Diplodocus hallorum (formerly Seismosaurus), the 33–34 meters (108–112 ft) long Supersaurus, and the 37 meters (121 ft) long Patagotitan; and the tallest was the 18 meters (59 ft) tall Sauroposeidon, which could have reached a sixth-floor window. The heaviest and longest dinosaur may have been Maraapunisaurus, known only from a now-lost partial vertebral neural arch described in 1878. Extrapolating from the illustration of this bone, the animal may have been 58 meters (190 ft) long and weighed 122,400 kg (270,000 lb). However, as no further evidence of sauropods of this size has been found, and because the discoverer, Edward Drinker Cope, had made typographic errors before, it is likely to have been an extreme overestimation.
The largest carnivorous dinosaur was Spinosaurus, reaching a length of 12.6 to 18 meters (41 to 59 ft) and weighing 7 to 20.9 tonnes (7.7 to 23.0 short tons). Other large carnivorous theropods included Giganotosaurus, Carcharodontosaurus, and Tyrannosaurus. Therizinosaurus and Deinocheirus were among the tallest of the theropods. The largest ornithischian dinosaur was probably the hadrosaurid Shantungosaurus giganteus, which measured 16.6 meters (54 ft) long. The largest individuals may have weighed as much as 16 tonnes (18 short tons).
The smallest dinosaur known is the bee hummingbird, with a length of only 5 cm (2.0 in) and mass of around 1.8 g (0.063 oz). The smallest known non-avialan dinosaurs were about the size of pigeons and were those theropods most closely related to birds. For example, Anchiornis huxleyi is currently the smallest non-avialan dinosaur described from an adult specimen, with an estimated weight of 110 grams and a total skeletal length of 34 cm (1.12 ft). The smallest herbivorous non-avialan dinosaurs included Microceratus and Wannanosaurus, at about 60 cm (2.0 ft) long each.
Many modern birds are highly social, often found living in flocks. There is general agreement that some behaviors that are common in birds, as well as in crocodiles (birds' closest living relatives), were also common among extinct dinosaur groups. Interpretations of behavior in fossil species are generally based on the pose of skeletons and their habitat, computer simulations of their biomechanics, and comparisons with modern animals in similar ecological niches.
The first potential evidence for herding or flocking as a widespread behavior common to many dinosaur groups in addition to birds was the 1878 discovery of 31 Iguanodon bernissartensis, ornithischians that were then thought to have perished together in Bernissart, Belgium, after they fell into a deep, flooded sinkhole and drowned. Other mass-death sites have been discovered subsequently. Those, along with multiple trackways, suggest that gregarious behavior was common in many early dinosaur species. Trackways of hundreds or even thousands of herbivores indicate that duck-billed dinosaurs (hadrosaurids) may have moved in great herds, like the American bison or the African springbok. Sauropod tracks document that these animals traveled in groups composed of several different species, at least in Oxfordshire, England, although there is no evidence for specific herd structures. Congregating into herds may have evolved for defense, for migratory purposes, or to provide protection for young. There is evidence that many types of slow-growing dinosaurs, including various theropods, sauropods, ankylosaurians, ornithopods, and ceratopsians, formed aggregations of immature individuals. One example is a site in Inner Mongolia that has yielded the remains of over 20 Sinornithomimus, from one to seven years old. This assemblage is interpreted as a social group that was trapped in mud. The interpretation of dinosaurs as gregarious has also extended to depicting carnivorous theropods as pack hunters working together to bring down large prey. However, this lifestyle is uncommon among modern birds, crocodiles, and other reptiles, and the taphonomic evidence suggesting mammal-like pack hunting in such theropods as Deinonychus and Allosaurus can also be interpreted as the result of fatal disputes between feeding animals, as is seen in many modern diapsid predators.
The crests and frills of some dinosaurs, like the marginocephalians, theropods and lambeosaurines, may have been too fragile to be used for active defense, and so they were likely used for sexual or aggressive displays, though little is known about dinosaur mating and territorialism. Head wounds from bites suggest that theropods, at least, engaged in active aggressive confrontations.
From a behavioral standpoint, one of the most valuable dinosaur fossils was discovered in the Gobi Desert in 1971. It included a Velociraptor attacking a Protoceratops, providing evidence that dinosaurs did indeed attack each other. Additional evidence for attacking live prey is the partially healed tail of an Edmontosaurus, a hadrosaurid dinosaur; the tail is damaged in such a way that shows the animal was bitten by a tyrannosaur but survived. Cannibalism amongst some species of dinosaurs was confirmed by tooth marks found in Madagascar in 2003, involving the theropod Majungasaurus.
Comparisons between the scleral rings of dinosaurs and modern birds and reptiles have been used to infer the daily activity patterns of dinosaurs. Although it has been suggested that most dinosaurs were active during the day, these comparisons have shown that small predatory dinosaurs such as dromaeosaurids, Juravenator, and Megapnosaurus were likely nocturnal. Large and medium-sized herbivorous and omnivorous dinosaurs such as ceratopsians, sauropodomorphs, hadrosaurids, and ornithomimosaurs may have been cathemeral, active during short intervals throughout the day, although the small ornithischian Agilisaurus was inferred to be diurnal.
Based on current fossil evidence from dinosaurs such as Oryctodromeus, some ornithischian species seem to have led a partially fossorial (burrowing) lifestyle. Many modern birds are arboreal (tree climbing), and this was also true of many Mesozoic birds, especially the enantiornithines. While some early bird-like species may have already been arboreal as well (including dromaeosaurids such as Microraptor) most non-avialan dinosaurs seem to have relied on land-based locomotion. A good understanding of how dinosaurs moved on the ground is key to models of dinosaur behavior; the science of biomechanics, pioneered by Robert McNeill Alexander, has provided significant insight in this area. For example, studies of the forces exerted by muscles and gravity on dinosaurs' skeletal structure have investigated how fast dinosaurs could run, whether diplodocids could create sonic booms via whip-like tail snapping, and whether sauropods could float.
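As one concrete example of the kind of biomechanics pioneered by Alexander, the sketch below applies his well-known 1976 dynamic-similarity estimate of locomotion speed from trackway measurements; the formula is Alexander's published estimate, while the input values here are hypothetical.

```python
G = 9.81  # gravitational acceleration in m/s^2

def alexander_speed(stride_length_m, hip_height_m):
    """Alexander (1976): v ≈ 0.25 · g^0.5 · stride^1.67 · hip_height^-1.17 (m/s)."""
    return 0.25 * G ** 0.5 * stride_length_m ** 1.67 * hip_height_m ** -1.17

# Hypothetical trackway: 3.3 m stride length, 2.0 m hip height.
v = alexander_speed(3.3, 2.0)
print(f"estimated speed: {v:.1f} m/s ({v * 3.6:.1f} km/h)")
```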
Modern birds are known to communicate using visual and auditory signals, and the wide diversity of visual display structures among fossil dinosaur groups, such as horns, frills, crests, sails, and feathers, suggests that visual communication has always been important in dinosaur biology. Reconstructions of the plumage color of Anchiornis huxleyi suggest the importance of color in visual communication in non-avian dinosaurs. The evolution of dinosaur vocalization is less certain. Paleontologist Phil Senter suggests that non-avian dinosaurs relied mostly on visual displays and possibly non-vocal acoustic sounds like hissing, jaw grinding or clapping, splashing, and wing beating (possible in winged maniraptoran dinosaurs). He states they were unlikely to have been capable of vocalizing, since their closest relatives, crocodilians and birds, use different means to vocalize, the former via the larynx and the latter through the unique syrinx, suggesting the two systems evolved independently and that their common ancestor was mute.
The earliest known remains of a syrinx, which has enough mineral content for fossilization, were found in a specimen of the duck-like Vegavis iaai dated to 69–66 million years ago, and this organ is unlikely to have existed in non-avian dinosaurs. However, in contrast to Senter, the researchers have suggested that dinosaurs could vocalize and that the syrinx-based vocal system of birds evolved from a larynx-based one, rather than the two systems evolving independently. A 2016 study suggests that dinosaurs produced closed-mouth vocalizations like cooing, which occur in both crocodilians and birds as well as other reptiles. Such vocalizations evolved independently in extant archosaurs numerous times, following increases in body size. The crests of the Lambeosaurini and the nasal chambers of ankylosaurids have been suggested to function in vocal resonance, though Senter notes that the presence of resonance chambers in some dinosaurs is not necessarily evidence of vocalization, as modern snakes have such chambers that intensify their hisses.
All dinosaurs laid amniotic eggs with hard shells made mostly of calcium carbonate. Dinosaur eggs were usually laid in a nest. Most species create somewhat elaborate nests, which can be cups, domes, plates, beds, scrapes, mounds, or burrows. Some species of modern bird have no nests; the cliff-nesting common guillemot lays its eggs on bare rock, and male emperor penguins keep eggs between their body and feet. Primitive birds and many non-avialan dinosaurs often laid eggs in communal nests, with males primarily incubating the eggs. While modern birds have only one functional oviduct and lay one egg at a time, more primitive birds and dinosaurs had two oviducts, like crocodiles. Some non-avialan dinosaurs, such as Troodon, exhibited iterative laying, where the adult laid a pair of eggs every one or two days and then ensured simultaneous hatching by delaying brooding until all eggs were laid.
When laying eggs, females grow a special type of bone between the hard outer bone and the marrow of their limbs. This medullary bone, which is rich in calcium, is used to make eggshells. A discovery of features in a Tyrannosaurus rex skeleton provided evidence of medullary bone in extinct dinosaurs and, for the first time, allowed paleontologists to establish the sex of a fossil dinosaur specimen. Further research has found medullary bone in the carnosaur Allosaurus and the ornithopod Tenontosaurus. Because the line of dinosaurs that includes Allosaurus and Tyrannosaurus diverged from the line that led to Tenontosaurus very early in the evolution of dinosaurs, this suggests that the production of medullary tissue is a general characteristic of all dinosaurs.
Another widespread trait among modern birds (but see below with regard to fossil groups and extant megapodes) is parental care for young after hatching. Jack Horner's 1978 discovery of a Maiasaura ("good mother lizard") nesting ground in Montana demonstrated that parental care continued long after hatching among ornithopods. A specimen of the Mongolian oviraptorid Citipati osmolskae was discovered in 1993 in a chicken-like brooding position, which may indicate that these dinosaurs had begun using an insulating layer of feathers to keep the eggs warm. A dinosaur embryo of the prosauropod Massospondylus was found without teeth, indicating that some parental care was required to feed the young dinosaurs. Trackways have also confirmed parental behavior among ornithopods from the Isle of Skye in northwestern Scotland.
However, there is ample evidence of precociality or superprecociality among many dinosaur species, particularly theropods. For instance, non-ornithuromorph birds have been abundantly demonstrated to have had slow growth rates, megapode-like egg-burying behavior, and the ability to fly soon after hatching. Juveniles of both Tyrannosaurus rex and Troodon formosus show clear superprecociality and likely occupied different ecological niches from the adults. Superprecociality has also been inferred for sauropods.
Because both modern crocodilians and birds have four-chambered hearts (albeit modified in crocodilians), it is likely that this is a trait shared by all archosaurs, including all dinosaurs. While all modern birds have high metabolisms and are "warm-blooded" (endothermic), a vigorous debate has been ongoing since the 1960s regarding how far back in the dinosaur lineage this trait extends. Scientists disagree as to whether non-avian dinosaurs were endothermic, ectothermic, or some combination of both.
After non-avian dinosaurs were discovered, paleontologists first posited that they were ectothermic. This supposed "cold-bloodedness" was used to imply that the ancient dinosaurs were relatively slow, sluggish organisms, even though many modern reptiles are fast and light-footed despite relying on external sources of heat to regulate their body temperature. The idea of dinosaurs as ectothermic remained a prevalent view until Robert T. "Bob" Bakker, an early proponent of dinosaur endothermy, published an influential paper on the topic in 1968.
Modern evidence indicates that some non-avian dinosaurs thrived in cooler temperate climates and that some early species must have regulated their body temperature by internal biological means (aided by the animals' bulk in large species and feathers or other body coverings in smaller species). Evidence of endothermy in Mesozoic dinosaurs includes the discovery of polar dinosaurs in Australia and Antarctica as well as analysis of blood-vessel structures within fossil bones that are typical of endotherms. Scientific debate continues regarding the specific ways in which dinosaur temperature regulation evolved.
In saurischian dinosaurs, higher metabolisms were supported by the evolution of the avian respiratory system, characterized by an extensive system of air sacs that extended the lungs and invaded many of the bones in the skeleton, making them hollow. Early avian-style respiratory systems with air sacs may have been capable of sustaining higher activity levels than those of mammals of similar size and build. In addition to providing a very efficient supply of oxygen, the rapid airflow would have been an effective cooling mechanism, which is essential for animals that are active but too large to get rid of all the excess heat through their skin.
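To make the cooling argument concrete, consider a rough scaling sketch (an illustrative back-of-the-envelope assumption, not a calculation from the studies cited here). Metabolic heat production grows roughly with body volume, while heat loss through the skin grows only with surface area, so for a body of characteristic length \(L\):

\[
\frac{\text{heat produced}}{\text{heat shed through the skin}} \propto \frac{L^{3}}{L^{2}} = L
\]

The heat burden per unit of skin area therefore rises with body size, which is why large, active animals would benefit disproportionately from an internal cooling mechanism such as air-sac airflow.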
Like other reptiles, dinosaurs are primarily uricotelic; that is, their kidneys extract nitrogenous wastes from the bloodstream and excrete them as uric acid, rather than urea or ammonia, via the ureters into the intestine. In most living species, uric acid is excreted along with feces as a semisolid waste. However, at least some modern birds (such as hummingbirds) can be facultatively ammonotelic, excreting most of the nitrogenous wastes as ammonia. This material, as well as the output of the intestines, emerges from the cloaca. In addition, many species regurgitate pellets, and fossil pellets that may have come from dinosaurs are known from as long ago as the Cretaceous.
Origin of birds
The possibility that dinosaurs were the ancestors of birds was first suggested in 1868 by Thomas Henry Huxley. After the work of Gerhard Heilmann in the early 20th century, the theory of birds as dinosaur descendants was abandoned in favor of the idea of their being descendants of generalized thecodonts, with the key piece of evidence being the supposed lack of clavicles in dinosaurs. However, as later discoveries showed, clavicles (or a single fused wishbone, which derived from separate clavicles) were not actually absent; they had been found as early as 1924 in Oviraptor, but misidentified as an interclavicle. In the 1970s, John Ostrom revived the dinosaur–bird theory, which gained momentum in the following decades with the advent of cladistic analysis and a great increase in the discovery of small theropods and early birds. Of particular note have been the fossils of the Yixian Formation, where a variety of theropods and early birds have been found, often with feathers of some type. Birds share over a hundred distinct anatomical features with theropod dinosaurs, which are now generally accepted to have been their closest ancient relatives. They are most closely allied with maniraptoran coelurosaurs. A minority of scientists, most notably Alan Feduccia and Larry Martin, have proposed other evolutionary paths, including revised versions of Heilmann's basal archosaur proposal, or that maniraptoran theropods are the ancestors of birds but themselves are not dinosaurs, only convergent with dinosaurs.
Feathers are one of the most recognizable characteristics of modern birds, and a trait that was shared by many other dinosaur groups. Based on the current distribution of fossil evidence, it appears that feathers were an ancestral dinosaurian trait, though one that may have been selectively lost in some species. Direct fossil evidence of feathers or feather-like structures has been discovered in a diverse array of species in many non-avian dinosaur groups, both among saurischians and ornithischians. Simple, branched, feather-like structures are known from heterodontosaurids, primitive neornithischians, theropods, and primitive ceratopsians. Evidence for true, vaned feathers similar to the flight feathers of modern birds has been found only in the theropod subgroup Maniraptora, which includes oviraptorosaurs, troodontids, dromaeosaurids, and birds. Feather-like structures known as pycnofibres have also been found in pterosaurs, suggesting the possibility that feather-like filaments may have been common in the bird lineage and evolved before the appearance of dinosaurs themselves. Research into the genetics of American alligators has also revealed that crocodylian scutes possess feather-keratins during embryonic development, but these keratins are not expressed by the animals before hatching.
Archaeopteryx was the first fossil found that revealed a potential connection between dinosaurs and birds. It is considered a transitional fossil, in that it displays features of both groups. Brought to light just two years after Charles Darwin's seminal On the Origin of Species (1859), its discovery spurred the nascent debate between proponents of evolutionary biology and creationism. This early bird is so dinosaur-like that, without a clear impression of feathers in the surrounding rock, at least one specimen was mistaken for Compsognathus. Since the 1990s, a number of additional feathered dinosaurs have been found, providing even stronger evidence of the close relationship between dinosaurs and modern birds. Most of these specimens were unearthed in the lagerstätte of the Yixian Formation, Liaoning, northeastern China, which was part of an island continent during the Cretaceous. Though feathers have been found in only a few locations, it is possible that non-avian dinosaurs elsewhere in the world were also feathered. The lack of widespread fossil evidence for feathered non-avian dinosaurs may be because delicate features like skin and feathers are not often preserved by fossilization and thus are absent from the fossil record.
The description of feathered dinosaurs has not been without controversy; perhaps the most vocal critics have been Alan Feduccia and Theagarten Lingham-Soliar, who have proposed that some purported feather-like fossils are the result of the decomposition of collagenous fibers that underlay the dinosaurs' skin, and that maniraptoran dinosaurs with vaned feathers were not actually dinosaurs, but merely convergent with them. However, their views have for the most part not been accepted by other researchers, to the point that the scientific nature of Feduccia's proposals has been questioned.
In 2016, it was reported that a dinosaur tail with feathers had been found enclosed in amber. The fossil is about 99 million years old.
Because feathers are often associated with birds, feathered dinosaurs are frequently touted as the missing link between birds and dinosaurs. However, the multiple skeletal features shared by the two groups represent another important line of evidence for paleontologists. Areas of the skeleton with important similarities include the neck, pubis, wrist (semi-lunate carpal), arm and pectoral girdle, furcula (wishbone), and breastbone. Comparison of bird and dinosaur skeletons through cladistic analysis strengthens the case for the link.
Large meat-eating dinosaurs had a complex system of air sacs similar to those found in modern birds, according to a 2005 investigation led by Patrick M. O'Connor. The lungs of theropod dinosaurs (carnivores that walked on two legs and had bird-like feet) likely pumped air into hollow sacs in their skeletons, as is the case in birds. "What was once formally considered unique to birds was present in some form in the ancestors of birds", O'Connor said. In 2008, scientists described Aerosteon riocoloradensis, the skeleton of which supplies the strongest evidence to date of a dinosaur with a bird-like breathing system. CT scanning of Aerosteon's fossil bones revealed evidence for the existence of air sacs within the animal's body cavity.
Fossils of the troodonts Mei and Sinornithoides demonstrate that some dinosaurs slept with their heads tucked under their arms. This behavior, which may have helped to keep the head warm, is also characteristic of modern birds. Several deinonychosaur and oviraptorosaur specimens have also been found preserved on top of their nests, likely brooding in a bird-like manner. The ratio between egg volume and adult body mass among these dinosaurs suggests that the eggs were primarily brooded by the male, and that the young were highly precocial, similar to many modern ground-dwelling birds.
Some dinosaurs are known to have used gizzard stones, as modern birds do. These stones are swallowed to aid digestion, helping to break down food and hard fibers once they enter the stomach. When found in association with fossils, gizzard stones are called gastroliths.
Extinction of major groups
The discovery that birds are a type of dinosaur showed that dinosaurs in general are not, in fact, extinct as is commonly stated. However, all non-avian dinosaurs, estimated at 628 to 1,078 species, as well as many groups of birds did suddenly become extinct approximately 66 million years ago. It has been suggested that because small mammals, squamates, and birds occupied the ecological niches suited to small body size, non-avian dinosaurs never evolved a diverse fauna of small-bodied species, which led to their downfall when large-bodied terrestrial tetrapods were hit by the mass extinction event. Many other groups of animals also became extinct at this time, including ammonites (nautilus-like mollusks), mosasaurs, plesiosaurs, pterosaurs, and many groups of mammals. Significantly, the insects suffered no discernible population loss, which left them available as food for other survivors. This mass extinction is known as the Cretaceous–Paleogene extinction event. The nature of the event that caused this mass extinction has been extensively studied since the 1970s; at present, several related theories are supported by paleontologists. Though the consensus is that an impact event was the primary cause of dinosaur extinction, some scientists cite other possible causes, or support the idea that a confluence of several factors was responsible for the sudden disappearance of dinosaurs from the fossil record.
The asteroid impact hypothesis, which was brought to wide attention in 1980 by Walter Alvarez and colleagues, links the extinction event at the end of the Cretaceous to a bolide impact approximately 66 million years ago. Alvarez et al. proposed that a sudden increase in iridium levels, recorded around the world in the period's rock strata, was direct evidence of the impact. The bulk of the evidence now suggests that a bolide 5 to 15 kilometers (3.1 to 9.3 miles) wide hit in the vicinity of the Yucatán Peninsula (in southeastern Mexico), creating the approximately 180 km (110 mi) Chicxulub crater and triggering the mass extinction. Scientists are not certain whether dinosaurs were thriving or declining before the impact event. Some scientists propose that the meteorite impact caused a long and unnatural drop in Earth's atmospheric temperature, while others claim that it would have instead created an unusual heat wave. The consensus among scientists who support this hypothesis is that the impact caused extinctions both directly (by heat from the meteorite impact) and indirectly (via a worldwide cooling brought about when matter ejected from the impact crater reflected thermal radiation from the sun). Although the speed of extinction cannot be deduced from the fossil record alone, various models suggest that the extinction was extremely rapid, unfolding over hours rather than years. In 2019, scientists drilling into the seafloor off Mexico extracted a unique geologic record of what they believe to be the day a city-sized asteroid smashed into the planet.
Before 2000, arguments that the Deccan Traps flood basalts caused the extinction were usually linked to the view that the extinction was gradual, as the flood basalt events were thought to have started around 68 million years ago and lasted for over 2 million years. However, there is evidence that two thirds of the Deccan Traps were created in only 1 million years about 66 million years ago, and so these eruptions would have caused a fairly rapid extinction, possibly over a period of thousands of years, but still longer than would be expected from a single impact event.
The Deccan Traps in India could have caused extinction through several mechanisms, including the release into the air of dust and sulfuric aerosols, which might have blocked sunlight and thereby reduced photosynthesis in plants. In addition, Deccan Trap volcanism might have resulted in carbon dioxide emissions, which would have increased the greenhouse effect when the dust and aerosols cleared from the atmosphere. Before the mass extinction of the dinosaurs, the release of volcanic gases during the formation of the Deccan Traps "contributed to an apparently massive global warming. Some data point to an average rise in temperature of [8 °C (14 °F)] in the last half-million years before the impact [at Chicxulub]."
In the years when the Deccan Traps hypothesis was linked to a slower extinction, Luis Alvarez (who died in 1988) replied that paleontologists were being misled by sparse data. While his assertion was not initially well-received, later intensive field studies of fossil beds lent weight to his claim. Eventually, most paleontologists began to accept the idea that the mass extinctions at the end of the Cretaceous were largely or at least partly due to a massive Earth impact. However, even Walter Alvarez has acknowledged that there were other major changes on Earth even before the impact, such as a drop in sea level and massive volcanic eruptions that produced the Indian Deccan Traps, and these may have contributed to the extinctions.
Possible Paleocene survivors
Non-avian dinosaur remains are occasionally found above the Cretaceous–Paleogene boundary. In 2000, paleontologist Spencer G. Lucas et al. reported the discovery of a single hadrosaur right femur in the San Juan Basin, New Mexico, and described it as evidence of Paleocene dinosaurs. The formation in which the bone was discovered has been dated to the early Paleocene epoch, approximately 64.5 million years ago. If the bone was not re-deposited into that stratum by weathering action, it would provide evidence that some dinosaur populations may have survived at least a half-million years into the Cenozoic. Other evidence includes the finding of dinosaur remains in the Hell Creek Formation up to 1.3 m (51 in) above the Cretaceous–Paleogene boundary, representing 40,000 years of elapsed time. Similar reports have come from other parts of the world, including China. Many scientists, however, dismissed the supposed Paleocene dinosaurs as re-worked, that is, washed out of their original locations and then re-buried in much later sediments. Direct dating of the bones themselves has supported the later date, with uranium–lead dating methods yielding an age of 64.8 ± 0.9 million years. If correct, the presence of a handful of dinosaurs in the early Paleocene would not change the underlying facts of the extinction.
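As a hedged aside on how such direct dates are obtained, the general radiometric age relation underlying uranium–lead and similar methods can be sketched as follows (a generic textbook equation, not the specific isochron treatment used in the study cited above):

\[
t = \frac{1}{\lambda}\,\ln\!\left(1 + \frac{D}{P}\right)
\]

Here \(P\) is the amount of parent isotope remaining (e.g., \(^{238}\mathrm{U}\)), \(D\) is the accumulated radiogenic daughter (e.g., \(^{206}\mathrm{Pb}\)), and \(\lambda\) is the parent's decay constant; a measured daughter-to-parent ratio thus yields the age \(t\) directly from the dated material.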
History of study
Dinosaur fossils have been known for millennia, although their true nature was not recognized. The Chinese considered them to be dragon bones and documented them as such. For example, Huayang Guo Zhi (華陽國志), a gazetteer compiled by Chang Qu (常璩) during the Western Jin Dynasty (265–316), reported the discovery of dragon bones at Wucheng in Sichuan Province. Villagers in central China have long unearthed fossilized "dragon bones" for use in traditional medicines, a practice that continues as of 2020. In Europe, dinosaur fossils were generally believed to be the remains of giants and other biblical creatures.
Scholarly descriptions of what would now be recognized as dinosaur bones first appeared in the late 17th century in England. Part of a bone, now known to have been the femur of a Megalosaurus, was recovered from a limestone quarry at Cornwell near Chipping Norton, Oxfordshire, in 1676. The fragment was sent to Robert Plot, Professor of Chemistry at the University of Oxford and first curator of the Ashmolean Museum, who published a description in his The Natural History of Oxford-shire (1677). He correctly identified the bone as the lower extremity of the femur of a large animal, and recognized that it was too large to belong to any known species. He therefore concluded it to be the femur of a huge human, perhaps a Titan or another type of giant featured in legends. Edward Lhuyd, a friend of Sir Isaac Newton, published Lithophylacii Britannici ichnographia (1699), the first scientific treatment of what would now be recognized as a dinosaur when he described and named a sauropod tooth, Rutellum impicatum, that had been found in Caswell, near Witney, Oxfordshire.
Between 1815 and 1824, the Rev William Buckland, the first Reader of Geology at the University of Oxford, collected more fossilized bones of Megalosaurus and became the first person to describe a dinosaur in a scientific journal. The second dinosaur genus to be identified, Iguanodon, was discovered in 1822 by Mary Ann Mantell – the wife of English geologist Gideon Mantell. Gideon Mantell recognized similarities between his fossils and the bones of modern iguanas. He published his findings in 1825.
The study of these "great fossil lizards" soon became of great interest to European and American scientists, and in 1842 the English paleontologist Richard Owen coined the term "dinosaur". He recognized that the remains that had been found so far, Iguanodon, Megalosaurus and Hylaeosaurus, shared a number of distinctive features, and so decided to present them as a distinct taxonomic group. With the backing of Prince Albert, the husband of Queen Victoria, Owen established the Natural History Museum, London, to display the national collection of dinosaur fossils and other biological and geological exhibits.
In 1858, William Parker Foulke discovered the first known American dinosaur, in marl pits in the small town of Haddonfield, New Jersey. (Although fossils had been found before, their nature had not been correctly discerned.) The creature was named Hadrosaurus foulkii. It was an extremely important find: Hadrosaurus was one of the first nearly complete dinosaur skeletons found (the first was in 1834, in Maidstone, England), and it was clearly a bipedal creature. This was a revolutionary discovery as, until that point, most scientists had believed dinosaurs walked on four feet, like other lizards. Foulke's discoveries sparked a wave of interest in dinosaurs in the United States, known as dinosaur mania.
Dinosaur mania was exemplified by the fierce rivalry between Edward Drinker Cope and Othniel Charles Marsh, both of whom raced to be the first to find new dinosaurs in what came to be known as the Bone Wars. The feud probably originated when Marsh publicly pointed out that Cope's reconstruction of an Elasmosaurus skeleton was flawed: Cope had inadvertently placed the plesiosaur's head at what should have been the animal's tail end. The fight between the two scientists lasted for over 30 years, ending in 1897 when Cope died after spending his entire fortune on the dinosaur hunt. Unfortunately, many valuable dinosaur specimens were damaged or destroyed due to the pair's rough methods: for example, their diggers often used dynamite to unearth bones, a method modern paleontologists would find appalling, since the blasts could destroy fossil evidence. Despite their unrefined methods, the contributions of Cope and Marsh to paleontology were vast: Marsh unearthed 86 new species of dinosaur and Cope discovered 56, a total of 142 new species. Cope's collection is now at the American Museum of Natural History, while Marsh's is on display at the Peabody Museum of Natural History at Yale University.
After 1897, the search for dinosaur fossils extended to every continent, including Antarctica. The first Antarctic dinosaur to be discovered, the ankylosaurid Antarctopelta oliveroi, was found on James Ross Island in 1986, although it was 1994 before an Antarctic species, the theropod Cryolophosaurus ellioti, was formally named and described in a scientific journal.
Current dinosaur "hot spots" include southern South America (especially Argentina) and China. China in particular has produced many exceptional feathered dinosaur specimens due to the unique geology of its dinosaur beds, as well as an ancient arid climate particularly conducive to fossilization.
The field of dinosaur research has enjoyed a surge in activity that began in the 1970s and is ongoing. This was triggered, in part, by John Ostrom's discovery of Deinonychus, an active predator that may have been warm-blooded, in marked contrast to the then-prevailing image of dinosaurs as sluggish and cold-blooded. Vertebrate paleontology has become a global science. Major new dinosaur discoveries have been made by paleontologists working in previously unexploited regions, including India, South America, Madagascar, Antarctica, and most significantly China (the well-preserved feathered dinosaurs in China have further consolidated the link between dinosaurs and their living descendants, modern birds). The widespread application of cladistics, which rigorously analyzes the relationships between biological organisms, has also proved tremendously useful in classifying dinosaurs. Cladistic analysis, among other modern techniques, helps to compensate for an often incomplete and fragmentary fossil record.
Soft tissue and DNA
One of the best examples of soft-tissue impressions in a fossil dinosaur was discovered in the Pietraroia Plattenkalk in southern Italy. The discovery was reported in 1998, and described the specimen of a small, very young coelurosaur, Scipionyx samniticus. The fossil includes portions of the intestines, colon, liver, muscles, and windpipe of this immature dinosaur.
In the March 2005 issue of Science, the paleontologist Mary Higby Schweitzer and her team announced the discovery of flexible material resembling actual soft tissue inside a 68-million-year-old Tyrannosaurus rex leg bone from the Hell Creek Formation in Montana. After recovery, the tissue was rehydrated by the science team. When the fossilized bone was treated over several weeks to remove mineral content from the fossilized bone-marrow cavity (a process called demineralization), Schweitzer found evidence of intact structures such as blood vessels, bone matrix, and connective tissue (bone fibers). Scrutiny under the microscope further revealed that the putative dinosaur soft tissue had retained fine structures (microstructures) even at the cellular level. The exact nature and composition of this material, and the implications of Schweitzer's discovery, are not yet clear.
In 2009, a team including Schweitzer announced that, using even more careful methodology, they had duplicated their results by finding similar soft tissue in a duck-billed dinosaur, Brachylophosaurus canadensis, found in the Judith River Formation of Montana. This included even more detailed tissue, down to preserved bone cells that seem even to have visible remnants of nuclei and what seem to be red blood cells. Among other materials found in the bone was collagen, as in the Tyrannosaurus bone. The type of collagen an animal has in its bones varies according to its DNA and, in both cases, this collagen was of the same type found in modern chickens and ostriches.
The extraction of ancient DNA from dinosaur fossils has been reported on two separate occasions; upon further inspection and peer review, however, neither of these reports could be confirmed. However, a functional peptide involved in the vision of a theoretical dinosaur has been inferred using analytical phylogenetic reconstruction methods on gene sequences of related modern species such as reptiles and birds. In addition, several proteins, including hemoglobin, have putatively been detected in dinosaur fossils.
In 2015, researchers reported finding structures similar to blood cells and collagen fibers, preserved in the bone fossils of six Cretaceous dinosaur specimens, which are approximately 75 million years old.
Cultural depictions
By human standards, dinosaurs were creatures of fantastic appearance and often enormous size. As such, they have captured the popular imagination and become an enduring part of human culture. Entry of the word "dinosaur" into the common vernacular reflects the animals' cultural importance: in English, "dinosaur" is commonly used to describe anything that is impractically large, obsolete, or bound for extinction.
Public enthusiasm for dinosaurs first developed in Victorian England, where in 1854, three decades after the first scientific descriptions of dinosaur remains, a menagerie of lifelike dinosaur sculptures was unveiled in London's Crystal Palace Park. The Crystal Palace dinosaurs proved so popular that a strong market in smaller replicas soon developed. In subsequent decades, dinosaur exhibits opened at parks and museums around the world, ensuring that successive generations would be introduced to the animals in an immersive and exciting way. Dinosaurs' enduring popularity, in turn, has resulted in significant public funding for dinosaur science, and has frequently spurred new discoveries. In the United States, for example, the competition between museums for public attention led directly to the Bone Wars of the 1880s and 1890s, during which a pair of feuding paleontologists made enormous scientific contributions.
The popular preoccupation with dinosaurs has ensured their appearance in literature, film, and other media. Beginning in 1852 with a passing mention in Charles Dickens' Bleak House, dinosaurs have been featured in large numbers of fictional works. Jules Verne's 1864 novel Journey to the Center of the Earth, Sir Arthur Conan Doyle's 1912 book The Lost World, the iconic 1933 film King Kong, the 1954 Godzilla and its many sequels, the best-selling 1990 novel Jurassic Park by Michael Crichton and its 1993 film adaptation are just a few notable examples of dinosaur appearances in fiction. Authors of general-interest non-fiction works about dinosaurs, including some prominent paleontologists, have often sought to use the animals as a way to educate readers about science in general. Dinosaurs are ubiquitous in advertising; numerous companies have referenced dinosaurs in printed or televised advertisements, either in order to sell their own products or in order to characterize their rivals as slow-moving, dim-witted, or obsolete.
- Owen 1842, p. 103: "The combination of such characters … will, it is presumed, be deemed sufficient ground for establishing a distinct tribe or sub-order of Saurian Reptiles, for which I would propose the name of Dinosauria*. (*Gr. δεινός, fearfully great; σαύρος, a lizard. … )"
- "Dinosauria". Merriam-Webster Dictionary. Retrieved 2019-11-10.
- Crane, George R. (ed.). "Greek Dictionary Headword Search Results". Perseus 4.0. Medford and Somerville, MA: Tufts University. Retrieved 2019-10-13. Lemma for 'δεινός' from Henry George Liddell, Robert Scott, A Greek-English Lexicon (1940): 'fearful, terrible'.
- Farlow & Brett-Surman 1997, pp. ix–xi, Preface, "Dinosaurs: The Terrestrial Superlative" by James O. Farlow and M.K. Brett-Surman.
- Chamary, JV (September 30, 2014). "Dinosaurs, Pterosaurs And Other Saurs – Big Differences". Forbes.com. Jersey City, NJ: Forbes Media, LLC. ISSN 0015-6914. Archived from the original on 2014-11-10. Retrieved 2018-10-02.
- Weishampel, Dodson & Osmólska 2004, pp. 7–19, chpt. 1: "Origin and Relationships of Dinosauria" by Michael J. Benton.
- Olshevsky 2000
- Langer, Max C.; Ezcurra, Martin D.; Bittencourt, Jonathas S.; Novas, Fernando E. (February 2010). "The origin and early evolution of dinosaurs". Biological Reviews. Cambridge: Cambridge Philosophical Society. 85 (1): 65–66, 82. doi:10.1111/j.1469-185x.2009.00094.x. ISSN 1464-7931. PMID 19895605.
- "Using the tree for classification". Understanding Evolution. Berkeley: University of California. Archived from the original on 2019-08-31. Retrieved 2019-10-14.
- Weishampel, Dodson & Osmólska 2004, pp. 210–231, chpt. 11: "Basal Avialae" by Kevin Padian.
- Wade, Nicholas (March 22, 2017). "Shaking Up the Dinosaur Family Tree". The New York Times. New York: The New York Times Company. ISSN 0362-4331. Archived from the original on 2018-04-07. Retrieved 2019-10-30. "A version of this article appears in print on March 28, 2017, on Page D6 of the New York edition with the headline: Shaking Up the Dinosaur Family Tree."
- Baron, Matthew G.; Norman, David B.; Barrett, Paul M. (March 22, 2017). "A new hypothesis of dinosaur relationships and early dinosaur evolution". Nature. London: Nature Research. 543 (7646): 501–506. Bibcode:2017Natur.543..501B. doi:10.1038/nature21700. ISSN 0028-0836. PMID 28332513. "This file contains Supplementary Text and Data, Supplementary Tables 1-3 and additional references.": Supplementary Information
- Glut 1997, p. 40
- Lambert & The Diagram Group 1990, p. 288
- Farlow & Brett-Surman 1997, pp. 607–624, chpt. 39: "Major Groups of Non-Dinosaurian Vertebrates of the Mesozoic Era" by Michael Morales.
- Wang, Steve C.; Dodson, Peter (September 12, 2006). "Estimating the diversity of dinosaurs". Proc. Natl. Acad. Sci. U.S.A. Washington, D.C.: National Academy of Sciences. 103 (37): 13601–13605. Bibcode:2006PNAS..10313601W. doi:10.1073/pnas.0606028103. ISSN 0027-8424. PMC 1564218. PMID 16954187.
- Russell, Dale A. (1995). "China and the lost worlds of the dinosaurian era". Historical Biology. Milton Park, Oxfordshire: Taylor & Francis. 10 (1): 3–12. doi:10.1080/10292389509380510. ISSN 0891-2963.
- Amos, Jonathan (September 17, 2008). "Will the real dinosaurs stand up?". BBC News. London: BBC. Archived from the original on 2008-09-18. Retrieved 2019-10-16.
- Starrfelt, Jostein; Liow, Lee Hsiang (April 5, 2016). "How many dinosaur species were there? Fossil bias and true richness estimated using a Poisson sampling model". Philosophical Transactions of the Royal Society B. London: Royal Society. 371 (1691): 20150219. doi:10.1098/rstb.2015.0219. ISSN 0962-8436. PMC 4810813. PMID 26977060.
- Switek, Brian. "Most Dinosaur Species Are Still Undiscovered". Phenomena - A Science Salon. Washington, D.C.: National Geographic Society. OCLC 850948164. Archived from the original on 2019-07-12. Retrieved 2019-11-07.
- MacLeod, Norman; Rawson, Peter F.; Forey, Peter L.; et al. (April 1, 1997). "The Cretaceous–Tertiary biotic transition". Journal of the Geological Society. London: Geological Society of London. 154 (2): 265–292. Bibcode:1997JGSoc.154..265M. doi:10.1144/gsjgs.154.2.0265. ISSN 0016-7649.
- Amiot, Romain; Buffetaut, Éric; Lécuyer, Christophe; et al. (February 1, 2010). "Oxygen isotope evidence for semi-aquatic habits among spinosaurid theropods". Geology. Boulder, CO: Geological Society of America. 38 (2): 139–142. Bibcode:2010Geo....38..139A. doi:10.1130/G30402.1. ISSN 0091-7613.
- Brusatte 2012, pp. 9–20, 21
- Nesbitt, Sterling J. (April 29, 2011). "The Early Evolution of Archosaurs: Relationships and the Origin of Major Clades" (PDF). Bulletin of the American Museum of Natural History. New York: American Museum of Natural History. 2011 (352): 1–292. doi:10.1206/352.1. hdl:2246/6112. ISSN 0003-0090. Archived from the original on 2016-02-29. Retrieved 2019-10-16.
- Paul 2000, pp. 140–168, chpt. 3: "Classification and Evolution of the Dinosaur Groups" by Thomas R. Holtz Jr.
- Smith, Dave; et al. "Dinosauria: Morphology". Berkeley: University of California Museum of Paleontology. Retrieved 2019-10-16.
- Langer, Max C.; Abdala, Fernando; Richter, Martha; Benton, Michael J. (October 15, 1999). "Un dinosaure sauropodomorphe dans le Trias supérieur (Carnien) du Sud du Brésil" [A sauropodomorph dinosaur from the Upper Triassic (Carnian) of southern Brazil]. Comptes Rendus de l'Académie des Sciences, Série IIA. Amsterdam: Elsevier on behalf of the French Academy of Sciences. 329 (7): 511–517. Bibcode:1999CRASE.329..511L. doi:10.1016/S1251-8050(00)80025-7. ISSN 1251-8050.
- Nesbitt, Sterling J.; Irmis, Randall B.; Parker, William G. (2007). "A critical re-evaluation of the Late Triassic dinosaur taxa of North America". Journal of Systematic Palaeontology. Milton Park, Oxfordshire: Taylor & Francis on behalf of the Natural History Museum, London. 5 (2): 209–243. doi:10.1017/S1477201907002040. ISSN 1477-2019.
- This was recognized not later than 1909: Celeskey, Matt (2005). "Dr. W. J. Holland and the Sprawling Sauropods". The Hairy Museum of Natural History. Archived from the original on 2011-06-12. Retrieved 2019-10-18.
- Holland, William J. (May 1910). "A Review of Some Recent Criticisms of the Restorations of Sauropod Dinosaurs Existing in the Museums of the United States, with Special Reference to that of Diplodocus Carnegiei in the Carnegie Museum". The American Naturalist. American Society of Naturalists. 44 (521): 259–283. doi:10.1086/279138. ISSN 0003-0147. Retrieved 2019-10-18.
- The arguments and many of the images are also presented in Desmond 1975.
- Benton 2005
- Cowen 2005, pp. 151–175, chpt. 12: "Dinosaurs".
- Kubo, Tai; Benton, Michael J. (November 2007). "Evolution of hindlimb posture in archosaurs: limb stresses in extinct vertebrates". Palaeontology. Hoboken, NJ: Wiley-Blackwell. 50 (6): 1519–1529. doi:10.1111/j.1475-4983.2007.00723.x. ISSN 0031-0239.
- Kump, Lee R.; Pavlov, Alexander; Arthur, Michael A. (May 1, 2005). "Massive release of hydrogen sulfide to the surface ocean and atmosphere during intervals of oceanic anoxia" (PDF). Geology. Boulder, CO: Geological Society of America. 33 (5): 397–400. Bibcode:2005Geo....33..397K. doi:10.1130/G21295.1. ISSN 0091-7613. Retrieved 2019-11-14.
- Tanner, Lawrence H.; Lucas, Spencer G.; Chapman, Mary G. (March 2004). "Assessing the record and causes of Late Triassic extinctions" (PDF). Earth-Science Reviews. Amsterdam: Elsevier. 65 (1–2): 103–139. Bibcode:2004ESRv...65..103T. doi:10.1016/S0012-8252(03)00082-5. ISSN 0012-8252. Archived from the original (PDF) on 2007-10-25. Retrieved 2007-10-22.
- Alcober, Oscar A.; Martinez, Ricardo N. (October 19, 2010). "A new herrerasaurid (Dinosauria, Saurischia) from the Upper Triassic Ischigualasto Formation of northwestern Argentina". ZooKeys. Sofia: Pensoft Publishers (63): 55–81. doi:10.3897/zookeys.63.550. ISSN 1313-2989. PMC 3088398. PMID 21594020.
- Sereno, Paul C. (June 25, 1999). "The Evolution of Dinosaurs". Science. Washington, D.C.: American Association for the Advancement of Science. 284 (5423): 2137–2147. doi:10.1126/science.284.5423.2137. ISSN 0036-8075. PMID 10381873. Archived (PDF) from the original on 2018-01-05. Retrieved 2019-11-08.
- Sereno, Paul C.; Forster, Catherine A.; Rogers, Raymond R.; Monetta, Alfredo M. (January 7, 1993). "Primitive dinosaur skeleton from Argentina and the early evolution of Dinosauria". Nature. London: Nature Research. 361 (6407): 64–66. Bibcode:1993Natur.361...64S. doi:10.1038/361064a0. ISSN 0028-0836.
- Nesbitt, Sterling J.; Barrett, Paul M.; Werning, Sarah; et al. (February 23, 2012). "The oldest dinosaur? A Middle Triassic dinosauriform from Tanzania". Biology Letters. London: Royal Society. 9 (1): 20120949. doi:10.1098/rsbl.2012.0949. ISSN 1744-9561. PMC 3565515. PMID 23221875.
- Langer, Max C.; Ramezani, Jahandar; Da Rosa, Átila A.S. (May 2018). "U-Pb age constraints on dinosaur rise from south Brazil". Gondwana Research. Amsterdam: Elsevier. 57: 133–140. Bibcode:2018GondR..57..133L. doi:10.1016/j.gr.2018.01.005. ISSN 1342-937X.
- Brusatte, Stephen L.; Benton, Michael J.; Ruta, Marcello; Lloyd, Graeme T. (September 12, 2008). "Superiority, Competition, and Opportunism in the Evolutionary Radiation of Dinosaurs" (PDF). Science. Washington, D.C.: American Association for the Advancement of Science. 321 (5895): 1485–1488. Bibcode:2008Sci...321.1485B. doi:10.1126/science.1161833. hdl:20.500.11820/00556baf-6575-44d9-af39-bdd0b072ad2b. ISSN 0036-8075. PMID 18787166. Retrieved 2019-10-22.
- Tanner, Spielmann & Lucas 2013, pp. 562–566, "The first Norian (Revueltian) rhynchosaur: Bull Canyon Formation, New Mexico, U.S.A." by Justin A. Spielmann, Spencer G. Lucas and Adrian P. Hunt.
- Sulej, Tomasz; Niedźwiedzki, Grzegorz (January 4, 2019). "An elephant-sized Late Triassic synapsid with erect limbs". Science. Washington, D.C.: American Association for the Advancement of Science. 363 (6422): 78–80. Bibcode:2019Sci...363...78S. doi:10.1126/science.aal4853. ISSN 0036-8075. PMID 30467179.
- "Fossil tracks in the Alps help explain dinosaur evolution". Science and Technology. The Economist. London: The Economist Group. April 19, 2018. ISSN 0013-0613. Retrieved 2018-05-24.
- Weishampel, Dodson & Osmólska 2004, pp. 627–642, chpt. 27: "Mesozoic Biogeography of Dinosauria" by Thomas R. Holtz Jr., Ralph E. Chapman, and Matthew C. Lamanna.
- Weishampel, Dodson & Osmólska 2004, pp. 614–626, chpt. 26: "Dinosaur Paleoecology" by David E. Fastovsky and Joshua B. Smith.
- Sereno, Paul C.; Wilson, Jeffrey A.; Witmer, Lawrence M.; et al. (November 21, 2007). Kemp, Tom (ed.). "Structural Extremes in a Cretaceous Dinosaur". PLOS One. San Francisco, CA: PLOS. 2 (11): e1230. Bibcode:2007PLoSO...2.1230S. doi:10.1371/journal.pone.0001230. ISSN 1932-6203. PMC 2077925. PMID 18030355.
- Prasad, Vandana; Strömberg, Caroline A. E.; Alimohammadian, Habib; et al. (November 18, 2005). "Dinosaur Coprolites and the Early Evolution of Grasses and Grazers". Science. Washington, D.C.: American Association for the Advancement of Science. 310 (5751): 1170–1180. Bibcode:2005Sci...310.1177P. doi:10.1126/science.1118806. ISSN 0036-8075. PMID 16293759.
- Weishampel, Dodson & Osmólska 2004, pp. 672–684, chpt. 30: "Dinosaur Extinction" by J. David Archibald and David E. Fastovsky.
- Dyke & Kaiser 2011, chpt. 14: "Bird Evolution Across the K–Pg Boundary and the Basal Neornithine Diversification" by Bent E. K. Lindow. doi:10.1002/9781119990475.ch14
- Cracraft, Joel (June 21, 1968). "A Review of the Bathornithidae (Aves, Gruiformes), with Remarks on the Relationships of the Suborder Cariamae" (PDF). American Museum Novitates. New York: American Museum of Natural History. 2326: 1–46. hdl:2246/2536. ISSN 0003-0082. Retrieved 2019-10-22.
- Alvarenga, Herculano; Jones, Washington W.; Rinderknecht, Andrés (May 2010). "The youngest record of phorusrhacid birds (Aves, Phorusrhacidae) from the late Pleistocene of Uruguay". Neues Jahrbuch für Geologie und Paläontologie. Stuttgart: E. Schweizerbart. 256 (2): 229–234. doi:10.1127/0077-7749/2010/0052. ISSN 0077-7749. Retrieved 2019-10-22.
- Mayr 2009
- Paul 1988, pp. 248–250
- Weishampel, Dodson & Osmólska 2004, pp. 151–164, chpt. 7: "Therizinosauroidea" by James M. Clark, Teresa Maryańska, and Rinchen Barsbold.
- Weishampel, Dodson & Osmólska 2004, pp. 196–210, chpt. 10: "Dromaeosauridae" by Peter J. Makovicky and Mark A. Norell.
- Taylor, Michael P.; Wedel, Mathew J. (February 12, 2013). "Why sauropods had long necks; and why giraffes have short necks". PeerJ. Corte Madera, CA; London. 1: e36. doi:10.7717/peerj.36. ISSN 2167-8359. PMC 3628838. PMID 23638372.
- Holtz 2007
- St. Fleur, Nicholas (December 8, 2016). "That Thing With Feathers Trapped in Amber? It Was a Dinosaur Tail". Trilobites. The New York Times. New York: The New York Times Company. ISSN 0362-4331. Archived from the original on 2017-08-31. Retrieved 2016-12-08.
- Dal Sasso, Cristiano; Signore, Marco (March 26, 1998). "Exceptional soft-tissue preservation in a theropod dinosaur from Italy". Nature. London: Nature Research. 392 (6674): 383–387. Bibcode:1998Natur.392..383D. doi:10.1038/32884. ISSN 0028-0836.
- Schweitzer, Mary H.; Wittmeyer, Jennifer L.; Horner, John R.; Toporski, Jan K. (March 25, 2005). "Soft-Tissue Vessels and Cellular Preservation in Tyrannosaurus rex". Science. Washington, D.C.: American Association for the Advancement of Science. 307 (5717): 1952–1955. Bibcode:2005Sci...307.1952S. doi:10.1126/science.1108397. ISSN 0036-8075. PMID 15790853.
- Alexander, R. McNeill (August 7, 2006). "Dinosaur biomechanics". Proceedings of the Royal Society B. London: Royal Society. 273 (1596): 1849–1855. doi:10.1098/rspb.2006.3532. ISSN 0962-8452. PMC 1634776. PMID 16822743.
- Farlow, James O.; Dodson, Peter; Chinsamy, Anusuya (November 1995). "Dinosaur Biology". Annual Review of Ecology and Systematics. Palo Alto, CA: Annual Reviews. 26: 445–471. doi:10.1146/annurev.es.26.110195.002305. ISSN 1545-2069.
- Weishampel, Dodson & Osmólska 2004
- Dodson & Gingerich 1993, pp. 167–199, "On the rareness of big, fierce animals: speculations about the body sizes, population densities, and geographic ranges of predatory mammals and large carnivorous dinosaurs" by James O. Farlow.
- Peczkis, Jan (February 15, 1995). "Implications of body-mass estimates for dinosaurs". Journal of Vertebrate Paleontology. Milton Park, Oxfordshire: Taylor & Francis for the Society of Vertebrate Paleontology. 14 (4): 520–533. doi:10.1080/02724634.1995.10011575. ISSN 0272-4634. JSTOR 4523591.
- "Dinosaur Evolution". Department of Paleobiology. Dinosaurs. Washington, D.C.: National Museum of Natural History. 2007. Archived from the original on 2007-11-11. Retrieved 2007-11-21.
- Sander, P. Martin; Christian, Andreas; Clauss, Marcus; et al. (February 2011). "Biology of the sauropod dinosaurs: the evolution of gigantism". Biological Reviews. Cambridge: Cambridge Philosophical Society. 86 (1): 117–155. doi:10.1111/j.1469-185X.2010.00137.x. ISSN 1464-7931. PMC 3045712. PMID 21251189.
- Foster & Lucas 2006, pp. 131–138, "Biggest of the big: a critical re-evaluation of the mega-sauropod Amphicoelias fragillimus Cope, 1878" by Kenneth Carpenter.
- Paul 2010
- Colbert 1971
- Mazzetta, Gerardo V.; Christiansenb, Per; Fariñaa, Richard A. (2004). "Giants and Bizarres: Body Size of Some Southern South American Cretaceous Dinosaurs" (PDF). Historical Biology. Milton Park, Oxfordshire: Taylor & Francis. 16 (2–4): 71–83. CiteSeerX 10.1.1.694.1650. doi:10.1080/08912960410001715132. ISSN 0891-2963.
- Janensch, Werner (1950). Translation by Gerhard Maier. "Die Skelettrekonstruktion von Brachiosaurus brancai" [The Skeleton Reconstruction of Brachiosaurus brancai] (PDF). Palaeontographica. Stuttgart: E. Schweizerbart. Supplement VII (1. Reihe, Teil 3, Lieferung 2): 97–103. OCLC 45923346. Archived (PDF) from the original on 2017-07-11. Retrieved 2019-10-24.
- Lucas, Spencer G.; Herne, Matthew C.; Hecket, Andrew B.; et al. (November 9, 2004). Reappraisal of Seismosaurus, a Late Jurassic Sauropod Dinosaur From New Mexico. 2004 Denver Annual Meeting (November 7–10, 2004). 36. Boulder, CO: Geological Society of America. p. 422. OCLC 62334058. Paper No. 181-4. Archived from the original on 2019-10-08. Retrieved 2019-10-25.
- Sellers, William Irvin; Margetts, Lee; Coria, Rodolfo Aníbal; Manning, Phillip Lars (October 30, 2013). Carrier, David (ed.). "March of the Titans: The Locomotor Capabilities of Sauropod Dinosaurs". PLOS ONE. San Francisco, CA: PLOS. 8 (10): e78733. Bibcode:2013PLoSO...878733S. doi:10.1371/journal.pone.0078733. ISSN 1932-6203. PMC 3864407. PMID 24348896.
- Lovelace, David M.; Hartman, Scott A.; Wahl, William R. (October–December 2007). "Morphology of a specimen of Supersaurus (Dinosauria, Sauropoda) from the Morrison Formation of Wyoming, and a re-evaluation of diplodocid phylogeny". Arquivos do Museu Nacional. Rio de Janeiro: National Museum of Brazil; Federal University of Rio de Janeiro. 65 (4): 527–544. CiteSeerX 10.1.1.603.7472. ISSN 0365-4508. Retrieved 2019-10-26.
- Woodruff, D. Cary; Foster, John R. (2014). "The fragile legacy of Amphicoelias fragillimus (Dinosauria: Sauropoda; Morrison Formation - Latest Jurassic)". Volumina Jurassica. 12 (2): 211–220. doi:10.5604/17313708.1130144 (inactive 2020-02-19).
- Dal Sasso, Cristiano; Maganuco, Simone; Buffetaut, Éric; et al. (December 30, 2005). "New information on the skull of the enigmatic theropod Spinosaurus, with remarks on its sizes and affinities" (PDF). Journal of Vertebrate Paleontology. Milton Park, Oxfordshire: Taylor & Francis for the Society of Vertebrate Paleontology. 25 (4): 888–896. doi:10.1671/0272-4634(2005)025[0888:NIOTSO]2.0.CO;2. ISSN 0272-4634. Archived from the original (PDF) on 2011-04-29. Retrieved 2011-05-05.
- Therrien, François; Henderson, Donald M. (March 12, 2007). "My theropod is bigger than yours … or not: estimating body size from skull length in theropods". Journal of Vertebrate Paleontology. Milton Park, Oxfordshire: Taylor & Francis for the Society of Vertebrate Paleontology. 27 (1): 108–115. doi:10.1671/0272-4634(2007)27[108:MTIBTY]2.0.CO;2. ISSN 0272-4634.
- Zhao, Xijin; Li, Dunjing; Han, Gang; et al. (2007). "Zhuchengosaurus maximus from Shandong Province". Acta Geoscientia Sinica. Beijing: Chinese Academy of Geological Sciences. 28 (2): 111–122. ISSN 1006-3021.
- Weishampel, Dodson & Osmólska 2004, pp. 438–463, chpt. 20: "Hadrosauridae" by John R. Horner David B. Weishampel, and Catherine A. Forster.
- Norell, Gaffney & Dingus 2000
- "Bee Hummingbird (Mellisuga helenae)". Birds.com. Paley Media. Archived from the original on 2015-04-03. Retrieved 2019-10-27.
- Zhang, Fucheng; Zhou, Zhonghe; Xu, Xing; et al. (October 23, 2008). "A bizarre Jurassic maniraptoran from China with elongate ribbon-like feathers". Nature. London: Nature Research. 455 (7216): 1105–1108. Bibcode:2008Natur.455.1105Z. doi:10.1038/nature07447. ISSN 0028-0836. PMID 18948955.
- Xu, Xing; Zhao, Qi; Norell, Mark; et al. (February 2008). "A new feathered maniraptoran dinosaur fossil that fills a morphological gap in avian origin". Chinese Science Bulletin. Amsterdam: Elsevier on behalf of Science in China Press. 54 (3): 430–435. doi:10.1007/s11434-009-0009-6. ISSN 1001-6538.
- Butler, Richard J.; Zhao, Qi (February 2009). "The small-bodied ornithischian dinosaurs Micropachycephalosaurus hongtuyanensis and Wannanosaurus yansiensis from the Late Cretaceous of China". Cretaceous Research. Amsterdam: Elsevier. 30 (1): 63–77. doi:10.1016/j.cretres.2008.03.002. ISSN 0195-6671.
- Yans, Johan; Dejax, Jean; Pons, Denise; et al. (January–February 2005). "Implications paléontologiques et géodynamiques de la datation palynologique des sédiments à faciès wealdien de Bernissart (bassin de Mons, Belgique)" [Palaeontological and geodynamical implications of the palynological dating of the wealden facies sediments of Bernissart (Mons Basin, Belgium)]. Comptes Rendus Palevol (in French). Amsterdam: Elsevier on behalf of the French Academy of Sciences. 4 (1–2): 135–150. doi:10.1016/j.crpv.2004.12.003. ISSN 1631-0683.
- Day, Julia J.; Upchurch, Paul; Norman, David B.; et al. (May 31, 2002). "Sauropod Trackways, Evolution, and Behavior". Science. Washington, D.C.: American Association for the Advancement of Science. 296 (5573): 1659. doi:10.1126/science.1070167. ISSN 0036-8075. PMID 12040187.
- Curry Rogers & Wilson 2005, pp. 252–284, chpt. 9: "Steps in Understanding Sauropod Biology: The Importance of Sauropods Tracks" by Joanna L. Wright.
- Varricchio, David J.; Sereno, Paul C.; Zhao, Xijin; et al. (2008). "Mud-trapped herd captures evidence of distinctive dinosaur sociality" (PDF). Acta Palaeontologica Polonica. Warsaw: Institute of Paleobiology, Polish Academy of Sciences. 53 (4): 567–578. doi:10.4202/app.2008.0402. ISSN 0567-7920. Archived (PDF) from the original on 2019-03-30. Retrieved 2011-05-06.
- Lessem & Glut 1993, pp. 19–20, "Allosaurus"
- Maxwell, W. Desmond; Ostrom, John H. (December 27, 1995). "Taphonomy and paleobiological implications of Tenontosaurus–Deinonychus associations". Journal of Vertebrate Paleontology. Milton Park, Oxfordshire: Taylor & Francis for the Society of Vertebrate Paleontology. 15 (4): 707–712. doi:10.1080/02724634.1995.10011256. ISSN 0272-4634.
- Roach, Brian T.; Brinkman, Daniel L. (April 2007). "A Reevaluation of Cooperative Pack Hunting and Gregariousness in Deinonychus antirrhopus and Other Nonavian Theropod Dinosaurs". Bulletin of the Peabody Museum of Natural History. New Haven, CT: Peabody Museum of Natural History. 48 (1): 103–138. doi:10.3374/0079-032X(2007)48[103:AROCPH]2.0.CO;2. ISSN 0079-032X.
- Tanke, Darren H. (1998). "Head-biting behavior in theropod dinosaurs: paleopathological evidence" (PDF). Gaia: Revista de Geociências. Lisbon: National Museum of Natural History and Science (15): 167–184. doi:10.7939/R34T6FJ1P. ISSN 0871-5424. Archived from the original (PDF) on 2008-02-27.
- "The Fighting Dinosaurs". New York: American Museum of Natural History. Archived from the original on 2012-01-18. Retrieved 2007-12-05.
- Carpenter, Kenneth (1998). "Evidence of predatory behavior by theropod dinosaurs" (PDF). Gaia: Revista de Geociências. Lisbon: National Museum of Natural History and Science. 15: 135–144. ISSN 0871-5424.
- Rogers, Raymond R.; Krause, David W.; Curry Rogers, Kristina (April 3, 2007). "Cannibalism in the Madagascan dinosaur Majungatholus atopus". Nature. London: Nature Research. 422 (6931): 515–518. Bibcode:2003Natur.422..515R. doi:10.1038/nature01532. ISSN 0028-0836. PMID 12673249.
Access to sanitation is deemed a basic human right for every human being, not just a luxury. Adequate sanitation is among the most significant public health measures available to the global community . Sanitation is the application of approaches and practices for the safe and sustainable handling of human excreta, including the collection, storage, treatment, and disposal of human body wastes . It is also characterized as a program encouraging the safe disposal of human and animal waste to enhance and safeguard the natural environment and public health . The Millennium Development Goals (MDGs), introduced in 2000 by the 189 participating countries of the United Nations (UN), included improved sanitation as a priority under MDG 7 (Ensuring environmental sustainability), aiming to halve the proportion of people without access to improved sanitation by 2015 . However, sanitation coverage did not advance as expected and remains a tremendous task for the subsequent Sustainable Development Goals (SDGs) campaign .
It is therefore fitting that sanitation is a key element of the UN SDGs, placed under SDG 6. SDG 6 consists of eight global targets, of which target 6.2 is the one specifically addressed in this review. The target marks a substantial rise in ambition and requires significant step-by-step change at both national and global scales; in particular, the SDG sanitation goal advocates sanitation “for everyone” . A basic sanitation service, as the United Nations International Children’s Emergency Fund (UNICEF) defines it, is an improved facility that is not shared with other households. The notion considered in SDG 6.2 is sanitation for everyone, without discrimination or differences. The Joint Monitoring Programme (JMP) likewise interprets sustainable sanitation in SDG 6.2 as implying a gradual reduction of differences among community subgroups . Improved sanitation facilities refer to excreta-management facilities that effectively prevent contamination from excreta passing among humans, livestock, and insects. Improved facilities range from simple but safe latrines to flush toilets with sewerage connections .
According to the JMP for water and sanitation conducted by WHO/UNICEF, the proportion of the world’s population using improved sanitation facilities rose by 14 percentage points, from 54% in 1990 to 68% in 2015 . Performance nevertheless fell well short of the 77% target set for 2015 . From 2000 to 2017, 2.1 billion people worldwide (26 percent of the population) gained access to basic sanitation facilities . Although access to sanitation has grown in the developing world, the goal of halving the population lacking adequate sanitation by 2015 was not achieved . The sanitation target was missed particularly in less developed countries; since 1990, globally only 27% of the population has gained access to adequate sanitation. According to WHO/UNICEF JMP data, in 2015 2.4 billion people worldwide still lacked improved sanitation facilities, and around 1.8 billion people relied mostly on basic pit latrines . This means that over 15 percent of the world’s population, around 1 billion people, have no access to any kind of sanitation service and practice open defecation . Approximately 892 million individuals around the world practiced open defecation, 90% of them residing in rural areas .
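The figures above mix two different measures of change: the absolute change in coverage, expressed in percentage points, and the relative change with respect to the starting level. The following minimal Python sketch, using the global JMP figures quoted above purely as illustrative inputs (the function names are my own, not part of any JMP tooling), makes the distinction explicit:

```python
def point_change(start_pct: float, end_pct: float) -> float:
    """Absolute change in coverage, in percentage points."""
    return end_pct - start_pct

def relative_change(start_pct: float, end_pct: float) -> float:
    """Relative change, as a percentage of the starting coverage."""
    return (end_pct - start_pct) / start_pct * 100.0

# Global improved-sanitation coverage cited above: 54% (1990) to 68% (2015)
start, end = 54.0, 68.0
print(f"{point_change(start, end):.1f} percentage points")      # 14.0
print(f"{relative_change(start, end):.1f}% relative increase")  # 25.9
```

Reading the reported “14%” as 14 percentage points (54% to 68%), rather than a 14% relative increase, is the interpretation used throughout this review.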
In addition, 4.5 billion people throughout the world lacked a safely managed sanitation service, one in which excreta are adequately disposed of in situ or treated off-site . Most of these people live in the less developed regions of Asia, Africa, and Latin America. The two most severely affected sub-regions are South Asia, with 953 million people, and Sub-Saharan Africa, with 695 million people lacking adequate sanitation . The existing condition in Africa is especially worrying, since just 28% of the inhabitants of sub-Saharan Africa have access to adequate sanitation and 23% still practice open defecation . According to the latest JMP report , access to clean drinking water in Ethiopia has risen to 57% of the population, sanitation services improved by 25 percentage points between 1990 and 2015, and open defecation showed a remarkable decline of 63 percentage points over the same period. In recent years, latrine coverage in Ethiopia has risen to 63 percent.
According to the latest available studies, access to affordable drinking water and sanitation in Ethiopia remains below Sub-Saharan and world averages . As a result, it is reported that about 37% of the population (45 percent in rural communities and 16 percent in urban areas, more than 35 million people) has no access to any kind of toilet and hence practices open defecation . Moreover, the “improved” sanitation category says nothing about how much human body waste is safely contained, conveyed, or treated. A survey of 12 towns across low- and middle-income countries showed that although 98% of residences had toilets, only 29% of human body waste (feces and urine) was safely handled and managed . Handling of liquid waste at the household level is extremely poor, and sewer systems are uncommon in the country outside Addis Ababa.
Waste handling and treatment services remain neither sufficient nor adequate in Ethiopia, particularly in urban areas. Poor sanitation remains one of the biggest development challenges in Ethiopia, affecting the country’s growth in terms of health, education, gender equality, and socioeconomic development . Weak leadership and enforcement by the sector’s administering institutions are among the biggest problems in implementing appropriate nationwide initiatives. Poor community sanitation practices, together with the lack of national policies and sanitation regulators, poor financing for sanitation infrastructure, and gaps in government monitoring and evaluation, remain the significant challenges behind this low coverage, and people, particularly rural communities, urban slums, and other vulnerable groups, are suffering as a result.
However, limited research has evaluated progress toward SDG 6.2 in Ethiopia, particularly the major challenges and prospects across different areas. Existing studies have focused mainly on rural-urban inequalities, and it remains uncertain whether improvements have been distributed evenly across the population. It is therefore critical to evaluate sanitation coverage inequalities between rural and urban households, as well as the challenges and potentials behind the slow progress of SDG 6.2 in Ethiopia. Analyzing the current major challenges and potentials of SDG 6.2 in Ethiopia allows policymakers and responsible bodies to devise solutions for better achievement. The main purpose of this review is therefore to analyze SDG 6.2 in Ethiopia, with respect to the major challenges and obstacles and the opportunities available to the sanitation sector for achieving the SDG target. The review was guided by Ethiopia Demographic and Health Survey (EDHS) and WHO/UNICEF JMP data from different years, as well as other related literature.
2. Current Status and Trends of Sanitation Coverage in Ethiopia
2.1. General Introduction to Ethiopia
Ethiopia, formally the Federal Democratic Republic of Ethiopia (FDRE), is a landlocked nation in East Africa and was never colonized. It is located at 8.626703˚N, 39.637554˚E and is bordered by Eritrea to the north, Djibouti and Somalia to the east, Sudan and South Sudan to the west, and Kenya to the south. It has an overall land area of 1.13 million km². It is the world’s 12th most populous nation and the 2nd in Africa, with a total population of 112,078,730 and a population growth rate of 2.6% in 2019. Ethiopia’s current urbanization level is 21.2%, growing at around 4.63% yearly . The mean minimum temperature is 6˚C, while the mean maximum rarely exceeds 29˚C. With an estimated Gross Domestic Product (GDP) of $81 billion in 2017, Ethiopia’s GDP per capita was recorded at 862 US dollars in the same period . Ethiopia is a place of origins: the origin of Arabica coffee, home to the Blue Nile, and host to rare species, among others. It is blessed with ample water resources and is known as Northeastern Africa’s “water tower.” Ethiopia has 12 major river basins, 12 large lakes, and several man-made reservoirs. The total annual mean discharge from all 12 river basins is estimated at around 124 billion m³, with a further 40 billion m³ of groundwater . As shown in Figure 1, Ethiopia has nine rural political-administrative regions based on ethnic territoriality, named Tigray, Afar, Amhara, Oromia, Southern Nations, Nationalities and Peoples’ (SNNP), Benishangul-Gumuz, Gambela, Harari, and Somali, and two administrative cities, the Addis Ababa city administration and the Dire Dawa city council.

Figure 1. Map of Ethiopia showing its political-administrative regions and cities.
Following the introduction in September 2015 of the 2030 Sustainable Development Framework and the SDGs, Ethiopia proactively incorporated and mainstreamed the SDGs into its national strategy. Ethiopia has adopted and supported the 2030 sustainable development agenda with national responsibility and ownership, incorporating the SDGs into its national development system . The government of Ethiopia acknowledges that improving access to drinking water, sanitation, and hygiene is critical for achieving the SDGs. Ethiopia incorporated SDG 6 into its 2nd Growth and Transformation Plan (GTP II), the 2015/16 to 2019/20 strategic implementation roadmap. In addition, the Ethiopian government is formally formulating a 10-year prospective development plan for the period 2019/20 to 2029/30 that is fully compatible with the 2030 framework and the SDGs .
Accordingly, the adoption of the SDGs in Ethiopia has made substantial progress, with a clear sense of state ownership. Ethiopia has an SDG index score of 53.2, close to the sub-regional average of 53.8, and a global rank of 135 out of 162 SDG countries . Nonetheless, Ethiopia has seen extremely poor improvement in water and sanitation and a troubling pattern in the past year’s achievements. The current situation of Ethiopia in terms of SDG 6 can thus be characterized as extremely poor performance, with most people facing severe challenges. Significant improvement is required, and this weak achievement indicates that major challenges remain and that the country is not on track to meet the SDGs by the end of 2030. In the SDG Dashboard, colors represent the different levels of current status and trend.
According to Table 1, Ethiopia has a total score of 41.1 in SDG 6, with a red rating indicating major challenges and a strongly stagnant tendency in achieving the goal . The yellow arrow indicates that the SDG 6 trend in Ethiopia is stagnating: significant challenges remain, and it appears hard to meet the target by the end of 2030. The sub-goals, moreover, show low performance in differing situations. First, SDG 6.1, population using at least basic drinking water services, is in a situation where major challenges remain and progress is stagnating; the trend shows the country is not on track to achieve the goal by 2030. Second, SDG 6.2, population using at least basic sanitation services, likewise faces major challenges, and its trend is daunting, making the goal difficult to attain . Third, the value for SDG 6.3 shows that freshwater withdrawal is currently rated green and in a good situation. The fourth sub-goal (imported groundwater depletion) is also rated green, indicating that it is on track to meet the target by 2030. Last but not least, SDG 6.5 reveals that Ethiopia’s share of treated wastewater is quite insignificant, close to zero.
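The dashboard’s “on track / off track” language can be made concrete by comparing the historical rate of improvement with the rate that would be required to reach the target year. The sketch below is a naive linear-trend illustration, not the official dashboard methodology; the 3% (1990) and 28% (2015) improved-sanitation figures are taken from Section 2.2 below, and universal coverage by 2030 is assumed as the target:

```python
def annual_gain(start_pct: float, end_pct: float, years: int) -> float:
    """Observed average improvement, in percentage points per year."""
    return (end_pct - start_pct) / years

def required_gain(current_pct: float, target_pct: float, years_left: int) -> float:
    """Improvement per year needed to reach the target on time."""
    return (target_pct - current_pct) / years_left

# Improved sanitation in Ethiopia: ~3% in 1990 to 28% in 2015 (see Section 2.2)
observed = annual_gain(3.0, 28.0, 2015 - 1990)      # 1.0 point/year
required = required_gain(28.0, 100.0, 2030 - 2015)  # 4.8 points/year
print(f"observed: {observed:.1f} pts/yr, required: {required:.1f} pts/yr")
```

At roughly one percentage point of gain per year against nearly five required, the stagnating rating in Table 1 is unsurprising.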
2.2. Sanitation Status and Trends
Ethiopia’s estimated sanitation coverage, broken down into improved, shared, and other unimproved facilities, reached 28%, 14%, and 29% respectively in 2015, compared with 3%, 14%, and 1% in 1990.
Table 1. SDG 6 Water, Sanitation, and Hygiene (WASH) achievement of Ethiopia .

Figure 2. The proportion of the population using improved sanitation facilities in Ethiopia, 2000-2015 .

According to Figure 2, Ethiopia’s improved sanitation coverage was around 8.6% in 2000 and reached 28% in 2015, an increase of roughly 19 percentage points over the 15-year interval . Only 7% of the population (0.63% rural, 18% urban) used basic sanitation facilities, 6.84% of the population (1.26% rural, 30% urban) used limited sanitation facilities, and 59% of the population (62.4% rural and 44% urban) used unimproved sanitation facilities. This indicates that around 72 percent of people in Ethiopia live without adequate sanitation facilities . In 1990, 44.3 million people practiced open defecation, compared with 28.3 million in 2015, indicating a large decrease over the 25 years .
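The national shares just quoted can be roughly cross-checked as population-weighted averages of the rural and urban values. The following sketch assumes the roughly 81% rural population share cited later in this review; the function is a hypothetical helper for illustration, not part of any survey tooling:

```python
def national_coverage(rural_pct: float, urban_pct: float,
                      rural_share: float = 0.81) -> float:
    """Population-weighted national coverage from rural/urban coverage."""
    return rural_share * rural_pct + (1.0 - rural_share) * urban_pct

# Limited sanitation services quoted above: 1.26% rural, 30% urban
print(f"{national_coverage(1.26, 30.0):.2f}%")  # ~6.72%, near the 6.84% cited
```

The small residual against the 6.84% figure is expected, since the exact rural share varies with the survey year and source.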
Reporting data and making them available is very important; however, there are only a few nationwide studies in Ethiopia on access to household sanitation, such as the reports of the Ethiopia DHS and the WHO/UNICEF JMP. No reliable, structured sanitation data are available after 2015, which is a major obstacle to evaluating the existing situation in Ethiopia. Estimates and correlations of statistics from the countrywide survey of sanitation facilities show that 47.9% of the population has access to improved sanitation facilities, while 52.1% of the population in Ethiopia faces unimproved sanitation facilities . The EDHS survey can be recognized as the only monitoring tool in Ethiopia that identifies and collects data on shared latrines .
According to Figure 3, the proportion of households practicing open defecation decreased from 82 percent in 2000 to 32 percent in 2016. Reducing open defecation in Ethiopia has been a great achievement over this period, representing a drop of 50 percentage points in sixteen years. The results also show that the use of improved sanitation rose slightly, by 1.2 percentage points from 2000 to 2005 and 1.4 points from 2005 to 2011, and then fell by 2.8 points between 2011 and 2016. The largest reduction was recorded in 2016, when only 15% of households used improved sanitation. In rural areas, a decline was detected in the percentage of households using improved sanitation. A household survey of 16 cities and political regions in Ethiopia, which examines data based on JMP concepts, reveals that 57% of households have access to improved sanitation, 25% rely on unimproved sanitation, and 13% of households practice open defecation. A minimal proportion (4 percent) of households uses facilities shared with other residences .
Figure 3. Percentage of the population with access to sanitation facilities in Ethiopia .

Figure 4. Percentage of the population practicing open defecation, according to the JMP (Joint Monitoring Programme) .

According to Figure 4, open defecation stood at 77% in 2000 and is reported to have fallen from 92% in 1990 to 29% by 2015, a decrease of almost 63 percentage points over 25 years. There was a 4.7 percentage-point improvement in urban residences using improved sanitation between 2011 and 2016, while the proportion of rural residences using improved sanitation fell by 3.4 percentage points. In 2016, 85% of the nation used unimproved sanitation, including 94% of rural and 49% of urban residences. The urban/rural comparison revealed a substantially greater share of open defecation in rural areas. Although the findings show a decrease of 8.9 percentage points between 2011 and 2016, and a generally lower incidence of open defecation in urban areas, the percentage of urban households practicing open defecation had earlier risen to 12.2 percent in 2005 and 15.9 percent in 2011. It is well established that one of the greatest challenges highlighted in the SDGs is enabling everyone to have access to adequate and equitable sanitation and hygiene and to end open defecation, giving particular consideration to the needs of women and girls and of those in marginalized circumstances . Achieving this admirable goal, however, is not feasible without a much greater emphasis on geographic and community disparities in access to sanitation: between rural and urban communities, poor and rich, men and women, and marginalized communities relative to the total population .
In reality, the disparity can be interpreted in four senses simultaneously: spatial disparities, demographic disparities, gender disparities, and intergenerational disparities. Sanitation facilities and coverage also differ between urban and rural areas globally, including in Ethiopia. According to WHO/UNICEF, improved sanitation coverage in 2015 was 27% in urban areas and 8% in rural areas, whereas the people who practiced open defecation were 6% in urban areas and 34% in rural areas. According to the EDHS, improved sanitation coverage in 2016 was 15.9% in urban and 3.9% in rural areas, whereas the people practicing open defecation were 6.9% in urban and 38.8% in rural areas. This indicates that sanitation coverage in rural areas is much lower than in urban areas, and 81% of the population of Ethiopia lives in rural areas. Moreover, approximately 92 percent (nearly 73.6 million people) of the population lives without improved sanitation, and 34% of the rural population practices open defecation.
Several studies have made the point that urban areas have more toilet access than rural areas. Furthermore, there are also disparities between administrative regions. A survey conducted in 2016 in the rural political administrative regions of Ethiopia indicated that five regions performed better, with coverage varying from 51% in Amhara to 91% in Benishangul Gumuz. There are, however, three regions―Gambella, Afar, and Somali―where open defecation still predominates, dragging down the national average. The EDHS 2016 report indicated the same result. Figure 5 also shows that in the Afar, Somali, and Gambella regions open defecation averages 70% and is still predominant, which pulls down the national average. A national study conducted in Ethiopia also showed that emerging regions such as Afar, Somali, Benishangul Gumuz, and Oromia's pastoralist areas face unique health service delivery challenges, sanitation in particular.
Figure 5. Percentage of the population with access to a latrine in the rural political administrative regions of Ethiopia.

Ethiopia's pastoralist segment, the arid and semi-arid regions covering 61 percent of the national land base and holding 12 - 15 percent of the country's total population, is home to millions of pastoralists from various ethnic groups. Research has shown that pastoralists have long been excluded from the central government and were among the most disadvantaged in terms of affordability of and connection to sanitation facilities. Sanitation coverage there is low owing to the nature of the area: poor water supply, the mobile lifestyle of the community, soil properties, a shortage of qualified manpower, and the habit of open defecation. There is also a disparity in sanitation facilities based on wealth. As worldwide figures suggest, the people without links to basic sanitation in Ethiopia are those in the lower per-capita income groups, with more than three-quarters of the lowest income group practicing open defecation compared with merely 12 percent of the richest.
The absence of access to improved sanitation and the practice of open defecation have a substantial socio-economic effect on households without access, including those in communities where access to sanitation is low. While urban sanitation coverage is usually greater than rural coverage, small, unplanned, densely populated urban areas are generally severely underserved. Dry pit latrines, comprising simple and improved pit latrines used by 92.5 percent of the people of Ethiopia, demand frequent maintenance, especially pit emptying and better handling of fecal sludge. There is no adequate treatment or valorization of fecal sludge in Ethiopia. Consequently, if the full Fecal Sludge Management (FSM) chain were used as a criterion, none of Ethiopia's sanitation facilities would count as adequate sanitation services. Within the SDG framework, proper management and valorization of fecal sludge should also be regarded as response measures that increase sanitation coverage. Most latrines store liquid waste using septic tanks, latrine pits (seepage pits), and cesspools. Once these storages are full, either city vacuum vehicles are used, or the contents are emptied freely or into drainage systems such as community culverts. Research in Bahir Dar city, Northern Ethiopia, for example, revealed that 64% of residences spill liquid waste into the nearby public area.
2.3. Wastewater Treatment
In particular, to achieve SDG 6.2, attention has been given to the collection, transport, and handling of human excreta and wastewater, as well as hygiene. Sanitation solutions are typically categorized as centralized or decentralized (cluster network and on-site systems). The more expensive centralized system consists of a wastewater network with various pipe sizes needed to transport wastewater from a large number of households to a central wastewater treatment plant, often far from the wastewater sources. The government of Ethiopia currently uses conventional decentralized sewage treatment systems in many areas of the country. In decentralized systems, wastewater is collected and treated, and the effluent is recycled or discharged close to the source of generation. The simplest kind of decentralization is the on-site facility located at the source of wastewater generation; this involves no sewer line network. Household owners are left to operate and manage on-site disposal systems, which creates challenges, as many conventional systems currently in use do not provide treatment. At present, Ethiopia suffers from the deterioration of water resources connected with wastewater and from poor management of solid waste, sludge, and sewage. Nearly all of rural Ethiopia is non-sewered. Sewage connections are rare in Ethiopia even in urban areas (0.4% - 6.6%) and nonexistent in rural areas. The urban and peri-urban parts of Ethiopia are characterized by inadequate sanitation, indiscriminate waste disposal, and open defecation. The United Nations Environment Programme (UNEP) reports that 90% of wastewater in the less developed world is discharged freely into the environment without handling or management. Globally, approximately 1.5 billion people have access to sanitation facilities whose excreta are discharged into the environment untreated.
A national survey in Ethiopian cities showed that only 7% of households had liquid waste disposal facilities with sewer lines. For instance, Addis Ababa, the capital city, produces nearly 49 million m3 of total wastewater per annum, of which about 4 million m3 is industrial wastewater. It has two secondary wastewater treatment plants (Kality, with a capacity of approximately 7600 m3/day or 228,000 m3/month, and Kotebe, with a sludge treatment capacity of 85,000 m3/year) and a centralized sewerage system; less than 10% of wastewater (7.03% in 2017) was treated and connected to the sewer line. As a consequence, wastewater is released freely into the ecosystem, partially or entirely untreated. Effluent from sewage and pit latrines and urban waste triggers algae and weed growth, which decreases the water body's oxygen level and in turn affects aquatic and vegetation ecosystems. Sewage from domestic households, underground storage and pipe leakages, wastewater, and septic tanks are considered the major causes of water pollution. Sewage and wastewater discharged directly into water bodies pollute the water sources on which people and the environment as a whole depend. This is largely due to the absence of policy and strategy on the sustainable utilization of wastewater, inadequate implementation of pollution prevention and control systems, lack of awareness of waste management, and the high investment costs needed for treatment plants.
3. Challenges in Achieving SDG Sanitation Targets in Ethiopia
Even though Ethiopia has made substantial progress towards sanitation coverage, 29% of the population (close to 30 million inhabitants) does not have access to adequate sanitation. Thirty-four percent (34%) of rural inhabitants practice open defecation, compared with 6% of urban dwellers. Sewerage connections are uncommon in urban areas (0.4% - 6.6%) and nonexistent in rural areas. The growing problems with environmental sanitation and sewage treatment in Ethiopia can be seen as consequences of the lack of an institutional home, with duties shared between several authorities; the lack of a clear implementation approach for the sanitation strategy; underfunding of the sector; and the shortage of treatment facilities for liquid waste. Together these leave the country with poor sanitation coverage.
3.1. Policy and Institutional Challenges
The slow improvement in access to adequate sanitation in Ethiopia, as in several other less-developed nations, may be related to limitations in interpreting approaches, policies, and practices; flawed sector-specific integration; and inadequate national budget allocations. In many countries, operations such as policy formulation, creation of regulatory structures, planning, coordination, financing and funding, capacity building, data collection and monitoring, and regulation are the responsibility of government. The development of environmental health practices in Ethiopia has a strong tradition dating from the 1900s. As shown below in Table 2, since 1993 the sanitation and hygiene sector in Ethiopia has undertaken many initiatives, developing appropriate institutions and regulatory and legal structures to resolve sector constraints. According to the 2010 UN Economic and Social Council declaration on the right to sanitation, states must guarantee that everyone has adequate and sufficient access to improved sanitation across all fields of society that is clean, hygienic, socially and culturally appropriate, provides privacy, and guarantees dignity.

Table 2. Policies and institutions implemented for the sanitation and hygiene target.
Although this demonstrates significant advances, most of these improvements remain insufficient, and a number of persistent obstacles must be addressed to develop the organizational capacity to achieve and maintain MDG and SDG coverage levels. Most significantly, these include chronic monitoring and evaluation constraints that cast doubt on the sector's improvements and the feasibility of its strategies, and that restrict opportunities for learning. Monitoring variations in sanitation and hygiene coverage remains a significant challenge for the country. The MoH is responsible for monitoring health and hygiene activities in Ethiopia, but the prevailing trend in recent years has been to coordinate planning, monitoring, and reporting within the WASH sector. To this end, cooperation and coordination have been improved among the MoH, MoWE, the Ministry of Education (MoE), and the Ministry of Finance and Economic Development (MoFED). Financial utilization of funding sources remains low, resulting from insufficient coordination and integration with fundamental government structures and from significant human capital difficulties, particularly at the woreda (i.e., district) level, despite major but separate capacity-building projects under the framework.
Governments and enterprises, and health ministries in particular, can never fulfill their prominent roles as sanitation coordinators and regulators without policy initiatives that develop government institutions into leading health institutions, focusing specifically on domestic activities and public participation, promoting competition, and integrating sanitation and hygiene into health systems. Today, national government budgets for health services are higher than those of most other sectors, and health expenditures have increased much faster than those of most other sectors. Nevertheless, in previous years the lack of national policies was a significant problem in combating sanitation problems. Mainstreaming sanitation issues in various public and non-governmental organizations, given their nomenclature and structure, is still an unfinished mission. The political branches of government do not yet perform their essential functions as sanitation watchdogs. Health and environment experts and managers are the people responsible for persuading society and other stakeholders. The society's socioeconomic and knowledge level is limited at the national level in general, and in rural sections in particular.
3.2. Water Scarcity
Over the last few decades, it has become clear that freshwater scarcity, driven by growing demand, is increasingly seen as a systemic worldwide hazard to the sustainable growth of human society. Water scarcity affects over 40% of the global population, and that number is expected to increase. At present, an estimated 4 billion people live under severe water scarcity for at least one month of the year, and 500 million of them face extreme water scarcity throughout the year. Water scarcity is expected to increase in many developing nations such as Ethiopia and will affect progress towards SDG 6.2, as there may be inadequate or only sub-optimal quantities of water for handwashing, menstrual hygiene management, food hygiene, low-volume latrine pour-flushing, and even hydraulic condominium sewerage service, as well as personal cleanliness. Even though Ethiopia has plentiful water resources, the available water is not distributed evenly across the country, and the amount varies with seasons and years. Moreover, the people of Ethiopia, especially in rural regions, still have a long way to go to meet their regular water needs for both drinking and sanitation.
3.3. Lack of Finance and Level of Poverty
There is a very strong need for additional financial support to meet the SDG 6.2 target, and the investment requirements in the sanitation sector are rising significantly. Considerable time and money must be invested in designing and constructing new infrastructure. More financing is required, from more productive use of existing capital to new funding standards, to create stronger prospects for dramatic change in the coming years. Official Development Assistance (ODA) disbursements for the water sector as a whole rose from US$7.2 billion in 2011 to US$8.8 billion globally in 2016. Currently available financial resources for implementing SDG 6 are insufficient. The World Bank projected the total investment cost of achieving SDG targets 6.1 and 6.2 at US$114 billion a year. This excludes the other SDG 6 targets and also ignores operation and maintenance, supervision, institutional support, sector enhancement, and human resources. The overwhelming challenge is to plan, develop, and maintain systems for water, wastewater, and sanitation that promote universal access to clean water and sanitation. The funding available to improve the enabling environment is insufficient to deliver improved sanitation services for households.
Ethiopia spends 0.01% of its GDP on sanitation. The MoWE budgets about US$18 million annually for the reconstruction and extension of the Addis Ababa sewerage network. In 2007, an evaluation of institutional sanitation requirements projected the expense of providing sanitation to existing schools and health facilities at an estimated US$510 million. Total spending costs for sanitation equipment are projected at US$795 million annually because of the low current level of coverage, all of which households are required to contribute. A rough estimate of planned public investment puts the overall projected investment in sanitation at about US$50 million each year. The funding deficit in Ethiopia is currently estimated at 60 - 70 percent of the SDG requirement. Universal improved sanitation and freedom from open defecation may be achieved by 2030 provided that investment increases from the current level of less than 1% of GDP to between 2 and 4%. Furthermore, there is not only a lack of finance in the region but also unexploited potential for repayable finance, such as microfinance and mixed finance, and an insufficient distribution of financial resources to the poor and disadvantaged who cannot access facilities.
3.4. Population Growth and Density
The decline in access to improved sanitation in Ethiopia is mainly a function of rapid population growth. The expected accelerated population growth in less developed regions is among the main challenges to achieving the target of safe, accessible sanitation for everyone by 2030. By 2030, the urban population of low- and middle-income countries is projected to be about 4 billion, and the rural population approximately 3 billion. Owing to population growth, the urban population in Africa without upgraded sanitation facilities rose from 80 million in 1990 to 215 million in 2015. Rapid urbanization is thus among the major challenges in the provision of sanitation services. In most sub-Saharan African countries, sewer infrastructure is lacking and sewer networks are minimal. For example, the Kality treatment plant in Ethiopia, established in 1983, was initially planned to serve 50,000 residents of Addis Ababa, but after more than 30 years it serves only 13,000 people. Moreover, several infrastructure solutions, including built wastewater treatment plants, disintegrate or fail to adequately meet sanitation requirements. As shown below in Table 3, Ethiopia's population without improved sanitation facilities increased from nearly 46.6 million in 1990 to almost 70.2 million in 2015. This is the result of population growth from 48 million in 1990 to 98 million in 2015, a total increase of 50 million within 25 years. Serious attention is therefore required to serve the population living with unimproved sanitation facilities, taking population growth into consideration.
Table 3. Total population with access to sanitation facilities in Ethiopia by JMP (Joint Monitoring Programme) .
4. Impacts of Inadequate Sanitation
4.1. Health Consequences
Safe collection, transportation, and disposal of human waste is still a neglected environmental problem in towns. Lack of sanitation and contaminated water result in the spread of pathogens, mainly through feces and, to a lesser degree, urine. Insufficient sanitation and weak hygienic practices result in tremendous public health expenses and disease. Sanitation problems constitute 10% of the worldwide disease burden. Improper handling of human excreta poses a significant public health hazard. Many of these diseases are spread through fecal-oral pathways, but others are transmitted via fecal-skin pathways (such as schistosomiasis) and fecal-eye pathways (such as trachoma). 200 million tons of human waste goes uncollected and untreated worldwide each year. In Ethiopia, 60% of overall disease is related to poor sanitation and unsafe water supply.
Diarrheal disease: This is the world's most severe fecal-oral disease, causing approximately 1.6 - 2.5 million fatalities annually, mostly among children under the age of five living in less-developed nations. Diarrhea is the second most frequent cause of death in children below age 5, with no improvement in the past ten years. In the Ethiopian context these diseases are associated with poverty and lack of knowledge of basic sanitation. Diarrheal diseases are among Ethiopia's most significant diseases and a main cause of death, primarily in children below 5. For example, in 2016 diarrhea was the primary cause of mortality in Ethiopia for infants below five, accounting for 10 percent of all deaths. In Ethiopia alone, about 600 children die from diarrhea every day.
Trachoma: This is the world's leading cause of infectious blindness. Trachoma is responsible for vision loss in nearly 2.2 million individuals worldwide, of whom 1.2 million are permanently blind. Ethiopia is one of the five countries in which half of the world's active trachoma burden is concentrated. Among the nine regional states and two chartered cities of Ethiopia, the Amhara National Regional State (ANRS) is adversely impacted by trachoma.
Helminth infections and schistosomiasis: Helminth infections are transmitted via fecal matter in water (schistosomiasis) and soil (soil-transmitted helminths, STH). Intestinal worms infect around 10% of the developing world's population, including in Ethiopia. They can harm the liver, stomach, lungs, and bladder. Sub-Saharan Africa bears three-quarters of the burden. Schistosomes and STHs are significant public health problems in Ethiopia, with national prevalences recorded at 16.5% and 28.8% respectively. Between 2004 and 2009, over 11 million pre-school children (aged 2 to 5 years) received preventive chemotherapy (PC) against STH infections.
Undernutrition: This causes approximately 45% of all child deaths and contributes to 11% of the world's disease burden. Inadequate sanitation, hygiene, and water account for around 50% of childhood and maternal underweight cases, primarily through the interaction between diarrhea and undernutrition.
4.2. Environmental Consequences
Waste poses a major threat to the human environment, and thus to health, primarily because of how it is disposed of. Insufficient and unsanitary treatment of human waste contributes to pollution of the soil and of water supply sources. Human waste management is among the critical environmental health initiatives that the WHO has identified as key measures for protecting our environment. The current poor water and sanitation coverage in Ethiopia has a drastically adverse environmental effect. The most significant environmental impact of inadequate sanitation is contamination from poorly controlled human excreta. The absence of sufficient sanitation is a serious challenge for the ecosystem, degrading the urban environment through indiscriminate discharge of solid and liquid waste and polluting freshwater and reservoirs with untreated human waste. Poor sanitation sends sewage and waste streams into rivers, streams, lakes, and wetlands, threatening coastal and marine environments and exposing people to pollution. Poorly managed waste also means regular exposure to an unhealthy ecosystem. In addition, poorly treated human excreta have significant ecosystem impacts, contaminating human communities and water bodies. A large proportion of the wastewater discharged into rivers, lakes, oceans, and surrounding streams in Ethiopia contaminates some of the same sources that people use for drinking water. Untreated sewage discharges pollute the environment, contaminate sources of drinking water, and affect plant and aquatic life. Municipal wastewater and sewage account for a huge share of total biological oxygen demand in densely populated river basins.
4.3. Financial and Economic Consequences
Although economic and financial analysis shows that sanitation provides economic benefits, an individual investing in improved sanitation does not automatically gain from the investment; household-level economics is therefore an obstacle to development. Moreover, the costs of inadequate sanitation in the East Asia and Pacific and sub-Saharan African economies have surpassed 2% of total GDP, whereas in South Asia they have surpassed 4% of GDP. Poor financing for the sanitation sector and associated organizations has inhibited the successful implementation of policy reforms. Costs are particularly significant in achieving the SDG 6.2 target of safely managed sanitation. Inadequate WASH infrastructure causes a yearly loss of approximately US$260 billion in less developed nations. For some countries the losses run to billions of dollars: the equivalent of 7.2% of GDP in Cambodia, 6.3% in Bangladesh, 6.4% in India, 3.9% in Pakistan, and 2.4% in Niger per annum, as estimated by the World Bank. The annual costs of flood damage, insufficient WASH, and water shortage are estimated at US$500 billion. It is also reported that low-quality sanitation infrastructure costs Ethiopia US$570 million (13.5 billion Birr) annually, estimated at 2.1% of national GDP.
4.4. Impact on Well-Being
Concern over waste disposal also has social and economic effects on the country. Access to improved sanitation services is essential to every society's socio-economic health and sustainable growth. Poor sanitation decreases human well-being and socio-economic growth through consequences including sexual harassment, anxiety, risk, and missed opportunities for education. Limited access to adequate sanitation and the widespread practice of open defecation affect households without access and those living in communities where access to sanitation is poor. Improved sanitation provides individuals with greater comfort, security, dignity, and status, along with wider environmental benefits. These advantages are widely recognized as among the most significant for sanitation beneficiaries and may be of particular relevance to women. On-plot sanitation decreases the threat of theft or violence (including sexual assault and rape), particularly at night or in remote areas.
5. Opportunities for Sanitation Service
Over the last decade, Ethiopia has made important improvements in expanding access to sanitation. According to the WHO/UNICEF JMP report, sanitation coverage increased from a mere 8 percent in 1990 to 71 percent in 2015, just 25 years later. In Ethiopia, 44.3 million people were practicing open defecation in 1990, and this number had decreased to 28.3 million people 25 years later in 2015. This important success was largely facilitated by the Ethiopian government's adoption of the Community-Led Total Sanitation and Hygiene (CLTSH) approach, whose framework was officially implemented in 2011 and enforced throughout the country under the health extension program. UNICEF has supported the MoH throughout the process of building the policy, carrying out training, and implementing CLTSH across Ethiopia. Around the year 2000, Community-Led Total Sanitation (CLTS) originated as a participatory approach to tackling open defecation.
Health Extension Workers (HEWs) campaign to introduce CLTS in Ethiopia, where open defecation has been reduced substantially since CLTS was adopted. Besides, Ethiopia is among the least economically developed nations and has been the beneficiary of substantial donor funding to support the Ethiopian government in meeting the SDGs. Accordingly, many development partners play a role in increasing households' access to improved sanitation facilities in the country. The One WASH national program is the Ethiopian government's key mechanism for achieving the sanitation and hygiene targets of the country's GTP. The country has also started waste treatment and management strategies for both liquid and solid wastes generated by inhabitants to tackle the sanitation problem, especially in cities. Currently, the government of Ethiopia is investing large amounts of money in water infrastructure to solve the socio-economic problems of the population. The establishment of a hydraulic infrastructure platform to support the construction and management of water and urban infrastructure will also, it is hoped, address the current problem. Water infrastructure investment creates greater economic benefits through spending that flows to directly affected businesses and their workers, and this in turn can improve the current low coverage of sanitation.
6. Strategies for Achieving Success in Sanitation
From the available studies, three major strategies should be applied in Ethiopia to meet the sanitation target. The most important of these is political leadership, demonstrated by establishing strong administrative accountability with resource allocation for sanitation, and by guaranteeing that government sector agencies in the areas of health, water management, environmental protection, and urban municipal offices operate together. Unfortunately, the government of Ethiopia is neglecting the sanitation sector. As mentioned above, authority for sanitation and waste treatment and management in Ethiopia is divided among the Ministry of Water, Irrigation and Electricity, the Ministry of Health (environmental sanitation and national hygiene), the Ministry of Urban Development and Housing Construction, and the Ministry of Environment and Forestry.
Efforts to meet the SDG 6.2 target should place more emphasis on enhancing current sanitation systems and sustaining them. To overcome the problems of improving access to sanitation, it is essential that the major stakeholders in Ethiopia's sanitation sector, including policymakers, the health and water sectors, development partners, and the population, work together. Formulating clear policies, regulations, strategies, and guidelines for promoting waste collection, transportation, reuse, and recycling is essential. Policy must therefore build demand for services; promote and strengthen collaboration between the private sector, Non-Governmental Organizations (NGOs), community-based organizations, local authorities, and individuals; and eliminate barriers to improved sanitation. The present level of funding for sanitation in Ethiopia is not adequate to meet the SDG sanitation targets.
Political leadership expresses itself in establishing strong institutional accountability and precise sanitation resource allocation, and in ensuring that public sector agencies can work well together on health, water supplies, and utility services. Awareness creation and empowerment of citizens, particularly women, is essential to tackle the current overwhelming problem. Empowerment is a crucial step toward a balance between regulation and the responsibility to improve sanitation. Improved sanitation infrastructure is largely the responsibility of individual homeowners, except for centralized sewage systems, which are a public and private responsibility. Sanitation should be gender-sensitive, and sanitation policy must serve the interests, priorities, and lifestyles of girls, women, and men in equal proportion.
A sustainable sanitation approach is needed to tackle these multifaceted problems. Initiatives to enhance water quality and sanitation should include policies that take into consideration the impact of increased flows of urban wastewater on downstream agricultural and domestic uses. Measures to control pollution are important to minimize further accumulation of contaminants, particularly heavy metals, in water, soils, and crops. Such a sanitation approach is concerned with establishing equity and protecting the consumer, the general public, and the environment. The goal is to build a system that is socially, economically, and environmentally sustainable: a sanitation system that neither pollutes the environment nor depletes scarce resources. The discharge and treatment mechanisms for waste generated by the public and private sectors have to be designed to eliminate and avoid water pollution and related environmental contamination. This means that sanitation mechanisms must not contribute to the deterioration of water or soil.
This paper explores the challenges and opportunities of sanitation facilities in Ethiopia. The review also presents the global situation, including Africa, to allow Ethiopia's achievement to be easily compared and narrated. Even though Ethiopia has made substantial progress in improving sanitation facilities in the past decades, coverage remains low and most people live with unimproved sanitation. It is essential to understand the situation of the country, the problem of unsafe sanitation management, and its related impacts. Wastewater treatment at the household and public levels should be practiced to improve the existing situation. Urban and rural sanitation professionals are combining strategies to find ways to achieve the SDG of eliminating open defecation. Comprehensive participation of the environment and health sectors offers a great opportunity to improve sanitation and a great deal of strength to help achieve the goal. Reliable data availability, accessibility, and sharing are important for timely evaluation of and action on the current sanitation problems, but in Ethiopia there are gaps in information and data about what is potentially dangerous.
Our findings also indicate disturbing trends in Ethiopia's progress on SDG 6: basic sanitation services in particular have registered very slow progress. This indicates that major challenges remain, and unless significant measures are undertaken in the next 10 years of the SDG plan, the country will not meet the SDGs by 2030. Therefore, governments and policymakers need to change their approach and prioritize water- and sanitation-related development initiatives. Sanitation services need significant improvement, mainly by formulating effective policy, establishing strong institutions, developing capacity in terms of preparing and deploying human capital, allocating sufficient resources, and arranging logistics.
I would like to express my sincere gratitude to Professor Hongtao Wang, who originated the concept of the paper, conducted part of the analysis, and critically revised the whole paper. Secondly, I would like to thank the China Ministry of Commerce (MOFCOM) for financial support through my scholarship, and the UNEP-Tongji Institute of Environment for Sustainable Development (IESD) and Tongji University, particularly the College of Environmental Science and Engineering, for giving me the golden opportunity to write this paper, entitled “Sustainable Development Goals (SDG) Target 6.2 in Ethiopia: Challenges and Opportunities”.
Mara, D. and Evans, B. (2018) The Sanitation and Hygiene Targets of the Sustainable Development Goals: Scope and Challenges. Journal of Water Sanitation and Hygiene for Development, 8, 1-16. https://doi.org/10.2166/washdev.2017.048
Dias, C.M., Rosa, L.P., Gomez, J. and D’Avignon, A. (2018) Achieving the Sustainable Development Goal 06 in Brazil: The Universal Access to Sanitation as a Possible Mission. Anais da Academia Brasileira de Ciências, 90, 1337-1367. https://doi.org/10.1590/0001-3765201820170590
Weststrate, J., Dijkstra, G., Eshuis, J., Gianoli, A. and Rusca, M. (2018) The Sustainable Development Goal on Water and Sanitation: Learning from the Millennium Development Goals. Social Indicators Research, 143, 795-810. https://doi.org/10.1007/s11205-018-1965-5
Hyun, C., et al. (2019) Sanitation for Low-Income Regions: A Cross-Disciplinary Review. Annual Review of Environment and Resources, 44, 287-318. https://doi.org/10.1146/annurev-environ-101718-033327
Baum, R., Luh, J. and Bartram, J. (2013) Sanitation: A Global Estimate of Sewerage Connections without Treatment and the Resulting Impact on MDG Progress. Environmental Science & Technology, 47, 1994-2000. https://doi.org/10.1021/es304284f
Jung, S., et al. (2016) The Effects of Improved Sanitation on Diarrheal Prevalence, Incidence, and Duration in Children under Five in the SNNPR State, Ethiopia: Study Protocol for a Randomized Controlled Trial. Trials, 17, 204. https://doi.org/10.1186/s13063-016-1319-z
Schmidt-Traub, G., Kroll, C., Teksoz, K., Durand-Delacre, D. and Sachs, J.D. (2017) National Baselines for the Sustainable Development Goals Assessed in the SDG Index and Dashboards. Nature Geoscience, 10, 547-555. https://doi.org/10.1038/ngeo2985
Khan, S.M., et al. (2017) Optimizing Household Survey Methods to Monitor the Sustainable Development Goals Targets 6.1 and 6.2 on Drinking Water, Sanitation and Hygiene: A Mixed-Methods Field-Test in Belize. PLoS ONE, 12, e0189089. https://doi.org/10.1371/journal.pone.0189089
Yohannes, T., Workicho, A. and Asefa, H. (2014) Cross-Sectional Study: Availability of Improved Sanitation Facilities and Associated Factors among Rural Communities in Lemo Woreda, Hadiya Zone, Southern Ethiopia. Open Access Library Journal, 1, e1020. https://doi.org/10.4236/oalib.1101020
Nansubuga, I., Banadda, N., Verstraete, W. and Rabaey, K. (2016) A Review of Sustainable Sanitation Systems in Africa. Reviews in Environmental Science and Bio/ Technology, 15, 465-478. https://doi.org/10.1007/s11157-016-9400-3
Odagiri, M., et al. (2020) Achieving the Sustainable Development Goals for Water and Sanitation in Indonesia—Results from a Five-Year (2013-2017) Large-Scale Effectiveness Evaluation. International Journal of Hygiene and Environmental Health, 230, Article ID: 113584. https://doi.org/10.1016/j.ijheh.2020.113584
Libby, J.A., Wells, E.C. and Mihelcic, J.R. (2020) Moving Up the Sanitation Ladder While Considering Function: An Assessment of Indigenous Communities, Pit Latrine Users, and Their Perceptions of Resource Recovery Sanitation Technology in Panama. Environmental Science & Technology, 54, 15405-15413. https://doi.org/10.1021/acs.est.0c04120
Seymour, Z. and Hughes, J. (2014) Sanitation in Developing Countries: A Systematic Review of User Preferences and Motivations. Journal of Water, Sanitation and Hygiene for Development, 4, 681-691. https://doi.org/10.2166/washdev.2014.127
Li, X., Hu, Q., Miao, Y., Chen, W. and Yuan, C. (2015) Household Access to Sanitation Facilities in Rural China. Journal of Water, Sanitation and Hygiene for Development, 5, 465-473. https://doi.org/10.2166/washdev.2015.141
Dwipayanti, N.M.U., Phung, T.D., Rutherford, S. and Chu, C. (2017) Towards Sustained Sanitation Services: A Review of Existing Frameworks and an Alternative Framework Combining Ecological and Sanitation Life Stage Approach. Journal of Water, Sanitation and Hygiene for Development, 7, 25-42. https://doi.org/10.2166/washdev.2017.086
Garn, J.V., et al. (2017) The Impact of Sanitation Interventions on Latrine Coverage and Latrine Use: A Systematic Review and Meta-Analysis. International Journal of Hygiene and Environmental Health, 220, 329-340. https://doi.org/10.1016/j.ijheh.2016.10.001
Kaminsky, J.A. and Javernick-Will, A.N. (2014) The Internal Social Sustainability of Sanitation Infrastructure. Environmental Science & Technology, 48, 10028-10035. https://doi.org/10.1021/es501608p
Templeton, M.R. (2015) Pitfalls and Progress: A Perspective on Achieving Sustainable Sanitation for All. Environmental Science: Water Research & Technology, 1, 17-21. https://doi.org/10.1039/C4EW00087K
Dreibelbis, R., et al. (2015) Development of a Multidimensional Scale to Assess Attitudinal Determinants of Sanitation Uptake and Use. Environmental Science & Technology, 49, 13613-13621. https://doi.org/10.1021/acs.est.5b02985
Hopewell, M.R. and Graham, J.P. (2014) Trends in Access to Water Supply and Sanitation in 31 Major Sub-Saharan African Cities: An Analysis of DHS Data from 2000 to 2012. BMC Public Health, 14, 208. https://doi.org/10.1186/1471-2458-14-208
Sclar, G.D., et al. (2016) Assessing the Impact of Sanitation on Indicators of Fecal Exposure along Principal Transmission Pathways: A Systematic Review. International Journal of Hygiene and Environmental Health, 219, 709-723. https://doi.org/10.1016/j.ijheh.2016.09.021
Appiah-Effah, E., Duku, G.A., Azangbego, N.Y., Aggrey, R.K.A., Gyapong-Korsah, B. and Nyarko, K.B. (2019) Ghana’s Post-MDGs Sanitation Situation: An Overview. Journal of Water, Sanitation and Hygiene for Development, 9, 397-415. https://doi.org/10.2166/washdev.2019.031
Hutton, G. and Chase, C. (2016) The Knowledge Base for Achieving the Sustainable Development Goal Targets on Water Supply, Sanitation and Hygiene. International Journal of Environmental Research and Public Health, 13, 536. https://doi.org/10.3390/ijerph13060536
Nhamo, G., Nhemachena, C. and Nhamo, S. (2019) Is 2030 Too Soon for Africa to Achieve the Water and Sanitation Sustainable Development Goal? Science of the Total Environment, 669, 129-139. https://doi.org/10.1016/j.scitotenv.2019.03.109
Gebremariam, B., Hagos, G. and Abay, M. (2018) Assessment of Community-Led Total Sanitation and Hygiene Approach on the Improvement of Latrine Utilization in Laelay Maichew District, North Ethiopia. A Comparative Cross-Sectional Study. PLoS ONE, 13, e0203458. https://doi.org/10.1371/journal.pone.0203458
Degu, A.A. (2019) The Nexus between Population and Economic Growth in Ethiopia: An Empirical Inquiry. International Journal of Business and Economic Sciences Applied Research, 12, 43-50. https://doi.org/10.25103/ijbesar.123.05
Mensah, J. and Ricart Casadevall, S. (2019) Sustainable Development: Meaning, History, Principles, Pillars, and Implications for Human Action: Literature Review. Cogent Social Sciences, 5, Article ID: 1653531. https://doi.org/10.1080/23311886.2019.1653531
Sachs, J., Schmidt-Traub, G., Kroll, C., Lafortune, G. and Fuller, G. (2019) Sustainable Development Report 2019. Transformations to Achieve the Sustainable Development Goals. Bertelsmann Stiftung. Sustainable Development Solutions Network (SDSN), New York. https://www.sdgindex.org
Beyene, A., Hailu, T., Faris, K. and Kloos, H. (2015) Current State and Trends of Access to Sanitation in Ethiopia and the Need to Revise Indicators to Monitor Progress in the Post-2015 Era. BMC Public Health, 15, 451. https://doi.org/10.1186/s12889-015-1804-4
Adank, M., Butterworth, J., Godfrey, S. and Abera, M. (2016) Looking beyond Headline Indicators: Water and Sanitation Services in Small Towns in Ethiopia. Journal of Water, Sanitation and Hygiene for Development, 6, 435-446. https://doi.org/10.2166/washdev.2016.034
Pullan, R.L., Freeman, M.C., Gething, P.W. and Brooker, S.J. (2014) Geographical Inequalities in the Use of Improved Drinking Water Supply and Sanitation across Sub-Saharan Africa: Mapping and Spatial Analysis of Cross-Sectional Survey Data. PLOS Medicine, 11, e1001626. https://doi.org/10.1371/journal.pmed.1001626
Luh, J., Baum, R. and Bartram, J. (2013) Equity in Water and Sanitation: Developing an Index to Measure Progressive Realization of the Human Right. International Journal of Hygiene and Environmental Health, 216, 662-671. https://doi.org/10.1016/j.ijheh.2012.12.007
Mosisa, M., Mesele, A., Bogale, B. and Work, K. (2019) Sanitation in Borena Pastoral Community of Ethiopia: Pinpointing the Status and Challenges. Ethiopian Journal of Science and Sustainable Development, 6, 31-37.
Agide, F.D., et al. (2019) Application of Kingdon and Hall Models to Review Environmental Sanitation and Health Promotion Policy in Ethiopia: A Professional Perspective as a Review. Ethiopian Journal of Health Sciences, 29, 277-286.
Beyene, A., Addis, T., Hailu, T., Tesfahun, E., Wolde, M. and Faris, K. (2015) Situational Analysis of Access to Improved Sanitation in the Capital of Ethiopia and the Urgency of Adopting an Integrated Fecal Sludge Management (FSM) System. Science, 3, 726-732. https://doi.org/10.11648/j.sjph.20150305.29
Gebregiorgs, M.T. (2018) Towards Sustainable Waste Management through the Cautious Design of Environmental Taxes: The Case of Ethiopia. Sustainability, 10, 3088. https://doi.org/10.3390/su10093088
Malik, O.A., Hsu, A., Johnson, L.A. and de Sherbinin, A. (2015) A Global Indicator of Wastewater Treatment to Inform the Sustainable Development Goals (SDGs). Environmental Science & Policy, 48, 172-185. https://doi.org/10.1016/j.envsci.2015.01.005
Odey, E.A., Li, Z., Zhou, X. and Kalakodio, L. (2017) Fecal Sludge Management in Developing Urban Centers: A Review on the Collection, Treatment, and Composting. Environmental Science and Pollution Research, 24, 23441-23452. https://doi.org/10.1007/s11356-017-0151-7
Orner, K.D. and Mihelcic, J.R. (2018) A Review of Sanitation Technologies to Achieve Multiple Sustainable Development Goals That Promote Resource Recovery. Environmental Science: Water Research & Technology, 4, 16-32. https://doi.org/10.1039/C7EW00195A
Assefa, Y.T., Babel, M.S., Susnik, J. and Shinde, V.R. (2019) Development of a Generic Domestic Water Security Index, and Its Application in Addis Ababa, Ethiopia. Water, 11, 37. https://doi.org/10.3390/w11010037
Van Rooijen, D. and Taddesse, G. (2009) Urban Sanitation and Wastewater Treatment in Addis Ababa in the Awash Basin, Ethiopia. Proceedings of the 34th WEDC International Conference, Addis Ababa, 18-22 May 2009, 18-22.
Aiemjoy, K., et al. (2017) Is Using a Latrine “A Strange Thing to Do”? A Mixed- Methods Study of Sanitation Preference and Behaviors in Rural Ethiopia. The American Society of Tropical Medicine and Hygiene, 96, 65-73. https://doi.org/10.4269/ajtmh.16-0541
Fry, L.M., Mihelcic, J.R. and Watkins, D.W. (2008) Water and Nonwater-Related Challenges of Achieving Global Sanitation Coverage. Environmental Science & Technology, 42, 4298-4304. https://doi.org/10.1021/es7025856
Kumie, A. and Ali, A. (2005) An Overview of Environmental Health Status in Ethiopia with Particular Emphasis to Its Organization, Drinking Water and Sanitation: A Literature Survey. Ethiopian Journal of Health Development, 19, 89. https://doi.org/10.4314/ejhd.v19i2.9977
JSI (John Snow, Inc.) (2015) Situational Analysis of Urban Sanitation and Waste Management in Ethiopia: The Structural, Socio-Economic, Institutional, Organizational, Environmental, Behavioral, Cultural, Socio-Demographic Dimensions.
Terefe, B. and Welle, K. (2008) Policy and Institutional Factors Affecting Formulation and Implementation of Sanitation and Hygiene Strategy. A Case Study from the Southern Nations Region (SNNPR) of Ethiopia. RiPPLE, Addis Ababa, 42.
Welle, K. (2014) Monitoring Performance or Performing Monitoring? Exploring the Power and Political Dynamics Underlying Monitoring the MDG for Rural Water in Ethiopia. Canadian Journal of Development Studies, 35, 155-169. https://doi.org/10.1080/02255189.2014.877380
Bartram, J., Brocklehurst, C., Bradley, D., Muller, M. and Evans, B. (2018) Policy Review of the Means of Implementation Targets and Indicators for the Sustainable Development Goal for Water and Sanitation. NPJ Clean Water, 1, 1-5. https://doi.org/10.1038/s41545-018-0003-0
Nagy, J.A., Benedek, J. and Ivan, K. (2018) Measuring Sustainable Development Goals at a Local Level: A Case of a Metropolitan Area in Romania. Sustainability, 10, 3962. https://doi.org/10.3390/su10113962
World Bank (2017) Reducing Inequalities in Water Supply, Sanitation, and Hygiene in the Era of the Sustainable Development Goals: Synthesis Report of the Wash Poverty Diagnostic Initiative. WASH Synthesis Report.
Medland, L.S., Scott, R.E. and Cotton, A.P. (2016) Achieving Sustainable Sanitation Chains through Better Informed and More Systematic Improvements: Lessons from Multi-City Research in Sub-Saharan Africa. Environmental Science: Water Research & Technology, 2, 492-501. https://doi.org/10.1039/C5EW00255A
Novotny, J., Hasman, J. and Lepic, M. (2018) Contextual Factors and Motivations Affecting Rural Community Sanitation in Low- and Middle-Income Countries: A Systematic Review. International Journal of Hygiene and Environmental Health, 221, 121-133. https://doi.org/10.1016/j.ijheh.2017.10.018
Exley, J.L., Liseka, B., Cumming, O. and Ensink, J.H.J. (2015) The Sanitation Ladder, What Constitutes an Improved Form of Sanitation? Environmental Science & Technology, 49, 1086-1094. https://doi.org/10.1021/es503945x
Dugassa Girsha, W. (2016) Assessment of Water, Sanitation and Hygiene Status of Households in Welenchiti Town, Boset Woreda, East Shoa Zone, Ethiopia. Science Journal of Public Health, 4, 435. https://doi.org/10.11648/j.sjph.20160406.13
Tessema, R.A. (2017) Assessment of the Implementation of Community-Led Total Sanitation, Hygiene, and Associated Factors in Diretiyara District, Eastern Ethiopia. PLoS ONE, 12, e0175233. https://doi.org/10.1371/journal.pone.0175233
Alemu, F., Kumie, A., Medhin, G., Gebre, T. and Godfrey, P. (2017) A Socio-Eco- logical Analysis of Barriers to the Adoption, Sustainability and Consistent Use of Sanitation Facilities in Rural Ethiopia. BMC Public Health, 17, Article No. 706. https://doi.org/10.1186/s12889-017-4717-6
Seyoum, S. and Graham, J.P. (2016) Equity in Access to Water Supply and Sanitation in Ethiopia: An Analysis of EDHS Data (2000-2011). Journal of Water, Sanitation and Hygiene for Development, 6, 320-330. https://doi.org/10.2166/washdev.2016.004
Adane, M., Mengistie, B., Kloos, H., Medhin, G. and Mulat, W. (2017) Sanitation Facilities, Hygienic Conditions, and Prevalence of Acute Diarrhea among Under-Five Children in Slums of Addis Ababa, Ethiopia: Baseline Survey of a Longitudinal Study. PLoS ONE, 12, e0182783. https://doi.org/10.1371/journal.pone.0182783
Tadesse, B., Worku, A., Kumie, A. and Yimer, S.A. (2017) Effect of Water, Sanitation, and Hygiene Interventions on Active Trachoma in North and South Wollo Zones of Amhara Region, Ethiopia: A Quasi-Experimental Study. PLOS Neglected Tropical Diseases, 11, e0006080. https://doi.org/10.1371/journal.pntd.0006080
Grimes, J.E., et al. (2016) School Water, Sanitation, and Hygiene, Soil-Transmitted Helminths, and Schistosomes: National Mapping in Ethiopia. PLOS Neglected Tropical Diseases, 10, e0004515. https://doi.org/10.1371/journal.pntd.0004515
Vidal, B., Hedström, A., Barraud, S., Kärrman, E. and Herrmann, I. (2019) Assessing the Sustainability of On-Site Sanitation Systems Using Multi-Criteria Analysis. Environmental Science: Water Research & Technology, 5, 1599-1615. https://doi.org/10.1039/C9EW00425D
Chaudhuri, S. and Roy, M. (2017) Rural-Urban Spatial Inequality in Water and Sanitation Facilities in India: A Cross-Sectional Study from Household to National Level. Applied Geography, 85, 27-38. https://doi.org/10.1016/j.apgeog.2017.05.003
Prasetyoputra, P. and Irianti, S. (2013) Access to Improved Sanitation Facilities in Indonesia: An Econometric Analysis of Geographical and Socioeconomic Disparities. Journal of Applied Sciences in Environmental Sanitation, 8, 215-224.
Crocker, J., Geremew, A., Atalie, F., Yetie, M. and Bartram, J. (2016) Teachers and Sanitation Promotion: An Assessment of Community-Led Total Sanitation in Ethiopia. Environmental Science & Technology, 50, 6517-6525. https://doi.org/10.1021/acs.est.6b01021
The working of the SVM algorithm can be understood through an example. Suppose we have a dataset with two tags (green and blue) and two features, x1 and x2. We want a classifier that can classify a pair (x1, x2) of coordinates as either green or blue. Consider the image below:
Since this is 2-D space, we can easily separate these two classes with a straight line. But there can be multiple lines that separate the classes. Consider the image below:
Hence, the SVM algorithm helps to find the best line or decision boundary; this best boundary is called a hyperplane. The SVM algorithm finds the points of each class that lie closest to the boundary. These points are called support vectors. The distance between the support vectors and the hyperplane is called the margin, and the goal of SVM is to maximize this margin. The hyperplane with the maximum margin is called the optimal hyperplane.
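As an illustration, here is a minimal sketch of a maximum-margin linear SVM using scikit-learn (an assumed dependency); the toy dataset, its labels, and the variable names are invented for this example:

```python
# A minimal sketch of a linear SVM on two-feature, two-class data.
import numpy as np
from sklearn.svm import SVC

# Toy dataset: two features (x1, x2), two tags encoded as 0 ("green") and 1 ("blue").
X = np.array([[1.0, 2.0], [2.0, 3.0], [2.5, 1.5],   # class 0
              [6.0, 5.0], [7.0, 7.5], [8.0, 6.0]])  # class 1
y = np.array([0, 0, 0, 1, 1, 1])

# A linear kernel with a large C approximates the hard-margin SVM:
# it finds the separating hyperplane that maximizes the margin.
clf = SVC(kernel="linear", C=1e6)
clf.fit(X, y)

print("Support vectors:\n", clf.support_vectors_)  # the points that define the margin
w, b = clf.coef_[0], clf.intercept_[0]             # hyperplane: w . x + b = 0
print("Margin width:", 2.0 / np.linalg.norm(w))    # distance between the two margin lines
```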
If the data are linearly arranged, we can separate them with a straight line, but for non-linear data we cannot draw a single straight line. Consider the image below:
So to separate these data points, we need to add one more dimension. For linear data, we have used two dimensions, x and y, so for non-linear data we will add a third dimension, z. It can be calculated as: z = x² + y².
By adding the third dimension, the sample space will look like the image below:
So now, SVM will divide the dataset into classes in the following way. Consider the image below:
Since we are in 3-D space, the separating surface looks like a plane parallel to the x-axis. If we convert it back to 2-D space by setting z = 1, it becomes:
Hence, we get a circle of radius 1 in the case of non-linear data.
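To make the dimension-lifting idea concrete, here is a small sketch on synthetic ring-shaped data (all values assumed for illustration): adding the feature z = x² + y² makes the classes linearly separable, and an RBF kernel achieves a similar effect implicitly:

```python
# Points on an inner ring vs. an outer ring are not linearly separable
# in (x, y), but become separable once we add z = x^2 + y^2.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
r = np.where(rng.random(200) < 0.5, 0.5, 1.5)   # inner ring (class 0), outer ring (class 1)
X = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
y = (r > 1.0).astype(int)

# Explicit feature map: append the third dimension z = x^2 + y^2.
Z = np.column_stack([X, (X ** 2).sum(axis=1)])
lifted = SVC(kernel="linear").fit(Z, y)
print("accuracy with explicit z:", lifted.score(Z, y))   # expect 1.0

# Equivalently, a kernel performs the lifting implicitly (here, RBF).
rbf = SVC(kernel="rbf").fit(X, y)
print("accuracy with RBF kernel:", rbf.score(X, y))      # expect 1.0
```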
The sine (abbreviated "sin") is a type of trigonometric function.
Definition 1 is the simplest and most intuitive definition of the sine function. It basically says that, in a right triangle, the following measurements are related: the measure of one of the non-right angles (q), the length of the side opposite that angle, and the length of the hypotenuse.
Furthermore, Definition 1 gives an exact equation that describes this relation:
sin(q) = opposite / hypotenuse

This equation says that if we evaluate the sine of that angle q, we will get the exact same value as if we divided the length of the side opposite to that angle by the length of the triangle's hypotenuse. The relation holds for any right triangle, regardless of size.
The main result is this: If we know the values of any two of the above quantities, we can use the above relation to mathematically derive the third quantity. For example, the sine function allows us to answer any of the following three questions:
"Given a right triangle, where the measurement of one of the non-right angles (q) is known and the length of the side opposite to that angle q is known, find the length of the triangle's hypotenuse."
"Given a right triangle, where the measurement of one of the non-right angles (q) is known and the length of the triangle's hypotenuse is known, find the length of the side opposite to that angle q."
"Given a right triangle, where the length of the triangle's hypotenuse and the length of one of the triangle's other sides is known, find the measurement of the angle (q) opposite to that other side."The function takes the form y = sin(q). Usually, q is an angle measurement and y denotes a length.
The sine function, like all trig functions, evaluates differently depending on the units of q, such as degrees, radians, or grads. For example, sin(90°) = 1, while sin(90) = 0.89399... when 90 is taken in radians.
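This unit dependence is easy to verify numerically; Python's math.sin expects radians, so degrees must be converted first:

```python
import math

print(math.sin(math.radians(90)))  # 1.0         (90 interpreted as degrees)
print(math.sin(90))                # 0.89399...  (90 interpreted as radians)
```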
Here on this particular example, you see that you do have an X that's after this fraction. Which means that this number is a slope. If there were no X here, that means that this would be the number that you put a dot at on your Y axis. Because there's an X, that means that this number is your slope. The number behind it is your Y intercept. We don't see a number here. So it's a zero. There's an imaginary plus zero here. So we're going to put our dot at positive zero. So I'm going to write this down. So if you see Y equals, and you have a negative 9 over ten, and there's an X beside it, that means that this is your slope. Remember that the Y intercept is behind it. This is your Y intercept. So you're going to put a dot at zero. So let's go over here to the graph. And we're going to put our dot at the Y intercept, which is zero. So I'm going to put a dot right here. Sorry. Right here at zero. That is my Y intercept. And then I'm going to count the slope. Remember, the slope always tells me to go up 9 and to the left ten, since it's negative. So let me count up 9: 1, 2, 3, 4, 5, 6, 7, 8, 9. And I need to go to the left ten: 1, 2, 3, 4, 5, 6, 7, 8, 9, ten. And we know that this is correct because a negative slope means that the graph is always pointing to the top left corner of the page. So now let's hit enter. So our line went through the origin, at Y equals zero. And that is correct.
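For readers following along in code rather than on a graphing calculator, here is a minimal matplotlib sketch (an assumed dependency) of the same steps: a dot at the y-intercept (0, 0), then up 9 and left 10 from it:

```python
import numpy as np
import matplotlib.pyplot as plt

m, b = -9 / 10, 0                 # slope and y-intercept read off y = -9/10 x
x = np.linspace(-12, 12, 100)
plt.plot(x, m * x + b)            # the line itself
plt.plot(0, b, "ko")              # dot at the y-intercept (0, 0)
plt.plot(-10, b + 9, "ro")        # up 9, left 10 from the intercept: (-10, 9)
plt.axhline(0, color="gray", lw=0.5)
plt.axvline(0, color="gray", lw=0.5)
plt.title("y = -9/10 x")
plt.show()
```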
In science and engineering, the parts-per notation is a set of pseudo-units to describe small values of miscellaneous dimensionless quantities, e.g. mole fraction or mass fraction. Since these fractions are quantity-per-quantity measures, they are pure numbers with no associated units of measurement. Commonly used are ppm (parts-per-million, 10−6), ppb (parts-per-billion, 10−9), ppt (parts-per-trillion, 10−12) and ppq (parts-per-quadrillion, 10−15).
Parts-per notation is often used to describe dilute solutions in chemistry, for instance, the relative abundance of dissolved minerals or pollutants in water. The unit “1 ppm” can be used for a mass fraction if a water-borne pollutant is present at one-millionth of a gram per gram of sample solution. When working with aqueous solutions, it is common to assume that the density of water is 1.00 g/mL; therefore, it is common to equate 1 gram of water with 1 mL of water. Consequently, 1 ppm corresponds to 1 mg/L and 1 ppb corresponds to 1 μg/L.
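As a sketch of that shortcut, the helper below converts a mass fraction to ppm and leans on the assumed 1.00 g/mL water density to read the result as mg/L; the pollutant masses are invented example values:

```python
def mass_fraction_to_ppm(solute_g: float, solution_g: float) -> float:
    """Mass fraction expressed in parts per million."""
    return solute_g / solution_g * 1e6

# 2 milligrams (0.002 g) of pollutant in 1000 g (about 1 L) of water:
ppm = mass_fraction_to_ppm(0.002, 1000.0)
print(ppm, "ppm")                                    # 2.0
print(ppm, "mg/L, by the 1.00 g/mL density assumption")
print(ppm * 1000, "ppb, i.e.", ppm * 1000, "ug/L")   # 2000.0
```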
Similarly, parts-per notation is used also in physics and engineering to express the value of various proportional phenomena. For instance, a special metal alloy might expand 1.2 micrometers per meter of length for every degree Celsius and this would be expressed as “α = 1.2 ppm/°C.” Parts-per notation is also employed to denote the change, stability, or uncertainty in measurements. For instance, the accuracy of land-survey distance measurements when using a laser rangefinder might be 1 millimeter per kilometer of distance; this could be expressed as “Accuracy = 1 ppm.”
Parts-per notations are all dimensionless quantities: in mathematical expressions, the units of measurement always cancel. In a fraction like “2 nanometers per meter”, 2 nm/m = 2 nano = 2 × 10−9 = 2 ppb = 2 × 0.000000001, so the quotients are pure-number coefficients with positive values less than 1. When parts-per notations, including the percent symbol (%), are used in regular prose (as opposed to mathematical expressions), they are still pure-number dimensionless quantities. However, they generally take the literal “parts per” meaning of a comparative ratio (e.g., “2 ppb” would generally be interpreted as “two parts in a billion parts”).
Parts-per notations may be expressed in terms of any unit of the same measure. For instance, the coefficient of thermal expansion of a certain brass alloy, α = 18.7 ppm/°C, may be expressed as 18.7 (µm/m)/°C or as 18.7 (µin/in)/°C; the numeric value representing a relative proportion does not change with the adoption of a different unit of measure. Similarly, a metering pump that injects a trace chemical into the main process line at the proportional flow rate Qp = 125 ppm is doing so at a rate that may be expressed in a variety of volumetric units, including 125 µL/L, 125 µgal/gal, 125 cm³/m³, etc.
In nuclear magnetic resonance spectroscopy (NMR), chemical shift is usually expressed in ppm. It represents the difference of a measured frequency in parts per million from the reference frequency. The reference frequency depends on the instrument's magnetic field and the element being measured. It is usually expressed in MHz. Typical chemical shifts are rarely more than a few hundred Hz from the reference frequency, so chemical shifts are conveniently expressed in ppm (Hz/MHz). Parts-per notation gives a dimensionless quantity that does not depend on the instrument's field strength.
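The ppm arithmetic here is simple; the sketch below shows it in Python with made-up illustrative frequencies, not data from a real spectrometer:

```python
# Chemical shift in ppm from a measured and a reference frequency (both in Hz).
def chemical_shift_ppm(measured_hz, reference_hz):
    return (measured_hz - reference_hz) / reference_hz * 1e6

# e.g. a peak 2000 Hz above a 400 MHz reference corresponds to 5 ppm,
# independent of the instrument's field strength.
print(chemical_shift_ppm(400_000_000 + 2000, 400_000_000))  # 5.0
```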
Although the International Bureau of Weights and Measures (an international standards organization known also by its French-language initials BIPM) recognizes the use of parts-per notation, it is not formally part of the International System of Units (SI). Note that although “percent” (%) is not formally part of the SI either, both the BIPM and the ISO take the position that “in mathematical expressions, the internationally recognized symbol % (percent) may be used with the SI to represent the number 0.01” for dimensionless quantities. According to IUPAP, “a continued source of annoyance to unit purists has been the continued use of percent, ppm, ppb, and ppt”. Although SI-compliant expressions should be used as an alternative, parts-per notation nevertheless remains widely used in technical disciplines. The main problems with parts-per notation are the following:
Because the named numbers starting with a “billion” have different values in different countries, the BIPM suggests avoiding the use of “ppb” and “ppt” to prevent misunderstanding. In the English language, named numbers have a consistent meaning only up to “million”. Starting with “billion”, there are two numbering conventions: the “long” and “short” scales, and “billion” can mean either 10⁹ or 10¹². The U.S. National Institute of Standards and Technology (NIST) takes the stringent position, stating that “the language-dependent terms [...] are not acceptable for use with the SI to express the values of quantities.”
Although "ppt" usually means "parts per trillion", it occasionally means "parts per thousand". Unless the meaning of "ppt" is defined explicitly, it has to be guessed from the context.
Another problem of the parts-per notation is that it may refer to either a mass fraction or a mole fraction. Since it is usually not stated which quantity is used, it is better to write the unit as kg/kg, or mol/mol (even though they are all dimensionless). For example, the conversion factor between a mass fraction of 1 ppb and a mole fraction of 1 ppb is about 4.7 for the greenhouse gas CFC-11 in air. The usage is generally quite fixed inside most specific branches of science, leading some researchers to draw the conclusion that their own usage (mass/mass, mol/mol or others) is the only correct one. This, in turn, leads them to not specify their usage in their publications, and others may therefore misinterpret their results. For example, electrochemists often use volume/volume, while chemical engineers may use mass/mass as well as volume/volume. Many academic papers of otherwise excellent level fail to specify their usage of the parts-per notation. The difference between expressing concentrations as mass/mass or volume/volume is quite significant when dealing with gases and it is very important to specify which is being used.
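The quoted factor of about 4.7 for CFC-11 can be reproduced from molar masses. The sketch below assumes standard atomic weights and a mean molar mass of dry air of about 28.97 g/mol:

```python
# Reproducing the ~4.7 mass-fraction/mole-fraction factor for CFC-11 (CCl3F):
# mass fraction = mole fraction * (molar mass of species / mean molar mass of air)
M_cfc11 = 12.011 + 3 * 35.453 + 18.998  # g/mol for CCl3F, ~137.37
M_air = 28.97                           # mean molar mass of dry air, g/mol

factor = M_cfc11 / M_air
print(round(factor, 2))  # ~4.74: 1 ppb (mol/mol) of CFC-11 is ~4.7 ppb (kg/kg)
```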
SI-compliant units that can be used as alternatives are shown in the chart below.
|NOTATIONS FOR DIMENSIONLESS QUANTITIES|
|Measure||SI-unit expression||Named parts-per ratio||Parts-per abbreviation||Value|
|A strain of…||2 cm/m||2 parts per hundred||2%||2 × 10−2|
|A sensitivity of…||2 mV/V||2 parts per thousand||2 ‰||2 × 10−3|
|A sensitivity of…||0.2 mV/V||2 parts per ten thousand||2 ‱||2 × 10−4|
|A sensitivity of…||2 µV/V||2 parts per million||2 ppm||2 × 10−6|
|A sensitivity of…||2 nV/V||2 parts per billion||2 ppb||2 × 10−9|
|A sensitivity of…||2 pV/V||2 parts per trillion||2 ppt||2 × 10−12|
|A mass fraction of…||2 mg/kg||2 parts per million||2 ppm||2 × 10−6|
|A mass fraction of…||2 µg/kg||2 parts per billion||2 ppb||2 × 10−9|
|A mass fraction of…||2 ng/kg||2 parts per trillion||2 ppt||2 × 10−12|
|A mass fraction of…||2 pg/kg||2 parts per quadrillion||2 ppq||2 × 10−15|
|A volume fraction of…||5.2 µL/L||5.2 parts per million||5.2 ppm||5.2 × 10−6|
|A mole fraction of…||5.24 µmol/mol||5.24 parts per million||5.24 ppm||5.24 × 10−6|
|A mole fraction of…||5.24 nmol/mol||5.24 parts per billion||5.24 ppb||5.24 × 10−9|
|A mole fraction of…||5.24 pmol/mol||5.24 parts per trillion||5.24 ppt||5.24 × 10−12|
|A stability of…||1 (µA/A)/min.||1 part per million per min.||1 ppm/min.||1 × 10−6/min.|
|A change of…||5 nΩ/Ω||5 parts per billion||5 ppb||5 × 10−9|
|An uncertainty of…||9 µg/kg||9 parts per billion||9 ppb||9 × 10−9|
|A shift of…||1 nm/m||1 part per billion||1 ppb||1 × 10−9|
|A strain of…||1 µm/m||1 part per million||1 ppm||1 × 10−6|
|A temperature coefficient of…||0.3 (µHz/Hz)/°C||0.3 part per million per °C||0.3 ppm/°C||0.3 × 10−6/°C|
|A frequency change of…||0.35 × 10−9 ƒ||0.35 part per billion||0.35 ppb||0.35 × 10−9|
Note that the notations in the “SI units” column above are all dimensionless quantities; that is, the units of measurement factor out in expressions like “1 nm/m” (1 nm/m = 1 nano = 1 × 10−9), so the quotients are pure-number coefficients with values less than 1.
Because of the cumbersome nature of expressing certain dimensionless quantities per SI guidelines, the International Union of Pure and Applied Physics (IUPAP) in 1999 proposed the adoption of the special name "uno" (symbol: U) to represent the number 1 in dimensionless quantities. This symbol is not to be confused with the always-italicized symbol for the variable 'uncertainty' (symbol: U). This unit name uno and its symbol could be used in combination with the SI prefixes to express the values of dimensionless quantities which are much less—or even greater—than one.
Common parts-per notations in terms of the uno are given in the table below.
|Coefficient||Parts-per example||Uno equiv.||Symbol form||Value of quantity|
|10−2||2%||2 centiuno||2 cU||2 × 10−2|
|10−3||2 ‰||2 milliuno||2 mU||2 × 10−3|
|10−6||2 ppm||2 microuno||2 µU||2 × 10−6|
|10−9||2 ppb||2 nanouno||2 nU||2 × 10−9|
|10−12||2 ppt||2 picouno||2 pU||2 × 10−12|
In 2004, a report to the International Committee for Weights and Measures (known also by its French-language initials CIPM) stated that response to the proposal of the uno "had been almost entirely negative" and the principal proponent "recommended dropping the idea". To date, the uno has not been adopted by any standards organization and it appears unlikely it will ever become an officially sanctioned way to express low-value (high-ratio) dimensionless quantities. The proposal was instructive, however, as to the perceived shortcomings of the current options for denoting dimensionless quantities.
Parts-per notation may properly be used only to express true dimensionless quantities; that is, the units of measurement must cancel in expressions like "1 mg/kg" so that the quotients are pure numbers with values less than 1. Mixed-unit quantities such as "a radon concentration of 15 pCi/L" are not dimensionless quantities and may not be expressed using any form of parts-per notation, such as "15 ppt". Other examples of measures that are not dimensionless quantities exist as well.
Note, however, that it is not uncommon to express aqueous concentrations, particularly in drinking-water reports intended for the general public, using parts-per notation (2.1 ppm, 0.8 ppb, etc.), and further, for those reports to state that the notations denote milligrams per liter or micrograms per liter. Although "2.1 mg/L" is not a dimensionless quantity, it is assumed in scientific circles that "2.1 mg/kg" (2.1 ppm) is the true measure because one liter of water has a mass of about one kilogram. The goal in all technical writing (including drinking-water reports for the general public) is to communicate clearly to the intended audience with minimal confusion. Drinking water is intuitively a volumetric quantity in the public's mind, so measures of contamination expressed on a per-liter basis are considered easier to grasp. Still, it is technically possible, for example, to "dissolve" more than one liter of a very hydrophilic chemical in 1 liter of water; parts-per notation would be confusing when describing its solubility in water (greater than a million parts per million), so one would simply state the volume (or mass) that will dissolve into a liter instead.
When reporting air-borne rather than water-borne densities, a slightly different convention is used, since air is approximately 1,000 times less dense than water. In water, 1 µg/m³ is roughly equivalent to parts-per-trillion, whereas in air it is roughly equivalent to parts-per-billion. Note also that in the case of air this convention is much less accurate: whereas one liter of water is almost exactly 1 kg, one cubic meter of air is often taken as about 1.2 kg, which is only approximate but still close enough for many practical uses.
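A minimal sketch of the air-side conversion, assuming the ~1.2 kg/m³ figure above (the helper and its defaults are ours, for illustration only):

```python
# Convert an air-borne mass concentration (µg/m³) to a dimensionless
# mass mixing ratio in ppb, taking one cubic meter of air as ~1.2 kg.
def ug_per_m3_to_ppb_mass(ug_per_m3, air_kg_per_m3=1.2):
    air_ug_per_m3 = air_kg_per_m3 * 1e9      # 1 kg = 1e9 µg
    return ug_per_m3 / air_ug_per_m3 * 1e9   # express the pure ratio as ppb

print(ug_per_m3_to_ppb_mass(1.0))  # ~0.83: 1 µg/m³ in air is roughly 1 ppb
```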
Proteins are large biomolecules and macromolecules composed of one or more long chains of amino acid residues. Proteins perform a vast array of functions within organisms, including catalysing metabolic reactions, DNA replication, responding to stimuli, providing structure to cells and organisms, and transporting molecules from one location to another. Proteins differ from one another primarily in their sequence of amino acids, which is dictated by the nucleotide sequence of their genes, and which usually results in protein folding into a specific 3D structure that determines its activity.
A linear chain of amino acid residues is called a polypeptide. A protein contains at least one long polypeptide. Short polypeptides, containing fewer than 20–30 residues, are rarely considered to be proteins and are commonly called peptides, or sometimes oligopeptides. The individual amino acid residues are bonded together by peptide bonds between adjacent amino acid residues. The sequence of amino acid residues in a protein is defined by the sequence of a gene, which is encoded in the genetic code. In general, the genetic code specifies 20 standard amino acids; but in certain organisms the genetic code can include selenocysteine and, in certain archaea, pyrrolysine. Shortly after or even during synthesis, the residues in a protein are often chemically modified by post-translational modification, which alters the physical and chemical properties, folding, stability, activity, and ultimately, the function of the proteins. Some proteins have non-peptide groups attached, which can be called prosthetic groups or cofactors. Proteins can also work together to achieve a particular function, and they often associate to form stable protein complexes.
Once formed, proteins only exist for a certain period and are then degraded and recycled by the cell's machinery through the process of protein turnover. A protein's lifespan is measured in terms of its half-life and covers a wide range. They can exist for minutes or years with an average lifespan of 1–2 days in mammalian cells. Abnormal or misfolded proteins are degraded more rapidly either due to being targeted for destruction or due to being unstable.
Like other biological macromolecules such as polysaccharides and nucleic acids, proteins are essential parts of organisms and participate in virtually every process within cells. Many proteins are enzymes that catalyse biochemical reactions and are vital to metabolism. Proteins also have structural or mechanical functions, such as actin and myosin in muscle and the proteins in the cytoskeleton, which form a system of scaffolding that maintains cell shape. Other proteins are important in cell signaling, immune responses, cell adhesion, and the cell cycle. In animals, proteins are needed in the diet to provide the essential amino acids that cannot be synthesized. Digestion breaks the proteins down for use in the metabolism.
Proteins may be purified from other cellular components using a variety of techniques such as ultracentrifugation, precipitation, electrophoresis, and chromatography; the advent of genetic engineering has made possible a number of methods to facilitate purification. Methods commonly used to study protein structure and function include immunohistochemistry, site-directed mutagenesis, X-ray crystallography, nuclear magnetic resonance and mass spectrometry.
Proteins were recognized as a distinct class of biological molecules in the eighteenth century by Antoine Fourcroy and others, distinguished by the molecules' ability to coagulate or flocculate under treatments with heat or acid. Noted examples at the time included albumin from egg whites, blood serum albumin, fibrin, and wheat gluten.
Proteins were first described by the Dutch chemist Gerardus Johannes Mulder and named by the Swedish chemist Jöns Jacob Berzelius in 1838. Mulder carried out elemental analysis of common proteins and found that nearly all proteins had the same empirical formula, C400H620N100O120P1S1. He came to the erroneous conclusion that they might be composed of a single type of (very large) molecule. The term "protein" to describe these molecules was proposed by Mulder's associate Berzelius; protein is derived from the Greek word πρώτειος (proteios), meaning "primary", "in the lead", or "standing in front", plus the suffix -in. Mulder went on to identify the products of protein degradation such as the amino acid leucine, for which he found a (nearly correct) molecular weight of 131 Da. Prior to "protein", other names were used, like "albumins" or "albuminous materials" (Eiweisskörper, in German).
Early nutritional scientists such as the German Carl von Voit believed that protein was the most important nutrient for maintaining the structure of the body, because it was generally believed that "flesh makes flesh." Karl Heinrich Ritthausen extended known protein forms with the identification of glutamic acid. At the Connecticut Agricultural Experiment Station a detailed review of the vegetable proteins was compiled by Thomas Burr Osborne. Working with Lafayette Mendel and applying Liebig's law of the minimum in feeding laboratory rats, the nutritionally essential amino acids were established. The work was continued and communicated by William Cumming Rose. The understanding of proteins as polypeptides came through the work of Franz Hofmeister and Hermann Emil Fischer in 1902. The central role of proteins as enzymes in living organisms was not fully appreciated until 1926, when James B. Sumner showed that the enzyme urease was in fact a protein.
The difficulty in purifying proteins in large quantities made them very difficult for early protein biochemists to study. Hence, early studies focused on proteins that could be purified in large quantities, e.g., those of blood, egg white, various toxins, and digestive/metabolic enzymes obtained from slaughterhouses. In the 1950s, the Armour Hot Dog Co. purified 1 kg of pure bovine pancreatic ribonuclease A and made it freely available to scientists; this gesture helped ribonuclease A become a major target for biochemical study for the following decades.
Linus Pauling is credited with the successful prediction of regular protein secondary structures based on hydrogen bonding, an idea first put forth by William Astbury in 1933. Later work by Walter Kauzmann on denaturation, based partly on previous studies by Kaj Linderstrøm-Lang, contributed an understanding of protein folding and structure mediated by hydrophobic interactions.
The first protein to be sequenced was insulin, by Frederick Sanger, in 1949. Sanger correctly determined the amino acid sequence of insulin, thus conclusively demonstrating that proteins consisted of linear polymers of amino acids rather than branched chains, colloids, or cyclols. He won the Nobel Prize for this achievement in 1958.
The first protein structures to be solved were hemoglobin and myoglobin, by Max Perutz and Sir John Cowdery Kendrew, respectively, in 1958. As of 2017, the Protein Data Bank has over 126,060 atomic-resolution structures of proteins. In more recent times, cryo-electron microscopy of large macromolecular assemblies and computational protein structure prediction of small protein domains are two methods approaching atomic resolution.
The number of proteins encoded in a genome roughly corresponds to the number of genes (although there may be a significant number of genes that encode RNA rather than protein, e.g. ribosomal RNAs). Viruses typically encode a few to a few hundred proteins, archaea and bacteria a few hundred to a few thousand, while eukaryotes typically encode a few thousand up to tens of thousands of proteins (see genome size for a list of examples).
Most proteins consist of linear polymers built from series of up to 20 different L-α-amino acids. All proteinogenic amino acids possess common structural features, including an α-carbon to which an amino group, a carboxyl group, and a variable side chain are bonded. Only proline differs from this basic structure, as it contains an unusual ring bonded to the N-end amine group, which forces the CO–NH amide moiety into a fixed conformation. The side chains of the standard amino acids, detailed in the list of standard amino acids, have a great variety of chemical structures and properties; it is the combined effect of all of the amino acid side chains in a protein that ultimately determines its three-dimensional structure and its chemical reactivity. The amino acids in a polypeptide chain are linked by peptide bonds. Once linked in the protein chain, an individual amino acid is called a residue, and the linked series of carbon, nitrogen, and oxygen atoms are known as the main chain or protein backbone.
The peptide bond has two resonance forms that contribute some double-bond character and inhibit rotation around its axis, so that the alpha carbons are roughly coplanar. The other two dihedral angles in the peptide bond determine the local shape assumed by the protein backbone. The end with a free amino group is known as the N-terminus or amino terminus, whereas the end of the protein with a free carboxyl group is known as the C-terminus or carboxy terminus (the sequence of the protein is written from N-terminus to C-terminus, from left to right).
The words protein, polypeptide, and peptide are a little ambiguous and can overlap in meaning. Protein is generally used to refer to the complete biological molecule in a stable conformation, whereas peptide is generally reserved for short amino acid oligomers often lacking a stable 3D structure. But the boundary between the two is not well defined and usually lies near 20–30 residues. Polypeptide can refer to any single linear chain of amino acids, usually regardless of length, but often implies an absence of a defined conformation.
Proteins can interact with many types of molecules, including other proteins, lipids, carbohydrates, and DNA.
It has been estimated that average-sized bacteria contain about 2 million proteins per cell (e.g. E. coli and Staphylococcus aureus). Smaller bacteria, such as Mycoplasma or spirochetes, contain fewer molecules, on the order of 50,000 to 1 million. By contrast, eukaryotic cells are larger and thus contain much more protein. For instance, yeast cells have been estimated to contain about 50 million proteins and human cells on the order of 1 to 3 billion. The concentration of individual protein copies ranges from a few molecules per cell up to 20 million. Not all genes coding for proteins are expressed in most cells, and their number depends on, for example, cell type and external stimuli. For instance, of the 20,000 or so proteins encoded by the human genome, only 6,000 are detected in lymphoblastoid cells.
Proteins are assembled from amino acids using information encoded in genes. Each protein has its own unique amino acid sequence that is specified by the nucleotide sequence of the gene encoding this protein. The genetic code is read in three-nucleotide units called codons, and each three-nucleotide combination designates an amino acid; for example, AUG (adenine–uracil–guanine) is the code for methionine. Because DNA contains four nucleotides, the total number of possible codons is 64; hence, there is some redundancy in the genetic code, with some amino acids specified by more than one codon. Genes encoded in DNA are first transcribed into pre-messenger RNA (mRNA) by proteins such as RNA polymerase. Most organisms then process the pre-mRNA (also known as a primary transcript) using various forms of post-transcriptional modification to form the mature mRNA, which is then used as a template for protein synthesis by the ribosome. In prokaryotes the mRNA may either be used as soon as it is produced, or be bound by a ribosome after having moved away from the nucleoid. In contrast, eukaryotes make mRNA in the cell nucleus and then translocate it across the nuclear membrane into the cytoplasm, where protein synthesis then takes place. The rate of protein synthesis is higher in prokaryotes than in eukaryotes and can reach up to 20 amino acids per second.
The process of synthesizing a protein from an mRNA template is known as translation. The mRNA is loaded onto the ribosome and is read three nucleotides at a time by matching each codon to its base-pairing anticodon located on a transfer RNA molecule, which carries the amino acid corresponding to the codon it recognizes. The enzyme aminoacyl tRNA synthetase "charges" the tRNA molecules with the correct amino acids. The growing polypeptide is often termed the nascent chain. Proteins are always biosynthesized from N-terminus to C-terminus.
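As a toy illustration of reading codons N-terminus-first, here is a minimal Python sketch; the codon table is deliberately partial and the helper is ours, not the API of any bioinformatics library:

```python
# Minimal sketch of translation: read an mRNA three nucleotides (one codon)
# at a time, stopping at a stop codon. The table covers only a few codons.
CODON_TABLE = {
    "AUG": "Met",  # start codon, codes for methionine
    "UUU": "Phe", "GGC": "Gly", "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "Xaa")  # Xaa = not in our table
        if residue == "STOP":
            break
        peptide.append(residue)
    return "-".join(peptide)

print(translate("AUGUUUGGCAAAUAA"))  # Met-Phe-Gly-Lys
```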
The size of a synthesized protein can be measured by the number of amino acids it contains and by its total molecular mass, which is normally reported in units of daltons (synonymous with atomic mass units), or the derivative unit kilodalton (kDa). The average size of a protein increases from Archaea to Bacteria to Eukaryote (283, 311, 438 residues and 31, 34, 49 kDa respectively) due to a greater number of protein domains constituting proteins in higher organisms. For instance, yeast proteins are on average 466 amino acids long and 53 kDa in mass. The largest known proteins are the titins, a component of the muscle sarcomere, with a molecular mass of almost 3,000 kDa and a total length of almost 27,000 amino acids.
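The residue-count and mass figures above can be sanity-checked with the common rule of thumb that an average residue contributes roughly 110 Da; the sketch below is an approximation for orientation, not a real mass calculator:

```python
# Rough mass estimate: an average amino acid residue contributes ~110 Da,
# so mass in kDa is approximately residues * 110 / 1000.
def approx_mass_kda(n_residues, avg_residue_da=110):
    return n_residues * avg_residue_da / 1000

print(approx_mass_kda(466))     # ~51 kDa, close to the ~53 kDa quoted for yeast
print(approx_mass_kda(27_000))  # ~2970 kDa, the scale of titin
```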
Short proteins can also be synthesized chemically by a family of methods known as peptide synthesis, which rely on organic synthesis techniques such as chemical ligation to produce peptides in high yield. Chemical synthesis allows for the introduction of non-natural amino acids into polypeptide chains, such as attachment of fluorescent probes to amino acid side chains. These methods are useful in laboratory biochemistry and cell biology, though generally not for commercial applications. Chemical synthesis is inefficient for polypeptides longer than about 300 amino acids, and the synthesized proteins may not readily assume their native tertiary structure. Most chemical synthesis methods proceed from C-terminus to N-terminus, opposite the direction of the biological reaction.
Most proteins fold into unique 3D structures. The shape into which a protein naturally folds is known as its native conformation. Although many proteins can fold unassisted, simply through the chemical properties of their amino acids, others require the aid of molecular chaperones to fold into their native states. Biochemists often refer to four distinct aspects of a protein's structure:
- Primary structure: the amino acid sequence.
- Secondary structure: regularly repeating local structures stabilized by hydrogen bonds, such as the alpha helix and beta sheet.
- Tertiary structure: the overall three-dimensional shape of a single protein molecule.
- Quaternary structure: the structure formed by several polypeptide chains that function as a single protein complex.
Proteins are not entirely rigid molecules. In addition to these levels of structure, proteins may shift between several related structures while they perform their functions. In the context of these functional rearrangements, these tertiary or quaternary structures are usually referred to as "conformations", and transitions between them are called conformational changes. Such changes are often induced by the binding of a substrate molecule to an enzyme's active site, or the physical region of the protein that participates in chemical catalysis. In solution, proteins also undergo variation in structure through thermal vibration and collision with other molecules.
Proteins can be informally divided into three main classes, which correlate with typical tertiary structures: globular proteins, fibrous proteins, and membrane proteins. Almost all globular proteins are soluble and many are enzymes. Fibrous proteins are often structural, such as collagen, the major component of connective tissue, or keratin, the protein component of hair and nails. Membrane proteins often serve as receptors or provide channels for polar or charged molecules to pass through the cell membrane.
A special case of intramolecular hydrogen bonds within proteins, poorly shielded from water attack and hence promoting their own dehydration, is called a dehydron.
Many proteins are composed of several protein domains, i.e. segments of a protein that fold into distinct structural units. Domains usually also have specific functions, such as enzymatic activities (e.g. kinase) or they serve as binding modules (e.g. the SH3 domain binds to proline-rich sequences in other proteins).
Short amino acid sequences within proteins often act as recognition sites for other proteins. For instance, SH3 domains typically bind to short PxxP motifs (i.e. 2 prolines [P], separated by two unspecified amino acids [x], although the surrounding amino acids may determine the exact binding specificity). Many such motifs have been collected in the Eukaryotic Linear Motif (ELM) database.
Proteins are the chief actors within the cell, said to be carrying out the duties specified by the information encoded in genes. With the exception of certain types of RNA, most other biological molecules are relatively inert elements upon which proteins act. Proteins make up half the dry weight of an Escherichia coli cell, whereas other macromolecules such as DNA and RNA make up only 3% and 20%, respectively. The set of proteins expressed in a particular cell or cell type is known as its proteome.
The chief characteristic of proteins that also allows their diverse set of functions is their ability to bind other molecules specifically and tightly. The region of the protein responsible for binding another molecule is known as the binding site and is often a depression or "pocket" on the molecular surface. This binding ability is mediated by the tertiary structure of the protein, which defines the binding site pocket, and by the chemical properties of the surrounding amino acids' side chains. Protein binding can be extraordinarily tight and specific; for example, the ribonuclease inhibitor protein binds to human angiogenin with a sub-femtomolar dissociation constant (<10−15 M) but does not bind at all to its amphibian homolog onconase (>1 M). Extremely minor chemical changes such as the addition of a single methyl group to a binding partner can sometimes suffice to nearly eliminate binding; for example, the aminoacyl tRNA synthetase specific to the amino acid valine discriminates against the very similar side chain of the amino acid isoleucine.
Proteins can bind to other proteins as well as to small-molecule substrates. When proteins bind specifically to other copies of the same molecule, they can oligomerize to form fibrils; this process occurs often in structural proteins that consist of globular monomers that self-associate to form rigid fibers. Protein–protein interactions also regulate enzymatic activity, control progression through the cell cycle, and allow the assembly of large protein complexes that carry out many closely related reactions with a common biological function. Proteins can also bind to, or even be integrated into, cell membranes. The ability of binding partners to induce conformational changes in proteins allows the construction of enormously complex signaling networks. As interactions between proteins are reversible, and depend heavily on the availability of different groups of partner proteins to form aggregates that are capable of carrying out discrete sets of functions, the study of interactions between specific proteins is key to understanding important aspects of cellular function, and ultimately the properties that distinguish particular cell types.
The best-known role of proteins in the cell is as enzymes, which catalyse chemical reactions. Enzymes are usually highly specific and accelerate only one or a few chemical reactions. Enzymes carry out most of the reactions involved in metabolism, as well as manipulating DNA in processes such as DNA replication, DNA repair, and transcription. Some enzymes act on other proteins to add or remove chemical groups in a process known as posttranslational modification. About 4,000 reactions are known to be catalysed by enzymes. The rate acceleration conferred by enzymatic catalysis is often enormous: as much as a 10¹⁷-fold increase in rate over the uncatalysed reaction in the case of orotate decarboxylase (78 million years without the enzyme, 18 milliseconds with the enzyme).
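The orotate decarboxylase figures can be checked with simple arithmetic, treating the two durations as characteristic reaction times:

```python
# Back-of-the-envelope check: 78 million years without the enzyme
# versus 18 milliseconds with it.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

uncatalysed = 78e6 * SECONDS_PER_YEAR  # ~2.5e15 s
catalysed = 18e-3                      # 18 ms in seconds

print(f"{uncatalysed / catalysed:.1e}")  # ~1.4e17, i.e. a ~10^17-fold speed-up
```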
The molecules bound and acted upon by enzymes are called substrates. Although enzymes can consist of hundreds of amino acids, it is usually only a small fraction of the residues that come in contact with the substrate, and an even smaller fraction, three to four residues on average, that are directly involved in catalysis. The region of the enzyme that binds the substrate and contains the catalytic residues is known as the active site.
Dirigent proteins are members of a class of proteins that dictate the stereochemistry of a compound synthesized by other enzymes.
Many proteins are involved in the process of cell signaling and signal transduction. Some proteins, such as insulin, are extracellular proteins that transmit a signal from the cell in which they were synthesized to other cells in distant tissues. Others are membrane proteins that act as receptors whose main function is to bind a signaling molecule and induce a biochemical response in the cell. Many receptors have a binding site exposed on the cell surface and an effector domain within the cell, which may have enzymatic activity or may undergo a conformational change detected by other proteins within the cell.
Antibodies are protein components of an adaptive immune system whose main function is to bind antigens, or foreign substances in the body, and target them for destruction. Antibodies can be secreted into the extracellular environment or anchored in the membranes of specialized B cells known as plasma cells. Whereas enzymes are limited in their binding affinity for their substrates by the necessity of conducting their reaction, antibodies have no such constraints. An antibody's binding affinity to its target is extraordinarily high.
Many ligand transport proteins bind particular small biomolecules and transport them to other locations in the body of a multicellular organism. These proteins must have a high binding affinity when their ligand is present in high concentrations, but must also release the ligand when it is present at low concentrations in the target tissues. The canonical example of a ligand-binding protein is haemoglobin, which transports oxygen from the lungs to other organs and tissues in all vertebrates and has close homologs in every biological kingdom. Lectins are sugar-binding proteins which are highly specific for their sugar moieties. Lectins typically play a role in biological recognition phenomena involving cells and proteins. Receptors and hormones are highly specific binding proteins.
Transmembrane proteins can also serve as ligand transport proteins that alter the permeability of the cell membrane to small molecules and ions. The membrane alone has a hydrophobic core through which polar or charged molecules cannot diffuse. Membrane proteins contain internal channels that allow such molecules to enter and exit the cell. Many ion channel proteins are specialized to select for only a particular ion; for example, potassium and sodium channels often discriminate for only one of the two ions.
Structural proteins confer stiffness and rigidity to otherwise-fluid biological components. Most structural proteins are fibrous proteins; for example, collagen and elastin are critical components of connective tissue such as cartilage, and keratin is found in hard or filamentous structures such as hair, nails, feathers, hooves, and some animal shells. Some globular proteins can also play structural functions; for example, actin and tubulin are globular and soluble as monomers, but polymerize to form long, stiff fibers that make up the cytoskeleton, which allows the cell to maintain its shape and size.
Other proteins that serve structural functions are motor proteins such as myosin, kinesin, and dynein, which are capable of generating mechanical forces. These proteins are crucial for the cellular motility of single-celled organisms and the sperm of many multicellular organisms which reproduce sexually. They also generate the forces exerted by contracting muscles and play essential roles in intracellular transport.
A key question in molecular biology is how proteins evolve, i.e. how can mutations (or rather changes in amino acid sequence) lead to new structures and functions? Most amino acids in a protein can be changed without disrupting activity or function, as can be seen from numerous homologous proteins across species (as collected in specialized databases for protein families, e.g. PFAM). In order to prevent dramatic consequences of mutations, a gene may be duplicated before it can mutate freely. However, this can also lead to complete loss of gene function and thus pseudogenes. More commonly, single amino acid changes have limited consequences, although some can change protein function substantially, especially in enzymes. For instance, many enzymes can change their substrate specificity by one or a few mutations. Changes in substrate specificity are facilitated by substrate promiscuity, i.e. the ability of many enzymes to bind and process multiple substrates. When mutations occur, the specificity of an enzyme can increase (or decrease) and thus its enzymatic activity. Thus, bacteria (or other organisms) can adapt to different food sources, including unnatural substrates such as plastic.
The activities and structures of proteins may be examined in vitro, in vivo, and in silico. In vitro studies of purified proteins in controlled environments are useful for learning how a protein carries out its function: for example, enzyme kinetics studies explore the chemical mechanism of an enzyme's catalytic activity and its relative affinity for various possible substrate molecules. By contrast, in vivo experiments can provide information about the physiological role of a protein in the context of a cell or even a whole organism. In silico studies use computational methods to study proteins.
To perform in vitro analysis, a protein must be purified away from other cellular components. This process usually begins with cell lysis, in which a cell's membrane is disrupted and its internal contents released into a solution known as a crude lysate. The resulting mixture can be purified using ultracentrifugation, which fractionates the various cellular components into fractions containing soluble proteins; membrane lipids and proteins; cellular organelles; and nucleic acids. Precipitation by a method known as salting out can concentrate the proteins from this lysate. Various types of chromatography are then used to isolate the protein or proteins of interest based on properties such as molecular weight, net charge and binding affinity. The level of purification can be monitored using various types of gel electrophoresis if the desired protein's molecular weight and isoelectric point are known, by spectroscopy if the protein has distinguishable spectroscopic features, or by enzyme assays if the protein has enzymatic activity. Additionally, proteins can be isolated according to their charge using electrofocusing.
For natural proteins, a series of purification steps may be necessary to obtain protein sufficiently pure for laboratory applications. To simplify this process, genetic engineering is often used to add chemical features to proteins that make them easier to purify without affecting their structure or activity. Here, a "tag" consisting of a specific amino acid sequence, often a series of histidine residues (a "His-tag"), is attached to one terminus of the protein. As a result, when the lysate is passed over a chromatography column containing nickel, the histidine residues ligate the nickel and attach to the column while the untagged components of the lysate pass unimpeded. A number of different tags have been developed to help researchers purify specific proteins from complex mixtures.
The study of proteins in vivo is often concerned with the synthesis and localization of the protein within the cell. Although many intracellular proteins are synthesized in the cytoplasm and membrane-bound or secreted proteins in the endoplasmic reticulum, the specifics of how proteins are targeted to specific organelles or cellular structures is often unclear. A useful technique for assessing cellular localization uses genetic engineering to express in a cell a fusion protein or chimera consisting of the natural protein of interest linked to a "reporter" such as green fluorescent protein (GFP). The fused protein's position within the cell can be cleanly and efficiently visualized using microscopy, as shown in the figure opposite.
Other methods for elucidating the cellular location of proteins require the use of known compartmental markers for regions such as the ER, the Golgi, lysosomes or vacuoles, mitochondria, chloroplasts, plasma membrane, etc. With the use of fluorescently tagged versions of these markers or of antibodies to known markers, it becomes much simpler to identify the localization of a protein of interest. For example, indirect immunofluorescence will allow for fluorescence colocalization and demonstration of location. Fluorescent dyes are used to label cellular compartments for a similar purpose.
Other possibilities exist, as well. For example, immunohistochemistry usually utilizes an antibody to one or more proteins of interest that are conjugated to enzymes yielding either luminescent or chromogenic signals that can be compared between samples, allowing for localization information. Another applicable technique is cofractionation in sucrose (or other material) gradients using isopycnic centrifugation. While this technique does not prove colocalization of a compartment of known density and the protein of interest, it does increase the likelihood, and is more amenable to large-scale studies.
Finally, the gold-standard method of cellular localization is immunoelectron microscopy. This technique also uses an antibody to the protein of interest, along with classical electron microscopy techniques. The sample is prepared for normal electron microscopic examination, and then treated with an antibody to the protein of interest that is conjugated to an extremely electron-dense material, usually gold. This allows for the localization of ultrastructural details as well as the protein of interest.
Through another genetic engineering application known as site-directed mutagenesis, researchers can alter the protein sequence and hence its structure, cellular localization, and susceptibility to regulation. This technique even allows the incorporation of unnatural amino acids into proteins, using modified tRNAs, and may allow the rational design of new proteins with novel properties.
The total complement of proteins present at a time in a cell or cell type is known as its proteome, and the study of such large-scale data sets defines the field of proteomics, named by analogy to the related field of genomics. Key experimental techniques in proteomics include 2D electrophoresis, which allows the separation of many proteins, mass spectrometry, which allows rapid high-throughput identification of proteins and sequencing of peptides (most often after in-gel digestion), protein microarrays, which allow the detection of the relative levels of the various proteins present in a cell, and two-hybrid screening, which allows the systematic exploration of protein–protein interactions. The total complement of biologically possible such interactions is known as the interactome. A systematic attempt to determine the structures of proteins representing every possible fold is known as structural genomics.
Discovering the tertiary structure of a protein, or the quaternary structure of its complexes, can provide important clues about how the protein performs its function and how it can be affected, e.g. in drug design. As proteins are too small to be seen under a light microscope, other methods have to be employed to determine their structure. Common experimental methods include X-ray crystallography and NMR spectroscopy, both of which can produce structural information at atomic resolution. However, NMR experiments provide information from which only a subset of distances between pairs of atoms can be estimated, and the final possible conformations for a protein are determined by solving a distance geometry problem. Dual polarisation interferometry is a quantitative analytical method for measuring the overall protein conformation and conformational changes due to interactions or other stimulus. Circular dichroism is another laboratory technique for determining the internal β-sheet/α-helical composition of proteins. Cryoelectron microscopy is used to produce lower-resolution structural information about very large protein complexes, including assembled viruses; a variant known as electron crystallography can also produce high-resolution information in some cases, especially for two-dimensional crystals of membrane proteins. Solved structures are usually deposited in the Protein Data Bank (PDB), a freely available resource from which structural data about thousands of proteins can be obtained in the form of Cartesian coordinates for each atom in the protein.
Many more gene sequences are known than protein structures. Further, the set of solved structures is biased toward proteins that can be easily subjected to the conditions required in X-ray crystallography, one of the major structure determination methods. In particular, globular proteins are comparatively easy to crystallize in preparation for X-ray crystallography. Membrane proteins and large protein complexes, by contrast, are difficult to crystallize and are underrepresented in the PDB. Structural genomics initiatives have attempted to remedy these deficiencies by systematically solving representative structures of major fold classes. Protein structure prediction methods attempt to provide a means of generating a plausible structure for proteins whose structures have not been experimentally determined.
Complementary to the field of structural genomics, protein structure prediction develops efficient mathematical models of proteins to computationally predict molecular conformations, instead of detecting structures with laboratory observation. The most successful type of structure prediction, known as homology modeling, relies on the existence of a "template" structure with sequence similarity to the protein being modeled; structural genomics' goal is to provide sufficient representation in solved structures to model most of those that remain. Although producing accurate models remains a challenge when only distantly related template structures are available, it has been suggested that sequence alignment is the bottleneck in this process, as quite accurate models can be produced if a "perfect" sequence alignment is known. Many structure prediction methods have served to inform the emerging field of protein engineering, in which novel protein folds have already been designed. Many proteins (about 33% in eukaryotes) contain large unstructured but biologically functional segments and can be classified as intrinsically disordered proteins. Predicting and analysing protein disorder is, therefore, an important part of protein structure characterisation.
A vast array of computational methods have been developed to analyze the structure, function and evolution of proteins. The development of such tools has been driven by the large amount of genomic and proteomic data available for a variety of organisms, including the human genome. It is simply impossible to study all proteins experimentally, hence only a few are subjected to laboratory experiments while computational tools are used to extrapolate to similar proteins. Such homologous proteins can be efficiently identified in distantly related organisms by sequence alignment. Genome and gene sequences can be searched by a variety of tools for certain properties. Sequence profiling tools can find restriction enzyme sites, open reading frames in nucleotide sequences, and predict secondary structures. Phylogenetic trees can be constructed and evolutionary hypotheses developed using special software like ClustalW regarding the ancestry of modern organisms and the genes they express. The field of bioinformatics is now indispensable for the analysis of genes and proteins.
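Sequence alignment itself rests on dynamic programming. Below is a minimal, self-contained Python sketch of a Needleman–Wunsch global alignment score; the scoring scheme (match +1, mismatch −1, gap −1) is an illustrative assumption, and real tools like ClustalW are far more sophisticated:

```python
# Needleman-Wunsch global alignment score via dynamic programming.
def global_alignment_score(a, b, match=1, mismatch=-1, gap=-1):
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(rows):          # aligning a prefix of `a` against nothing
        score[i][0] = i * gap
    for j in range(cols):          # aligning a prefix of `b` against nothing
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag,                 # align a[i-1] with b[j-1]
                              score[i - 1][j] + gap,  # gap in b
                              score[i][j - 1] + gap)  # gap in a
    return score[-1][-1]

print(global_alignment_score("HEAGAWGHEE", "PAWHEAE"))
```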
A more complex computational problem is the prediction of intermolecular interactions, such as in molecular docking, protein folding, protein–protein interaction and chemical reactivity. Mathematical models to simulate these dynamical processes involve molecular mechanics, in particular, molecular dynamics. In this regard, in silico simulations discovered the folding of small α-helical protein domains such as the villin headpiece and the HIV accessory protein, and hybrid methods combining standard molecular dynamics with quantum mechanical mathematics have explored the electronic states of rhodopsins.
Beyond classical molecular dynamics, quantum dynamics methods allow the simulation of proteins in atomistic detail with an accurate description of quantum mechanical effects. Examples include the multi-layer multi-configuration time-dependent Hartree (MCTDH) method and the hierarchical equations of motion (HEOM) approach, which have been applied to plant cryptochromes and bacterial light-harvesting complexes, respectively. Both quantum and classical mechanical simulations of biological-scale systems are extremely computationally demanding, so distributed computing initiatives (for example, the Folding@home project) facilitate the molecular modeling by exploiting advances in GPU parallel processing and Monte Carlo techniques.
The total nitrogen content of organic matter is mainly formed by the amino groups in proteins. The Total Kjeldahl Nitrogen (TKN) is a measure of nitrogen widely used in the analysis of (waste) water, soil, food, feed and organic matter in general. As the name suggests, the Kjeldahl method is applied. More sensitive methods are available.
Most microorganisms and plants can biosynthesize all 20 standard amino acids, while animals (including humans) must obtain some of the amino acids from the diet. The amino acids that an organism cannot synthesize on its own are referred to as essential amino acids. Key enzymes that synthesize certain amino acids are not present in animals—such as aspartokinase, which catalyses the first step in the synthesis of lysine, methionine, and threonine from aspartate. If amino acids are present in the environment, microorganisms can conserve energy by taking up the amino acids from their surroundings and downregulating their biosynthetic pathways.
In animals, amino acids are obtained through the consumption of foods containing protein. Ingested proteins are then broken down into amino acids through digestion, which typically involves denaturation of the protein through exposure to acid and hydrolysis by enzymes called proteases. Some ingested amino acids are used for protein biosynthesis, while others are converted to glucose through gluconeogenesis, or fed into the citric acid cycle. This use of protein as a fuel is particularly important under starvation conditions as it allows the body's own proteins to be used to support life, particularly those found in muscle.
In animals such as dogs and cats, protein maintains the health and quality of the skin by promoting hair follicle growth and keratinization, and thus reducing the likelihood of skin problems producing malodours. Poor-quality proteins also have a role regarding gastrointestinal health, increasing the potential for flatulence and odorous compounds in dogs, because when proteins reach the colon in an undigested state, they are fermented, producing hydrogen sulfide gas, indole, and skatole. Dogs and cats digest animal proteins better than those from plants, but products of low-quality animal origin are poorly digested, including skin, feathers, and connective tissue.
Biochemistry, or biological chemistry, is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology and metabolism. Over the last decades of the 20th century, biochemistry became successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs, as well as organism structure and function. Biochemistry is closely related to molecular biology, which is the study of the molecular mechanisms of biological phenomena.
Protein biosynthesis is a core biological process, occurring inside cells, balancing the loss of cellular proteins through the production of new proteins. Proteins perform a number of critical functions as enzymes, structural proteins or hormones. Protein synthesis is a very similar process for both prokaryotes and eukaryotes but there are some distinct differences.
Ribosomes are macromolecular machines, found within all living cells, that perform biological protein synthesis. Ribosomes link amino acids together in the order specified by the codons of messenger RNA (mRNA) molecules to form polypeptide chains. Ribosomes consist of two major components: the small and large ribosomal subunits. Each subunit consists of one or more ribosomal RNA (rRNA) molecules and many ribosomal proteins. The ribosomes and associated molecules are also known as the translational apparatus.
The central dogma of molecular biology is an explanation of the flow of genetic information within a biological system. It is often stated as "DNA makes RNA, and RNA makes protein", although this is not its original meaning. It was first stated by Francis Crick in 1957, then published in 1958:
The Central Dogma. This states that once "information" has passed into protein it cannot get out again. In more detail, the transfer of information from nucleic acid to nucleic acid, or from nucleic acid to protein may be possible, but transfer from protein to protein, or from protein to nucleic acid is impossible. Information means here the precise determination of sequence, either of bases in the nucleic acid or of amino acid residues in the protein.
In molecular biology and genetics, translation is the process in which ribosomes in the cytoplasm or endoplasmic reticulum synthesize proteins after the process of transcription of DNA to RNA in the cell's nucleus. The entire process is called gene expression.
In polymer science, the backbone chain of a polymer is the longest series of covalently bonded atoms that together create the continuous chain of the molecule. This science is subdivided into the study of organic polymers, which consist of a carbon backbone, and inorganic polymers which have backbones containing only main group elements.
Structural bioinformatics is the branch of bioinformatics that is related to the analysis and prediction of the three-dimensional structure of biological macromolecules such as proteins, RNA, and DNA. It deals with generalizations about macromolecular 3D structures such as comparisons of overall folds and local motifs, principles of molecular folding, evolution, binding interactions, and structure/function relationships, working both from experimentally solved structures and from computational models. The term structural has the same meaning as in structural biology, and structural bioinformatics can be seen as a part of computational structural biology. The main objective of structural bioinformatics is the creation of new methods of analysing and manipulating biological macromolecular data in order to solve problems in biology and generate new knowledge.
Protein structure is the three-dimensional arrangement of atoms in an amino acid-chain molecule. Proteins are polymers – specifically polypeptides – formed from sequences of amino acids, the monomers of the polymer. A single amino acid monomer may also be called a residue indicating a repeating unit of a polymer. Proteins form by amino acids undergoing condensation reactions, in which the amino acids lose one water molecule per reaction in order to attach to one another with a peptide bond. By convention, a chain under 30 amino acids is often identified as a peptide, rather than a protein. To be able to perform their biological function, proteins fold into one or more specific spatial conformations driven by a number of non-covalent interactions such as hydrogen bonding, ionic interactions, Van der Waals forces, and hydrophobic packing. To understand the functions of proteins at a molecular level, it is often necessary to determine their three-dimensional structure. This is the topic of the scientific field of structural biology, which employs techniques such as X-ray crystallography, NMR spectroscopy, cryo electron microscopy (cryo-EM) and dual polarisation interferometry to determine the structure of proteins.
Chemical biology is a scientific discipline spanning the fields of chemistry and biology. The discipline involves the application of chemical techniques, analysis, and often small molecules produced through synthetic chemistry, to the study and manipulation of biological systems. In contrast to biochemistry, which involves the study of the chemistry of biomolecules and regulation of biochemical pathways within and between cells, chemical biology deals with chemistry applied to biology.
Aptamers are oligonucleotide or peptide molecules that bind to a specific target molecule. Aptamers are usually created by selecting them from a large random sequence pool, but natural aptamers also exist in riboswitches. Aptamers can be used for both basic research and clinical purposes as macromolecular drugs. Aptamers can be combined with ribozymes to self-cleave in the presence of their target molecule. These compound molecules have additional research, industrial and clinical applications.
An intrinsically disordered protein (IDP) is a protein that lacks a fixed or ordered three-dimensional structure, typically in the absence of its macromolecular interaction partners, such as other proteins or RNA. IDPs range from fully unstructured to partially structured and include random coil, molten globule-like aggregates, or flexible linkers in large multi-domain proteins. They are sometimes considered as a separate class of proteins along with globular, fibrous and membrane proteins.
A gene product is the biochemical material, either RNA or protein, resulting from expression of a gene. A measurement of the amount of gene product is sometimes used to infer how active a gene is. Abnormal amounts of gene product can be correlated with disease-causing alleles, such as the overactivity of oncogenes which can cause cancer. A gene is defined as "a hereditary unit of DNA that is required to produce a functional product".
An ATP-binding motif is a 250-residue sequence within an ATP-binding protein’s primary structure, and it is associated with the protein’s structure and/or function. ATP is an energy-carrying molecule that can also act as a coenzyme in a number of biological reactions. ATP interacts with other molecules through a binding site. The ATP binding site is the environment in which ATP catalytically activates the enzyme and, as a result, is hydrolyzed to ADP. The binding of ATP causes a conformational change in the enzyme it interacts with.
Biomolecular structure is the intricate folded, three-dimensional shape that is formed by a molecule of protein, DNA, or RNA, and that is important to its function. The structure of these molecules may be considered at any of several length scales ranging from the level of individual atoms to the relationships among entire protein subunits. This useful distinction among scales is often expressed as a decomposition of molecular structure into four levels: primary, secondary, tertiary, and quaternary. The scaffold for this multiscale organization of the molecule arises at the secondary level, where the fundamental structural elements are the molecule's various hydrogen bonds. This leads to several recognizable domains of protein structure and nucleic acid structure, including such secondary-structure features as alpha helixes and beta sheets for proteins, and hairpin loops, bulges, and internal loops for nucleic acids. The terms primary, secondary, tertiary, and quaternary structure were introduced by Kaj Ulrik Linderstrøm-Lang in his 1951 Lane Medical Lectures at Stanford University.
Tyrocidine is a mixture of cyclic decapeptides produced by the bacterium Bacillus brevis, found in soil. It can be composed of 4 different amino acid sequences, giving tyrocidine A–D. Tyrocidine is the major constituent of tyrothricin, which also contains gramicidin. Tyrocidine was the first commercially available antibiotic, but it has been found to be toxic toward human blood and reproductive cells. The function of tyrocidine within its host B. brevis is thought to be the regulation of sporulation.
Molecular biophysics is a rapidly evolving interdisciplinary area of research that combines concepts in physics, chemistry, engineering, mathematics and biology. It seeks to understand biomolecular systems and explain biological function in terms of molecular structure, structural organization, and dynamic behaviour at various levels of complexity. This discipline covers topics such as the measurement of molecular forces, molecular associations, allosteric interactions, Brownian motion, and cable theory. Additional areas of study can be found on Outline of Biophysics. The discipline has required development of specialized equipment and procedures capable of imaging and manipulating minute living structures, as well as novel experimental approaches.
Protein metabolism denotes the various biochemical processes responsible for the synthesis of proteins and amino acids (anabolism), and the breakdown of proteins by catabolism.
Cell-penetrating peptides (CPPs) are short peptides that facilitate cellular intake and uptake of molecules ranging from nanosize particles to small chemical compounds to large fragments of DNA. The "cargo" is associated with the peptides either through chemical linkage via covalent bonds or through non-covalent interactions.
Numerous key discoveries in biology have emerged from studies of RNA, including seminal work in the fields of biochemistry, genetics, microbiology, molecular biology, molecular evolution and structural biology. As of 2010, 30 scientists have been awarded Nobel Prizes for experimental work that includes studies of RNA. Specific discoveries of high biological significance are discussed in this article.
A protein superfamily is the largest grouping (clade) of proteins for which common ancestry can be inferred. Usually this common ancestry is inferred from structural alignment and mechanistic similarity, even if no sequence similarity is evident. Sequence homology can then be deduced even if not apparent. Superfamilies typically contain several protein families which show sequence similarity within each family. The term protein clan is commonly used for protease and glycosyl hydrolases superfamilies based on the MEROPS and CAZy classification systems.
In mathematical studies, the Pythagorean theorem is a vital concept for every student to grasp, and one that repays careful study.
This section covers the key ideas. The Pythagoras theorem describes the relationship between the sides of a right-angled triangle.
With the help of the Pythagoras theorem, the hypotenuse, base, and perpendicular of a right-angled triangle can be calculated.
Pythagoras Theorem Explained in the Form of a Statement
The Pythagoras Theorem states that “in a right-angled triangle, the square of the hypotenuse is exactly equal to the sum of the squares of the remaining two sides”.
Note that the sides of a right-angled triangle are known as the hypotenuse, the base, and the perpendicular.
In a right-angled triangle, the hypotenuse is the longest side, since it lies opposite the 90-degree angle.
Pythagoras Theorem is Credited to Whom?
The Pythagoras Theorem is named in honor of the Greek mathematician Pythagoras.
Lay Out the Formula of The Pythagoras Theorem
In order to understand the formula better, let us consider a right-angled triangle with three sides named R, T, and Y.
R is the perpendicular, T is the base, and Y is the hypotenuse.
With this consideration, we can draft the Pythagoras Theorem which is as follows:
Y² = R² + T²
This follows the general form: Hypotenuse² = Perpendicular² + Base²
Example Of a Pythagoras Theorem
An example will make the understanding much clearer. Consider a right-angled triangle whose hypotenuse is unknown, while the base and perpendicular are 4 and 3, respectively. You need to find the value of the hypotenuse.
Solution: Let the value of the hypotenuse be ‘x’.
With the help of the theorem, we can find the value of x, the hypotenuse.
Hypotenuse² = Base² + Perpendicular²
x² = 4² + 3²
x² = 16 + 9
x² = 25
x = √25
x = 5
Thus, the value of x, the hypotenuse, is 5.
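To double-check the arithmetic, here is a minimal Python sketch of the same computation (the variable names are ours, used only for illustration):

```python
import math

base, perpendicular = 4, 3
# hypotenuse² = base² + perpendicular²
hypotenuse = math.sqrt(base**2 + perpendicular**2)
print(hypotenuse)  # 5.0
```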
What Is an Isosceles Triangle?
An isosceles triangle is one in which two sides have an equal length.
The angles opposite the two equal sides are also equal to each other. Based on the measurement of their sides, triangles are of three types (a short classifier sketch in Python follows the list):
- Scalene Triangle
- Isosceles Triangle
- Equilateral Triangle
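As a quick illustration (the function and its name are hypothetical, not part of the lesson), a triangle can be classified by counting how many of its side lengths are equal:

```python
def classify(a: float, b: float, c: float) -> str:
    """Label a triangle by how many of its side lengths are equal."""
    equal_pairs = sum([a == b, b == c, a == c])
    if equal_pairs == 3:   # all three sides equal
        return "equilateral"
    if equal_pairs == 1:   # exactly two sides equal
        return "isosceles"
    return "scalene"

print(classify(5, 5, 8))  # isosceles
```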
Now we will learn about the Isosceles Triangle.
Define Isosceles Triangle
If two of the sides of the triangle are equal in measurement, that triangle is called an isosceles triangle.
The two angles opposite the two equal sides of an isosceles triangle are also equal. Thus, an isosceles triangle is one that has two of its sides congruent in nature.
Example: Consider a triangle QPR in which the sides PQ and QR are equal; this is an isosceles triangle.
The angles opposite these equal sides, angle R and angle P, then have an equal degree of inclination.
Chalk out the Properties of Isosceles Triangle
The properties of the Isosceles Triangle are numbered down as follows:
- The two sides of the isosceles triangle are equal in measurement.
- The side which is unequal is actually the base of the triangle.
- The angles which are placed opposite to the equal sides are also equal in measurement.
- The height or altitude of an isosceles triangle is measured from the base to the vertex.
- In a right isosceles triangle, the third angle is always 90 degrees.
This collection provides resources that can be used to introduce and discuss the following essential questions, as part of a larger "American Identity" literature-based unit:
1) In what ways do immigrants change America?
2) What would America be like without immigrants?
3) How do immigrants' experiences contribute to a complex and multifaceted American Identity?
English 12 unit
Focus on "Identity" and transition to "Conformity" and the response of the individual to environmental sources that might seek to suppress individuality
Is American Culture always perceived in the same way by everyone or does it differ from person to person?
Women and men who helped New York immigrants' living conditions during the 19th and early 20th century.
This collection shows men and women who helped change the living conditions of the immigrants who flooded into New York City during the 19th and 20th centuries. They changed the way people lived by shining a light on the poor living conditions of the newest Americans. The following people are discussed in this collection: Lillian Wald, Jane Addams, Margaret Sanger, Jacob Riis, and Theodore Roosevelt. The themes that are discussed are: tenement living, women's health, and immigrants.
This collection was created in conjunction with the National Portrait Gallery's 2019 Learning to Look Summer Teacher Institute.
This collection serves to end the unit on Edo Japan and revisit the discussion of how the period fits within the greater scene of world history. In our class, the seclusion and openness of countries is a common through line, and so the arrival of the Americans, effectively ending the Sakoku period, is an important historical milestone. The main goal of this collection is to lead students into this dialectical reflection of how these two countries interacted and what this meant for a Japan that had consciously shut down most trade relations. The opening lesson on Edo Japan puts in doubt how closed the country really was; this last lesson highlights how Edo Japan had evolved since the edict of 1635, and how it had to open its ports and face the conjunctions of the 19th century's international scene.
This collection also brings into light reactions on both sides of the American arrival. Images and archives from both Japanese, as well as American witnesses, allow students to understand the motivations coming from East, as well as the West.
Lesson plan (2 hours)
1. Provide the students with the resources "4c United States-Japan Treaty single." "Black Ships and Samurai," "Founding Fragments - Commodore Perry," and "Matthew Calbraith Perry." Allow students time to browse at least two topics from the website and play the video "Founding Fragments - Commodore Perry" for the entire class.
2. Using all the resources in Step 1, lead class through the visible thinking routine "True for who?" While completing this routine, highlight how each country struggles to defend their views.
At the end of this unit, students have a fairly strong understanding of Japanese national interests. For this reason, the teacher can help provide information on the U.S.'s international stance during the 19th century. While the U.S. plays a background role in our curriculum, we make quick mention of Manifest Destiny and the Monroe Doctrine as ways in which the students’ own country emulated cycles of international openness or seclusion. Following this through line, it is necessary to stress the arrival of Commodore Perry in Japan as a thematic intersection. The moment marks both the end of the Edo period for Japan and the United States’ efforts to expand its field of influence.
3. Allow students time to read further into the "Black Ships and Samurai" website. Students can also conduct quick research on the arrival of the Americans in 1853, and Japanese-American relations previous to this date.
4. Provide students a copy of Commodore Perry and President Fillmore's letter to the Emperor of Japan. Use resource "Letters of the Commodore Perry and President Fillmore to the Emperor of Japan (1852-1853)" Do a close reading of the letters and highlight the main passages.
5. Present the remaining images and complete a visible thinking routine "Parts/People/Interactions." Allow students to cite the letters in Step 4, as well as the images in this collection. At the end of this lesson, students are able to compare, as well as to question each country’s discourse of seclusion or non-intervention.
This collection serves as a pre-assessment activity to a unit on Edo Japan. The artwork in the module is heavily influenced by the Edo period. The goal of this short lesson is to activate imagery and ideas of Japanese art during the Edo period.
Lesson plan (<1 hour)
Complete *ONE of the following activities:
1. Use the resource "One Step, First Step, Apple Computer, Osaka, Japan" to lead a "Think/Puzzle/Explore" routine.
2. Use all four resources of the Japan Railway Company to complete a "See/Think/Wonder" visible thinking routine.
3. Use either "How Japan Does It" or "MacDonald's Hamburgers Invading Japan/Tokyo Ginza Shuffle" to lead the "3-2-1 Bridge" visible thinking routine.
*For all three routines:
As you complete the routine, evaluate how much students already know about Japan and the lexicon that they use. Highlight key concepts written in the routine's poster. What do students know and how do these artifacts corroborate or oppose their initial ideas of Japan? Can students discern the historical nature behind some of these ads and posters? For instance, the railway ad displays a screen, and the poster for ‘How Japan Does It’ has a heavy ukiyo-e influence. In this way, these examples of commercial art tie in lasting impressions of a culture with contemporary takes on the country in a fun way.
These three routines could also be completed in unison. Divide the class in smaller groups, provide each group with the images of the collection, and guide each group through one of the routines listed. The result as a class will be a much richer lexicon bank.
Using the Project Zero Visible Thinking routine "See Think Wonder," this activity investigates the cultural connections between Ancient Greece, Rome, and Gandhara* as seen through a sculpture of the Buddha created in the 2nd century CE. Buddhist sculptures from Gandhara are significant not only because they show the extent of Alexander the Great's influence on Asia, but also because they are some of the first human depictions of the Buddha in the history of Buddhist art.
Even without a deep knowledge of the art of this period, students can make visual observations and comparisons that reveal the blending of Asian and Greco-Roman culture in this particular region.
*Gandhara is a region in what is now modern Afghanistan and Pakistan.
Keywords: greek, kushan, mathura, india, inquiry strategy, classical, roman, gautama, siddhārtha, siddhartha, shakyamuni, lakshanas, signs of the buddha
These six images give a glimpse of the damage done during the 1968 riots on U Street following the assassination of Dr. Martin Luther King. The images are all attributed to Scurlock Studios, which students will study more in depth in a separate collection.
The two-day lesson centered around this collection begins with a gallery walk. The Guiding Questions for this lesson are:
-What can primary source photographs tell us about an event in history?
-How did the 1968 riots change Washington DC?
The Big Idea for this lesson is:
One event can have lasting effects on the history of a place.
Each student will have a packet featuring six 'See, Think, Wonder' pages, and a final page titled 'Gallery Walk Debrief.' On Day 1, computers will be set up at six tables throughout the classroom, with all computers on a given table showing one of the six images in the collection. At the teacher's direction, student partnerships will have 3-5 minutes to stop at each station and fill out one of the 'See, Think, Wonder' pages.
At the conclusion of the gallery walk, students will meet with their partners for approximately 3 minutes to discuss the important question on the last page of their packet: 'Based on the images you viewed, how do you think the riots on U Street changed Washington DC?' Once students have discussed, they will have approximately 5 minutes to write at least two sentences in response to this question.
On Day 2 of the lesson, the teacher will use a projectable screen in the class room to walk through the interactive Washington Post article about the 1968 riots, allowing time to pause and watch each embedded video and answer any pressing questions.
At the conclusion of the article, students will spend approximately 5 minutes at their tables discussing how their understanding of the 1968 riots has changed or expanded based on the Washington Post piece. The teacher will then lead a discussion that should convey, at the very least, the following points:
-The U Street riots were widespread and caused major damage to areas of the city including but not limited to the U Street Corridor.
-Many businesses in DC were forever wiped out because of the riots, and entire neighborhoods took, in some cases, decades to fully recover.
- Martin Luther King's death served as the final straw for many African Americans both in DC and around the country who had long been suffering under the crippling effects of segregation, discrimination, and racism.
- Following the 1968 riots, most white people left the city.
Following the teacher discussion, students will have approximately 5 minutes to write down an answer to the single question on the worksheet titled Washington Post Article Debrief: After viewing the Washington Post article about the 1968 riots, what new information did you learn about how the 1968 riots changed Washington DC?
This is a collection designed to introduce students to the history of aviation as told through the lens of the scientific method and design process. Students begin by thinking about why flight is important in our lives and how we arrived at the airplanes we now know. Students look at the many designs that planes have gone through, and discuss why perseverance and problem-solving are important skills to have. They also see that teamwork, cooperation, and a desire to succeed were necessary for the Wright Brothers to do their important work. Feel free to pick and choose from the resources in creating your own collections:
Overall Learning Outcomes:
- Scientists use trial and error to form conclusions.
- Scientists test hypotheses using multiple trials in order to get accurate results and form strong conclusions.
- Scientists use multiple data and other evidence to form strong conclusions about a topic.
- Scientists work together to apply scientific research and knowledge to create new designs that meet human needs.
- Scientists help each other persevere through mistakes to learn new ideas.
Guiding Questions for Students to Answer from this collection:
- Why is flight important?
- How do scientists solve problems?
- How do scientists collect data to help them solve problems?
Beginning in the late 1940s, the notable African American writer, James Baldwin (1924-1987), lived abroad for much of his life. While acknowledging the benefits of his residence abroad to all of his works and life story, he always considered himself an American living as a “transatlantic commuter.” This Learning Lab Collection asks you to analyze the documents, images, and objects that give insight into James Baldwin's life as transatlantic commuter and to use these objects of Baldwin to understand how they impacted his work and writings.
A downloadable PDF workbook is included. The questions and activities are arranged as they appear in the Learning Lab Collection and are designed to enhance student exploration.
This Learning Lab is a companion piece of the digital NMAAHC exhibition Chez Baldwin.
Keywords: nmaahc, African, american, James, Baldwin, travel, Atlantic, commuter, writing, literature, Paris, New York, France, Turkey, Istanbul, Switzerland, United States, #NMAAHCteach
This module is designed to complement a unit on Heian Japan or on feudalism in Japan in general. The goal of this collection is to purposely include the role of women within an evaluation of feudal Japanese society and history. The lesson plan highlights Japanese women in leading roles, with a focus on historical representations of women during Heian Japan; it also includes similar examples of female characters from the Kamakura and Edo periods. The two main categories of the collection are warriors and noble women, with the inclusion of the writer Murasaki Shikibu and illustrations of The Tale of Genji. The idea is to study ‘women’ as its own historical component, and the group as actors exerting historical agency.
Given that the purpose of this collection is to concentrate on the role of women, it includes artwork that was created after the Heian and Kamakura periods and that represents salient women from the feudal era.
Lesson plan (3-4 hours)
1. Teacher leads an introduction to the feudal system and its particularities in Japan. If the class is by topics, this discussion could easily stem from a general discussion of feudalism in Europe. In our particular case, we have already discussed feudalism in Europe earlier, and so the teachers highlight parallels between the two systems in order to activate the main keywords of the unit and review ideas of how the feudal economy worked.
2. Students read a textbook chapter on feudal Japan and answer comprehension and analysis questions from the text. Key concepts are established following this reading such as: daimyo, samurai, land distribution, family clans, and feudal social pyramid, among others.
Spielvogel, Jackson J. World History: Journey Across Time. New York: Glencoe/McGraw-Hill, 2008. Print.
3. In small groups, students analyze original documents from the feudal period. Documents from the book cited below include: the Bushido code, family letters, and excerpts of laws, among other primary sources. Each group of students is in charge of one particular document. Students should identify: the main idea, the intended audience, who wrote it, and how the document helps in understanding Japanese feudalism. These documents should also help activate many of the key concepts studied earlier. Once all tables have their findings, the class comes together to present and discuss all documents.
Stearns, Peter N. World History in Documents. New York, USA: NYU Press, 2008. Print.
4. Use this collection to shine the light on women during the feudal period. Lead a "Step Inside" routine with the resource "Ohatsu avenging her mistress Onoe."
Students may well have noticed the silence regarding women's role at this point in the unit. In my classes, for instance, students automatically assume that there are working women alongside male merchants and farmers, but they have doubts as to women occupying higher roles in society. This routine can clarify some doubts as to their presence among higher social ranks.
5. Allow students to browse the collection, play one of the videos on female samurais or lead other Project Zero routines with the other paintings of female warriors and writers. Once the class is familiar with the resources in the collection, lead the visible thinking routine "People/Parts/Interactions" to reevaluate society as a whole.
Discuss how their reading of the texts in Step 2 and Step 3 has changed based on this new information. How do they now imagine women in feudal society?
6. Close the unit with the visible thinking routine "Circle of viewpoints." In our class, we use the routine's questions as a prompt for a one-page essay. Students answer the questions of the routine as if they were a person living in feudal Japan; they can choose to write a journal entry or an essay in the third person. Students should use the resources in this collection and in the texts provided to describe the life of their chosen character. This exercise allows students to explore context, society, thoughts, limitations and daily life from the point of view of a historical actor.
Extension activity (1 hour)
Instead of leading a written routine of "Circle of viewpoints" students can create postcards written from the point of view of their historical characters. Students also design the flip side of their postcards and the artwork should illustrate the environment or experiences of their historical character.
This collection is meant to be used in the midst of a unit of Edo Japan. Through the study of new technologies and scientific advances at the time, students can further dive into the Edo national dynamics by means of the developments in science. This module on science and technology is geared towards understanding Edo Japan through inventions and progress other than in the arts, and in unison with the rest of the world, therefore opening discussion as to how closed the country really was.
Numerous technologies are tightly linked to cultural expressions such as theater arts and the ukiyo-e, and therefore, a separate series of lessons on arts and culture during the Edo Period is absolutely necessary following or preceding this lesson. A study of Edo culture remains a common approach to explore the society of Edo Japan; the study of science and new technologies complements this analysis, and it will facilitate engaging a wider audience.
The artifacts listed here provide illustrations of cross-cultural developments and technological inventions before the end of the Edo period. Through these resources, the teacher can focus on medical advances, particular inventions such as the Montgolfière or simple robots, greater historical processes such as industrialization or everyday objects such as hairpins and cloth, which were also part of the exchange of ideas.
Analyzing these technological commonalities between Japan and the greater global arena, will provide context for the later discussion of ‘rangaku’ (Dutch studies) during the Edo period.
Lesson plan (2-4 hours)
1. Make the resources and artwork available to the students in preparation for the lesson at least one day ahead of class. These artifacts and texts will serve as a pre-research idea bank and starting point.
2. Teacher can briefly present the material available and prepare a quick lecture or discussion presenting a general overview of science in Japan at the time, or sciences in the world during the same period (e.g. main inventions and discoveries, scientific leaders and award winners, revolutions in science such as the Industrial Revolution.) The lecture could include a brief overview of the state of the social sciences around the world, as well (e.g. theories in psychology, birth of sociology, main theories in anthropology.)
3. Lead the routine "Claim/Support/Question" using the resource "Ukie Edo Nihonbashi Odawaracho uoichi no zu." Discuss the main issues and talking points that surfaced during the routine; tie in the results of the routine with the keywords presented in Step 2.
4. Students explore on their own the resources in the collection and decide on a topic that they would like to research further. A few ideas are: automated technologies, advances in medicine, technology of daily-life objects, or technology in the arts. The teacher can also provide research support to guide students through the collection's reading, such as scaffolded questions or a diagram to lead them to their preferred topic.
5. Students research the topic of their choice and prepare 10-20 minute presentations on the topic. The goal and format of the presentations can also be defined in class (e.g. slideshow, written piece, a draft for a longer essay, a design technology project...)
At the end of the lesson on Edo culture and science, create a newspaper that covers the main events of the Edo period. Students can write pieces on the area of their choice: politics, science and technology, arts and culture, or even a column on daily life. Teacher can define the word limit and format, topics covered, and members of each newspaper. After editing and correcting the articles, they can be arranged as a real newspaper. The resources in this collection serve as primary and secondary sources for the activity.
*PDF of examples is attached in the collection.
This lesson serves as an introduction to the Edo Period in Japan. The module is centered around the artwork "Southern Barbarians," a folding screen painting depicting the arrival of Portuguese traders to a Japanese port, a common scene previous to the Sakoku (closed country) period. After a close analysis of the folding screen, students contrast the scene depicted in the artwork with the proscriptions of the Sakoku edict of 1635 and the Portuguese exclusion edict of 1639. The stark contrast between these two trade scenarios will help students understand the nuance of the political and economic situation of Edo Japan. Additionally, transitioning from a scene where international trade is robust and ordinary, to the drafting of these two edicts severely curtailing this very trade, will lead students to inquire into the extent, as well as the limitations of the closed country period.
Lesson plan (1 - 2 hours)
1. "Southern Barbarians" illustrates and extends understanding of the ‘Nanbanjin’ as well as Nanban trade previous to Edo Japan. 'Nanbanjin' referred to Southern European, usually Spanish and Portuguese. The teacher will explain the main traits of Nanban art in order to elucidate further details of the artwork other than the ones that the students observe during the routine.
For further reference on Nanban Art, read pages 71-142 of the book referenced here. The text contains multiple other examples of folding screens from the period.
Weston, Victoria. Portugal, Jesuits and Japan: Spiritual Beliefs and Earthly Goods. Chestnut Hill, MA: McMullen Museum of Art, Boston College, 2013. Print.
Link to online copy: https://archive.org/details/portugaljesuitsj00west
2. Class completes a 'See, Think, Wonder' routine with the resource "Southern Barbarians in Japan." The artwork is full of details (such as the man carrying fabric from another Asian port because the Portuguese served as relay traders in the region). This routine might take 30 minutes or more to complete for this reason.
As part of a World History class, the teacher could highlight these historic "easter eggs" in the artwork and tie in other topics from class such as cotton and silk trade, slavery, navigation technology, missionaries in the East or the Portuguese empire and extension among other subjects present in the folding screen.
While at first, the Project Zero routines will help to understand the period, the actors and the reasons for drafting the two edicts, the teacher should also emphasize at the end of the routine why this type of art existed and how Japanese viewed Nanban trade. The purpose is to begin the discussion of Edo Japan with an understanding of the complex world of foreign relations, cultural forces and international commerce at the time.
3. Following this analysis, students perform a close reading and discussion of the edicts of 1635 and 1639. The Project Zero routine 'Explanation Game' should help guide the reading of the edicts. Students first read the edicts on their own, clarify obtuse language, and highlight a few proscriptions that they believe define the Sakoku period. Following this, students complete the 'Explanation Game' routine in small groups.
4. At the end of this introductory lesson, the teacher leads a group discussion on the edicts, establishing the main proscriptions and political reasons to ban the Portuguese traders. Teacher should clarify the political and social situation of Japan at the time, the presence of the Spanish and Portuguese traders in neighboring countries and the expansion of their respective empires. If class will continue exploring the nuances of the Edo Period, then the teacher could also briefly explain the difference in operations between the Dutch traders and the Portuguese traders.
Mini-lesson plan (30 minutes)
The remaining resources in this collection allow students to further explore the other foreigners in Edo Japan in order to nuance the discussion of international trade and foreign relations during the period. Smaller groups of 3-5 students can analyze separately various ukiyo-e of foreigners, while completing a 'Question Starts' visible thinking routine and discussing their findings at the end of the class period with their classmates.
These materials address a unit on resilience and global competence as related to and extended from The Book Thief by Markus Zusak. #SAAMteach
After using the "Seven Ways to Look at a Portrait" strategy, students create self-portraits in the style of Kehinde Wiley that incorporates study symbolism, self-identity narrative, and reflection on the poses of traditional American portraiture. This lesson requires access to computer technology, a camera (mobile phone is fine), a green screen background, a green screen phone app or program, and ideally a printer.
This activity will be used to reinforce close reading and analysis of visual text in either a pop culture unit or an identity unit in AP English Language and Composition. The idea is to examine how iconic popular images can be remixed to create new meaning and conversation about identity.
The collection was created in conjunction with the National Portrait Gallery's 2019 Learning to Look Summer Teacher Institute.
In this collection, portraits are used for both pre-reading and post-reading activities in connection with reading a biography of Marian Anderson. The pre-reading activity uses Betsy Graves Reyneau's oil on canvas portrait, Marian Anderson, to begin to reveal Anderson to students. Post-reading activities include the use of photographs, video and William H. Johnson's oil on paperboard Marian Anderson to enhance understanding of Anderson's 1939 concert and to informally access student learning.
When Marian Sang: The True Recital of Marian Anderson: The Voice of a Century is a picture book written by Pam Munoz Ryan and illustrated by Brian Selznick. This biography shares the story of opera star Marian Anderson's historic 1939 concert on the steps of the Lincoln Memorial before an integrated crowd of over 75,000 people. The book recounts Marian's life as she trains to become an opera singer and struggles with the obstacles she faces in pre-Civil Rights America. This picture book is an excellent choice to use in the upper elementary classroom in the context of a unit that focuses on "challenges and obstacles."
This collection was created in conjunction with the National Portrait Gallery's 2019 Learning to Look Summer Teacher Institute.
In this collection, students will explore an artwork by El Anatsui, a contemporary artist whose recent work addresses global ideas about the environment, consumerism, and the social history and memory of the "stuff" of our lives. After looking closely and exploring the artwork using an adapted version of Project Zero's "Parts, Purposes, and Complexities" routine, students will create a "diamante" poem using their observations of the artwork and knowledge they gained about El Anatsui's artistic influences. Additional resources about El Anatsui, how to look at African Art, and Project Zero Thinking Routines are located at the end of the collection.
This collection was created for the "Smithsonian Learning Lab, Focus on Global Arts and Humanities" session at the 2019 New Jersey Principals and Supervisors Association (NJPSA) Arts Integration Leadership Institute.
Keywords: nigeria, african art, textile, poetry, creative writing, analysis
This collection was created in conjunction with the National Portrait Gallery's 2019 Learning to Look Summer Teacher Institute. The following collection showcases images of key figures from the Civil Rights Movement, such as Martin Luther King and Malcolm X, particularly on the issues of voting in Alabama. The images and activities showcase the struggle of the march from Selma to Montgomery in an effort to make voting an equal right among all people. This lesson can be used in the social studies classroom for the subjects of Civil Rights, Voting, and Federal Government vs. State Government. In addition to the images, there are in-class activities and thought-provoking questions that go along with the visuals to provide for a more engaged learning experience. #NPGteach
This lesson, integrated halfway through F. Scott Fitzgerald's The Great Gatsby, will address both character analysis and the ever present theme of appearance vs. reality in the text. By using Thomas Hart Benton's "Self Portrait with Rita" as a starting point students will study the specifics of a self portrait from the 1920s which highlights American dream centered ideals. As a second step, students will make connections between the painting and the characters from our text. As a final extension activity, students will further explore the inspiration, the biography, or another work by Benton.
Opening: Class Discussion: What is a portrait? What are the Elements of Portrayal?
Show Michelle Obama Portrait- Have students work in pairs to come up with a list of things the artist wants us to know about the sitter.
Read Washington Post article - Add any ideas to list
Divide class into 6 groups - Each group is given a group of first ladies. Students should come up with a list of attributes/characteristics/symbols for the group as a whole.
Small groups should then meet together and complete a Venn Diagram to show similarities and differences of the groups to distinguish how portraits may/may not have changed through time. Does this portray how the role of the first lady has evolved over time?
Further questioning: What roles will future first ladies (men, husband, partner) play in the U.S.
Extension activity: Portrait - Create a portrait of someone of importance or even a self-portrait. What style will it be in? How will you use the elements of portrayal?
This collection was created in conjunction with the National Portrait Gallery's 2019 Learning to Look Summer Teacher Institute.
This collection/lesson is designed to compare and evaluate portraiture of Gilded Age Industrialists and of the Founding Fathers. Students will explore different mediums of portraiture and attempt to place these examples of artwork into the legacy that Gilded Age Industrialists hoped to create for themselves. This lesson plan involves close analysis of specific portraits of Andrew Carnegie, a sorting activity, a Google Doc graphic organizer to help students inquire information, and some overarching discussion and analysis questions to help foster class discourse. Each of the sources used in this collection are owned by the National Portrait Gallery, and many - as of 6/27/19 - are currently on display. Some questions to consider as you and/or your students peruse this collection: What does it mean to have a legacy? How are portraiture and legacy connected or related to each other? Why, in an era when photography is en vogue, would an individual choose to have a painting done of them? What would you want a portrait of you to look like?
Lesson Overview: (See Collection or the link below for Full Google Doc Lesson Plan)
CLASS (SUBJECT & LEVEL): High School American History - for an 80 minute block
- Students will closely analyze Gilded Age industrialist portraits in both painting and photograph formats, attempting to understand the legacy that these leaders were trying to create for themselves in the future.
- Students will compare and contrast portrayals of Gilded Age industrialists and the Founding Fathers.
- Students will argue different ideas about portraiture in U.S. History and reach their own conclusions.
CONTENT: Gilded Age Industrialists, Founding Fathers, Portraits and Photos, Source Analysis
This collection is meant to be used as an introductory activity to the novel Their Eyes Were Watching God by Zora Neale Hurston. Specifically, it focuses on the different styles employed by artist Aaron Douglas, most notably in his Scottsboro Boys portrait and in his 1925 self-portrait. In doing so, it asks students to consider when and why an artist who is more than capable of creating within the boundaries of classically beautiful art or writing might choose to create in this style at some times and at other times in more radical or avant-garde styles. It uses a Compare and Contrast looking technique before revealing to students that all four distinct pieces are created by the same artist.
Ideally, teachers can end the unit by facilitating discussion of the social change Douglas aims for with his Scottsboro portrait and of the bridge that Hurston creates with her prose narrator before launching into the dialect of her characters that earned her such scorn from the African American community of her era.
Time- 2 class periods
Using the Project Zero Design Thinking routine "Parts, People, Interaction", this activity provides an understanding of the system of gender power at stake in the representation of Chapter 34 of The Tale of Genji - Kashiwagi catches sight of the Third Princess. It then looks at a modernization of the illustrations and offers a reflection on what the new feminine contemporary perspective brings to the interpretation of the Third Princess's story.
In exploring the representations of The Tale of Genji, students have the opportunity to discover tales that have become a standard for Japanese culture. They look at the first known literary work written by a woman, who shares a rare and intimate perspective on a world governed by men. Students compare a representation of the tales from the 16th century with one from the 20th century to identify in what ways they have been interpreted.
Step 1: Have students sketch The Tale of Genji, Chapter 34: Kashiwagi catches sight of the Third Princess
Step 2: Debrief as a whole group
Discuss what the students have noticed. Do not show the caption to the students yet. The observational drawing is good to help students pay attention to details and unveil the artist's choices. It also encourages them to initiate a first interpretation.
Step 3: Parts, People, Interaction
Once students have discussed the painting, guide them through the routine "Parts, People, Interaction".
"This thinking routine helps students slow down and look closely at a system ( here the system of gender power.) In doing so, young people are able to situate objects within systems and recognize the various people who participate—either directly or indirectly—within a particular system.
Students also notice that a change in one aspect of the system may have both intended and unintended effects on another aspect of the system. When considering the parts, people, and interactions within a system, young people begin to notice the multitude of subsystems within systems.
This thinking routine helps stimulate curiosity, raises questions, surfaces areas for further inquiry, and introduces systems thinking." (PZ)
Step 4: Read the PDF "More about Chapter 34" and go back to the questions
Have students read the caption, go back and look at the painting and ask them to take notes on how their understanding has shifted from their initial interpretation.
Step 5: Debrief the "Parts, People and Interaction" routine as a whole group:
During the discussion, here are some specific question students may want to address:
- What does the illustration of Chapter 34, Kashiwagi catches sight of the Third Princess, say about the system of gender power in place at the Japanese court in the 11th century?
- To what extent does the architecture in the painting play a role in facilitating the superiority of men?
- How does the system in place impact relationships between men and women?
Step 1: "See, Think, Wonder" - The third princess with her pet cat, Yamato Maki, 1987
Have them do a quick "See, Think, Wonder" to encourage them to reactivate prior knowledge, pay attention to details, and reflect on the effects of modernizing the illustration of The Tale of Genji through manga. Identify the audience and the context of the illustration.
Step 2: Read the caption as a group - notice what is important.
Step 3: "Layers"
This routine will encourage students to refine their first analysis of the illustration by looking at it through different angles (Aesthetic, Mechanical, Connections, Narrative, Dynamic). It will allow them to draw upon their prior knowledge and consider the impact of modernization of art on the public.
Students can work in small groups and cover between 3 and 5 of the categories.
Step 4: Each group of students presents their learning to the class.
Before we get down to the business of evaluating arguments—of judging them valid or invalid, strong or weak—we still need to do some preliminary work. We need to develop our analytical skills to gain a deeper understanding of how arguments are constructed, how they hang together. So far, we’ve said that the premises are there to support the conclusion. But we’ve done very little in the way of analyzing the structure of arguments: we’ve just separated the premises from the conclusion. We know that the premises are supposed to support the conclusion. What we haven’t explored is the question of just how the premises in a given argument do that job—how they work together to support the conclusion, what kinds of relationships they have with one another. This is a deeper level of analysis than merely distinguishing the premises from the conclusion; it will require a mode of presentation more elaborate than a list of propositions with the bottom one separated from the others by a horizontal line. To display our understanding of the relationships among premises supporting the conclusion, we are going to depict them: we are going to draw diagrams of arguments.
Here’s how the diagrams will work. They will consist of three elements: (1) circles with numbers inside them—each of the propositions in the argument we’re diagramming will be assigned a number, so these circled numbers in the diagram will represent the propositions; (2) arrows pointed at circled numbers—these will represent relationships of support, where one or more propositions provide a reason for believing the one pointed to; and (3) horizontal brackets—propositions connected by these will be interdependent (in a sense to be specified below).
Our diagrams will always feature the circled number corresponding to the conclusion at the bottom. The premises will be above, with brackets and arrows indicating how they collectively support the conclusion and how they’re related to one another. There are a number of different relationships that premises can have to one another. We will learn how to draw diagrams of arguments by considering them in turn.
Often, different premises will support a conclusion—or another premise—individually, without help from any others. When this is the case, we draw an arrow from the circled number representing that premise to the circled number representing the proposition it supports.
Consider this simple argument:
① Marijuana is less addictive than alcohol. In addition, ② it can be used as a medicine to treat a variety of conditions. Therefore, ③ marijuana should be legal.
The last proposition is clearly the conclusion (the word ‘therefore’ is a big clue), and the first two propositions are the premises supporting it. They support the conclusion independently. The mark of independence is this: each of the premises would still provide support for the conclusion even if the other weren’t true; each, on its own, gives you a reason for believing the conclusion. In this case, then, we diagram the argument as follows:
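(The diagram itself appears as a figure in the original. As a rough stand-in, here is a minimal Python sketch of one way to encode it; the encoding, a list of (supporters, target) links in which a supporters tuple with more than one number plays the role of the horizontal bracket, is an illustrative convention of ours, not anything from the text.)

```python
# Hypothetical encoding of an argument diagram: each link is a pair
# (supporters, target). A supporters tuple with more than one number
# stands for premises joined by a horizontal bracket (joint support).
Diagram = list[tuple[tuple[int, ...], int]]

# Independent support: premises 1 and 2 each get their own arrow to 3.
marijuana_diagram: Diagram = [
    ((1,), 3),
    ((2,), 3),
]

def conclusion(diagram: Diagram) -> int:
    """The conclusion is the one proposition that is supported but never supports."""
    supporters = {n for supp, _ in diagram for n in supp}
    targets = {t for _, t in diagram}
    (c,) = targets - supporters  # exactly one such proposition in a well-formed diagram
    return c

assert conclusion(marijuana_diagram) == 3
```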
Some premises support their conclusions more directly than others. Premises provide more indirect support for a conclusion by providing a reason to believe another premise that supports the conclusion more directly. That is, some premises are intermediate between the conclusion and other premises.
Consider this simple argument:
① Automatic weapons should be illegal. ② They can be used to kill large numbers of people in a short amount of time. This is because ③ all you have to do is hold down the trigger and bullets come flying out in rapid succession.
The conclusion of this argument is the first proposition, so the premises are propositions 2 and 3. Notice, though, that there’s a relationship between those two claims. The third sentence starts with the phrase ‘This is because’, indicating that it provides a reason for another claim. The other claim is proposition 2; ‘This’ refers to the claim that automatic weapons can kill large numbers of people quickly. Why should I believe that they can do that? Because all one has to do is hold down the trigger to release lots of bullets really fast. Proposition 2 provides immediate support for the conclusion (automatic weapons can kill lots of people really quickly, so we should make them illegal); proposition 3 supports the conclusion more indirectly, by giving support to proposition 2. Here is how we diagram in this case:
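(Again the diagram is a figure in the original; in the same illustrative encoding as above, the serial structure would look like this:)

```python
# Serial support: premise 3 backs premise 2, which in turn backs conclusion 1.
automatic_weapons_diagram = [
    ((3,), 2),  # 3 supports the intermediate premise 2
    ((2,), 1),  # 2 supports the conclusion 1
]
```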
Sometimes premises need each other: the job of supporting another proposition can’t be done by each on its own; they can only provide support together, jointly. Far from being independent, such premises are interdependent. In this situation, on our diagrams, we join together the interdependent premises with a bracket underneath their circled numbers.
There are a number of different ways in which premises can provide joint support. Sometimes, premises just fit together like a hand in a glove; or, switching metaphors, one premise is like the key that fits into the other to unlock the proposition they jointly support. An example can make this clear:
① The chef has decided that either salmon or chicken will be tonight’s special. ② Salmon won’t be the special. Therefore, ③ the special will be chicken.
Neither premise 1 nor premise 2 can support the conclusion on its own. A useful rule of thumb for checking whether one proposition can support another is this: read the first proposition, then say the word ‘therefore’, then read the second proposition; if it doesn’t make any sense, then you can’t draw an arrow from the one to the other. Let’s try it here: “The chef has decided that either salmon or chicken will be tonight’s special; therefore, the special will be chicken.” That doesn’t make any sense. What happened to salmon? Proposition 1 can’t support the conclusion on its own. Neither can the second: “Salmon won’t be the special; therefore, the special will be chicken.” Again, that makes no sense. Why chicken? What about steak, or lobster? The second proposition can’t support the conclusion on its own, either; it needs help from the first proposition, which tells us that if it’s not salmon, it’s chicken. Propositions 1 and 2 need each other; they support the conclusion jointly. This is how we diagram the argument:
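(In the same hypothetical encoding, joint support is a single bracketed link rather than two separate arrows:)

```python
# Joint support: 1 and 2 share one bracket and one arrow to the conclusion 3.
chef_special_diagram = [
    ((1, 2), 3),
]
```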
The same diagram would depict the following argument:
① John Le Carre gives us realistic, three-dimensional characters and complex, interesting plots. ② Ian Fleming, on the other hand, presents an unrealistically glamorous picture of international espionage, and his plotting isn’t what you’d call immersive. ③ Le Carre is a better author of spy novels than Fleming.
In this example, the premises work jointly in a different way than in the previous example. Rather than fitting together hand-in-glove, these premises each give us half of what we need to arrive at the conclusion. The conclusion is a comparison between two authors. Each of the premises makes claims about one of the two authors. Neither one, on its own, can support the comparison, because the comparison is a claim about both of them. The premises can only support the conclusion together. We would diagram this argument the same way as the last one.
Another common pattern for joint premises is when general propositions need help to provide support for particular propositions. Consider the following argument:
① People shouldn’t vote for racist, incompetent candidates for president. ② Donald Trump seems to make a new racist remark at least twice a week. And ③ he lacks the competence to run even his own (failed) businesses, let alone the whole country. ④ You shouldn’t vote for Trump to be the president.
The conclusion of the argument, the thing it’s trying to convince us of, is the last proposition—you shouldn’t vote for Trump. This is a particular claim: it’s a claim about an individual person, Trump. The first proposition in the argument, on the other hand, is a general claim: it asserts that, generally speaking, people shouldn’t vote for incompetent racists; it makes no mention of an individual candidate. It cannot, therefore, support the particular conclusion—about Trump—on its own. It needs help from other particular claims—propositions 2 and 3—that tell us that the individual in the conclusion, Trump, meets the conditions laid out in the general proposition 1: racism and incompetence. This is how we diagram the argument:
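(In the same hypothetical encoding:)

```python
# The general premise 1 is bracketed with the particular premises 2 and 3;
# together they support the particular conclusion 4.
trump_diagram = [
    ((1, 2, 3), 4),
]
```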
Occasionally, an argumentative passage will only explicitly state one of a set of joint premises because the others “go without saying”—they are part of the body of background information about which both speaker and audience agree. In the last example, that Trump was an incompetent racist was not uncontroversial background information. But consider this argument:
① It would be good for the country to have a woman with lots of experience in public office as president. ② People should vote for Hillary Clinton.
Diagramming this argument seems straightforward: an arrow pointing from 1 to 2. But we’ve got the same relationship between the premise and conclusion as in the last example: the premise is a general claim, mentioning no individual at all, while the conclusion is a particular claim about Hillary Clinton. Doesn’t the general premise “need help” from particular claims to the effect that the individual in question, Hillary Clinton, meets the conditions set forth in the premise—i.e., that she’s a woman and that she has lots of experience in public office? No, not really. Everybody knows those things about her already; they go without saying, and can therefore be left unstated (implicit, tacit).
But suppose we had included those obvious truths about Clinton in our presentation of the argument; suppose we had made the tacit premises explicit:
① It would be good for the country to have a woman with lots of experience in public office as president. ② Hillary Clinton is a woman. And ③ she has deep experience with public offices—as a First Lady, U.S. Senator, and Secretary of State. ④ People should vote for Hillary Clinton.
How do we diagram this? Earlier, we talked about a rule of thumb for determining whether or not it’s a good idea to draw an arrow from one number to another in a diagram: read the sentence corresponding to the first number, say the word ‘therefore’, then read the sentence corresponding to the second number; if it doesn’t make sense, then the arrow is a bad idea. But if it does make sense, does that mean you should draw the arrow? Not necessarily. Consider the first and last sentences in this passage. Read the first, then ‘therefore’, then the last. Makes pretty good sense! That’s just the original formulation of the argument with the tacit propositions remaining implicit. And in that case we said it would be OK to draw an arrow from the general premise’s number straight to the conclusion’s. But when we add the tacit premises—the second and third sentences in this passage—we can’t draw an arrow directly from ① to ④. To do so would obscure the relationship among the first three propositions and misrepresent how the argument works. If we drew an arrow from ① to ④, what would we do with ② and ③ in our diagram? Do they get their own arrows, too? No, that won’t do. Such a diagram would be telling us that the first three propositions each independently provide a reason for the conclusion. But they’re clearly not independent; there’s a relationship among them that our diagram must capture, and it’s the same relationship we saw in the parallel argument about Trump, with the particular claims in the second and third propositions working together with the general claim in the first:
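(In the same hypothetical encoding, the structure matches the Trump diagram above:)

```python
# With the tacit premises made explicit, general premise 1 is bracketed with
# the particulars 2 and 3, jointly supporting the conclusion 4.
clinton_diagram = [
    ((1, 2, 3), 4),
]
```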
The arguments we’ve looked at thus far have been quite short—only two or three premises. But of course some arguments are longer than that. Some are much longer. It may prove instructive, at this point, to tackle one of these longer bits of reasoning. It comes from the (fictional) master of analytical deductive reasoning, Sherlock Holmes. The following passage is from the first Holmes story—A Study in Scarlet, one of the few novels Arthur Conan Doyle wrote about his most famous character—and it’s a bit of early dialogue that takes place shortly after Holmes and his longtime associate Dr. Watson meet for the first time. At that first meeting, Holmes did his typical Holmes-y thing, where he takes a quick glance at a person and then immediately makes some startling inference about them, stating some fact about them that it seems impossible he could have known. Here they are—Holmes and Watson—talking about it a day or two later. Holmes is the first to speak:
“Observation with me is second nature. You appeared to be surprised when I told you, on our first meeting, that you had come from Afghanistan.”
“You were told, no doubt.”
“Nothing of the sort. I knew you came from Afghanistan. From long habit the train of thoughts ran so swiftly through my mind, that I arrived at the conclusion without being conscious of intermediate steps. There were such steps, however. The train of reasoning ran, ‘Here is a gentleman of a medical type, but with the air of a military man. Clearly an army doctor, then. He has just come from the tropics, for his face is dark, and that is not the natural tint of his skin, for his wrists are fair. He has undergone hardship and sickness, as his haggard face says clearly. His left arm has been injured. He holds it in a stiff and unnatural manner. Where in the tropics could an English army doctor have seen much hardship and got his arm wounded? Clearly in Afghanistan.’ The whole train of thought did not occupy a second. I then remarked that you came from Afghanistan, and you were astonished.” (Also excerpted in Copi and Cohen, 2009, Introduction to Logic 13e, pp. 58 - 59.)
This is an extended inference, with lots of propositions leading to the conclusion that Watson had been in Afghanistan. Before we draw the diagram, let’s number the propositions involved in the argument:
1. Watson was in Afghanistan.
2. Watson is a medical man.
3. Watson is a military man.
4. Watson is an army doctor.
5. Watson has just come from the tropics.
6. Watson’s face is dark.
7. Watson’s skin is not naturally dark.
8. Watson’s wrists are fair.
9. Watson has undergone hardship and sickness.
10. Watson’s face is haggard.
11. Watson’s arm has been injured.
12. Watson holds his arm stiffly and unnaturally.
13. Only in Afghanistan could an English army doctor have been in the tropics, seen much hardship and got his arm wounded.
Lots of propositions, but they’re mostly straightforward, right from the text. We just had to do a bit of paraphrasing on the last one—Holmes asks a rhetorical question and answers it, the upshot of which is the general proposition in 13. We know that proposition 1 is our conclusion, so that goes at the bottom of the diagram. The best thing to do is to start there and work our way up. Our next question is: Which premise or premises support that conclusion most directly? What goes on the next level up on our diagram?
It seems fairly clear that proposition 13 belongs on that level. The question is whether it is alone there, with an arrow from 13 to 1, or whether it needs some help. The answer is that it needs help. This is the general/particular pattern we identified above. The conclusion is about a particular individual—Watson. Proposition 13 is entirely general (presumably Holmes knows this because he reads the paper and knows the disposition of Her Majesty’s troops throughout the Empire); it does not mention Watson. So proposition 13 needs help from other propositions that give us the relevant particulars about the individual, Watson. A number of conditions are laid out that a person must meet in order for us to conclude that they’ve been in Afghanistan: army doctor, being in the tropics, undergoing hardship, getting wounded. That Watson satisfies these conditions is asserted by, respectively, propositions 4, 5, 9, and 11. Those are the propositions that must work jointly with the general proposition 13 to give us our particular conclusion about Watson:
Next, we must figure out what happens at the next level up. How are propositions 4, 5, 13, 9, and 11 justified? As we noted, the justification for 13 happens off-screen, as it were. Holmes is able to make that generalization because he follows the news and knows, presumably, that the only place in the British Empire where army troops are actively fighting in the tropics is Afghanistan. The justification for the other propositions, however, is right there in the text.
Let’s take them one at a time. First, proposition 4: Watson is an army doctor. How does Holmes support this claim? With propositions 2 and 3, which tell us that Watson is a medical and a military man, respectively. This is another pattern we’ve identified: these two propositions jointly support 4, because they each provide half of what we need to get there. There are two parts to the claim in 4: army and doctor. 2 gives us the doctor part; 3 gives us the army part. 2 and 3 jointly support 4.
Skipping 5 (it’s a bit more involved), let’s turn to 9 and 11, which are easily dispatched. What’s the reason for believing 9, that Watson has suffered hardship? Go back to the passage. It’s his haggard face that testifies to his suffering. Proposition 10 supports 9. Now 11: what evidence do we have that Watson’s arm has been injured? Proposition 12: he holds it stiffly and unnaturally. 12 supports 11.
Finally, proposition 5: Watson was in the tropics. There are three propositions involved in supporting this one: 6, 7, and 8. Proposition 6 tells us Watson’s face is dark; 7 tells us that his skin isn’t naturally dark; 8 tells us his wrists are fair (light-colored skin). It’s tempting to think that 6 on its own—dark skin—supports the claim that he was in the tropics. But it does not. One can have dark skin and never have visited the tropics, provided one’s skin is naturally dark. What tells us Watson has been in the tropics is that he has a tan—that his skin is dark and that’s not its natural tone. 6 and 7 jointly support 5. And how do we know Watson’s skin isn’t naturally dark? By checking his wrists, which are fair: proposition 8 supports 7.
So this is our final diagram:
And there we go. An apparently unwieldy passage—thirteen propositions!—turns out not to be so bad. The lesson is that we must go step by step: start by identifying the conclusion, then ask which proposition(s) most directly support it; from there, work back until all the propositions have been diagrammed. Every long argument is just composed of smaller, easily analyzed inferences.
Diagram the following arguments.
1. ① Hillary Clinton would make a better president than Donald Trump. ② Clinton is a tough-minded pragmatist who gets things done. ③ Trump is a thin-skinned maniac who will be totally ineffective in dealing with Congress.
2. ① Donald Trump is a jerk who’s always offending people. Furthermore, ② he has no experience whatsoever in government. ③ Nobody should vote for him to be president.
3. ① Human beings evolved to eat meat, so ② eating meat is not immoral. ③ It’s never immoral for a creature to act according to its evolutionary instincts.
4. ① We need new campaign finance laws in this country. ② The influence of Wall Street money on elections is causing a breakdown in our democracy with bad consequences for social justice. ③ Politicians who have taken those donations are effectively bought and paid for, consistently favoring policies that benefit the rich at the expense of the vast majority of citizens.
5. ① Voters shouldn’t trust any politician who took money from Wall Street bankers. ② Hillary Clinton accepted hundreds of thousands of dollars in speaking fees from Goldman Sachs, a big Wall Street firm. ③ You shouldn’t trust her.
6. ① There are only three possible explanations for the presence of the gun at the crime scene: either the defendant just happened to hide from the police right next to where the gun was found, or the police planted the gun there after the fact, or it was really the defendant’s gun like the prosecution says. ② The first option is too crazy a coincidence to be at all believable, and ③ we’ve been given no evidence at all that the officers on the scene had any means or motivation to plant the weapon. Therefore, ④ it has to be the defendant’s gun.
7. ① Golden State has to be considered the clear favorite to win the NBA Championship. ② No team has ever lost in the Finals after taking a 3-games-to-1 lead, and ③ Golden State now leads Cleveland 3-to-1. In addition, ④ Golden State has the MVP of the league, Stephen Curry.
8. ① We should increase funding to public colleges and universities. First of all, ② as funding has decreased, students have had to shoulder a larger share of the financial burden of attending college, amassing huge amounts of debt. ③ A recent report shows that the average college student graduates with almost $30,000 in debt. Second, ④ funding public universities is a good investment. ⑤ Every economist agrees that spending on public colleges is a good investment for states, where the economic benefits far outweigh the amount spent.
9. ① LED lightbulbs last for a really long time and ② they cost very little to keep lit. ③ They are, therefore, a great way to save money. ④ Old-fashioned incandescent bulbs, on the other hand, are wasteful. ⑤ You should buy LEDs instead of incandescent bulbs.
10. ① There’s a hole in my left shoe, which means ② my feet will get wet when I wear them in the rain, and so ③ I’ll probably catch a cold or something if I don’t get a new pair of shoes. Furthermore, ④ having new shoes would make me look cool. ⑤ I should buy new shoes.
11. Look, it’s just simple economics: ① if people stop buying a product, then companies will stop producing it. And ② people just aren’t buying tablets as much anymore. ③ The CEO of Best Buy recently said that sales of tablets are “crashing” at his stores. ④ Samsung’s sales of tablets were down 14% this year alone. ⑤ Apple’s not going to continue to make your beloved iPad for much longer.
12. ① We should increase infrastructure spending as soon as possible. Why? First, ② the longer we delay needed repairs to things like roads and bridges, the more they will cost in the future. Second, ③ it would cause a drop in unemployment, as workers would be hired to do the work. Third, ④ with interest rates at all-time lows, financing the spending would cost relatively little. A fourth reason? ⑤ Economic growth. ⑥ Most economists agree that government spending in the current climate would boost GDP.
13. ① Smoking causes cancer and ② cigarettes are really expensive. ③ You should quit smoking. ④ If you don’t, you’ll never get a girlfriend. ⑤ Smoking makes you less attractive to girls: ⑥ it stains your teeth and ⑦ it gives you bad breath.
14. ① The best cookbooks are comprehensive, well-written, and most importantly, have recipes that work. This is why ② Mark Bittman’s classic How to Cook Everything is among the best cookbooks ever written. As its title indicates, ③ Bittman’s book is comprehensive. Of course it doesn’t literally teach you how to cook everything, but ④ it features recipes for cuisines from around the world—from French, Italian, and Spanish food to dishes from the Far and Middle East, as well as classic American comfort foods. In addition, ⑤ he covers almost every ingredient imaginable, with all different kinds of meats—including game—and every fruit and vegetable under the sun. ⑥ The book is also extremely well-written. ⑦ Bittman’s prose is clear, concise, and even witty. Finally, ⑧ Bittman’s recipes simply work. ⑨ In my many years of consulting How to Cook Everything, I’ve never had one lead me astray.
15. ① Logic teachers should make more money than CEOs. ② Logic is more important than business. ③ Without logic, we wouldn’t be able to tell when people were trying to fool us: ④ we wouldn’t know a good argument from a bad one. ⑤ But nobody would miss business if it went away. ⑥ What do businesses do except take our money? ⑦ And all those damned commercials they make; everybody hates commercials. ⑧ In a well-organized society, members of more important professions would be paid more, because ⑨ paying people is a great way to encourage them to do useful things. ⑩ People love money.
It is a decimal number. 0.67 shows a 6 in the tenths column and a 7 in the hundredths column. The next column beyond that is the thousandths column.
A decimal expresses the parts of a number in tenths, hundredths, and other powers of ten. A fraction can use other denominators, such as elevenths and fifteenths, as well as tenths and hundredths.
A decimal number is not always smaller than a whole number. Take the decimal number 2.45: the digits to the left of the decimal point show the whole-number part, and the digits to the right of the point show the parts/fractions. The number .098 is not a whole number; 2.00 is a whole number; and 2.098 has both a whole-number part and a fractional part of the whole.
It is 5.39
A point separates the whole numbers from the decimal digits. The point is called the decimal point, and anything after it symbolises a fraction of a number. For example, 1.5 = 1 and a half. The 5 is in the 'tenths' column, which means any digit in that column is worth 1/10 × that digit, so in this case 1/10 × 5 = 0.5 = a half. If you have a number like 1.25, the 2 is in the 'tenths' column and the 5 is in the 'hundredths' column. This shows 1 + 2 tenths (2/10) + 5 hundredths (5/100), which is the same as one and a quarter.
0.9 is how you write nine-tenths in standard form.
The decimal point is not a digit, and it has no place value. It only shows you the place in a number where whole things are on one side and pieces of things are on the other side. It's the boundary line between the tenths place and the units (ones) place.
It basically just shows that the number is in the hundreds. If it wasn't there, it would be in the tenths place.
The decimal that shows the most place value.
Standard notation is the usual way of writing a number that only shows digits.
It is called the decimal point. We say "point". For example 3.25 cakes is "three point two five cakes". The decimal point shows that the number on its left is the last whole number. In the above example, you have three whole cakes. The number in the next place on the right tells you how many tenths of a cake there are, in this case two-tenths of a cake. The number in the next place on the right tells you how many hundredths of a cake there are, in this case five-hundredths of a cake. 0.1 (zero point one) is one tenth. 0.04 (zero point zero four) is four hundredths.
The number in the second place after the decimal point shows the value of hundredths. EXAMPLE: in 0.345, the '4' represents 4/100.
It shows you that there are no tenths (1/10).
38/100 as a decimal is 0.38
This is 5/9. 5/9 becomes the repeating decimal 0.5555..., or you can round it to 0.56.
To round to the nearest tenth:
- Look at the digit after the tenths digit; if it is halfway (5) or more, add one to the tenths digit, otherwise leave it alone.
- Replace all digits to the right of the tenths digit by zeros.
- Remove any trailing zeros after the tenths digit.
Here the digit after the tenths digit is 6, more than halfway (5), so add one to the tenths digit: 9 + 1 = 10, so carry the 1 to the units digit: 8 + 1 = 9, giving 919.0. Thus 918.963255 = 919.0 to the nearest tenth. It can also be written as 919 without the ".0", as it is the same value, but writing the ".0" shows that the number is accurate (rounded) to the nearest tenth.
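The same round-half-up procedure can be sketched in a few lines of Python (a minimal illustration, not part of the original answer; note that Python's built-in round uses round-half-to-even, so the decimal module is used here to match the rule described above):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_to_tenths(x: str) -> Decimal:
    # Quantize to one decimal place, rounding halfway cases up,
    # exactly as in the step-by-step procedure above.
    return Decimal(x).quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)

print(round_to_tenths("918.963255"))  # prints 919.0
```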
A number that starts with a decimal point is less than 1: it is a fraction of 1. So you have to know the places to the right of the decimal point. The places are TENTHS, HUNDREDTHS and THOUSANDTHS, written as .0, .00 and .000. The numbers you listed would be written as 2/10, 6/10, 7/100 and 38/100. The first digit after the decimal point is the key: .6 is first, .38 is next, .2 is next and .07 is last. If you have two numbers whose first digit after the decimal point is the same, like .28 and .25, then you move over to the second digit after the decimal to see which is bigger.
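For checking an ordering like this, a one-line Python sketch (purely illustrative) sorts the same four values from biggest to smallest:

```python
values = [0.2, 0.6, 0.07, 0.38]

# Sort in descending order: biggest decimal first.
print(sorted(values, reverse=True))  # [0.6, 0.38, 0.2, 0.07]
```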
5/20 written in decimal = 0.25
[Image: A magnificent frigatebird (Fregata magnificens) on the Galápagos Islands.]
Frigatebirds (also listed as "frigate bird", "frigate-bird", "frigate", "frigate-petrel") are a family of seabirds called Fregatidae which are found across all tropical and subtropical oceans. The five extant species are classified in a single genus, Fregata. All have predominantly black plumage, long, deeply forked tails and long hooked bills. Females have white underbellies and males have a distinctive red gular pouch, which they inflate during the breeding season to attract females. Their wings are long and pointed and can span up to 2.3 metres (7.5 ft), giving them the largest wing area to body weight ratio of any bird.
Able to soar for weeks on wind currents, frigatebirds spend most of the day in flight hunting for food, and roost on trees or cliffs at night. Their main prey are fish and squid, caught when chased to the water surface by large predators such as tuna. Frigatebirds are referred to as kleptoparasites as they occasionally rob other seabirds for food, and are known to snatch seabird chicks from the nest. Seasonally monogamous, frigatebirds nest colonially. A rough nest is constructed in low trees or on the ground on remote islands. A single egg is laid each breeding season. The duration of parental care is among the longest of any bird species; frigatebirds are only able to breed every other year.
The Fregatidae are a sister group to Suloidea which consists of cormorants, darters, gannets, and boobies. Three of the five extant species of frigatebirds are widespread (the magnificent, great and lesser frigatebirds), while two are endangered (the Christmas Island and Ascension Island frigatebirds) and restrict their breeding habitat to one small island each. The oldest fossils date to the early Eocene, around 50 million years ago. Classified in the genus Limnofregata, the three species had shorter, less-hooked bills and longer legs, and lived in a freshwater environment.
Taxonomy
The term Frigate Bird itself was used in 1738 by the English naturalist and illustrator Eleazar Albin in his A Natural History of the Birds. The book included an illustration of the male bird showing the red gular pouch. Like the genus name, the English term is derived from the French mariners' name for the bird la frégate—a frigate or fast warship. The etymology was mentioned by French naturalist Jean-Baptiste Du Tertre when describing the bird in 1667.[a]
Christopher Columbus encountered frigatebirds when passing the Cape Verde Islands on his first voyage across the Atlantic in 1492. In his journal entry for 29 September he used the word rabiforçado, modern Spanish rabihorcado or forktail.[b] In the Caribbean frigatebirds were called Man-of-War birds by English mariners. This name was used by the English explorer William Dampier in his book An Account of a New Voyage Around the World published in 1697:
The Man-of-War (as it is called by the English) is about the bigness of a Kite, and in shape like it, but black; and the neck is red. It lives on Fish yet never lights on the water, but soars aloft like a Kite, and when it sees its prey, it flys down head foremost to the Waters edge, very swiftly takes its prey out of the Sea with his Bill, and immediately mounts again as swiftly; never touching the Water with his Bill. His Wings are very long; his feet are like other Land-fowl, and he builds on Trees, where he finds any; but where they are wanting on the ground.
Frigatebirds were grouped with cormorants, and sulids (gannets and boobies) as well as pelicans in the genus Pelecanus by Linnaeus in 1758 in the tenth edition of his Systema Naturae. He described the distinguishing characteristics as a straight bill hooked at the tip, linear nostrils, a bare face, and fully webbed feet. The genus Fregata was defined by French naturalist Bernard Germain de Lacépède in 1799. Louis Jean Pierre Vieillot described the genus name Tachypetes in 1816 for the great frigatebird. The genus name Atagen had been coined by German naturalist Paul Möhring in 1752, though this has no validity as it predates the official beginning of Linnaean taxonomy.
In 1874, English zoologist Alfred Henry Garrod published a study where he had examined various groups of birds and recorded which muscles of a selected group of five[c] they possessed or lacked. Noting that the muscle patterns were different among the steganopodes (classical Pelecaniformes), he resolved that there were divergent lineages in the group that should be in separate families, including frigatebirds in their own family Fregatidae. Urless N. Lanham observed in 1947 that frigatebirds bore some skeletal characteristics more in common with Procellariiformes than Pelecaniformes, though concluded they still belonged in the latter group (as suborder Fregatae), albeit as an early offshoot. Martyn Kennedy and colleagues derived a cladogram based on behavioural characteristics of the traditional Pelecaniformes, calculating the frigatebirds to be more divergent than pelicans from a core group of gannets, darters and cormorants, and tropicbirds the most distant lineage. The classification of this group as the traditional Pelecaniformes, united by feet that are totipalmate (with all four toes linked by webbing) and the presence of a gular pouch, persisted until the early 1990s. The DNA–DNA hybridization studies of Charles Sibley and Jon Edward Ahlquist placed the frigatebirds in a lineage with penguins, loons, petrels and albatrosses. Subsequent genetic studies place the frigatebirds as a sister group to the group Suloidea, which comprises the gannets and boobies, cormorants and darters. Microscopic analysis of eggshell structure by Konstantin Mikhailov in 1995 found that the eggshells of frigatebirds resembled those of other Pelecaniformes in having a covering of thick microglobular material over the crystalline shells.
Molecular studies have consistently shown that pelicans, the namesake family of the Pelecaniformes, are actually more closely related to herons, ibises and spoonbills, the hamerkop and the shoebill than to the remaining species. In recognition of this, the order comprising the frigatebirds and Suloidea was renamed Suliformes in 2010.
In 1994 the family name Fregatidae, cited as described in 1867 by French naturalists Côme-Damien Degland and Zéphirin Gerbe, was conserved under Article 40(b) of the International Code of Zoological Nomenclature in preference to the 1840 description Tachypetidae by Johann Friedrich von Brandt. This was because the genus names Atagen and Tachypetes had been synonymised with Fregata before 1961, resulting in the aligning of family and genus names.
The Eocene frigatebird genus Limnofregata comprises birds whose fossil remains were recovered from prehistoric freshwater environments, unlike the marine preferences of their modern-day relatives. They had shorter less-hooked bills and longer legs, and longer slit-like nasal openings. Three species have been described from fossil deposits in the western United States, two—L. azygosternon and L. hasegawai—from the Green River Formation (48–52 million years old) and one—L. hutchisoni—from the Wasatch Formation (between 53 and 55 million years of age). Fossil material indistinguishable from living species dating to the Pleistocene and Holocene has been recovered from Ascension Island (for F. aquila), Saint Helena Island, both in the southern Atlantic Ocean, and also from various islands in the Pacific Ocean (for F. minor and F. ariel).
A cladistic study of the skeletal and bone morphology of the classical Pelecaniformes and relatives found that the frigatebirds formed a clade with Limnofregata. Birds of the two genera have 15 cervical vertebrae, unlike almost all other Ciconiiformes, Suliformes and Pelecaniformes, which have 17. The age of Limnofregata indicates that these lineages had separated by the Eocene.
Living species and infrageneric classification
The type species of the genus is the Ascension frigatebird (Fregata aquila). For many years, the consensus was to recognise only two species of frigatebird, with larger birds as F. aquila and smaller as F. ariel. In 1914 the Australian ornithologist Gregory Mathews delineated five species, which remain valid. Analysis of ribosomal and mitochondrial DNA indicated that the five species had diverged from a common ancestor only recently—as little as 1.5 million years ago. There are two species pairs, the great and Christmas Island frigatebirds, and the magnificent and Ascension frigatebirds, while the fifth species, the lesser frigatebird, is an early offshoot of the common ancestor of the other four species. Three subspecies of the lesser and five subspecies of the great frigatebird are recognised.
|Living species of frigatebirds|
|Common and binomial names||Description||Range|
|Magnificent frigatebird (Fregata magnificens)||With a body length of 89–114 cm (35–45 in), it is the largest species and has the longest bill. The adult male is all-black with a scarlet throat pouch that is inflated like a balloon in the breeding season. Although the feathers are black, the scapular feathers have a purple sheen, in contrast to the male great frigatebird's green sheen. The female is brownish-black, but has a white breast and lower neck sides, a brown band on the wings, and a blueish-grey eye-ring.||Widespread in the tropical Atlantic, it breeds in colonies in trees in Florida, the Caribbean and Cape Verde Islands, as well as along the Pacific coast of the Americas from Mexico to Ecuador, including the Galápagos Islands.|
|Ascension frigatebird (Fregata aquila)||Apart from its smaller size, the adult male is very similar to the magnificent frigatebird. The female is brownish black with a rusty brown mantle and chest, and normally lacks any white patches present on the front of female birds of other species. The occasional female observed with a white belly may be breeding before obtaining the full adult plumage.||It is found on Boatswain Bird Island just off Ascension Island in the tropical Atlantic Ocean, having not bred on the main island since the 1800s.|
|Christmas Island frigatebird (Fregata andrewsi)||The adult male is the only frigatebird species with white on its belly – an egg shaped patch. It is larger with a longer bill than the related great frigatebird. Its upperparts are black with green metallic gloss on the mantle and scapulars. The female has dark upperparts with brown wing bars, a black head with white belly and white collar (sometimes incomplete) around its neck.||Breeds only on Christmas Island in the eastern Indian Ocean.|
|Great frigatebird (Fregata minor)||The adult male has black upperparts with green metallic gloss on the mantle and scapulars. It is completely black underneath with subtle brown barring on the axillaries. The upperparts of the female are dark with lighter brown wing bars. Its head is black with a mottled throat and belly. The neck has a white collar.||Found in tropical Indian and Pacific oceans, as well as one colony—Trindade and Martim Vaz—in the south Atlantic, generally where the water is warmer than 22 °C (72 °F), and breeding on islands and atolls with sufficient vegetation to nest in.|
|Lesser frigatebird (Fregata ariel) (G. R. Gray, 1845)||With a body length of around 75 cm (30 in), it is the smallest species. The adult male has black upperparts with greenish to purple metallic gloss on the mantle and scapulars, and is black underneath except for bold white axillary spurs. The upperparts of the female are dark with lighter wing bars. The head is black while the belly and the neck collar are white.||Tropical and subtropical waters across the Indian and Pacific Oceans. Atlantic race trinitatis was limited to Trindade, off Eastern Brazil, but may now be extinct.|
Description
Frigatebirds are large, slender, mostly black-plumaged seabirds, with the five species similar in appearance to each other. The largest species is the magnificent frigatebird, which reaches 114 cm (45 in) in length, with three of the remaining four almost as large. The lesser frigatebird is substantially smaller, at around 71 cm (28 in) long. Frigatebirds exhibit marked sexual dimorphism; females are larger and up to 25 percent heavier than males, and generally have white markings on their underparts. Frigatebirds have short necks and long, slender hooked bills. Their long narrow wings (male wingspan can reach 2.3 metres (7.5 ft)) taper to points. Their wings have eleven primary flight feathers, with the tenth the longest and the eleventh a vestigial feather only, and 23 secondaries. Their tails are deeply forked, though this is not apparent unless the tail is fanned. The tail and wings give them a distinctive 'W' silhouette in flight. The legs and face are fully feathered. The totipalmate feet are short and weak, the webbing is reduced and part of each toe is free.
The bones of frigatebirds are markedly pneumatic (filled with air), making them very light; the skeleton contributes only 5% to total body weight. The pectoral girdle (shoulder joint) is strong as its bones are fused. The pectoral muscles are well-developed, and weigh as much as the frigatebird's feathers—around half the body weight is made up equally of these muscles and feathers. The males have inflatable red-coloured throat pouches called gular pouches, which they inflate to attract females during the mating season. The gular sac is perhaps the most striking frigatebird feature. These pouches can only deflate slowly, so males that are disturbed will fly off with pouches distended for some time.
Frigatebirds remain in the air and do not settle on the ocean. They produce very little oil from their uropygial glands so their feathers would become sodden if they settled on the surface. In addition, with their long wings relative to body size, they would have great difficulty taking off again.
Distribution and habitat
Frigatebirds are found over tropical oceans, and ride warm updrafts under cumulus clouds. Their range coincides with the availability of food such as flying fish, and with the trade winds, which provide the windy conditions that facilitate their flying. They are rare vagrants to temperate regions and not found in polar latitudes. Adults are generally sedentary, remaining near the islands where they breed. However, male frigatebirds have been recorded dispersing great distances after departing a breeding colony—one male great frigatebird relocated from Europa Island in the Mozambique Channel to the Maldives 4,400 km (2,700 mi) away, and a male magnificent frigatebird flew 1,400 km (870 mi) from French Guiana to Trinidad. Great frigatebirds marked with wing tags on Tern Island in the French Frigate Shoals were found to regularly travel the 873 km (542 mi) to Johnston Atoll, although one was reported in Quezon City in the Philippines. Genetic testing seems to indicate that, despite their high mobility, frigatebirds are faithful to their site of hatching. Young birds may disperse far and wide, with distances of up to 6,000 km (3,700 mi) recorded.
Behaviour and ecology
Having the largest wing-area-to-body-weight ratio of any bird, frigatebirds are essentially aerial. This allows them to soar continuously and only rarely flap their wings. One great frigatebird, being tracked by satellite in the Indian Ocean, stayed aloft for two months. They can fly higher than 4,000 metres in freezing conditions. Like swifts they are able to spend the night on the wing, but they will also return to an island to roost on trees or cliffs. Field observations in the Mozambique Channel found that great frigatebirds could remain on the wing for up to 12 days while foraging. Highly adept in the air, they use their forked tails for steering during flight and make strong deep wing-beats, though they are not suited to sustained flapping flight. Frigatebirds bathe and clean themselves in flight by flying low and splashing at the water surface before preening and scratching afterwards. Conversely, frigatebirds do not swim and with their short legs cannot walk well or take off from the sea easily.
The average life span is unknown, but in common with seabirds such as the wandering albatross and Leach's storm petrel, frigatebirds are long-lived. In 2002, 35 ringed great frigatebirds were recovered on Tern Island in the Hawaiian Islands. Of these, ten were older than 37 years and one was at least 44 years of age.
Despite having dark plumage in a tropical climate, frigatebirds have found ways not to overheat—particularly as they are exposed to full sunlight when on the nest. They ruffle feathers to lift them away from the skin and improve air circulation, and can extend and upturn their wings to expose the hot undersurface to the air and lose heat by evaporation and convection. Frigatebirds also place their heads in the shade of their wings, and males frequently flutter their gular pouches.
Frigatebirds typically breed on remote oceanic islands, generally in colonies of up to 5000 birds. Within these colonies, they most often nest in groups of 10 to 30 (or rarely 100) individuals. Breeding can occur at any time of year, often prompted by commencement of the dry season or plentiful food.
Frigatebirds have the most elaborate mating displays of all seabirds. The male birds take up residence in the colony in groups of up to thirty individuals. They display to females flying overhead by pointing their bills upwards, inflating their red throat pouches and vibrating their outstretched wings, showing the lighter wing undersurfaces in the process. They produce a drumming sound by vibrating their bills together and sometimes give a whistling call. The female descends to join a male she has chosen and allows him to take her bill in his. The pair also engages in mutual "head-snaking".
After copulation it is generally the male who gathers sticks and the female that constructs the loosely woven nest. The nest is subsequently covered with (and cemented by) guano. Frigatebirds prefer to nest in trees or bushes, though when these are not available they will nest on the ground. A single white egg that weighs up to 6–7% of the mother's body mass is laid, and is incubated in turns by both birds for 41 to 55 days. The altricial chicks are naked on hatching and develop a white down. They are continuously guarded by the parents for the first 4–6 weeks and are fed on the nest for 5–6 months. Both parents take turns feeding for the first three months, after which the male's attendance trails off, leaving the mother to feed the young for another six to nine months on average. The chicks feed by reaching their heads into their parents' throats and eating the part-regurgitated food. It takes so long to rear a chick that frigatebirds generally breed every other year.
The duration of parental care in frigatebirds is among the longest for birds, rivalled only by the southern ground hornbill and some large accipitrids. Frigatebirds take many years to reach sexual maturity. A study of great frigatebirds in the Galapagos Islands found that they only bred once they had acquired the full adult plumage. This was attained by female birds when they were eight to nine years of age and by male birds when they were ten to eleven years of age.
Frigatebirds' feeding habits are pelagic, and they may forage up to 500 km (310 mi) from land. They do not land on the water but snatch prey from the ocean surface using their long, hooked bills. They mainly catch small fish such as flying fish, particularly the genera Exocoetus and Cypselurus, that are driven to the surface by predators such as tuna and dolphinfish, but they will also eat cephalopods, particularly squid. Menhaden of the genus Brevoortia can be an important prey item where common, and jellyfish and larger plankton are also eaten. Frigatebirds have learned to follow fishing vessels and take fish from holding areas. Conversely tuna fishermen fish in areas where they catch sight of frigatebirds due to their association with large marine predators. Frigatebirds also at times prey directly on eggs and young of other seabirds, including boobies, petrels, shearwaters and terns, in particular the sooty tern.
Frigatebirds will rob other seabirds such as boobies, particularly the red-footed booby, tropicbirds, shearwaters, petrels, terns, gulls and even ospreys of their catch, using their speed and manoeuvrability to outrun and harass their victims until they regurgitate their stomach contents. They may either assail their targets after they have caught their food or circle high over seabird colonies waiting for parent birds to return laden with food. Although frigatebirds are renowned for their kleptoparasitic feeding behaviour, kleptoparasitism is not thought to play a significant part of the diet of any species, and is instead a supplement to food obtained by hunting. A study of great frigatebirds stealing from masked boobies estimated that the frigatebirds could at most obtain 40% of the food they needed, and on average obtained only 5%.
Unlike most other seabirds, frigatebirds drink freshwater when they come across it, by swooping down and gulping with their bills.
Frigatebirds are unusual among seabirds in that they often carry blood parasites. Blood-borne protozoa of the genus Haemoproteus have been recovered from four of the five species. Bird lice of the ischnoceran genus Pectinopygus and amblyceran genus Colpocephalum and species Fregatiella aurifasciata have been recovered from magnificent and great frigatebirds of the Galapagos Islands. Frigatebirds tended to have more parasitic lice than did boobies analysed in the same study.
A heavy chick mortality at a large and important colony of the magnificent frigatebird, located on Île du Grand Connétable off French Guiana, was recorded in summer 2005. Chicks showed nodular skin lesions, feather loss and corneal changes, with around half the year's progeny perishing across the colony. An alphaherpesvirus was isolated and provisionally named Fregata magnificens herpesvirus, though it was unclear whether it caused the outbreak or affected birds already suffering malnutrition.
Status and conservation
Populations and threats
Two of the five species are considered at risk. In 2003, a survey of the four colonies of the critically endangered Christmas Island frigatebird counted 1200 breeding pairs. As frigatebirds normally breed every other year, the total adult population was estimated to lie between 1800 and 3600 pairs. Larger numbers formerly bred on the island, but the clearance of breeding habitat during World War II and dust pollution from phosphate mining have contributed to the decrease. The population of the vulnerable Ascension frigatebird has been estimated at around 12,500 individuals. The birds formerly bred on Ascension Island itself, but the colonies were exterminated by feral cats introduced in 1815. The birds continued to breed on a rocky outcrop just off the shore of the island. A program conducted between 2002 and 2004 eradicated the feral cats, and a few birds have returned to nest on the island.
The other three species are classified by the International Union for Conservation of Nature as being of Least Concern. The populations of all three are large, with that of the magnificent frigatebird thought to be increasing, while those of the great and lesser frigatebirds are decreasing. Monitoring populations of all species is difficult due to their movements across the open ocean and low reproductivity. The status of the Atlantic populations of the great and lesser frigatebirds is unknown; they are possibly extinct.
As frigatebirds rely on large marine predators such as tuna for their prey, overfishing threatens to significantly reduce food availability and jeopardise whole populations. Because frigatebirds nest in large dense colonies in small areas, they are also vulnerable to local disasters that could wipe out the rare species or significantly impact the widespread ones.
In Nauru, catching frigatebirds is an important tradition that is still practised to some degree. Donald W. Buden writes: "Birds typically are captured by slinging the weighted end of a coil of line in front of an approaching bird attracted to previously captured birds used as decoys. In a successful toss, the line becomes entangled about the bird's wing and bringing [sic] it to ground." Marine birds including frigatebirds were once harvested for food on Christmas Island, but this practice ceased in the late 1970s. Eggs and young of magnificent frigatebirds were taken and eaten in the Caribbean. Great frigatebirds were eaten in the Hawaiian Islands and their feathers used for decoration.
Cultural significance
The great frigatebird was venerated by the Rapa Nui people on Easter Island; carvings of the birdman Tangata manu depict him with the characteristic hooked beak and throat pouch. Its incorporation into local ceremonies suggests that the now-vanished species was extant there between the 1800s and 1860s.
Maritime folklore around the time America was discovered held that frigatebirds were birds of good omen as their presence meant land was near.
There are anecdotal reports of tame frigatebirds being kept across Polynesia and Micronesia in the Pacific. A bird that had come from one island and had been taken elsewhere could be reliably trusted to return to its original home, hence would be used as a speedy way to relay a message there. There is firmer evidence of this practice taking place in the Gilbert Islands and Tuvalu.
Notes
- Du Tertre wrote: "Loyseau que les habitans des Indes appellent Fregate (à cause de la vistesse de son vol) n'a pas le corp plus gros qu'une poule ..." ("The bird that the inhabitants of the Indies call "frigate" (because of the speed of its flight) has a body no larger than a chicken's.")
- Columbus's journal survives in a version recorded by Bartholomé de las Casas in the 1530s. In English the entry reads: "They saw a bird that is called a frigatebird, which makes the boobies throw up what they eat in order to eat it herself, and she does not sustain herself on anything else. It is a seabird, but does not alight on the sea nor depart from land 20 leagues. There are many of these on the islands of Cape Verde."
- ambiens, femorocaudal, accessory femorocaudal, semitendinosus, and accessory semitendinosus
References
- Shorter Oxford English Dictionary. Oxford, UK: Oxford University Press. 2007. ISBN 0-19-920687-2.
- Albin, Eleazar (1738). A Natural History of the Birds. Volume 3. London: Printed for the author and sold by William Innys. p. 75 and plate 80 on previous page.
- Jobling, James A. (2010). The Helm Dictionary of Scientific Bird Names. London, United Kingdom: Christopher Helm. p. 164. ISBN 978-1-4081-2501-4.
- Du Tertre, Jean-Baptiste (1667). Histoire générale des Antilles habitées par les François (in French). Volume 2. Paris: Thomas Joly. p. 269, Plate p. 246.
- Hartog, J.C. den (1993). "An early note on the occurrence of the Magnificent Frigate Bird, Fregata magnificens Mathews, 1914, in the Cape Verde Islands: Columbus as an ornithologist". Zoologische Mededelingen. 67: 361–64.
- Dunn, Oliver; Kelley, James E. Jr (1989). The Diario of Christopher Columbus's First Voyage to America, 1492–1493. Norman, Oklahoma: University of Oklahoma Press. p. 45. ISBN 0-8061-2384-2.
- Dampier, William (1699). An Account of a New Voyage Around the World. London, United Kingdom: James Knapton. p. 49.
- Linnaeus, Carolus (1758). Systema Naturae per Regna Tria Naturae, Secundum Classes, Ordines, Genera, Species, cum Characteribus, Differentiis, Synonymis, Locis. Tomus I. Editio Decima, Reformata (in Latin). Holmiae: Laurentii Salvii. pp. 132–34.
Rostrum edentulum, rectum: apice adunco, unguiculato. Nares lineares. Facies nuda. Pedes digitís omnibus palmatis.
- Mayr, Ernst; Cottrell, G. William, eds. (1979). Check-list of Birds of the World. Volume 1 (2nd ed.). Cambridge, Massachusetts: Museum of Comparative Zoology. p. 159.
- Lacépède, Bernard Germain de (1799). "Tableau des sous-classes, divisions, sous-division, ordres et genres des oiseaux". Discours d'ouverture et de clôture du cours d'histoire naturelle (in French). Paris: Plassan. p. 15. Page numbering starts at one for each of the three sections.
- Australian Biological Resources Study (26 August 2014). "Family Fregatidae Degland & Gerbe, 1867". Australian Faunal Directory. Canberra, Australian Capital Territory: Department of the Environment, Water, Heritage and the Arts, Australian Government. Archived from the original on 2014-12-07. Retrieved 30 November 2014.
- Garrod, Alfred Henry (1874). "On certain muscles of birds and their value in classification". Proceedings of the Zoological Society of London. 42 (1): 111–23. doi:10.1111/j.1096-3642.1874.tb02459.x.
- Lanham, Urless N. (1947). "Notes on the phylogeny of the Pelecaniformes" (PDF). The Auk. 64 (1): 65–70. doi:10.2307/4080063. JSTOR 4080063.
- Kennedy, Martyn; Spencer, Hamish G.; Gray, Russell D. (1996). "Hop, step and gape: do the social displays of the Pelecaniformes reflect phylogeny?" (PDF). Animal Behaviour. 51 (2): 273–91. doi:10.1006/anbe.1996.0028.
- Hedges, S. Blair; Sibley, Charles G. (1994). "Molecules vs. morphology in avian evolution: the case of the "pelecaniform" birds". PNAS. 91 (21): 9861–65. doi:10.1073/pnas.91.21.9861. PMC 44917.
- Sibley, Charles Gald; Ahlquist, Jon Edward (1990). Phylogeny and classification of birds. New Haven, Connecticut: Yale University Press. ISBN 978-0-300-04085-2.
- Hackett, Shannon J.; Kimball, Rebecca T.; Reddy, Sushma; Bowie, Rauri C. K.; Braun, Edward L.; Braun, Michael J.; Chojnowski, Jena L.; Cox, W. Andrew; Han, Kin-Lan; Harshman, John; Huddleston, Christopher J.; Marks, Ben D.; Miglia, Kathleen J.; Moore, William S.; Sheldon, Frederick H.; Steadman, David W.; Witt, Christopher C.; Yuri, Tamaki (2008). "A phylogenomic study of birds reveals their evolutionary history". Science. 320 (5884): 1763–68. doi:10.1126/science.1157704. PMID 18583609.
- Smith, Nathan D. (2010). "Phylogenetic analysis of Pelecaniformes (Aves) based on osteological data: Implications for waterbird phylogeny and fossil calibration studies". PLoS ONE. 5 (10): e13354. doi:10.1371/journal.pone.0013354. PMC 2954798. PMID 20976229.
- Mikhailov, Konstantin E. (1995). "Eggshell structure in the shoebill and pelecaniform birds: comparison with hamerkop, herons, ibises and storks". Canadian Journal of Zoology. 73 (9): 1754–70. doi:10.1139/z95-207.
- Chesser, R. Terry; Banks, Richard C.; Barker, F. Keith; Cicero, Carla; Dunn, Jon L.; Kratter, Andrew W.; Lovette, Irby J.; Rasmussen, Pamela C.; Remsen, J.V. Jr; Rising, James D.; Stotz, Douglas F.; Winker, Kevin (2010). "Fifty-First Supplement to the American Ornithologists' Union Check-List of North American Birds". The Auk. 127 (3): 726–44. doi:10.1525/auk.2010.127.3.726.
- "Taxonomy Version 2". IOC World Bird List: Taxonomy Updates – v2.6 (23 October 2010). 2010. Retrieved 29 November 2014.
- Bock, Walter J. (1994). History and nomenclature of avian family-group names. Bulletin of the American Museum of Natural History Issue 222. pp. 131, 166.
- Mayr, Gerald (2009). Paleogene Fossil Birds. New York: Springer Science & Business Media. pp. 63–64. ISBN 978-3-540-89628-9.
- Stidham, Thomas A. (2014). "A new species of Limnofregata (Pelecaniformes: Fregatidae) from the Early Eocene Wasatch Formation of Wyoming: implications for palaeoecology and palaeobiology". Palaeontology. 58: 1–11. doi:10.1111/pala.12134.
- Ashmole, Nelson Philip (1963). "Sub-fossil bird remains on Ascension Island". Ibis. 103: 382–89. doi:10.1111/j.1474-919X.1963.tb06761.x.
- Olson, Storrs L. (1975). "Paleornithology of St. Helena Island, South Atlantic Ocean" (PDF). Smithsonian Contributions to Paleobiology. 23: 1–49. doi:10.5479/si.00810266.23.1.
- James, Helen F. (1987). "A late Pleistocene avifauna from the island of Oahu, Hawaiian Islands" (PDF). Documents des laboratoires de Géologie, Lyon. 99: 221–30.
- Steadman, David W. (2006). Extinction and biogeography of tropical Pacific birds. Chicago, Illinois: University of Chicago Press. ISBN 978-0-226-77142-7.
- Kennedy, Martyn; Spencer, Hamish G. (2004). "Phylogenies of the frigatebirds (Fregatidae) and tropicbirds (Phaethonidae), two divergent groups of the traditional order Pelecaniformes, inferred from mitochondrial DNA sequences". Molecular Phylogenetics and Evolution. 31 (1): 31–38. doi:10.1016/j.ympev.2003.07.007. PMID 15019606.
- Australian Biological Resources Study (29 July 2014). "Genus Fregata Lacépède, 1799". Australian Faunal Directory. Canberra, Australian Capital Territory: Department of the Environment, Water, Heritage and the Arts, Australian Government. Archived from the original on 2014-12-05. Retrieved 30 November 2014.
- Mathews, Gregory M. (1914). "On the species and subspecies of the genus Fregata". Australian Avian Record. 2 (6): 117–21.
- Gill, Frank; Donsker, David (23 April 2015). "Hamerkop, Shoebill, Pelicans, Boobies & Cormorants". IOC World Bird List. International Ornithologists' Committee. Retrieved 10 June 2015.
- Orta, Jaume; Christie, D.A.; Garcia, E.F.J.; Boesman, P. (2014). "Magnificent Frigatebird (Fregata magnificens)". In del Hoyo, J.; Elliott, A.; Sargatal, J.; Christie, D.A.; de Juana, E. Handbook of the Birds of the World Alive. Barcelona, Spain: Lynx Edicions. Retrieved 27 May 2015. (subscription required)
- BirdLife International (2014). "Fregata magnificens". IUCN Red List of Threatened Species. Version 2014.3. International Union for Conservation of Nature. Retrieved 16 May 2015.
- Orta, Jaume; Christie, D.A.; Garcia, E. F. J.; Jutglar, F.; Boesman, P. (2014). "Ascension Frigatebird (Fregata aquila)". In del Hoyo, J.; Elliott, A.; Sargatal, J.; Christie, D. A.; de Juana, E. Handbook of the Birds of the World Alive. Barcelona, Spain: Lynx Edicions. Retrieved 29 December 2014. (subscription required)
- BirdLife International (2014). "Fregata aquila". IUCN Red List of Threatened Species. Version 2014.3. International Union for Conservation of Nature. Retrieved 31 December 2014.
- James, David J. (2004). "Identification of Christmas Island, Great and Lesser Frigatebirds" (PDF). BirdingASIA. 1: 22–38. Archived from the original (PDF) on 2015-05-27.
- BirdLife International (2014). "Fregata andrewsi". IUCN Red List of Threatened Species. Version 2014.3. International Union for Conservation of Nature. Retrieved 31 December 2014.
- BirdLife International (2014). "Fregata minor". IUCN Red List of Threatened Species. Version 2014.3. International Union for Conservation of Nature. Retrieved 16 May 2015.
- Orta, Jaume; Garcia, E.F.J.; Kirwan, G.M.; Boesman, P. "Lesser Frigatebird (Fregata ariel)". In del Hoyo, J.; Elliott, A.; Sargatal, J.; Christie, D.A.; de Juana, E. Handbook of the Birds of the World Alive. Lynx Edicions. Retrieved 30 November 2014. (subscription required)
- Alves, R.J.V.; da Silva, N.G.; Aguirre-Muñoz, A. (2011). "Return of endemic plant populations on Trindade Island, Brazil, with comments on the fauna" (PDF). In Veitch, CR; Clout, MN; Towns, DR. Island invasives: eradication and management : proceedings of the International Conference on Island Invasives. Gland, Switzerland: IUCN. pp. 259–263. OCLC 770307954.
- Orta, Jaume. "Family Fregatidae, Frigatebirds". In del Hoyo, J; Elliott, A.; Sargatal, J.; Christie, D.A.; de Juana, E. Handbook of the Birds of the World Alive. Lynx Edicions. Retrieved 13 May 2015.(subscription required)
- Khanna, D. R. (2005). Biology of Birds. New Delhi, India: Discovery Publishing House. pp. 317–19. ISBN 978-81-7141-933-3.
- O'Brien, Rory M. (1990). "Family Fregatidae frigatebirds" (PDF). In Marchant, S.; Higgins, P.G. Handbook of Australian, New Zealand & Antarctic Birds. Volume 1: Ratites to ducks; Part B, Australian pelican to ducks. Melbourne, Victoria: Oxford University Press. p. 912. ISBN 978-0-19-553068-1.
- Weimerskirch, Henri; Le Corre, Matthieu; Marsac, Francis; Barbraud, Christophe; Tostain, Olivier; Chastel, Olivier (2006). "Postbreeding movements of frigatebirds tracked with satellite telemetry". The Condor. 108 (1): 220–25. doi:10.1650/0010-5422(2006)108[0220:PMOFTW]2.0.CO;2.
- Dearborn, D.; Anders, A.; Schreiber, E.; Adams, R.; Muellers, U. (2003). "Inter island movements and population differentiation in a pelagic seabird". Molecular Ecology. 12 (10): 2835–43. doi:10.1046/j.1365-294X.2003.01931.x. PMID 12969485.
- Weimerskirch, H.; Bishop, C.; Jeanniard-du-Dot, T.; Prudor, A.; Sachs, G. (2016). "Frigate birds track atmospheric conditions over months-long transoceanic flights". Science. 353 (6294): 74–78. doi:10.1126/science.aaf4374. PMID 27365448.
- Weimerskirch, Henri; Chastel, Olivier; Barbraud, Christophe; Tostain, Olivier (2003). "Frigatebirds ride high on thermals" (PDF). Nature. 421 (6921): 333–34. doi:10.1038/421333a. PMID 12540890.
- Weimerskirch, Henri; Le Corre, Matthieu; Jaquemet, Sébastien; Potier, Michel; Marsac, Francis (2004). "Foraging strategy of a top predator in tropical waters: great frigatebirds in the Mozambique Channel" (PDF). Marine Ecology Progress Series. 275: 297–308. doi:10.3354/meps275297.
- Juola, Frans A.; Haussmann, Mark F.; Dearborn, Donald C.; Vleck, Carol M. (2006). "Telomere shortening in a long-lived marine bird: cross-sectional analysis and test of an aging tool". The Auk. 123 (3): 775–83. doi:10.1642/0004-8038(2006)123[775:TSIALM]2.0.CO;2.
- Skutch, Alexander Frank; Gardner, Dana (illustrator) (1987). Helpers at Birds' Nests : a worldwide survey of cooperative breeding and related behaviour. Iowa City: University of Iowa Press. pp. 69–71. ISBN 0-87745-150-8.
- Valle, Arlos A.; de Vries, Tjitte; Hernández, Cecilia (2006). "Plumage and sexual maturation in the Great frigatebird Fregata minor in the Galapagos Islands" (PDF). Marine Ornithology. 34: 51–59.
- Weimerskirch, Henri; Le Corre, Matthieu; Kai, Emilie Tew; Marsac, Francis (2010). "Foraging movements of great frigatebirds from Aldabra Island: Relationship with environmental variables and interactions with fisheries". Progress in Oceanography. 86 (1–2): 204–13. doi:10.1016/j.pocean.2010.04.003.
- Schreiber, Elizabeth A.; Burger, Joanne (2001). Biology of Marine Birds. Boca Raton, Florida: CRC Press. ISBN 0-8493-9882-7.
- Vickery, J.A.; Brooke, M. de L. (1994). "The kleptoparasitic interactions between Great Frigatebirds and Masked Boobies on Henderson Island, South Pacific" (PDF). Condor. 96 (2): 331–40. doi:10.2307/1369318. JSTOR 1369318.
- Merino, Santiago; Hennicke, Janos; Martínez, Javier; Ludynia, Katrin; Torres, Roxana; Work, Thierry M.; Stroud, Stedson; Masello, Juan F.; Quillfeldt, Petra (2012). "Infection by Haemoproteus parasites in four species of frigatebirds and the description of a new species of Haemoproteus (Haemosporida: Haemoproteidae)" (PDF). Journal of Parasitology. 98 (2): 388–97. doi:10.1645/GE-2415.1. PMID 21992108.
- Rivera-Parra, Jose L.; Levin, Iris I.; Parker, Patricia G. (2014). "Comparative ectoparasite loads of five seabird species in the Galapagos Islands". Journal of Parasitology. 100 (5): 569–77. doi:10.1645/12-141.1. PMID 24911632.
- de Thoisy, Benoit; Lavergne, Anne; Semelin, Julien; Pouliquen, Jean-François; Blanchard, Fabian; Hansen, Eric; Lacoste, Vincent (2009). "Outbreaks of disease possibly due to a natural avian herpesvirus infection in a colony of young magnificent frigatebirds (Fregata magnificens) in French Guiana". Journal of Wildlife Diseases. 45 (3): 802–07. doi:10.7589/0090-3558-45.3.802. PMID 19617492.
- James, David J.; McAllan, Ian A.W. (2014). "The birds of Christmas Island, Indian Ocean: A review" (PDF). Australian Field Ornithology. 31 (Supplement): S24 Table 3, S64–S67.
- Ratcliffe, Norman; Pelembe, Tara; White, Richard (2008). "Resolving the population status of Ascension Frigatebird Fregata aquila using a 'virtual ecologist' model" (PDF). Ibis. 150 (2): 300–306. doi:10.1111/j.1474-919X.2007.00778.x.
- Ratcliffe, Norman; Bella, Mike; Pelembe, Tara; Boyle, Dave; Benjamin, Raymond; White, Richard; Godley, Brendan; Stevenson, Jim; Sanders, Sarah (2010). "The eradication of feral cats from Ascension Island and its subsequent recolonization by seabirds" (PDF). Oryx. 44 (1): 20–29. doi:10.1017/S003060530999069X.
- McKie, Robin (8 December 2012). "Frigatebird returns to nest on Ascension for first time since Darwin". The Observer. Retrieved 10 December 2012.
- Fisher, Ian (23 January 2014). "Ascension frigatebird - the return continues". Royal Society for the Protection of Birds. Retrieved 8 December 2014.
- BirdLife International (2014). "Fregata ariel". IUCN Red List of Threatened Species. Version 2014.3. International Union for Conservation of Nature. Retrieved 16 May 2015.
- Buden, Donald W. (2008). "The birds of Nauru" (PDF). Notornis. 55: 8–19.
- Barwell, Graham (2013). Albatross. London, United Kingdom: Reaktion Books. p. 68. ISBN 978-1-78023-214-0.
- Kjellgren, Eric; Van Tilburg, JoAnne; Kaeppler, Adrienne Lois (2001). Splendid Isolation: Art of Easter Island. New York, New York: Metropolitan Museum of Art. pp. 44–45. ISBN 978-1-58839-011-0.
- Fischer, Steven Roger (1967). Rongorongo: The Easter Island Script: History, Traditions, Texts. Oxford studies in anthropological linguistics. 14. Oxford, United Kingdom: Clarendon Press. p. 489. ISBN 978-0-19-823710-5.
- Lewis, David (1994). We, the Navigators: The Ancient Art of Landfinding in the Pacific. University of Hawaii Press. p. 208. ISBN 978-0-8248-1582-0.
|Wikimedia Commons has media related to Fregatidae.|
- Frigatebird videos, photos and sounds on the Internet Bird Collection |
What is Disk Access? An Easy-to-Understand Explanation of the Basic Concepts of Computer Data Processing
In the world of computer data processing, efficient and reliable data access is a crucial aspect. Today, we will delve into the concept of “disk access” and explore its significance in computer systems. We will walk you through the basics of disk access, its components, and its impact on computer data processing. So, let’s get started!
What is Disk Access?
Disk access refers to the process of reading or writing data to or from a computer’s disk storage. It involves retrieving or storing information on a hard disk drive (HDD) or a solid-state drive (SSD), both of which are commonly used in modern computer systems. Disk access plays a vital role in various operations, such as loading programs, saving files, and retrieving data from storage devices.
Disk Access Components:
To better understand disk access, let’s explore its main components:
1. Disk Drive: The physical device responsible for storing and retrieving data is known as the disk drive. It comprises one or more platters coated with a magnetic material on which data is written or read. The disk drive also includes an actuator arm that positions read/write heads to access specific data tracks on the platters.
2. File System: The file system acts as an intermediary layer between applications and the physical disk. It manages how data is organized, stored, and retrieved on the disk. Popular file systems include NTFS, FAT32, and ext4, each with its own advantages and limitations.
3. Read/Write Operations: Disk access involves two fundamental operations: reading and writing data. When data is read from the disk, the read/write heads locate the desired data on the platters, and the retrieved information is transferred to the computer’s memory for processing. Writing data, on the other hand, involves the process of storing information onto the disk drive.
The Impact of Disk Access on Computer Data Processing
Efficient disk access is critical for ensuring optimal performance and responsiveness of computer systems. Slow disk access can result in system lag, increased application load times, and even data loss in some cases. To mitigate such issues, various techniques and technologies have been developed:
1. Caching: Caching involves the temporary storage of frequently accessed data in a faster access medium such as RAM. By keeping frequently used data closer to the processing unit, caching reduces the number of disk access operations required, thereby improving overall system performance.
2. Disk Defragmentation: Over time, data on a disk can become fragmented, meaning it is scattered across different physical locations. Disk defragmentation is a process that rearranges the data on the disk, organizing it into contiguous blocks. This optimization technique reduces the time required for disk access, resulting in faster data retrieval.
3. Solid-State Drives (SSDs): Unlike traditional hard disk drives (HDDs), SSDs use flash memory technology, which offers faster read and write speeds. This advancement translates to significantly improved disk access times, boosting overall system performance.
In conclusion, disk access is a fundamental aspect of computer data processing. It involves reading and writing data from and to storage devices, such as hard disk drives (HDDs) and solid-state drives (SSDs). Understanding the components and optimizing disk access can greatly enhance the performance and responsiveness of computer systems. Hence, it becomes crucial to implement efficient disk access strategies and technologies to ensure smooth and efficient data processing. |
We already know that a simple closed curve that is made up of more than three line segments is called a polygon. Every polygon has a set of angles that are a result of the line segments involved in the closed figure. In the chapter below we shall learn about the angle sum property of polygons, which indirectly depends on the number of sides in that polygon.
Angle Sum Property of Polygons
We have learned about the angle sum property in triangles! According to the angle sum property of a triangle, the sum of all the angles in a triangle is 180º. Since a triangle has three sides, we find the measurements of the angles accordingly.
Let’s recap the method. For example, if there is a triangle with angles 45º and 60º. The third angle is unknown. For finding the third angle we follow the given system of calculation:
A + B + C = 180º
A = 45º; B = 60º; C =?
45 + 60 + ? = 180º
? = 180º – 105º
? = 75º
So the third angle is 75º. Using the above-shown system of calculations we can find out the unknown angle in a triangle, but what about a polygon. Similarly, according to the angle sum property of a polygon, the sum of angles depends on the number of triangles in the polygon.
According to the Angle sum property of polygons, the sum of all the angles in a polygon is the multiple the number of triangles constituting the polygon. We use the angle sum property of triangles while calculating the unknown angles of a polygon.
Browse more Topics under Understanding Quadrilaterals
- Polygon and Its Types
- Properties of Trapezium and Kite
- Properties of Parallelogram, Rhombus, Rectangle and Square
Relation of Angle Sum Property of Triangles and Polygons
When we analyze a polygon we come to know that it is a compilation of many triangles. Let’s see how? Take a polygon and draw diagonals that divide the structure into triangles. The number of triangles formed from this division gives us the idea of the total sum of angles in a polygon. See the figure below,
In the figures above, a is a hexagon while b is a pentagon. Hexagon when divided into diagonals, constitutes four triangles. The sum of angles in a triangle is 180 °. This means that the sum of angles in a hexagon is equal to 4 × 180° that is 720°.
Similarly, in figure b which is a pentagon, the number of triangles constituting the shape is three, so the sum of angles in a polygon shall be 3× 180 which equals 540°. Likewise, for a heptagon, the number of triangles formed after dividing into diagonals is five hence the sum of angles in a heptagon shall be 5 × 180° which equals 900°.
In the above discussion, one thing worth noting is that the number of angles = number of sides – 2. So for every polygon with x number of sides, the number of triangles is 2 less than the number of sides.
Polygons can have any number of sides greater than three, and when we find the sum of angles in a polygon we study the number of triangles constituting the closed shape. It is only after the study of the number of triangles, we can find the sum of angles in a polygon.
Solved Example for You
Question 1: Find the sum of angles for the following polygons
- for a polygon with 9 sides, the number of angles is 7. Therefore the sum of angles in a triangle shall be 7 × 180 = 1260°
- for a polygon with 8 sides, the number of angles is 6. Therefore the sum of angles in a triangle shall be 6 × 180 = 1080°
Question 3: What is the formula of angle sum property?
Answer: The sum of interior angles in a triangle refers to 180°. In order to find the sum of interior angles of a polygon we need to multiply the number of triangles in the polygon by 180°. Further, the sum of exterior angles of a polygon will be 360°. In other words, the formula to calculate the size of an exterior angle will be exterior angle of a polygon = 360 ÷ number of sides
Question 4: What is angle sum property of quadrilateral?
Answer: As per the angle sum property of a quadrilateral, the sum of all the four interior angles will be 360 degrees.
Question 5: What is the sum of parallelogram?
Answer: Firstly, please note that sum of the internal angles of any four-sided figure whether regular or irregular will be 360 degrees. However, regular figures like square, rectangle, parallelogram, or rhombus consist of an additional characteristic that the sum of any two adjacent angles is 180 degrees.
Question 6: What is the sum of all angles in a triangle?
Answer: When we look at a Euclidean space we see that the sum of measures of these three angles of any triangle is consistently equal to the straight angle which we also express as 180 °, π radians, two right angles, or a half-turn. However, it was not known for a long period whether other geometries exist having different sums. |
Arithmetic operators are used in mathematical operations:
The addition operator (+) adds numbers
var result = 1 + 2; // 3
The subtraction operator (-) subtracts the right number from the left
var result = 3 - 2; // 1
The multiplication operator (*) multiplies numbers
var result = 2 * 3; // 6
The division operator (/) divides one number by another
var result = 8 / 4; // 2
The modulo operator (%) returns the remainder after division of one number by another
var result = 8 / 5; // 3
In addition to the operators listed above, there are unary operators which modify the value of one operand:
+ (Unary plus)
The unary plus operator (+) converts a value to a number. It works in the same way as the Number() casting function. If the value cannot be converted to a valid number, result of the conversion is NaN.
var result = +"3.4"; // 3.4 var result2 = +false; // 0 var result3 = +"abc"; // NaN
– (Unary negation)
The unary negation operator (-) negates the value of the operand. Used on non-numeric values it works like the unary plus.
var x = 6; var result = -x; // -6 var result = -"def"; // NaN
The increment operator (++) adds 1 to its operand. Pre-increment version of operator (++x) increments its operand by 1 and returns the new value in the expression. Post-increment version of operator (x++) also increments its operand by 1, however returns its original value in the expression.
var x = 2; var y = ++x; // y = 3, x = 3 var t = 2; var u = t++; // u = 2, t = 3
The decrement operator(–) subtracts 1 from its operand. It works in an analogous manner like increment operator.
var x = 2; var y = --x; // y = 1, x = 1 var t = 2; var u = t--; // u = 2, t = 1
= (Assignment operator)
The assignment (=) operator is used to assign a value to the variable or property.
x = 2
There are other assignment operators that combine an operation with assignment. Their brief overview is in the table below:
|+= (Addition assignment)||x += y||x = x + y|
|-= (Subtraction assignment)||x -= y||x = x – y|
|*= (Multiplication assignment)||x *= y||x = x * y|
|/= (Division assignment)||x /= y||x = x / y|
|%= (Modulo assignment)||x %= y||x = x % y|
Comparison operators (also known as relational operators) compare two values and return true or false depending on the relationship between them. Results of comparison are often used to control the flow of program. If the two operands are different types, interpreter attempts to convert them to suitable type.
The equality operator (==) checks if two values are the same. If the operands are of different types, it performs conversions, to determine equality. It returns true if both operands are equal.
var x = 3; x == 3 // true x == '3' // true, conversion was done x == 3.1 // false y = 0; y == false // true, conversion was done
=== (Strict equality)
The strict equality operator (===) also checks if two values are the same. However it returns true if both operands are equal and of the same type.
var x = 3; x === 3 // true x === '3' // false y = 0; y === false // false
!= (Inequality operator)
The inequality operator (!=) is opposite to equality operator and returns true if operands are not equal.
var x = 4; x != 4 // false x != '4' // false x != 5 // true y = 0 y != false // false y != true // true
!== (Strict inequality operator)
The strict inequality operator (!==) is opposite to strict equality operator and returns true if operands are not equal or are equal but of different type.
var x = 4; x !== 4 // false x !== '4' // true x !== 5 // true y = 0 y !== false // true y !== true // true
Several next comparison operators check relative order of its operands. It can be numerical or alphabetical order. Operands that are different types than number or string are converted to one of them.
< (Less than)
The less than operator (<) returns true if the first operand is less than the second.
var x = 5; x < 6 // true x < '6' // true x < 4 // false
> (Greater than)
The greater than operator (>) returns true if the first operand is greater than the second.
var x = 5; x > 4 // true x > '4' // true x > 6 // false
<= (Less than or equal)
The less than or equal operator (<=) returns true if the first operand is less than or equal to the second.
var x = 5; x <= 6 // true x <= 5 // true x <= 4 // false
<= (Greater than or equal)
The greater than operator or equal (<=) returns true if the first operand is greater than or equal to the second.
var x = 5; x >= 4 // true x >= 5 // true x >= 6 // false
Logical operators return result of logical operation. They often occur in conjunction with relational operators.
&& (Logical AND)
The logical AND operator (&&) returns true if both its operands are true (or are converted to true). If one or both operands are false (are converted to false) it returns false.
var x = true, y = true, z = false; x && y // true x && 5 // true x && z // false z && 0 // false x == true && z == false // true
|| (Logical OR)
The logical OR operator (||) returns true if one or both its operands are true. If both operands are false it returns false.
var x = true, y = true, z = false; x || y // true x || z // true x == true || z == true // true z && 0 // false
! (Logical NOT)
The logical NOT operator (!) is a unary operator placed before a single operand. It returns true if its operand can be converted to false. Otherwise it returns false
var x = 2, y = 0; !x // false !y // true
& (Bitwise AND)
The bitwise AND operator (&) returns 1 in each bit position if the corresponding bits of both operands are 1.
| (Bitwise OR)
The bitwise OR operator (I) returns 1 in each bit position if the corresponding bits of one or both operands are 1.
^ (Bitwise XOR)
The bitwise XOR operator (^) returns 1 in each bit position if the corresponding bits of operands have unequal values (one of them is 1 but not both).
~ (Bitwise NOT)
The bitwise NOT operator (~) is a unary operator, reverse all bits in its operand, converts all 0 bits to 1 and 1 bits to 0.
The concatenation operator (+) concatenates string values and returns a new string which is combination of operands.
If only one operand is string value, type conversion is performed. String concatenation has priority over addition.
"xyz" + " " + "qwe"; // "xyz qwe" 2 + "3" // 23 "3" + 4 // 34 4 + 5 // 9
The conditional (ternary) operator (?:) is operator that uses three operands. The first operand is condition and evaluates to boolean value. If condition is true, the value of second operand is returned, otherwise it is returned the value of third operand. It is a compact equivalent of if-else statement.
It is useful because it can be used in constructions in which the use of the normal if-else syntax is not possible.
var type = (x <= 90) ? 'light' : 'heavy'; |
Fact fluency is important for growing mathematicians. Developing students’ confidence in basic facts sets them up for success when using higher-level maths skills.
Use MathPlayground to help students practice basic facts on their iPads or laptops.
This website is a great resource to help students consolidate their knowledge in basic operations. Make the most of your students’ practice time by choosing games ahead of time. Math Playground has a TON of games; and at times, it can be overwhelming to decide which ones work best for your groups or centers. Consider selecting simple games that can be differentiated, by allowing students to select specific fact groups to practice. See the list below for some recommended games to get you started.
Addition and Subtraction Basic Fact Games:
- Superhero Addition or Subtraction
- Addition Blocks
- Number Bonds
- Treasure Quest Addition
- Math Monster Addition and Subtraction
- Puzzle Pics Addition and Subtration
Multiplication and Division Basic Fact Games:
- Match 10 Multiples
- Math Monster Multiplication and Division
- Treasure Quest: 100 Number Grid
- Music Shop Multiplication
- Math Racer Multiplication
Disclaimer: This article does not promote rote memorization of facts before developing a conceptual understanding of number within each maths strand. Concepts for basic addition, subtraction, multiplication and division should first be taught using visual representations and materials such as abacuses, tens-frames, arrays, equal-sharing and counters. Once students have developed a conceptual understanding of number in these areas, it’s okay to encourage your students commit their facts to memory.
Enjoy this resource? Let us know which websites and apps work well for your students’ basic fact consolidation. |
Global Earthquakes: Teaching about Earthquakes with Data and 3D Visualizations
Cara Harwood , Author Profile
Used this activity? Share your experiences and modifications
In this series of visualizations and accompanying activities, students visualize the distribution and magnitude of earthquakes and explore their distribution at plate boundaries. Earthquakes are visualized on a 3D globe, making it easy to see their distribution within and below Earth's surface without having to mentally transform and interpret symbols that indicate earthquake magnitude and hypocenter depth.
Introductory-level undergraduate earth science class, although talking points could be adapted for younger students by giving more background.
Skills and concepts that students must have mastered
Students should understand what earthquakes are and what causes them. Students should also understand how plates move relative to each other at the three types of plate boundaries (convergent, divergent, transform). These concepts could be introduced immediately prior to this activity.
How the activity is situated in the course
This a series of visualizations in a unit about plate tectonics, although each visualization could also be used in isolation.
Three visualizations and accompanying activities are included:
- Visualizing Global Earthquakes – Where and Why do Earthquakes Occur?
- Visualizing Earthquakes at Convergent Plate Margins
- Visualizing Earthquakes at Divergent Plate Margins
Content/concepts goals for this activity
Students will understand the following concepts:
- The distribution and characteristics of earthquakes are directly related to the location and type of plate tectonic boundaries.
- Different types of plate boundaries result in different magnitudes and distributions of earthquakes.
- (Also refer to concept goals for each activity and visualization).
Higher order thinking skills goals for this activity
Students will be able to:
- Synthesize large data sets to recognize naturally occurring patterns
- Use patterns in data sets to make predictions about the distribution and characteristics of earthquakes
- Visualize data in 3D that is traditionally represented in 2D
Other skills goals for this activity
Description of the activity/assignment
This module series is designed to teach introductory-level college-age geology students about the basic processes and dynamics that produce earthquakes. Students learn about how and why earthquakes are distributed at plate boundaries using 3D visualizations of real data. These 3D visualizations were designed to allow students to more easily visualize and experience complex and highly visual geologic concepts. 3D visualizations allow students to examine features of the Earth from many different scales and perspectives, and to view both the space and time distributions of events. For example, students can view the earth from the perspective of the entire solar system, or from one point on the Earth's surface, and can visualize how earthquakes along a fault occur through time. By teaching about earthquakes and plate tectonics using a real data set that students can visualize in three-dimensions, students learn how scientists analyze large data sets to look for patterns and test hypotheses. At the end of this module students will understand how earthquakes are distributed on Earth, and how different types of plate boundaries result in different magnitudes and distributions of earthquakes.
Determining whether students have met the goals
'Quakes Questions' throughout each activity are short-answer questions that students answer while the visualization is playing to ensure that they are taking away key concepts. These questions require students to synthesize ideas and articulate their understanding of concepts introduced in the visualization.
More information about assessment tools and techniques.
Download teaching materials and tips
The visualization software used to create this visualization is freely available and can be downloaded from http://keckcaves.org/education/
In addition to playing back the visualizations available here, instructors can also download the visualization software and data sets and explore it themselves. Download the software and quick-start guide to begin exploring your own data sets in your classroom. |
This course provides you with a deep dive into how to refactor and structure your code into smaller more manageable building blocks using Functions, Modules, and Packages.
- Understand how to write and define functions
- Understand the four kinds of different function input parameters and how to use them
- Review how modules are created and used
- Learn how modules are loaded using the import statement
- Examine how modules are discovered via module search locations
- Review how modules can themselves be organized into packages
- And finally, we understand how to use aliases for both module and package names
- A basic understanding of the Python programming language
- A basic understanding of software development
- A basic understanding of the software development life cycle
- Software developers interested in learning how to write Python code in a Pythonic way
- Python junior level developers interested in advancing their Python skills
- Anyone with an interest in Python and how to use Python to write concise and elegant scripts for general purpose tasks
Okay, to begin with, let's talk about functions. Functions are a way of isolating code that is needed in more than one place by refactoring it to make it much more modular. They are defined with the def statement or def keyword. Functions can take various types of parameters as we'll see in the following slide. Parameter types are dynamic. Functions, more often than not, can return one object of any type using the return statement. If there is no return statement, the function simply returns None.
Let's take a quick look at a simple function using our Python interpreter. We'll take the following code and we'll spin this up. Okay back within our terminal, we'll start the Python3 interpreter. And this time we'll use the def keyword to define our function. We'll call it, say, underscore hello. We'll take zero parameters. And all we'll do is simply print out the statement or print out the string hello, world. Enter. Followed by another call to the print function. Let's print out an empty line.
Okay, we can call our function now by using the function name. Enter. And so this function has just been executed, and we see that indeed hello, world is being printed out. We could establish a variable to hold this function. And when we take a look at the type of our new variable, we see that it's set to NoneType. So the reason for this is that when we created our function, we didn't specify a return keyword on it, so it's not returning anything. So we could also check that hello world is in fact None, and you see that it is set to True.
Okay, let's now create a new function. Again we use the def keyword. This time we'll call it get underscore hello. Okay, then we'll take zero parameters. And this time we'll return, so we're using the return keyword, then we're going to return a string, which will be hello, world, exclamation mark. Okay, so the key point of this second function is that it's now using a return statement. And most functions that you create will in fact use a return statement to return something. We could again create a variable called hello and set it to be the return value of our get underscore hello function.
This time if we print hello, you can see indeed it's printing our result. And then if we check the type of hello, it's now classified a a string which we would expect. And finally if we check to see whether hello is None, it's now False because it's actually a string. Okay, let's create a third function. This one we'll square root a number, and we'll be passing the number in as a parameter to this function. We'll return the square root of this number, so we take our input parameter. We can now perform square roots on different numbers that we pass in, say that we'll call our square root function with the number one, two, three, four.
Likewise we can also do it on another number, this time we'll do the square root of two. If we take a look at m, we can see here that the square root of 1,234 is this value here. Likewise if we look at n, we can see that the square root of two is this value here. Finally let's use a string format to print out our values, m and n, to three decimal places. Okay, so that was a quick introduction to functions. Let's now carry on.
Okay, so as we've briefly just seen, functions can have input parameters. Functions can accept both positional and named parameters. And furthermore, parameters can be either mandatory or optional. They must be specified in the order presented in the next slide. The first set of parameters, if any, is a set of comma-separated names. These are all required. Next, you can specify a variable preceded by an asterisk. This will accept any optional parameters. After the optional positional parameters, you can specify required named parameters. These must come after the optional parameters. If there are no optional parameters, you can use a plain asterisk as a placeholder. And finally you can specify a variable preceded by two asterisks to accept optional named parameters.
Okay, let's jump back into our Python interpreter and take a look at each of these different types of function parameters. Okay for our first example, we'll define fun underscore one which will take no parameters. And we'll simply print out hello world again. Okay, as we know we can call this function like so. Now what happens if we attempt to pass in a parameter? It would tell us that it takes a zero positional arguments but one was given.
Okay, next we'll define fun underscore two which will take one required parameter, n, and will simply return n squared. Again we can call it. We'll pass in, pass in five. Five squared is 25. Now if we call it without a parameter, we'll get an error saying that one required positional argument is missing. Okay, in our next example we'll define fun underscore three and it will have a required parameter with a default value, in this case we'll call it count, and we'll set the default value to be three. We'll then loop over this using the range function over count. And we'll print spam comma end equals. Enter.
So now if we run it, we'll call fun underscore three with no parameters, we get spam written out three times because of our default value for count. And this time we'll pass in 10 for count, and this time we get spam written out 10 times. So that is parameters that have default values. Okay, in our next example we'll do fun underscore four, and this will be defined to have one fixed plus also optional parameters. We do that by specifying firstly our fixed parameter. And then using the asterisk symbol, we can state that we can have optional parameters afterwards. We'll then use the following print statements. And now we can call it. So we'll call it fun underscore four with the value apple. And you can see that n is indeed apple, and that our optional variable is actually a tuple.
So let's call it with some extra values. Again we can see that our first parameter is fixed, that's set to apple, and then our optional parameters have been captured in a tuple. Okay, in our next example we'll create another function, and this one will be called fun underscore five, and this will be designed to have keyword only parameters. So we put in a placeholder first and then we'll have two parameters. We'll declare this function to have the following print statements. And now we can call it like so. So we can go fun underscore five, and we'll specify an input parameter called spam with the value equal to one, and eggs equal to two.
Okay, we can call it again. This time we can change the positions of these parameters, and we'll see that we'll get the same result. We can call it by passing in spam only and leverage the fact that a default value is set on eggs. We can do the same with eggs. And spam comes out with its default value. And then finally we can call that with no parameters, and we get both default values coming out. Okay, in this next example we'll define another function called fun underscore six, and it will use keyword named parameters. So we do so by using double asterisk named underscore args, and we'll declare it like so. We can then call this function, passing in our named keyword parameters.
So the first one will be name equals. Quest equals grail. And color equals red. Enter. So we can see there that our named arguments have come through. The first one with the name set to the value lancelot, second one, quest, set to the value grail, and third one, color, set to the value red. Cool. Okay, let's just redefine our last function. And this time what I wanna do is I wanna print out named args as well as the type of named args.
Okay, we'll just recall it like this. So what you can see here is that our named args is actually passed into our function as a dictionary with these key-value pairings. So as we've just seen in our examples, functions can have default parameters, required parameters which can have default values. They are assigned to parameters with the equal sign. Parameters without defaults cannot be specified after parameters with defaults.
Okay, let's take a closer look at default parameters. We'll take the following example and run it within our Python interpreter. So for starters let's declare our spam function like so. Here we can see that we have two positional parameters, greeting and whom. Greeting doesn't have a default value, and whom does which is set to world. We can then call spam like so. So whom is taking on the default value, and if we call it again but this time we pass in Jeremy, whom is set to take this value here.
Okay, in our next example we'll define the ham function, and we'll then call it like so, ham file name equals. So we can see here that it's executed and that file format has taken on its default value. And then if we call it a second time, this time we'll pass in file name and file format, we get the expected result. Now what happens if we were to call the function like so? As expected, this fails because it's now attempting to pass in positional arguments, but the design of the function has an asterisk, meaning that there are no positional arguments.
Okay, next we'll talk about name resolution or what's otherwise referred to as scope. A scope is the area within a Python program where an unqualified name can be looked up. Scopes are used dynamically. And there are four nested scopes that are searched for names in the following order. Local, local names bound within a function. Nonlocal, local names plus local names of outer functions. Global, the current module's global names. And builtin, built-in functions. Within a function, all assignments and declarations create local names. All variables found outside of local scope, that is, outside of the function, are read-only. Inside functions, local scope references the local names of the current function. Outside functions, local scope is the same as the global scope, the module's namespace. Class definitions are also created in local scope. Class definitions also create a local scope. Nested functions provide another scope. Code in function B which is defined inside function A has read-only access to all of A's variables. This is called nonlocal scope.
Let's now take the following example, and we'll run it up in our Python interpreter to see the different types of scope in action. Okay, we'll start up our Python3 interpreter. And in this example we'll start off with defining variable x which will be considered to have global scope. And then we'll define a function, function underscore a like so. So function a has variable y as a local scope variable, set to five, and then it has a nested function called function underscore b. Function underscore b has its own local scope, the variable z equal to 32. We then return function b as part of function a's definition.
So now we can call it like so. We could set up another variable called f and set it to be function underscore a. So type of f is a function as expected. So now we can actually call f. So here we see the results. So function b gets access to z which you'd expect because z is locally scoped to function b. Function b also has access to y because function b is nested within function a. And function b can also see the global variable x, x is 42. And then the final call here is using builtin scope, allowing the type function to be called on x.
Okay, we'll now focus this particular part of the discussion on the global statement. The global keyword allows a function to modify a global variable. This is universally acknowledged as a bad idea. Mutating global data can lead to all sorts of hard to diagnose bugs because a function might change a global that affects some other part of the system or program. It's better to pass data to functions as parameters and return data as needed. Mutable objects such as lists, sets, and dictionaries can be modified in place. The nonlocal keyword can be used like global to make nonlocal variables in an outer function writable. Again we must emphasize that using globally scoped variables is dangerous and should be avoided as much as possible.
Okay, we'll next move on to talking about modules. A module is a file containing Python definitions and statements. The file name is the module name with the suffix .py appended. Within a module, the module's name is available as the value of the global variable name. To use a module named, for example, spam.py, then we would import the value spam. This does not enter the names of the functions defined in spam directly into the symbol table, it only adds the module name spam. Use the module name then to access the functions or other attributes. Python itself uses modules to contain functions that can be loaded as needed by scripts. A simple module contains one or more functions. More complex modules can contain initialization code as well. Python classes are also implemented as modules.
A module is only loaded once even when there are multiple places in the application that import it. Modules and packages should be documented using docstrings. When working with modules, you need to use the import statement. The import statement loads modules. There are three variations, import module, from module import function-list, and from module import asterisk. We'll now take a closer look at each of these three variations.
Variation one. Import module loads the module so its data and functions can be used, but does not put its attributes, names of classes, functions, and variables, into the current namespace. Variation two. From module import function imports only the functions specified in the current namespace. Other functions are not available even though they are loaded into memory. And variation three. From module import asterisk loads the module and imports all functions that do not start with an underscore into the current namespace. This should be used with caution as it can pollute the current namespace and possibly overwrite builtin attributes or attributes from a different module. Let's now create our first module.
We'll take the following Python code and create a module called spam. Okay, back within our terminal, we can see that we're in the current directory called PythonDemo. Let's do a directory listing. And as you can see it's empty. We'll use Visual code to set up a couple of files in this directory. The first file we'll add we'll call samplelib.py for Python. And in this file we'll add the following statements. So we'll declare three functions. The first one called spam, the second one called ham, and the third one called underscore eggs. Note the underscore here. We'll come back to this later. We'll save that. And then we'll create a second file, this one called samplelib1.py.
Okay, in this Python file we'll use the import statement, and we'll import samplelib like so. We'll use the print function just to specify that everything is okay. And then we will attempt to call our module functions declared in samplelib. The first one was ham, and the second one was spam. We'll save that. Okay, we'll jump back to our terminal. We'll do another directory listing. And now we'll use python3 to execute our samplelib1 file. And there we go. So we can see that indeed our functions declared in our samplelib module have actually been called. So, great result.
Okay, let's return to Visual code. And this time we'll update our import statement like so. From module name import spam and ham. We can then go down to where we call these functions and remove samplelib from them. By taking this course of action, we import functions, spam and ham, from the samplelib module into the current namespace, meaning the module name is no longer required to call the functions. So we'll save this file. We'll go back to PythonDemo, and we'll rerun the script. Again that's worked as expected.
Returning to Visual code, we'll create a third file called samplelib2.py. And this time we'll use the following type of import statement. So from samplelib import asterisk for everything. We can then call spam and ham again. Again we can do so without explicitly referring to the module name. Back to our terminal, we'll run samplelib2, and again it works as expected. Okay, returning to Visual code one last time, we'll create a third file called samplelib3.py. And this time we'll use the following import statement where we're aliasing our functions. So spam is being aliased as pig, and ham as hog. We can then refer to these functions by their aliases, pig and hog. Save.
We'll return to the terminal. And we'll run this, and again it has worked with the same results. Now it must be noted that importing everything through the import asterisk statement should be considered dangerous. Using import asterisk to import all public names from a module has risk with it. While generally harmless, there is always the chance that you will unknowingly import a module that overwrites some previously imported module. To be 100% certain, always import the entire module or else import names explicitly. We'll demonstrate this with the following example.
So back within Visual code, we'll create a new file called electrical.py, and we'll give it the following Python code. So in here we have three functions, amps, voltage, and current, and then each of those just returns a value defined in global scope up here. Okay, we'll save that. We'll create a second file called navigation.py. Navigation will contain the following code, noting that it also has a function called current. So in electrical we define current to be this which returns the default underscore current value defined here. However now in navigation we also are defining current, but this time current will return the first item, slow. And then lastly we'll create a third file called why_import_star_is_bad.py. And in this file we'll import from electrical everything, and likewise from navigation.
We'll then add some print statements. So we'll call current, we'll call voltage, and we'll call amps. So the intention here is that we're calling each of these three functions from our electrical module. However because we have imported everything from navigation, and navigation also defines a current function, then when we go to run this, we'll now run our file, python3 name of the file. In here you can see that we've got slow, 110, and 10. So slow is the result of current coming from navigation, not from electrical. So if we look at navigation again, current is returning current types, and current types is defined to be the split of these words. So that's why we see slow when actually our intention was to return current which would have been this value here.
Okay, let's now talk about how modules are discovered at runtime. When you specify a module to load with the import statement, it first looks in the current directory, and then searches the directories listed within sys.path. To add locations, put one or more directories to search in the PYTHONPATH environment variable. Separate multiple paths by semicolons for Windows, or colons for Unix/Linux. This will add them to sys.path after the current folder but before the predefined locations. The following example here sets PYTHONPATH for Windows whereas this example is for Linus and or OS X. You can also append to sys.path in your scripts, but this can result in non-portable scripts and scripts that will fail if the location of the imported modules change. It is also sometimes convenient to have a module as a runnable script. This is handy for testing and debugging, and for providing modules that can also be used as standalone utilities.
Since the interpreter defines its own name as underscore underscore main underscore underscore, you can test the current namespace's name attribute. If it is underscore underscore main underscore underscore, then you are at the main or top level of the interpreter and your file is being run as a script. Any code in a module that is not contained in a function or method is executed when the module is imported. This can include data assignments and other startup tasks, for example, connecting to a database or opening a file.
Let's now take a look at an example where the Python script checks to see if it is the top level script, and if it is, it calls the main function within itself. Back within Visual code, we'll create a new file called main.py, and this time we'll paste the following code. Now the key point about this file is that at the bottom of it, we're checking to see if it was the file that was used to start up the program, and if it was, we then call the main function, which is very much a typical naming convention for our first starting function. So we'll save this, and then we'll jump back to our terminal to our directory listing.
So we have main.py, and this time we'll do python3 and the name of the file. And there we go. So the file has executed successfully. Okay, we'll now move on to a discussion around packages. A package is a group of related modules or subpackages. The grouping is physical, that is, a package is a folder that contains one or more modules. It is a way of giving a hierarchical structure to the module namespace so that all modules do not live in the same folder. A package may have an initialization script named underscore underscore init underscore underscore.py. And if present, this script is executed when the package or any of its contents are loaded. Modules in packages are accessed by prefixing the module with the package name using the dot notation used to access module attributes. As an example, if module eggs is in package spam, then to call the scramble function in eggs, you would likely call spam.eggs.scramble.
By default, importing a package name by itself has no effect. You must explicitly load the modules in the packages. You should usually import the module using its package name, like from spam import eggs, to import the eggs module from the spam package. Packages can be nested. Let's now take a look at a package example. In this example we have the sound package which is considered the top-level package. Beneath this, we have a initialization script for the sound package. Then we have two subpackages, the file format subpackage and the sound effects subpackage. Each of these also has its own initialization script. Stored within the same sound package, we have an additional filter subpackage, again it also has an initialization script. Then at the bottom, we can now see the import statements that are required to import the package and modules.
For convenience, you can put import statement into a package's initialization script to autoload the modules into the package namespace. So having now reviewed functions, modules, and packages, the following next two slides show you all of the various types of import statements. Take time to review this. Take time to understand what the import statement is and what it achieves. Okay, moving on. Documenting modules and packages. In addition to comments, which are typically used by the maintainers of your code, you should also add docstrings which provide documentation for the user of your code. If the first statement in a module, function, or class is an unassigned string, it is assigned as the docstring of that object. It is stored in the special attribute, underscore doc underscore, and so is available to code. The docstring can use any form of literal string, but typically triple double quotes are preferred for consistency. See PEP 257 or Python Enhancement Proposal 257 for a detailed guide on docstring conventions. Tools such as pydoc and many IDEs will use this information.
Okay, the following two slides will show you examples of using docstrings. Here in this slide, we can see that above the import sys statement is a docstring which documents the intent of this module. Likewise in this example, we have a main function in a function1 function. Inside each of these functions, docstrings are used to describe the intent of each function. Keep in mind that when you're writing your Python code, that you should do so using a Python style. On this slide are a number of guidelines that should be read and understood and applied to make sure that your code, when written and developed, remains Pythonic. There are many resources on the internet that will help you to write Pythonic styled code. Take time to read both the Python Enhancement Proposal 8 which is the Style Guide for Python Code, and also the Python Enhancement Proposal 257 which documents Docstring Conventions.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, GCP, Azure), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, GCP, and Kubernetes (CKA, CKAD, CKS). |
So far, we have learned primitive data types, which are the simplest types of data with no built-in behavior. Our programs will also use
Strings, which are objects, instead of primitives. Objects have built-in behavior.
Strings hold sequences of characters. We’ve already seen instances of a
String, for example, when we printed out
"Hello World". There are two ways to create a
String object: using a
String literal or calling the
String class to create a new
A String literal is any sequence of characters enclosed in double-quotes (
""). Like primitive-type variables, we declare a
String variable by specifying the type first:
String greeting = "Hello World";
We could also create a new String object by calling the
String class when declaring a
String like so:
String salutations = new String("Hello World");
There are subtle differences in behavior depending on whether you create a
String using a
String literal or a new
String object. We’ll dive into those later, but for now, we’ll almost always be using
Keep Reading: AP Computer Science A Students
Certain symbols, known as escape sequences, have an alternative use in Java print statements. Escape sequences are interpreted differently by the compiler than other characters. Escape characters begin with the character
There are three escape sequences to be aware of for the AP exam.
\" escape sequence allows us to add quotation marks
" to a
String value. :
System.out.println("\"Hello World\""); // Prints: "Hello World"
If we didn’t use an escape sequence, then Java would think we’re using
" to end the String!
\\ escape sequence allows us to place backslashes in our
System.out.println("This is the backslash symbol: \\"); // Prints: This is the backslash symbol: \
This is similar to the last example - just like
\ usually has a special meaning. In this case,
\ is used to start an escape sequence. Well, if we don’t want to start an escape sequence and just want a
\ in our String, then we’ll use
\\ — we’re using an escape sequence to say that we don’t want
\ to be interpreted as the start of an escape sequence. It’s a little mind-bending!
Finally, if we place a
\n escape sequence in a
String, the compiler will output a new line of text:
System.out.println("Hello\nGoodbye"); /* Prints: Hello Goodbye */
You can think of
\n as the escape sequence for “newline”.
Create a variable called
openingLyrics that holds
"Yesterday, all my troubles seemed so far away".
System.out.println() to print out |
In the hour of descent as the Mars Space Laboratory dropped toward martian soil a small gadget whirred. The gadget was a particle catcher.
|Mars Science Laboratory approaching the martian atmosphere (artist's concept)|
Image Credit: NASA/JPL-Caltech
The size of a coffee pot, NASA's Radiation Assessment Detector (RAD) hitched a ride on the Mars Space Laboratory to measure the radiation of the martian atmosphere. RAD is the first instrument to measure the radiation on the way to Mars from inside a spacecraft that is similar to one future human astronauts could fly to Mars. The results will be published in the May 31 issue of the journal Science. Today, four members of the research teams reported the results at NASA's press conference in Washington, D.C.
It's in the journey -- and in the destination
|Radiation Assessment Detector |
for Mars Science Laboratory
Image Credit: NASA/JPL-Caltech/SwRI
"We realized, that taking measurements on the way to Mars would be not so different from the environment the future human astronaut might experience on their spacecraft on their way to Mars." said RAD principal investigator Donald Hassler from the Southwest Research Institute.
The researchers turned RAD on about ten days after the Mars Space Laboratory launched in November, 2011 and collected data about the radiation environment inside the space capsule for seven months.
"NASA is planning on sending astronauts to Mars in the 2030's," said Chris Moore, the deputy director of advanced exploration systems at NASA headquarters, "Before we can send astronauts there, we need to understand the environments and hazards they would face. RAD data will help us design the space habitats in which astronauts will live on their trip to Mars."
There are two types of deep space radiation the RAD system measures: galactic cosmic rays and solar energetic particles. Galactic cosmic rays come from outside the solar system and are thought to originate at supernova remnants and other high-energy explosions. Despite their high energy, they radiate at moderately low levels that vary over the eleven-year solar cycle. In contrast, solar energetic particles during solar storms and coronal mass ejections are very difficult to predict and can last anywhere from hours to days. While spacecrafts do a pretty good job of keeping solar energetic particles out, we don't yet know how to prevent cosmic rays from penetrating the ship's living quarters.
|Radiation exposure comparison from Mars trip|
Image Credit: NASA/JPL-Caltech/SwRI
"In terms of accumulated dose, it's like getting a whole-body CT scan once every five or six days," said Cary Zeitlin, lead author and principal scientist at the Southwest Research Institute in NASA's statement.
Based on the new RAD data, engineers at NASA hope to design more effective shielding for future deep space flight. "The radiation environment in deep space is several hundred times more intense than on earth," said Zeitlin, "that's even inside a shielded spacecraft."
Dressing for Mars
There are basically two ways to protect astronauts from radiation, according to Moore.
"Hydrogen is the best radiation shield we know about", said Moore. One way to protect a crew would be to surround their living quarters with walls filled with water. Water, which is rich in hydrogen, absorbs the radiation. Moore said that NASA is also tossing around the idea of arranging food packets around the spacecraft living quarters; food, rich in water, also contains a lot of hydrogen.
Currently, NASA uses polyethylene, a material made up of long chains of hydrogen. "Some of our initial concepts [for new space suits] involve multiple layers of polyethylene plastic," Moore said, "I tried on one of these garments once. It reminds me of samurai armor. Or a very heavy coat."
In 2015, NASA is launching a version of RAD to the International Space Station. With comparable instruments on the ISS and on Mars, researchers will be able to compare radiation levels, calculate health risks, and develop new tools for the future of deep space travel. |
pdf 21: Notes Compound inequality Day 6. The level of an worksheet should really. Algebra 1-2: Compound Inequalities word problem? I'm in Algebra 1-2, and I just don't get one problem in a worksheet i got. Learn exactly what happened in this chapter, scene, or section of Compound Inequalities and what it means. Worksheet works extremely well for revising this issue for assessments, recapitulation, helping the scholars to recognize the subject more precisely or to improve the about the subject. Graph the solutions. pdf 19 Multi-step inequality notes 20 Inequality puzzle practice Inquality puzzle practice. The graph of a compound inequality with an "and" represents the intersection of the graph of the inequalities. Write and solve an absolute value inequality to describe acceptable can volumes. I like to spend my time reading, gardening, running, learning languages and exploring new places. I've never really given my students as reason for why we need to even deal with compound inequalities in the first place. all real numbers that are less than -3 or greater than or equal to 5. 4 Multiple Choice Identify the choice that best completes the statement or answers the question. Printable Worksheets from sofatutor. Math video on how to solve and graph compound inequalities with two inequality signs of an "and". 6 Worksheet by Kuta Software LLC Kuta Software - Infinite Algebra 2 Name_____ Compound Inequalities Date_____ Period____. m + 3 ≥ 5 and m + 3 < 7 10. Then, solve both inequalities and graph. Displaying top 8 worksheets found for - Compound Inequalities. In this compound inequalities worksheet, 9th graders solve and complete 7 different problems that include graphing various inequalities on a number line given. If the inequality is an "And" compound inequality , then the solution is the intersection of the solution sets for both inequalities. 2 8 16 2 and 7 21 9rrrr 5. The symbol \(\leq\) means less than or equal to. A L 1Mda9d keN 6wsi rt 4hw HINnbf Ti7n niPt ie2 uAjlagte 8b 0r4aL Y1e. The operator <> returns a symbolic expression representing an inequality. com contains usable info on fraction worksheet, subtracting and algebra exam and other math subject areas. In this worksheet, we will practice solving compound linear inequalities by applying inverse operations. resent the answer using interval notation. Problems include defining, solving, graphing, and writing compound inequalities from a written statement. net) Compound Inequalities Worksheet (doc) Compound Inequalities Worksheet (pdf). Adding subtracting multiplying dividing positive and negative numbers worksheets, compound inequality calculator, word problems to equations with polynomials worksheets, math algebra substitution worksheets, ranking roots and fractions, method of substitution using consistent system. ©b 1220 L1c2E 8Kou 1tfa S xSSo5f ftbwAawrKem YL9LEC8. An easy example of an inequality deals with money. Inequalities: solving (one sign) Video 178 Practice Questions Textbook Exercise. Using Absolute Value to Combine Integers. Name : Score : Printable Math Worksheets @ www. The following is called a compound inequality: x > 1 and x ≤ 5. Unformatted text preview: Guided Notes - Compound Inequalities Inequalities that relate to the same topic can be written as a m. -4 -3 -2 -101234-41-3 -2 -1 0 234. graph the solution sets of compound inequalities. Compound Inequalities Solve each compound inequality then match it to the correct solution and number line. 
-3 ≤ ≤ 3 states that x is any number between -3, and 3, including -3 and 3. Solving Inequalities Worksheet 1 - Here is a twelve problem worksheet featuring simple one-step inequalities. You already have $24 saved. When two simple inequalities are combined into one statement by the words AND or OR the result is called a compound inequality. This printable pdf worksheet can be used by students in 5th, 6th, 7th and 8th grade. Thus, the graph of a compound inequality containing and is the intersection of the graphs of the two inequalities. Absolute-Value Inequalities. mon Worksheets graphing inequalities worksheet Graphing from Graphing Inequalities On A Number Line Worksheet, source:madner. In this Warm Up, I provide the students with two real world examples. Compound Inequalities –. In this case, we are looking for a solution to either one of the. Solving Compound Inequalities Worksheet Idea Of Graphing. Example: Solving Compound Inequalities (and/or) Solve the following compound inequalities and graph the solution on the number line. graphs of the two inequalities that form the compound inequality. Compound Inequalities Write a compound inequality that represents each phrase. How are you with solving word problems in Algebra? Are you ready to dive into the "real world" of inequalities? I know that solving word problems in Algebra is probably not your favorite, but there's no point in learning the skill if you don't apply it. Example 3: 4x ≤ 20 OR 3x > 21. 3 Pg (Notes 1) Applications of Equations n/a Apps HW Worksheet 1. superteacherworksheets. In this inequality game, Genie will be there to help you solve inequalities and word problems involving inequalities. All worksheets are free to download and use for practice or in your classroom. c 0 UA Xljlz aroi1g6h jtEs3 Zrueas 3e yr6voeDd7. If you're behind a web filter, please make sure that the domains *. Pizzazz Algebra Author: Stephanie Demaio Created Date: 20160919154753Z. Problem Solution Graph -10 ≤ 2x-4 < -2. When two simple inequalities are combined into one statement by the words AND or OR, the result is called a compound inequality. Compound Inequalities Card Match Activity. Hyphens are necessary when spelling out fractions (especially small fractions) and compound numbers (such as fifty-three). Solving Inequalities One Step –. ) Your 3 year investment of $20,000 received 5. function or non function. Systems of Inequalities $4 Stones Turquoise Stones $6 Stones 0 132 4567 6 5 4 3 2 1 Gym (hours) Diego’s Routine Walking (miles) 0 1324567 8 16 14 12 10 8 6 4 2 6-6. Real World Application for Compound Inequalities Graphing Inequalities ( or statements) The laboratory chemicals were very sensitive to heat , so the supervison installed alarms to alert the staff if the temperature rose above 72 degrees OR below 60 degrees. Equation - A statement declaring the equality of two expressions. 1) m or m. Try for free. AB 1BC > AC AC 1BC > AB AB 1AC > BC Proof: Ex. " It says that x takes on values that are greater than 1 and less than or equal to 5. Some of the worksheets displayed are Solve each compound inequality and graph its, Solve each compound inequality and graph its, Alg 1a, Graphing compound inequalities, Compound inequalities work, Solving compound inequalities one step s1, Inequalities. This algebra 2 video tutorial focuses on solving compound inequalities with fractions. Algebra 1 Compound Inequalities Worksheets. 
Some of the worksheets displayed are Solve each compound inequality and graph its, Solve each compound inequality and graph its, Solving compound inequalities one step s1, Compound inequalities work, Compound inequalities, Inequalities, Alg 1a, 4 2 quadratic inequalities. Compound inequalities Absolute value inequalities. 10) Write an absolute value inequality and a compound inequality for the temperature, t, that was recorded to be as low as 65 F and as high as 87 F on a certain day. Learn with flashcards, games, and more — for free. x < 5 and x ≥ −2 b. When solving an absolute value. The inequalities you have seen so far are simple inequalities. Mesa Academy for Advanced Studies Mesa Academy for Advanced Studies Rigor and Challenge in the Classroom. This quiz and worksheet combo will help you understand how to solve a compound inequality that uses 'and' or 'or. 5 ,k 22 ,11 4. 1 Create equations and inequalities in one variable and use them to solve problems. Thus 2 x + 4 < 10 But 2 x + 4 is simply y, so we can conclude that y < 10 The trick is to make your inequality look like the equation. Subtracting Integers Using a Number Line. A compound inequality is an equation with two or more inequalities joined together with either "and" or "or" (for example, and; or). Tell whether this statement is true or false: Multiplying both sides of an inequality by the same number always produces an equivalent inequality. mathworksheets4kids. Make the boundary points solid circles if the original inequality includes equality; otherwise, make the boundary points open circles. Solving Compound Inequalities. Vocabulary:. 3 Multistep Inequalities HW: Watch 5. 6t > 3 or 6 < 0 2. This Algebra Worksheet may be printed, downloaded or saved and used in your classroom, home school, or other educational environment to help someone learn math. Printable in convenient PDF format. Displaying all worksheets related to - Translate To Inequalities. all real numbers that are less than —3 or greater than or equal to 5 x < —3 or x 25 2. f Worksheet by Kuta Software LLC Kuta Software - Infinite Algebra 1 Name_____ Absolute Value Inequalities Date_____ Period____. Graph 2 of the 5 problems like this on your own paper. Math Worksheets Examples, worksheets, solutions, and activities to help Algebra 1 students learn how to solve compound inequalities. So the first problem I have is negative 5 is less than or equal to x minus 4, which is also less than or equal to 13. Writing Compound Inequalities from a Graph Worksheet - Problems. c N 7M Wa2dDek rw Riqt XhK DIRngfOicnGi2t gew DAZlqgbeIbyr0aK r1 4. This activity is easy for the teacher to check while walking around the room due. The low temperatures for the previous two days were 62 and 58 degrees. l q lA HlXlk ir piMgKhPtIs f 0r2e 9s9e5rTvue WdU. Emathtutoring. Write a compound inequality that represents each situation. This Compound Inequalities Worksheet is suitable for 9th - 10th Grade. Grover Cleveland was the US president who signed the Interstate Commerce Act. ©f 12i0 X1J2 S zK9uOtia x rS 7omfit ewSavr8e W OLSLsCN. 2 inequalities joined by the word "and" or "or" Example: -5≤ x ≤ 7 is the same as x ≥ -5 and x ≤ 7. 6 > x > −3. Write a compound inequality for each problem. Consider using these activities and games to help students understand, solve, and graph compound. In this algebra worksheet, 11th graders solve compound inequalities and graph their solution on a number line. 
compound inequality worksheets 3 5 compound inequalities worksheet l name date per write a compound inequalities worksheet gina wilson. Keywords: graphing inequalities, solving one step inequalities, compound inequalities, inequality word problems. Sal solves the compound inequality 5x-3<12 AND 4x+1>25, only to realize there's no x-value that makes both inequalities true. Make the boundary points solid circles if the original inequality includes equality; otherwise, make the boundary points open circles. Express the solution set for this compound inequality using set notation. 1 M BALlPl 8 7rAiJgzh ItUsT jr WeAsZevr evZe1dq. U C PMiaRdme5 ywJitzh5 sIzn6fvipn4iCteeV YAblXgee1bRria c w1X. Examples: a. Graph each inequality. 4 F7 Q21 F10 F4 O16 4. How To Solve Compound Inequalities. -3 ≤ ≤ 3 states that x is any number between -3, and 3, including -3 and 3. Displaying top 8 worksheets found for - Compound Inequality. So the first problem I have is negative 5 is less than or equal to x minus 4, which is also less than or equal to 13. (f) Write a two pieced piecewise- defined function, 𝑓𝑓, that accurately represents. 5 fluid ounces by more than 0. Problem 1 : Solve the compound inequality. Directions: In the following inequalities, solve for x. It can be completed individually or with partners. y – 5 < –4 or y – 5 ≥ 1 11. In this diagram,. com Graph the compound inequalities. all real numbers that are less than —3 or greater than or equal to 5 x < —3 or x 25 2. Math Inequalities Worksheets 7th Grade Download Them And Try To. Solving Compound Inequalities Involving OR. If x represents weight, write an inequality that describes her goal weight. Solve each compound inequality. The endpoint on the right is an open endpoint, which represents greater than. The worksheets (WS) may be of use to teachers looking for a quick source of some extra questions. (c) State the domain and range of 𝑓𝑓. p Worksheet by Kuta Software LLC. The symbol > means greater than. Football and Other Integer Word Problems. 5 Solving Compound Inequalities 83 Solving Compound Inequalities You can solve a compound inequality by solving two inequalities separately. Graphing pound Inequalities Worksheet Free Worksheets Library from Compound Inequalities Worksheet, source:comprar-en-internet. To solve a compound inequality, you must solve each part of the inequality. Create a Polynomial Algebra Worksheet This page will create a practice worksheet for you, dealing with polynomials. The intersection of two sets is another set altogether, formed by the numbers common to both sets. TER ART OBJECTIVE 2-j: To solve systems of linear. The truth is that most fields that use inequalities on a daily basis are seen as your more prestigious careers. function or non function. Solving inequalities worksheet 1 here is a twelve problem worksheet featuring simple one step inequalities. 4 < f + 6 and f + 6 < 5 12. Our range of worksheets is comprehensive as we provide area of triangle worksheets, compound shapes worksheets with answers, volume and area worksheets, and even an area of quadrilaterals worksheet with answers. Graphing Compound Inequalities. For example, solve 5z+7<27 OR -3z≤18. This printable pdf worksheet can be used by students in 5th, 6th, 7th and 8th grade. Come to Sofsource. Topics include basic single-variable inequalities, as well as, one-step, two-step, and compound inequalities. Graphing linear inequalities. In this algebra worksheet, students solve compound inequalities and graph their answer on a number line. 
A compound inequality is the combination of two or more inequalities. Compound Inequalities and Interval Notation Dr. You may select which type of inequality to use in the problems. 1) x + 5 > 6 and 6x ! 18 2) "15 ! x " 13 ! 0. This is a game like "Who Wants to Be a Millionaire?" where you have to keep getting the answer right in order to move up in money amounts. the intersection of both inequalities. The symbol > means greater than. Then you'll see how to solve those inequalities, write the answer in set builder notation, and graph the solution on a number line. You can write an absolute value inequality as a compound inequality. mathworksheets4kids. 4 Compound Inequalities HW: Watch 5. If the inequality is greater than zero or greater than or equal to zero, then you want all of the positive sections found in the sign analysis chart. For more intricate graphs, you can also use inequalities with restrictions to shade selected parts of the graph. Graphing Compound Inequalities. For instance, we can add 3 to every side of the inequality, and all of the inequalities will remain equally unequal. How can this be helpful to you? identify the search phrase that you are interested in (i. com Introduction to Inequalities ANSWER KEY Intermediate Single Variable An inequality is a pair of expressions or numbers that are not equal. You may select which type of inequality to use in the problems. This divides the number line into two regions, one below 5 and one above 5. In the case you have service with math and in particular with compound inequalities calculator or dividing polynomials come visit us at Mathpoint. GCSE (9-1) Exam Questions 2017 Specs Solutions Worksheet Solutions; Quadratic Simultaneous Equations: Solutions: Worksheet: Solutions: Completing the square. Compound Interest Name_____ Worksheets Calculate the total amount of the investment or total paid in a loan in the following situations: 1. Learn about compound inequalities, linear inequalities and solving compund inequalities using the resources on this page. Printable in convenient PDF format. com Gallery for 50 Compound Inequalities Worksheet Answers. Polymathlove. 3 10 13 or 2 5 12nnn 6. Compound Inequalities Compound inequalities, graphing on a number line and solving interval notation. c N 7M Wa2dDek rw Riqt XhK DIRngfOicnGi2t gew DAZlqgbeIbyr0aK r1 4. 1 - Solving Compound Inequalities The inequalities we have seen so far are simple inequalities. Download by size Handphone Tablet Desktop Original Size · pound Inequality Worksheet Fresh Absolute Value via codedell. ©s cKfuftwa0 NSboqfst3woaPrCeY 7LELJCh. 4 Multiple Choice Identify the choice that best completes the statement or answers the question. 64 KB] Absolute Value Equations and Inequalities : Absolute Value Definition, Steps for Solving Linear Absolute Value Equations, Steps for Solving non- linear Absolute Value Equations, exercises with solutions. Do you know how to solve and graph linear inequalities? Test your knowledge by taking the following online test. 6 > x > −3. To solve a compound inequality, first separate it into two inequalities. " With "and" inequalities, we only graph the numbers that satisfy both inequalities, a. So z can satisfy this or z can satisfy this over here. You can select different. Problem Solution Graph -10 ≤ 2x-4 < -2. Define a variable, write an inequality, and solve each problem. 3 10 13 or 2 5 12nnn 6. It is not safe to use a light bulb of more than 60 watts in this light fixture. 
In our last lesson, we solved compound inequalities that involved the word "and". solve compound inequalities. The crosshatching or shading, if extended, would cover a set of three letters. 3 5 Compound Inequalities Worksheet L Name Date Per Write A. wrutsmzoicommommm_ - KNHS Homeroom Community. Compound Inequalities. 4 ~ Compound Inequalities CW: Hangman/WS 5. In this algebra worksheet, 11th graders solve compound inequalities and graph their solution on a number line. 5 Solving Compound Inequalities 83 Solving Compound Inequalities You can solve a compound inequality by solving two inequalities separately. I've never really given my students as reason for why we need to even deal with compound inequalities in the first place. A compound inequality is an equation with two or more inequalities joined together with either "and" or "or" (for example, and; or). About This Quiz & Worksheet. Solving Compound Inequalities with. Rags to Riches: Answer questions in a quest for fame and fortune. This one page, art worksheet reviews solving inequalities. I like to start by exploring the real meaning of AND vs OR with this worksheet. Watch the 1 min video tutorial on solving. 1) − v ≤ −3. Solving Inequalities, Graphing Inequalities, Linear Inequalities A system of linear inequalities consists of two or more inequalities with the same set of variables. Free Algebra 1 worksheets created with Infinite Algebra 1. The time a cake must bake is between 25 minutes and 30 minutes, inclusive. Intro to Inequalities in One Variable –. In this algebra worksheet, 11th graders solve compound inequalities and graph their solution on a number line. [Solution] x 0 1 1 1 = − = + − = + x x x x x Since 0=1 is false, the equation x = x +1 has no solutions. ©f 12i0 X1J2 S zK9uOtia x rS 7omfit ewSavr8e W OLSLsCN. All you have to do is bring the –2 to the left hand side. Solving Inequalities Worksheets. NAME DATE PERIOD 5-4 Skills Practice Solving Compound Inequalities Graph the solution set of each compound inequality. Use standard worksheet answer key for funsheet key. 3 Absolute Value Equations and Inequalities. \( - 7 x 3\) Notice that the circles are open this time! This is because the answer can't be equal to -7 or 3. You can write an absolute value inequality as a compound inequality. Solve the following equations (try rearranging the equations. For a man to. For example, solve 5z+7<27 OR -3z≤18. The set of all. Compound Inequalities Worksheet e Step Inequalities Worksheets by Adding and Subtracting from Compound Inequalities Worksheet , source: pinterest. WORKSHEETS: Regents-Modeling Linear Systems 1a. The compound inequality is í x 62/87,21 The graphs do not intersect, so it represents a union. It is presented in a number line. You plan to save $10 a week for the shoes by mowing lawns. Designed by Skip Tyler, Varina High School What is the difference between and and or? AND means intersection -what do the two items have in common?. Tracing Lines Worksheets. And now, this can be the very first impression: Systems Inequalities Word Problems Worksheet Fresh Systems from compound inequalities worksheet answers , image source: ajihle. It seems as if it would be possible to configure the conjunction of two inequalities with the countif using the 'and' worksheet function but maybe not. On a road in the city of Rochester, the maximum speed is 50 miles per hour. 
334 C A B USE SYMBOLS You can combine the two inequalities, x > 4 and x < 20, to write the compound inequality 4 , and ≥ provide information about the relative sizes of the two expressions. x < 5 and x ≥ −2 b. Compound Inequalities 2 - Cool Math has free online cool math lessons, cool math games and fun math activities. Real World Application for Compound Inequalities Graphing Inequalities ( or statements) The laboratory chemicals were very sensitive to heat , so the supervison installed alarms to alert the staff if the temperature rose above 72 degrees OR below 60 degrees. Solving Linear Inequalities Concept 12: Solving Linear Inequalities Pre Score 5 = Level 4 DEADLINE: (C) Level 2 1. DOC - Free Printable Handwriting Worksheets For Kindergarten. 5 Compound Inequalities #3 Intermediate Algebra / Copy of MAT 135 Spring 2014 (Prof. The low temperatures for the previous two days were 62 and 58 degrees. Method 3: Absolute Value as Compound Inequality. 1 M BALlPl 8 7rAiJgzh ItUsT jr WeAsZevr evZe1dq. Math Inequalities Worksheets 7th Grade Download Them And Try To. Solving Compound Inequality -Word Problem. ©D t2 7021 B2O 1Kcuft XaP QSmoMfft vw5a5rdeR 8L sL BCo. Choose a specific addition topic below to view all of our worksheets in that content area. Educreations is a community where anyone can teach what they know and learn what they don't. 4 Compound Inequalities HW: Watch 5. Unions When an inequality is combined by the word “or” the compound inequality is formed. Every 2 weeks she withdraws $60 from her savings account for food. The time a cake must bake is between 25 minutes and 30 minutes, inclusive. Then interpret your solution. Compound inequalities combine more than one inequality to get a solution. Since this is a Union, unite the two graphs into one compound graph. If you don't see any interesting for you, use our search form on bottom ↓. Let's first return to the number line, and consider the inequality | x | > 2. The endpoint on the left is only a point. 1 6 Solving pound Inequalities Understanding that conjunctive from Compound Inequalities. Then, solve both inequalities and graph. Former Section 1. Video 4 Inequalities; Video 5 Compound Inequalities *Must be completed by Nov. 2z > 5 z z > 5 3. Write an inequality that can be used to find the minimum number of weeks you must. Math video on how to solve and graph compound inequalities with two inequality signs of an "and". Solve each compound inequality and graph its solution. Write the following as an inequality. ©V TKhugtfaW USwo8fNt6wMaXrceW PL8L8Ct. Best Answer: Compound inequality: Given the temperature this past October is represented by x: 54 <= x <= 78. There are basically three different ways to write these, as shown below. x > 4 and x > −4. Here are two inequalities: X > 2 X is greater than 2 (X represents all real numbers) X < 6 X is less. There are two card sets, one with all the answers and one with unfinished cards to complete by the student. Worksheets are Inequality word problems, Concept 11 writing graphing inequalities, Compound inequalities work, Two step inequalities date period, One step inequalities date period, Solving inequalities date period, Study guide practice unit 5 test inequalities, 1 read carefully and underline key words write a let. Solve for z. Compound Inequalities Worksheet is an accumulation strategies from teachers, doctoral philosophers, and professors, regarding how to use worksheets in class. 4 Introduction to Inequalities Section 1 Inequalities The sign < stands for less than. 
Worksheet by Kuta Software LLC Algebra 1 WS 2. 1) x + 5 > 6 and 6x ! 18 2) "15 ! x " 13 ! 0. Day 10: Inequalities TEST Day Thursday, October 20. In this worksheet, we will practice solving compound linear inequalities by applying inverse operations. Compound Inequalities Worksheet e Step Inequalities Worksheets by Adding and Subtracting from Compound Inequalities Worksheet , source: pinterest. Inequalities Worksheets. Chapter 1 Review Worksheet - Equations and Inequalities _____ Section 1. Then graph the solution set. com and discover rational functions, math review and a number of other algebra subjects. Lesson 6 Inequalities 23 Main Idea Find solutions of inequalities by using mental math and the guess, check, and revise strategy. 3(x+2) 2x 4 x 2 3. Compound Inequalities —6 Class Date Form G Write a compound inequality that represents each phrase. This quiz and worksheet combo will help you understand how to solve a compound inequality that uses 'and' or 'or. Worksheet 2. Cj a2b0i1 j1 b ik su9t wac isorfftfw kayr leq plplnc zs z 9a elpl j xrvikg5hmtrsb fr ie hsnejr rv 2ecdeo o 7m6atdze k owziftah 7 3idn9f 2ixn2intde y ja nl zg seeb. Algebra-expression. Name : Score : Printable Math Worksheets @ www. 5) Compound Inequalities Quiz. A linear inequality divides the coordinate plane into two halves by a boundary line where one half represents the solutions of the inequality. These are the values that solve at least one of the given inequalities. Solving Compound Inequalities. Complete the Square 8. Therefore, the absolute value of any number is always greater than a negative value. All we ask is that you don’t remove the KidSmart logo. |
Astronomical radio sources are objects in outer space that emit strong radio waves. Radio emission comes from a wide variety of sources. Such objects represent some of the most extreme and energetic physical processes in the universe.
In 1932, American physicist and radio engineer Karl Jansky detected radio waves coming from an unknown source in the center of our galaxy. Jansky was studying the origins of radio frequency interference for Bell Laboratories. He found "...a steady hiss type static of unknown origin", which eventually he concluded had an extraterrestrial origin.This was the first time that radio waves were detected from outer space. The first radio sky survey was conducted by Grote Reber and was completed in 1941. In the 1970s, some stars in our galaxy were found to be radio emitters, one of the strongest being the unique binary MWC 349.
As the nearest star, the Sun is the brightest radiation source in most frequencies, down to the radio spectrum at 300 MHz (1 m wavelength). When the Sun is quiet, the galactic background noise dominates at longer wavelengths. During geomagnetic storms, the Sun will dominate even at these low frequencies.
Supernovas sometimes leave behind dense spinning neutron stars called pulsars. They emit jets of charged particles which emit synchrotron radiation in the radio spectrum. Examples include the Crab Pulsar, the first pulsar to be discovered. Pulsars and quasars (dense central cores of extremely distant galaxies) were both discovered by radio astronomers. In 2003 astronomers using the Parkes radio telescope discovered two pulsars orbiting each other, the first such system known.
Spiral galaxies contain clouds of neutral hydrogen and carbon monoxide which emit radio waves. The radio frequencies of these two molecules were used to map a large portion of the Milky Way galaxy.
Quasars (short for "quasi-stellar radio source") were one of the first point-like radio sources to be discovered. Quasars' extreme red shift led us to conclude that they are distant active galactic nuclei. Active galactic nuclei have jets of charged particles which emit synchrotron radiation. One example is 3C 273, the optically brightest quasar in the sky.
According to the Big Bang Model (also called the Standard Model), during the first few moments after the Big Bang, pressure and temperature were extremely great. Under these conditions, simple fluctuations in the density of matter may have resulted in local regions dense enough to create black holes. Although most regions of high density would be quickly dispersed by the expansion of the universe, a primordial black hole would be stable, persisting to the present.
One goal of Astropulse is to detect postulated mini black holes that might be evaporating due to "Hawking radiation". Such mini black holes are postulated to have been created during the Big Bang, unlike currently known black holes. Martin Rees has theorized that a black hole, exploding via Hawking radiation, might produce a signal that's detectable in the radio. The Astropulse project hopes that this evaporation would produce radio waves that Astropulse can detect. The evaporation wouldn't create radio waves directly. Instead, it would create an expanding fireball of high-energy gamma rays and particles. This fireball would interact with the surrounding magnetic field, pushing it out and generating radio waves.
Rotating radio transients (RRATs) are a type of neutron stars discovered in 2006 by a team led by Maura McLaughlin from the Jodrell Bank Observatory at the University of Manchester in the UK. RRATs are believed to produce radio emissions which are very difficult to locate, because of their transient nature. Early efforts have been able to detect radio emissions (sometimes called RRAT flashes) for less than one second a day, and, like with other single-burst signals, one must take great care to distinguish them from terrestrial radio interference. Distributing computing and the Astropulse algorithm may thus lend itself to further detection of RRATs.
D. R. Lorimer and others analyzed archival survey data and found a 30-jansky dispersed burst, less than 5 milliseconds in duration, located 3° from the Small Magellanic Cloud. They reported that the burst properties argue against a physical association with our Galaxy or the Small Magellanic Cloud. In a recent paper, they argue that current models for the free electron content in the universe imply that the burst is less than 1 gigaparsec distant. The fact that no further bursts were seen in 90 hours of additional observations implies that it was a singular event such as a supernova or coalescence (fusion) of relativistic objects. It is suggested that hundreds of similar events could occur every day and, if detected, could serve as cosmological probes. Radio pulsar surveys such as [email protected] offer one of the few opportunities to monitor the radio sky for impulsive burst-like events with millisecond durations. Because of the isolated nature of the observed phenomenon, the nature of the source remains speculative. Possibilities include a black hole-neutron star collision, a neutron star-neutron star collision, a black hole-black hole collision, or some phenomenon not yet considered.
In 2010 there was a new report of 16 similar pulses from the Parkes Telescope which were clearly of terrestrial origin, but in 2013 four pulse sources were identified that supported the likelihood of a genuine extragalactic pulsing population.
These pulses are known as fast radio bursts (FRBs). The first observed burst has become known as the Lorimer burst. Blitzars are one proposed explanation for them.
Previous searches by [email protected] have looked for extraterrestrial communications in the form of narrow-band signals, analogous to our own radio stations. The Astropulse project argues that since we know nothing about how ET might communicate, this might be a bit closed-minded. Thus, the Astropulse Survey can be viewed as complementary to the narrow-band [email protected] survey as a by-product of the search for physical phenomena.
Explaining their recent discovery of a powerful bursting radio source, NRL astronomer Dr. Joseph Lazio stated: "Amazingly, even though the sky is known to be full of transient objects emitting at X- and gamma-ray wavelengths, very little has been done to look for radio bursts, which are often easier for astronomical objects to produce." The use of coherent dedispersion algorithms and the computing power provided by the SETI network may lead to discovery of previously undiscovered phenomena.
Content from Wikipedia |
Unleashing the Power of Critical Thinking and Problem Solving
Introduction: In a world characterized by rapid change and complexity, critical thinking and problem-solving skills have become increasingly vital. Gone are the days when rote memorization and following prescribed formulas could guarantee success. Instead, individuals need to develop the ability to analyze, evaluate, and creatively solve problems. Let’s delve into the importance of critical thinking and problem-solving skills and how they can empower individuals in various aspects of life.
- The Essence of Critical Thinking: Critical thinking goes beyond the acquisition of knowledge; it involves actively engaging with information, questioning assumptions, and examining evidence. It is the ability to evaluate information objectively, identify biases, and form well-reasoned judgments. Critical thinking encourages intellectual curiosity and challenges individuals to seek deeper understanding.
- Nurturing Analytical Skills: Analytical skills are fundamental to critical thinking and problem-solving. Individuals with strong analytical abilities can break down complex issues into manageable components, identify patterns, and draw logical conclusions. These skills enable individuals to approach problems systematically and make informed decisions.
- Effective Problem-Solving: Problem-solving is an essential life skill that empowers individuals to overcome obstacles and seize opportunities. It involves defining a problem, generating potential solutions, evaluating alternatives, and implementing effective strategies. By cultivating problem-solving skills, individuals become resourceful, adaptable, and resilient.
- Enhancing Decision-Making: Critical thinking and problem-solving skills greatly impact decision-making. In a world where choices abound, the ability to weigh options, consider consequences, and make well-informed decisions is crucial. Critical thinkers assess risks, anticipate outcomes, and choose the best course of action based on available information.
- Fostering Creativity and Innovation: Critical thinking and problem-solving nurture creativity and fuel innovation. When individuals approach problems with an open mind and embrace diverse perspectives, they unlock innovative solutions. Creative problem solvers think outside the box, challenge conventional wisdom, and generate novel ideas that drive progress in various fields.
- Real-World Application: Critical thinking and problem-solving skills extend beyond the academic realm. In professional settings, individuals who can analyze complex data, propose innovative strategies, and solve intricate problems are highly sought after. These skills also empower individuals to navigate personal challenges, engage in constructive debates, and make informed decisions in everyday life.
- Lifelong Learning: Critical thinking and problem-solving skills are foundations for lifelong learning. They encourage individuals to question assumptions, seek multiple perspectives, and continuously expand their knowledge. Lifelong learners embrace intellectual growth, adapt to new environments, and remain agile in the face of evolving circumstances.
[…] ← Previous […] |
Basic Math | Basic-2 Math | Prealgebra | Workbooks | Glossary | Standards | Site Map | Help
THIRD GRADE MATH STANDARDS
By the end of grade three, students deepen their understanding of place value and their understanding of and skill with addition, subtraction, multiplication, and division of whole numbers. Students estimate, measure, and describe objects in space. They use patterns to help solve problems. They represent number relationships and conduct simple probability experiments.
1.0 Students understand the place value of whole numbers:
1.1 Count, read, and write whole numbers to 10,000.
- Numbers 1-10,000 Card Quiz
1.2 Compare and order whole numbers to 10,000.
- "More or Less" 1-10,000
1.3 Identify the place value for each digit in numbers to 10,000.
1.4 Round off numbers to 10,000 to the nearest ten, hundred, and thousand.
- Rounding to the Nearest Thousand
- Rounding Thousands Memory Game
- Rounding to the Nearest Hundred
- Rounding Hundreds Memory Game
1.5 Use expanded notation to represent numbers (e.g., 3,206 = 3,000 + 200 + 6).
2.0 Students calculate and solve problems involving addition, subtraction, multiplication, and division:
2.1 Find the sum or difference of two whole numbers between 0 and 10,000.
- Four-Digit Addition (No Carrying)
- Four-Digit Addition (Carrying)
- Four-Digit Number Quiz (No Borrowing)
- Four-Digit Number Quiz (Borrowing)
2.2 Memorize to automaticity the multiplication table for numbers between 1 and 10.
- 4, 6, and 8 Multiplication Quiz
- 3, 7, and 9 Multiplication Quiz
- 2, 5, and 10 Multiplication Quiz
2.3 Use the inverse relationship of multiplication and division to compute and check results.
2.4 Solve simple problems involving multiplication of multidigit numbers by one-digit numbers (3,671 x 3 = __).
- One and Two-Digit (No Carrying-H)
- One and Two-Digit (No Carrying-V)
- "More or Less" 1 and 2-Digit (No Carrying)
- One and Two-Digit (Carrying)
- "More or Less" 1 and 2-Digit (Carrying)
- One and Three-Digit (Carrying)
- One and Four-Digit (No Carrying)
2.5 Solve division problems in which a multidigit number is evenly divided by a one-digit number (135 ÷ 5 = __).
- Single-Digit (No Remainder)
- One and Two-Digit (No Remainder-LT 10)
- One and Two-Digit (No Remainder-H)
- One and Two-Digit (No Remainder)
2.6 Understand the special properties of 0 and 1 in multiplication and division.
2.7 Determine the unit cost when given the total cost and number of units.
2.8 Solve problems that require two or more of the skills mentioned above.
3.0 Students understand the relationship between whole numbers, simple fractions, and decimals:
3.1 Compare fractions represented by drawings or concrete materials to show equivalency and to add and subtract simple fractions in context (e.g., 1/2 of a pizza is the same amount as 2/4 of another pizza that is the same size; show that 3/8 is larger than 1/4).
- Identifying Equivalent Fractions (Level I)
- Identifying Equivalent Fractions (Level II)
- "More or Less" Fractions (Level I)
- "More or Less" Fractions (Level II)
3.2 Add and subtract simple fractions (e.g., determine that 1/8 + 3/8 is the same as 1/2).
3.3 Solve problems involving addition, subtraction, multiplication, and division of money amounts in decimal notation and multiply and divide money amounts in decimal notation by using whole-number multipliers and divisors.
- Adding Numbers with Hundredth Values
- Subtracting Numbers with Hundredth Values
- Adding Amounts Under Ten Dollars
- Subtracting Amounts Under Ten Dollars
3.4 Know and understand that fractions and decimals are two different representations of the same concept (e.g., 50 cents is 1/2 of a dollar, 75 cents is 3/4 of a dollar).
ALGEBRA AND FUNCTIONS
1.0 Students select appropriate symbols, operations, and properties to represent, describe, simplify, and solve simple number relationships:
1.1 Represent relationships of quantities in the form of mathematical expressions, equations, or inequalities.
1.2 Solve problems involving numeric equations or inequalities.
1.3 Select appropriate operational and relational symbols to make an expression true (e.g., if 4 __ 3 = 12, what operational symbol goes in the blank?).
1.4 Express simple unit conversions in symbolic form (e.g., __ inches = __ feet x 12).
1.5 Recognize and use the commutative and associative properties of multiplication (e.g., if 5 x 7 = 35, then what is 7 x 5? and if 5 x 7 x 3 = 105, then what is 7 x 3 x 5?).
2.0 Students represent simple functional relationships:
2.1 Solve simple problems involving a functional relationship between two quantities (e.g., find the total cost of multiple items given the cost per unit).
2.2 Extend and recognize a linear pattern by its rules (e.g., the number of legs on a given number of horses may be calculated by counting by 4s or by multiplying the number of horses by 4).
MEASUREMENT AND GEOMETRY
1.0 Students choose and use appropriate units and measurement tools to quantify the properties of objects:
1.1 Choose the appropriate tools and units (metric and U.S.) and estimate and measure the length, liquid volume, and weight/mass of given objects.
1.2 Estimate or determine the area and volume of solid figures by covering them with squares or by counting the number of cubes that would fill them.
1.3 Find the perimeter of a polygon with integer sides.
1.4 Carry out simple unit conversions within a system of measurement (e.g., centimeters and meters, hours and minutes).
- Converting Days to Weeks
- Converting Hours to Days
- Converting Minutes to Months
2.0 Students describe and compare the attributes of plane and solid geometric figures and use their understanding to show relationships and solve problems:
2.1 Identify, describe, and classify polygons (including pentagons, hexagons, and octagons).
2.2 Identify attributes of triangles (e.g., two equal sides for the isosceles triangle, three equal sides for the equilateral triangle, right angle for the right triangle).
2.3 Identify attributes of quadrilaterals (e.g., parallel sides for the parallelogram, right angles for the rectangle, equal sides and right angles for the square).
2.4 Identify right angles in geometric figures or in appropriate objects and determine whether other angles are greater or less than a right angle.
2.5 Identify, describe, and classify common three-dimensional geometric objects (e.g., cube, rectangular solid, sphere, prism, pyramid, cone, cylinder).
- 3-Dimensional Shape Memory Game
- 3-Dimensional Shape Card Quiz
2.6 Identify common solid objects that are the components needed to make a more complex solid object.
STATISTICS, DATA ANALYSIS, AND PROBABILITY
1.0 Students conduct simple probability experiments by determining the number of possible outcomes and make simple predictions:
1.1 Identify whether common events are certain, likely, unlikely, or improbable.
1.2 Record the possible outcomes for a simple event (e.g., tossing a coin) and systematically keep track of the outcomes when the event is repeated many times.
1.3 Summarize and display the results of probability experiments in a clear and organized way (e.g., use a bar graph or a line plot).
1.4 Use the results of probability experiments to predict future events (e.g., use a line plot to predict the temperature forecast for the next day).
1.0 Students make decisions about how to approach problems:
1.1 Analyze problems by identifying relationships, distinguishing relevant from irrelevant information, sequencing and prioritizing information, and observing patterns.
1.2 Determine when and how to break a problem into simpler parts.
2.0 Students use strategies, skills, and concepts in finding solutions:
2.1 Use estimation to verify the reasonableness of calculated results.
2.2 Apply strategies and results from simpler problems to more complex problems.
2.3 Use a variety of methods, such as words, numbers, symbols, charts, graphs, tables, diagrams, and models, to explain mathematical reasoning.
2.4 Express the solution clearly and logically by using the appropriate mathematical notation and terms and clear language; support solutions with evidence in both verbal and symbolic work.
2.5 Indicate the relative advantages of exact and approximate solutions to problems and give answers to a specified degree of accuracy.
2.6 Make precise calculations and check the validity of the results from the context of the problem.
3.0 Students move beyond a particular problem by generalizing to other situations:
3.1 Evaluate the reasonableness of the solution in the context of the original situation.
3.2 Note the method of deriving the solution and demonstrate a conceptual understanding of the derivation by solving similar problems.
3.3 Develop generalizations of the results obtained and apply them in other circumstances.
* The custom search only looks at Rader's sites.
Go for site help or a list of mathematics topics at the site map!
©copyright 2004-2013 Andrew Rader Studios, All rights reserved.
** Andrew Rader Studios does not monitor or review the content available at these web sites. They are paid advertisements and neither partners nor recommended web sites. |
Wolf–Rayet stars, often abbreviated as WR stars, are a rare heterogeneous set of stars with unusual spectra showing prominent broad emission lines of ionised helium and highly ionised nitrogen or carbon. The spectra indicate very high surface enhancement of heavy elements, depletion of hydrogen, and strong stellar winds. Their surface temperatures range from 30,000 K to around 210,000 K, hotter than almost all other stars. They were previously called W-type stars referring to their spectral classification.
Classic (or Population I) Wolf–Rayet stars are evolved, massive stars that have completely lost their outer hydrogen and are fusing helium or heavier elements in the core. A subset of the population I WR stars show hydrogen lines in their spectra and are known as WNh stars; they are young extremely massive stars still fusing hydrogen at the core, with helium and nitrogen exposed at the surface by strong mixing and radiation-driven mass loss. A separate group of stars with WR spectra are the central stars of planetary nebulae (CSPNe), post asymptotic giant branch stars that were similar to the Sun while on the main sequence, but have now ceased fusion and shed their atmospheres to reveal a bare carbon-oxygen core.
All Wolf–Rayet stars are highly luminous objects due to their high temperatures—thousands of times the bolometric luminosity of the Sun (L☉) for the CSPNe, hundreds of thousands L☉ for the Population I WR stars, to over a million L☉ for the WNh stars—although not exceptionally bright visually since most of their radiation output is in the ultraviolet.
In 1867, using the 40 cm Foucault telescope at the Paris Observatory, astronomers Charles Wolf and Georges Rayet discovered three stars in the constellation Cygnus (HD 191765, HD 192103 and HD 192641, now designated as WR 134, WR 135, and WR 137 respectively) that displayed broad emission bands on an otherwise continuous spectrum. Most stars only display absorption lines or bands in their spectra, as a result of overlying elements absorbing light energy at specific frequencies, so these were clearly unusual objects.
The nature of the emission bands in the spectra of a Wolf–Rayet star remained a mystery for several decades. Edward C. Pickering theorized that the lines were caused by an unusual state of hydrogen, and it was found that this "Pickering series" of lines followed a pattern similar to the Balmer series, when half-integer quantum numbers were substituted. It was later shown that these lines resulted from the presence of helium; a chemical element that was discovered in 1868. Pickering noted similarities between Wolf–Rayet spectra and nebular spectra, and this similarity led to the conclusion that some or all Wolf Rayet stars were the central stars of planetary nebulae.
By 1929, the width of the emission bands was being attributed to Doppler broadening, and hence that the gas surrounding these stars must be moving with velocities of 300–2400 km/s along the line of sight. The conclusion was that a Wolf–Rayet star is continually ejecting gas into space, producing an expanding envelope of nebulous gas. The force ejecting the gas at the high velocities observed is radiation pressure. It was well known that many stars with Wolf Rayet type spectra were the central stars of planetary nebulae, but also that many were not associated with an obvious planetary nebula or any visible nebulosity at all.
In addition to helium, Carlyle Smith Beals identified emission lines of carbon, oxygen and nitrogen in the spectra of Wolf–Rayet stars. In 1938, the International Astronomical Union classified the spectra of Wolf–Rayet stars into types WN and WC, depending on whether the spectrum was dominated by lines of nitrogen or carbon-oxygen respectively.
In 1969, several CSPNe with strong O VI emissions lines were grouped under a new "O VI sequence", or just OVI type. These were subsequently referred to as [WO] stars. Similar stars not associated with planetary nebulae were described shortly after and the WO classification was eventually also adopted for population I WR stars.
The understanding that certain late, and sometimes not-so-late, WN stars with hydrogen lines in their spectra are at a different stage of evolution from hydrogen-free WR stars has led to the introduction of the term WNh to distinguish these stars generally from other WN stars. They were previously referred to as WNL stars, although there are late-type WN stars without hydrogen as well as WR stars with hydrogen as early as WN5.
Wolf–Rayet stars were named on the basis of the strong broad emission lines in their spectra, identified with helium, nitrogen, carbon, silicon, and oxygen, but with hydrogen lines usually weak or absent. The first system of classification split these into stars with dominant lines of ionised nitrogen (N III, N IV, and N V) and those with dominant lines of ionised carbon (C III and C IV) and sometimes oxygen (O III - O VI), referred to as WN and WC respectively. The two classes WN and WC were further split into temperature sequences WN5-WN8 and WC6-WC8 based on the relative strengths of the 541.1 nm He II and 587.5 nm He I lines. Wolf–Rayet emission lines frequently have a broadened absorption wing (P Cygni profile) suggesting circumstellar material. A WO sequence has also been separated from the WC sequence for even hotter stars where emission of ionised oxygen dominates that of ionised carbon, although the actual proportions of those elements in the stars are likely to be comparable. WC and WO spectra are formally distinguished based on the presence or absence of C III emission. WC spectra also generally lack the O VI lines that are strong in WO spectra.
The WN spectral sequence was expanded to include WN2 - WN9, and the definitions refined based on the relative strengths of the N III lines at 463.4-464.1 nm and 531.4 nm, the N IV lines at 347.9-348.4 nm and 405.8 nm, and the N V lines at 460.3 nm, 461.9 nm, and 493.3-494.4 nm. These lines are well separated from areas of strong and variable He emission and the line strengths are well correlated with temperature. Stars with spectra intermediate between WN and Ofpe have been classified as WN10 and WN11 although this nomenclature is not universally accepted.
The type WN1 was proposed for stars with neither N IV nor N V lines, to accommodate Brey 1 and Brey 66 which appeared to be intermediate between WN2 and WN2.5. The relative line strengths and widths for each WN sub-class were later quantified, and the ratio between the 541.1 nm He II and 587.5m, He I lines was introduced as the primary indicator of the ionisation level and hence of the spectral sub-class. The need for WN1 disappeared and both Brey 1 and Brey 66 are now classified as WN3b. The somewhat obscure WN2.5 and WN4.5 classes were dropped.
|Spectral Type||Original criteria||Updated criteria||Other features|
|WN2||N V weak or absent||N V and N IV absent||Strong He II, no He I|
|WN2.5||N V present, N IV absent||Obsolete class|
|WN3||N IV ≪ N V, N III weak or absent||He II/He I > 10, He II/C IV > 5||Peculiar profiles, unpredictable N V strength|
|WN4||N IV ≈ N V, N III weak or absent||4 < He II/He I < 10, N V/N III > 2||C IV present|
|WN4.5||N IV > N V, N III weak or absent||Obsolete class|
|WN5||N III ≈ N IV ≈ N V||1.25 < He II/He I < 8, 0.5 < N V/N III < 2||N IV or C IV > He I|
|WN6||N III ≈ N IV, N V weak||1.25 < He II/He I < 8, 0.2 < N V/N III < 0.5||C IV ≈ He I|
|WN7||N III > N IV||0.65 < He II/He I < 1.25||Weak P-Cyg profile He I, He II > N III, C IV > He I|
|WN8||N III ≫ N IV||He II/He I < 0.65||Strong P-Cyg profile He I, He II ≈ N III, C IV weak|
|WN9||N III > N II, N IV absent||N III > N II, N IV absent||P-Cyg profile He I|
|WN10||N III ≈ N II||N III ≈ N II||H Balmer, P-Cyg profile He I|
|WN11||N III weak or absent, N II present||N III ≈ He II, N III weak or absent,||H Balmer, P-Cyg profile He I, Fe III present|
The WC spectral sequence was expanded to include WC4 - WC11, although some older papers have also used WC1 - WC3. The primary emission lines used to distinguish the WC sub-types are C II 426.7 nm, C III at 569.6 nm, C III/IV465.0 nm, C IV at 580.1-581.2 nm, and the O V (and O III) blend at 557.2-559.8 nm. The sequence was extended to include WC10 and WC11, and the subclass criteria were quantified based primarily on the relative strengths of carbon lines to rely on ionisation factors even if there were abundance variations between carbon and oxygen.
|Spectral type||Original criteria||Quantitative criteria||Other features|
|WC4||C IV strong, C II weak, O V moderate||C IV/C III > 32||O V/C III > 2.5||O VI weak or absent|
|WC5||C III ≪ C IV, C III < O V||12.5 < C IV/C III < 32||0.4 < C III/O V < 3||O VI weak or absent|
|WC6||C III ≪ C IV, C III > O V||4 < C IV/C III < 12.5||1 < C III/O V < 5||O VI weak or absent|
|WC7||C III < C IV, C III ≫ O V||1.25 < C IV/C III < 4||C III/O V > 1.25||O VI weak or absent|
|WC8||C III > C IV, C II absent, O V weak or absent||0.5 < C IV/C III < 1.25||C IV/C II > 10||He II/He I > 1.25|
|WC9||C III > C IV, C II present, O V weak or absent||0.2 < C IV/C III < 0.5||0.6 < C IV/C II < 10||0.15 < He II/He I < 1.25|
|WC10||0.06 < C IV/C III < 0.15||0.03 < C IV/C II < 0.6||He II/He I < 0.15|
|WC11||C IV/C III < 0.06||C IV/C II < 0.03||He II absent|
For WO-type stars the main lines used are C IV at 580.1 nm, O IV at 340.0 nm, O V (and O III) blend at 557.2-559.8 nm, O VI at 381.1-383.4 nm, O VII at 567.0 nm, and O VIII at 606.8 nm. The sequence was expanded to include WO5 and quantified based the relative strengths of the O VI/C IV and O VI/O V lines. A later scheme, designed for consistency across classical WR stars and CSPNe, returned to the WO1 to WO4 sequence and adjusted the divisions.
|Spectral type||Original criteria||Quantitative criteria||Other features|
|WO1||O VII ≥ O V, O VIII present||O VI/O V > 12.5||O VI/C IV > 1.5||O VII ≥ O V|
|WO2||O VII < O V, C IV < O VI||4 < O VI/O V < 12.5||O VI/C IV > 1.5||O VII ≤ O V|
|WO3||O VII weak or absent, C IV ≈ O VI||1.8 < O VI/O V < 4||0.1 < O VI/C IV < 1.5||O VII ≪ O V|
|WO4||C IV ≫ O VI||0.5 < O VI/O V < 1.8||0.03 < O VI/C IV < 0.1||O VII ≪ O V|
Detailed modern studies of Wolf Rayet stars can identify additional spectral features, indicated by suffixes to the main spectral classification:
- h for hydrogen emission;
- ha for hydrogen emission and absorption;
- w for weak lines;
- s for strong lines;
- b for broad strong lines;
- d for dust (occasionally vd, pd, or ed for variable, periodic, or episodic dust).
The classification of Wolf Rayet spectra is complicated by the frequent association of the stars with dense nebulosity, dust clouds, or binary companions. A suffix of "+OB" is used to indicate the presence of absorption lines in the spectrum likely to be associated with a more normal companion star, or "+abs" for absorption lines with an unknown origin.
The hotter WR spectral sub-classes are described as early and the cooler ones as late, consistent with other spectral types. WNE and WCE refer to early type spectra while WNL and WCL refer to late type spectra, with the dividing line approximately at sub-class six or seven. There is no such thing as a late WO-type star. There is a strong tendency for WNE stars to be hydrogen-poor while the spectra of WNL stars frequently include hydrogen lines.
Spectral types for the central stars of planetary nebulae are qualified by surrounding them with square brackets (e.g. [WC4]). They are almost all of the WC sequence with the known [WO] stars representing the hot extension of the carbon sequence. There are also a small number of [WN] and [WC/WN] types, only discovered quite recently. Their formation mechanism is as yet unclear.
Temperatures of the planetary nebula central stars tend to the extremes when compared to population I WR stars, so [WC2] and [WC3] are common and the sequence has been extended to [WC12]. The [WC11] and [WC12] types have distinctive spectra with narrow emission lines and no He II and C IV lines.
Certain supernovae observed before their peak brightness show WR spectra. This is due to the nature of the supernova at this point: a rapidly expanding helium-rich ejecta similar to an extreme Wolf Rayet wind. The WR spectral features only last a matter of hours, the high ionisation features fading by maximum to leave only weak neutral hydrogen and helium emission, before being replaced with a traditional supernova spectrum. It has been proposed to label these spectral types with an "X", for example XWN5(h). Similarly, classical novae develop spectra consisting of broad emission bands similar to a Wolf Rayet star. This is caused by the same physical mechanism: rapid expansion of dense gases around an extremely hot central source.
The separation of Wolf Rayet stars from spectral class O stars of a similar temperature depends on the existence of strong emission lines of ionised helium, nitrogen, carbon, and oxygen, but there are a number of stars with intermediate or confusing spectral features. For example, high luminosity O stars can develop helium and nitrogen in their spectra with some emission lines, while some WR stars have hydrogen lines, weak emission, and even absorption components. These stars have been given spectral types such as O3 If∗/WN6 and are referred to as slash stars.
Class O supergiants can develop emission lines of helium and nitrogen, or emission components to some absorption lines. These are indicated by spectral peculiarity suffix codes specific to this type of star:
- f for N iii and He ii emission
- f* for N and He emission with N iv stronger than N iii
- f+ for emission in Si iv in addition to N and He
- parentheses indicating He ii absorption lines instead of emission, e.g. (f)
- double parentheses indicating strong He ii absorption and N iii emission diluted, e.g. ((f+))
These codes may also be combined with more general spectral type qualifiers such as p or a. Common combinations include OIafpe and OIf*, and Ofpe. In the 1970s it was recognised that there was a continuum of spectra from pure absorption class O to unambiguous WR types, and it was unclear whether some intermediate stars should be given a spectral type such as O8Iafpe or WN8-a. The slash notation was proposed to deal with these situations and the star Sk−67°22 was assigned the spectral type O3If*/WN6-A. The criteria for distinguishing OIf*, OIf*/WN, and WN stars have been refined for consistency. Slash star classifications are used when the Hβ line has a P Cygni profile; this is an absorption line in O supergiants and an emission line in WN stars. Criteria for the following slash star spectral types are given, using the nitrogen emission lines at 463.4-464.1 nm, 405.8 nm, and 460.3-462.0 nm, together with a standard star for each type:
|Spectral type||Standard star||Criteria|
|O2If*/WN5||Melnick 35||N iv ≫ N iii, N v ≥ N iii|
|O2.5If*/WN6||WR 25||N iv > N iii, N v < N iii|
|O3.5If*/WN7||Melnick 51||N iv < N iii, N v ≪ N iii|
Another set of slash star spectral types is in use for Ofpe/WN stars. These stars have O supergiant spectra plus nitrogen and helium emission, and P Cygni profiles. Alternatively they can be considered to be WN stars with unusually low ionisation levels and hydrogen. The slash notation for these stars was controversial and an alternative was to extend the WR nitrogen sequence to WN10 and WN11 Other authors preferred to use the WNha notation, for example WN9ha for WR 108. A recent recommendation is to use an O spectral type such as O8Iaf if the 447.1 nm He i line is in absorption and a WR class of WN9h or WN9ha if the line has a P Cygni profile. However, the Ofpe/WN slash notation as well as WN10 and WN11 classifications continue to be widely used.
A third group of stars with spectra containing features of both O class stars and WR stars has been identified. Nine stars in the Large Magellanic Cloud have spectra that contain both WN3 and O3V features, but do not appear to be binaries. Many of the WR stars in the Small Magellanic Cloud also have very early WN spectra plus high excitation absorption features. It has been suggested that these could be a missing link leading to classical WN stars or the result of tidal stripping by a low-mass companion.
The first three Wolf Rayet stars to be identified, coincidentally all with hot O companions, had already been numbered in the HD catalogue. These stars and others were referred to as Wolf–Rayet stars from their initial discovery but specific naming conventions for them would not be created until 1962 in the "fourth" catalogue of galactic Wolf Rayet stars. The first three catalogues were not specifically lists of Wolf Rayet stars and they used only existing nomenclature. The fourth catalogue numbered the Wolf Rayet stars sequentially in order of right ascension. The fifth catalogue used the same numbers prefixed with MR after the author of the fourth catalogue, plus an additional sequence of numbers prefixed with LS for new discoveries. Neither of these numbering schemes is in common use.
The sixth Catalogue of Galactic Wolf Rayet stars was the first to actually bear that name, as well as to describe the previous five catalogues by that name. It also introduced the WR numbers widely used ever since for galactic WR stars. These are again a numerical sequence from WR 1 to WR 158 in order of right ascension. The seventh catalogue and its annex use the same numbering scheme and insert new stars into the sequence using lower case letter suffixes, for example WR 102ka for one of the numerous WR stars discovered in the galactic centre. Modern high volume identification surveys use their own numbering schemes for the large numbers of new discoveries. An IAU working group has accepted recommendations to expand the numbering system from the Catalogue of Galactic Wolf Rayet stars so that additional discoveries are given the closest existing WR number plus a numeric suffix in order of discovery. This applies to all discoveries since the 2006 annex, although some of these have already been named under the previous nomenclature; thus WR 42e is now numbered WR 42-1.
Wolf Rayet stars in external galaxies are numbered using different schemes. In the Large Magellanic Cloud, the most widespread and complete nomenclature for WR stars is from The Fourth Catalogue of Population I Wolf Rayet stars in the Large Magellanic Cloud, prefixed by BAT-99, for example BAT-99 105. Many of these stars are also referred to by their third catalogue number, for example Brey 77. As of 2018, 154 WR stars are catalogued in the LMC, mostly WN but including about twenty-three WCs as well as three of the extremely rare WO class. Many of these stars are often referred to by their RMC (Radcliffe observatory Magellanic Cloud) numbers, frequently abbreviated to just R, for example R136a1.
In the Small Magellanic Cloud, SMC WR numbers are used, usually referred to as AB numbers, for example AB7. There are only twelve known WR stars in the SMC, a very low number thought to be due to the low metallicity of that galaxy.
Wolf–Rayet stars are a normal stage in the evolution of very massive stars, in which strong, broad emission lines of helium and nitrogen ("WN" sequence), carbon ("WC" sequence), and oxygen ("WO" sequence) are visible. Due to their strong emission lines they can be identified in nearby galaxies. About 500 Wolf–Rayets are catalogued in our own Milky Way Galaxy. This number has changed dramatically during the last few years as the result of photometric and spectroscopic surveys in the near-infrared dedicated to discovering this kind of object in the Galactic plane. It is expected that there are fewer than 1,000 WR stars in the rest of the Local Group galaxies, with around 166 known in the Magellanic Clouds, 206 in M33, and 154 in M31. Outside the local group, whole galaxy surveys have found thousands more WR stars and candidates. For example, over a thousand WR stars have been detected in M101, from magnitude 21 to 25. WR stars are expected to be particularly common in starburst galaxies and especially Wolf–Rayet galaxies.
The characteristic emission lines are formed in the extended and dense high-velocity wind region enveloping the very hot stellar photosphere, which produces a flood of UV radiation that causes fluorescence in the line-forming wind region. This ejection process uncovers in succession, first the nitrogen-rich products of CNO cycle burning of hydrogen (WN stars), and later the carbon-rich layer due to He burning (WC and WO-type stars).
The WNh stars are completely different objects from the WN stars without hydrogen. Despite the similar spectra, they are much more massive, much larger, and some of the most luminous stars known. They have been detected as early as WN5h in the Magellanic Clouds. The nitrogen seen in the spectrum of WNh stars is still the product of CNO cycle fusion in the core, but it appears at the surface of the most massive stars due to rotational and convectional mixing while still in the core hydrogen burning phase, rather than after the outer envelope is lost during core helium fusion.
Some Wolf–Rayet stars of the carbon sequence ("WC"), especially those belonging to the latest types, are notable for their production of dust. Usually this occurs in binary systems, as a product of the collision of the stellar winds of the pair, as in the famous binary WR 104; however, the process also occurs in single stars.
A few (roughly 10%) of the central stars of planetary nebulae are, despite their much lower (typically ~0.6 solar) masses, also observationally of the WR-type; i.e. they show emission line spectra with broad lines from helium, carbon and oxygen. Denoted [WR], they are much older objects descended from evolved low-mass stars and are closely related to white dwarfs, rather than to the very young, very massive population I stars that comprise the bulk of the WR class. These are now generally excluded from the class denoted as Wolf–Rayet stars, or referred to as Wolf–Rayet-type stars.
The numbers and properties of Wolf–Rayet stars vary with the chemical composition of their progenitor stars. A primary driver of this difference is the rate of mass loss at different levels of metallicity. Higher metallicity leads to higher mass loss, which affects the evolution of massive stars and also the properties of Wolf–Rayet stars. Higher levels of mass loss cause stars to lose their outer layers before an iron core develops and collapses, so that the more massive red supergiants evolve back to hotter temperatures before exploding as a supernova, and the most massive stars never become red supergiants. In the Wolf–Rayet stage, higher mass loss leads to stronger depletion of the layers outside the convective core, lower hydrogen surface abundances and more rapid stripping of helium to produce a WC spectrum.
These trends can be observed in the various galaxies of the local group, where metallicity varies from near-solar levels in the Milky Way, somewhat lower in M31, lower still in the Large Magellanic Cloud, and much lower in the Small Magellanic Cloud. Strong metallicity variations are seen across individual galaxies, with M33 and the Milky Way showing higher metallicities closer to the centre, and M31 showing higher metallicity in the disk than in the halo. Thus the SMC is seen to have few WR stars compared to its star formation rate and no WC stars at all (one star has a WO spectral type), the Milky Way has roughly equal numbers of WN and WC stars and a large total number of WR stars, and the other main galaxies have somewhat fewer WR stars and more WN than WC types. LMC, and especially SMC, Wolf–Rayet stars have weaker emission and a tendency to higher atmospheric hydrogen fractions. SMC WR stars almost universally show some hydrogen and even absorption lines even at the earliest spectral types, due to weaker winds not entirely masking the photosphere.
The maximum mass of a main-sequence star that can evolve through a red supergiant phase and back to a WNL star is calculated to be around 20 M☉ in the Milky Way, 32 M☉ in the LMC, and over 50 M☉ in the SMC. The more evolved WNE and WC stages are only reached by stars with an initial mass over 25 M☉ at near-solar metallicity, over 60 M☉ in the LMC. Normal single star evolution is not expected to produce any WNE or WC stars at SMC metallicity.
Mass loss is influenced by a star's rotation rate, especially strongly at low metallicity. Fast rotation contributes to mixing of core fusion products through the rest of the star, enhancing surface abundances of heavy elements, and driving mass loss. Rotation causes stars to remain on the main sequence longer than non-rotating stars, evolve more quickly away from the red supergiant phase, or even evolve directly from the main sequence to hotter temperatures for very high masses, high metallicity or very rapid rotation.
Stellar mass loss produces a loss of angular momentum and this quickly brakes the rotation of massive stars. Very massive stars at near-solar metallicity should be braked almost to a standstill while still on the main sequence, while at SMC metallicity they can continue to rotate rapidly even at the highest observed masses. Rapid rotation of massive stars may account for the unexpected properties and numbers of SMC WR stars, for example their relatively high temperatures and luminosities.
Massive stars in binary systems can develop into Wolf Rayet stars due to stripping by a companion rather than inherent mass loss due to a stellar wind. This process is relatively insensitive to the metallicity or rotation of the individual stars and is expected to produce a consistent set of WR stars across all the local group galaxies. As a result, the fraction of WR stars produced through the binary channel, and therefore the number of WR stars observed to be in binaries, should be higher in low metallicity environments. Calculations suggest that the binary fraction of WR stars observed in the SMC should be as high as 98%, although less than half are actually observed to have a massive companion. The binary fraction in the Milky Way is around 20%, in line with theoretical calculations.
A significant proportion of WR stars are surrounded by nebulosity associated directly with the star, not just the normal background nebulosity associated with any massive star forming region, and not a planetary nebula formed by a post-AGB star. The nebulosity presents a variety of forms and classification has been difficult. Many were originally catalogued as planetary nebulae and sometimes only a careful multi-wavelength study can distinguish a planetary nebula around a low mass post-AGB star from a similarly shaped nebula around a more massive core helium-burning star.
A Wolf–Rayet galaxy is a type of starburst galaxy where a sufficient number of WR stars exist that their characteristic emission line spectra become visible in the overall spectrum of the galaxy. Specifically a broad emission feature due to the 468.6 nm He ii and nearby spectral lines is the defining characteristic of a Wolf–Rayet galaxy. The relatively short lifetime of WR stars means that the starbursts in such galaxies must have lasted less than a million years and occurred within the last few million years, or else the WR emission would be swamped by large numbers of other luminous stars.
Theories about how WR stars form, develop, and die have been slow to take shape compared to explanations of less extreme stellar evolution. They are rare, distant, and often obscured, and even into the 21st century many aspects of their lives remain unclear.
Although Wolf–Rayet stars have been clearly identified as an unusual and distinctive class of stars since the 19th century, the nature of these stars was uncertain until towards the end of the 20th century. Before the 1960s, even the classification of WR stars was highly uncertain, and their nature and evolution was essentially unknown. The very similar appearance of the central stars of planetary nebulae (CSPNe) and the much more luminous classical WR stars contributed to the uncertainty.
By about 1960, the distinction between CSPNe and massive luminous classical WR stars was more clear. Studies showed that they were small dense stars surrounded by extensive circumstellar material, but not yet clear whether the material was expelled from the star or contracting onto it. The unusual abundances of nitrogen, carbon, and oxygen, as well as the lack of hydrogen, were recognised, but the reasons remained obscure. It was recognised that WR stars were very young and very rare, but it was still open to debate whether they were evolving towards or away from the main sequence.
By the 1980s, WR stars were accepted as the descendants of massive OB stars, although their exact evolutionary state in relation to the main sequence and other evolved massive stars was still unknown. Theories that the preponderance of WR stars in massive binaries and their lack of hydrogen could be due to gravitational stripping had been largely ignored or abandoned. WR stars were being proposed as possible progenitors of supernovae, particularly the newly discovered type Ib supernovae, which lack hydrogen but are apparently associated with young massive stars.
By the start of the 21st century, WR stars were largely accepted as massive stars that had exhausted their core hydrogen, left the main sequence, and expelled most of their atmospheres, leaving behind a small hot core of helium and heavier fusion products.
Most WR stars, the classical population I type, are now understood as being a natural stage in the evolution of the most massive stars (not counting the less common planetary nebula central stars), either after a period as a red supergiant, after a period as a blue supergiant, or directly from the most massive main-sequence stars. Only the lower mass red supergiants are expected to explode as a supernova at that stage, while more massive red supergiants progress back to hotter temperatures as they expel their atmospheres. Some explode while at the yellow hypergiant or LBV stage, but many become Wolf Rayet stars. They have lost or burnt almost all of their hydrogen and are now fusing helium in their cores, or heavier elements for a very brief period at the end of their lives.
Massive main-sequence stars create a very hot core which fuses hydrogen very rapidly via the CNO process and results in strong convection throughout the whole star. This causes mixing of helium to the surface, a process that is enhanced by rotation, possibly by differential rotation where the core is spun up to a faster rotation than the surface. Such stars also show nitrogen enhancement at the surface at a very young age, caused by changes in the proportions of carbon and nitrogen due to the CNO cycle. The enhancement of heavy elements in the atmosphere, as well as increases in luminosity, create strong stellar winds which are the source of the emission line spectra. These stars develop an Of spectrum, Of* if they are sufficiently hot, which develops into a WNh spectrum as the stellar winds increase further. This explains the high mass and luminosity of the WNh stars, which are still burning hydrogen at the core and have lost little of their initial mass. These will eventually expand into blue supergiants (LBVs?) as hydrogen at the core becomes depleted, or if mixing is efficient enough (e.g. through rapid rotation) they may progress directly to WN stars without hydrogen.
WR stars are likely to end their lives violently rather than fade away to a white dwarf. Thus every star with an initial mass more than about 9 times the Sun would inevitably result in a supernova explosion, many of them from the WR stage.
A simple progression of WR stars from low to hot temperatures, resulting finally in WO-type stars, is not supported by observation. WO-type stars are extremely rare and all the known examples are more luminous and more massive than the relatively common WC stars. Alternative theories suggest either that the WO-type stars are only formed from the most massive main-sequence stars, and/or that they form an extremely short-lived end stage of just a few thousand years before exploding, with the WC phase corresponding to the core helium burning phase and the WO phase to nuclear burning stages beyond. It is still unclear whether the WO spectrum is purely the result of ionisation effects at very high temperature, reflects an actual chemical abundance difference, or if both effects occur to varying degrees.
|Initial Mass (M☉)||Evolutionary Sequence||Supernova Type|
|60+||O → Of → WNh ↔ LBV → [WNL]||IIn|
|45–60||O → WNh → LBV/WNE? → WO||Ib/c|
|20–45||O → RSG → WNE → WC||Ib|
|15–20||O → RSG ↔ (YHG) ↔ BSG (blue loops)||II-L (or IIb)|
|8–15||B → RSG||II-P|
- O: O-type main-sequence star
- Of: evolved O-type showing N and He emission
- BSG: blue supergiant
- RSG: red supergiant
- YHG: yellow hypergiant
- LBV: luminous blue variable
- WNh: WN plus hydrogen lines
- WNL: "late" WN-class Wolf–Rayet star (about WN6 to WN9)
- WNE: "early" WN-class Wolf–Rayet star (about WN2 to WN6)
- WC: WC-class Wolf–Rayet star
- WO: WO-class Wolf–Rayet star
Wolf–Rayet stars form from massive stars, although the evolved population I stars have lost half or more of their initial masses by the time they show a WR appearance. For example, γ2 Velorum A currently has a mass around 9 times the Sun, but began with a mass at least 40 times the Sun. High-mass stars are very rare, both because they form less often and because they have short lives. This means that Wolf–Rayet stars themselves are extremely rare because they only form from the most massive main-sequence stars and because they are a relatively short-lived phase in the lives of those stars. This also explains why type Ibc supernovae are less common than type II, since they result from higher-mass stars.
WNh stars, spectroscopically similar but actually a much less evolved star which has only just started to expel its atmosphere, are an exception and still retain much of their initial mass. The most massive stars currently known are all WNh stars rather than O-type main-sequence stars, an expected situation because such stars show helium and nitrogen at the surface only a few thousand years after they form, possibly before they become visible through the surrounding gas cloud. An alternative explanation is that these stars are so massive that they could not form as normal main-sequence stars, instead being the result of mergers of less extreme stars.
The difficulties of modelling the observed numbers and types of Wolf–Rayet stars through single star evolution have led to theories that they form through binary interactions, which could accelerate the loss of the outer layers of a star through mass exchange. WR 122 is a potential example: it has a flat disk of gas encircling the star, almost 2 trillion miles wide, and may have a companion star that stripped its outer envelope.
It is widely suspected that many type Ib and type Ic supernova progenitors are WR stars, although no conclusive identification has been made of such a progenitor.
Type Ib supernovae lack hydrogen lines in their spectra. The more common type Ic supernovae lack both hydrogen and helium lines in their spectra. The expected progenitors for such supernovae are massive stars that respectively lack hydrogen in their outer layers, or lack both hydrogen and helium. WR stars are just such objects. All WR stars lack hydrogen and in some WR stars, most notably the WO group, helium is also strongly depleted. WR stars are expected to experience core collapse when they have generated an iron core, and the resulting supernova explosions would be of type Ib or Ic. In some cases it is possible that direct collapse of the core to a black hole would not produce a visible explosion.
WR stars are very luminous due to their high temperatures but not visually bright, especially the hottest examples that are expected to make up most supernova progenitors. Theory suggests that the progenitors of type Ibc supernovae observed to date would not be bright enough to be detected, although they place constraints on the properties of those progenitors. A possible progenitor star which has disappeared at the location of supernova iPTF13bvn may be a single WR star, although other analyses favour a less massive binary system with a stripped star or helium giant. The only other possible WR supernova progenitor is for SN 2017ein, and again it is uncertain whether the progenitor is a single massive WR star or binary system.
By far the most visible example of a Wolf–Rayet star is γ2 Velorum (WR 11), which is a bright naked eye star for those located south of 40 degrees northern latitude, although most of the light comes from an O7.5 giant companion. Due to the exotic nature of its spectrum (bright emission lines in lieu of dark absorption lines) it is dubbed the "Spectral Gem of the Southern Skies". The only other Wolf–Rayet star brighter than magnitude 6 is θ Muscae (WR 48), a triple star with two O class companions. Both are WC stars. The "ex" WR star WR 79a (HR 6272) is brighter than magnitude 6 but is now considered to be a peculiar O8 supergiant with strong emission. The next brightest at magnitude 6.4 is WR 22, a massive binary with a WN7h primary.
The most massive and most luminous star currently known, R136a1, is also a Wolf–Rayet star of the WNh type that is still fusing hydrogen in its core. This type of star, which includes many of the most luminous and most massive stars, is very young and usually found only in the centre of the densest star clusters. Occasionally a runaway WNh star such as VFTS 682 is found outside such clusters, probably having been ejected from a multiple system or by interaction with other stars.
Only a minority of planetary nebulae have WR type central stars, but a considerable number of well-known planetary nebulae do have them.
|Planetary nebula||Central star type|
|NGC 5189 (Spiral Planetary Nebula)||[WO1]|
|NGC 6369 (Little Ghost Nebula)||[WO3]|
|MyCn18 (Hourglass Nebula)||[WC]-PG1159|
- Murdin, P. (2001). "Wolf, Charles J E (1827-1918)". The Encyclopedia of Astronomy and Astrophysics. p. 4101. Bibcode:2000eaa..bookE4101.. doi:10.1888/0333750888/4101. ISBN 978-0333750889.
- Huggins, W.; Huggins, Mrs. (1890). "On Wolf and Rayet's Bright-Line Stars in Cygnus". Proceedings of the Royal Society of London. 49 (296–301): 33–46. doi:10.1098/rspl.1890.0063.
- Fowler, A. (1912). "Hydrogen, Spectrum of, Observations of the principal and other series of lines in the". Monthly Notices of the Royal Astronomical Society. 73 (2): 62–105. Bibcode:1912MNRAS..73...62F. doi:10.1093/mnras/73.2.62.
- Wright, W. H. (1914). "The relation between the Wolf–Rayet stars and the planetary nebulae". The Astrophysical Journal. 40: 466. Bibcode:1914ApJ....40..466W. doi:10.1086/142138.
- Beals, C. S. (1929). "On the nature of Wolf–Rayet emission". Monthly Notices of the Royal Astronomical Society. 90 (2): 202–212. Bibcode:1929MNRAS..90..202B. doi:10.1093/mnras/90.2.202.
- Beals, C. S. (1940). "On the Physical Characteristics of the Wolf Rayet Stars and their Relation to Other Objects of Early Type (with Plates VIII, IX)". Journal of the Royal Astronomical Society of Canada. 34: 169. Bibcode:1940JRASC..34..169B.
- Beals, C. S. (1930). "The Wolf-Rayet Stars". Publ. Dominion Astrophysical Observatory. 4: 271–301. Bibcode:1930PDAO....4..271B.
- Beals, C. S. (1933). "Classification and temperatures of Wolf–Rayet stars". The Observatory. 56: 196–197. Bibcode:1933Obs....56..196B.
- Swings, P. (1942). "The Spectra of Wolf–Rayet Stars and Related Objects". The Astrophysical Journal. 95: 112. Bibcode:1942ApJ....95..112S. doi:10.1086/144379.
- Starrfield, S.; Cox, A. N.; Kidman, R. B.; Pensnell, W. D. (1985). "An analysis of nonradial pulsations of the central star of the planetary nebula K1-16". Astrophysical Journal. 293: L23. Bibcode:1985ApJ...293L..23S. doi:10.1086/184484.
- Sanduleak, N. (1971). "On Stars Having Strong O VI Emission". The Astrophysical Journal. 164: L71. Bibcode:1971ApJ...164L..71S. doi:10.1086/180694.
- Barlow, M. J.; Hummer, D. G. (1982). "The WO Wolf–Rayet stars". Wolf–Rayet stars: Observations, physics, evolution; Proceedings of the Symposium, Cozumel, Mexico. 99. pp. 387–392. Bibcode:1982IAUS...99..387B. doi:10.1007/978-94-009-7910-9_51. ISBN 978-90-277-1470-1.
- Smith, Nathan; Conti, Peter S. (2008). "On the Role of the WNH Phase in the Evolution of Very Massive Stars: Enabling the LBV Instability with Feedback". The Astrophysical Journal. 679 (2): 1467–1477. arXiv:0802.1742. Bibcode:2008ApJ...679.1467S. doi:10.1086/586885.
- Sander, A.; Hamann, W.-R.; Todt, H. (2012). "The Galactic WC stars". Astronomy & Astrophysics. 540: A144. arXiv:1201.6354. Bibcode:2012A&A...540A.144S. doi:10.1051/0004-6361/201117830.
- Van Der Hucht, Karel A. (2001). "The VIIth catalogue of galactic Wolf–Rayet stars". New Astronomy Reviews. 45 (3): 135–232. Bibcode:2001NewAR..45..135V. doi:10.1016/S1387-6473(00)00112-3.
- Crowther, P. A.; De Marco, O.; Barlow, M. J. (1998). "Quantitative classification of WC and WO stars". Monthly Notices of the Royal Astronomical Society. 296 (2): 367–378. Bibcode:1998MNRAS.296..367C. doi:10.1046/j.1365-8711.1998.01360.x. ISSN 0035-8711.
- Smith, Lindsey F. (1968). "A revised spectral classification system and a new catalogue for galactic Wolf–Rayet stars". Monthly Notices of the Royal Astronomical Society. 138: 109–121. Bibcode:1968MNRAS.138..109S. doi:10.1093/mnras/138.1.109.
- Crowther, P. A.; Smith, L. J. (1997). "Fundamental parameters of Wolf–Rayet stars. VI. Large Magellanic Cloud WNL stars". Astronomy and Astrophysics. 320: 500. Bibcode:1997A&A...320..500C.
- Conti, Peter S.; Massey, Philip (1989). "Spectroscopic studies of Wolf–Rayet stars. IV - Optical spectrophotometry of the emission lines in galactic and large Magellanic Cloud stars". The Astrophysical Journal. 337: 251. Bibcode:1989ApJ...337..251C. doi:10.1086/167101.
- Smith, L. F.; Michael m., S.; Moffat, A. F. J. (1996). "A three-dimensional classification for WN stars". Monthly Notices of the Royal Astronomical Society. 281 (1): 163–191. Bibcode:1996MNRAS.281..163S. doi:10.1093/mnras/281.1.163.
- Kingsburgh, R. L.; Barlow, M. J.; Storey, P. J. (1995). "Properties of the WO Wolf–Rayet stars". Astronomy and Astrophysics. 295: 75. Bibcode:1995A&A...295...75K. ISSN 0004-6361.
- Smith, J. D. T.; Houck, J. R. (2001). "A Mid-Infrared Spectral Survey of Galactic Wolf–Rayet Stars". The Astronomical Journal. 121 (4): 2115–2123. Bibcode:2001AJ....121.2115S. doi:10.1086/319968.
- Crowther, Paul A. (2007). "Physical Properties of Wolf–Rayet Stars". Annual Review of Astronomy and Astrophysics. 45 (1): 177–219. arXiv:astro-ph/0610356. Bibcode:2007ARA&A..45..177C. doi:10.1146/annurev.astro.45.051806.110615.
- Todt, H.; et al. (2010). "The central star of the planetary nebula PB 8: a Wolf–Rayet-type wind of an unusual WN/WC chemical composition". Astronomy and Astrophysics. 515: A83. arXiv:1003.3419. Bibcode:2010A&A...515A..83T. doi:10.1051/0004-6361/200912183.
- Miszalski, B.; et al. (2012). "IC 4663: the first unambiguous [WN] Wolf–Rayet central star of a planetary nebula". Monthly Notices of the Royal Astronomical Society. 423 (1): 934–947. arXiv:1203.3303. Bibcode:2012MNRAS.423..934M. doi:10.1111/j.1365-2966.2012.20929.x.
- Todt, H.; et al. (2013). "Abell 48 - a rare WN-type central star of a planetary nebula". Monthly Notices of the Royal Astronomical Society. 430 (3): 2301–2312. arXiv:1301.1944. Bibcode:2013MNRAS.430.2302T. doi:10.1093/mnras/stt056.
- Frew, David J.; et al. (2014). "The planetary nebula Abell 48 and its [WN] nucleus". Monthly Notices of the Royal Astronomical Society. 440 (2): 1345–1364. arXiv:1301.3994. Bibcode:2014MNRAS.440.1345F. doi:10.1093/mnras/stu198.
- Hamann, W.-R. (1997). "Spectra of Wolf–Rayet type central stars and their analysis (Invited Review)". Proceedings of the 180th Symposium of the International Astronomical Union. Kluwer Academic Publishers. p. 91. Bibcode:1997IAUS..180...91H.
- Hamann, Wolf-Rainer (1996). "Spectral analysis and model atmospheres of WR central stars (Invited paper)". Astrophysics and Space Science. 238 (1): 31. Bibcode:1996Ap&SS.238...31H. doi:10.1007/BF00645489.
- Liu, Q.-Z.; Hu, J.-Y.; Hang, H.-R.; Qiu, Y.-L.; Zhu, Z.-X.; Qiao, Q.-Y. (2000). "The supernova 1998S in NGC 3877: Another supernova with Wolf–Rayet star features in pre-maximum spectrum" (PDF). Astronomy and Astrophysics Supplement Series. 144 (2): 219–225. Bibcode:2000A&AS..144..219L. doi:10.1051/aas:2000208.
- Groh, Jose H. (2014). "Early-time spectra of supernovae and their precursor winds". Astronomy. 572: L11. arXiv:1408.5397. Bibcode:2014A&A...572L..11G. doi:10.1051/0004-6361/201424852.
- Crowther, Paul A.; Walborn, Nolan R. (2011). "Spectral classification of O2-3.5 If*/WN5-7 stars". Monthly Notices of the Royal Astronomical Society. 416 (2): 1311. arXiv:1105.4757. Bibcode:2011MNRAS.416.1311C. doi:10.1111/j.1365-2966.2011.19129.x.
- Walborn, N. R. (1982). "The O3 stars". Astrophysical Journal. 254: L15. Bibcode:1982ApJ...254L..15W. doi:10.1086/183747.
- Walborn, N. R. (1982). "Ofpe/WN9 circumstellar shells in the Large Magellanic Cloud". Astrophysical Journal. 256: 452. Bibcode:1982ApJ...256..452W. doi:10.1086/159922.
- Smith, L. J.; Crowther, P. A.; Prinja, R. K. (1994). "A study of the luminous blue variable candidate He 3-519 and its surrounding nebula". Astronomy and Astrophysics. 281: 833. Bibcode:1994A&A...281..833S.
- Crowther, P. A.; Bohannan, B. (1997). "The distinction between OIafpe and WNLha stars. A spectral analysis of HD 151804, HD 152408 and HDE 313846". Astronomy and Astrophysics. 317: 532. Bibcode:1997A&A...317..532C.
- Vamvatira-Nakou, C.; Hutsemékers, D.; Royer, P.; Cox, N. L. J.; Nazé, Y.; Rauw, G.; Waelkens, C.; Groenewegen, M. A. T. (2015). "The Herschel view of the nebula around the luminous blue variable star AG Carinae". Astronomy & Astrophysics. 578: A108. arXiv:1504.03204. Bibcode:2015A&A...578A.108V. doi:10.1051/0004-6361/201425090.
- Neugent, Kathryn F; Massey, Philip; Morrell, Nidia (2018). "A Modern Search for Wolf-Rayet Stars in the Magellanic Clouds. IV. A Final Census". The Astrophysical Journal. 863 (2): 181. arXiv:1807.01209. Bibcode:2018ApJ...863..181N. doi:10.3847/1538-4357/aad17d.
- Roberts, M. S. (1962). "The galactic distribution of the Wolf–Rayet stars". The Astronomical Journal. 67: 79. Bibcode:1962AJ.....67...79R. doi:10.1086/108603.
- Campbell, W. W. (1895). "Stars whose spectra contain both bright and dark hydrogen lines". The Astrophysical Journal. 2: 177. Bibcode:1895ApJ.....2..177C. doi:10.1086/140127.
- Gaposchkin, Cecilia Payne (1930). The stars of high luminosity. Harvard Observatory Monographs. 3. p. 1. Bibcode:1930HarMo...3....1P.
- Fleming, Williamina Paton Stevens; Pickering, Edward Charles (1912). "Stars having peculiar spectra". Annals of the Astronomical Observatory of Harvard College. 56 (6): 165. Bibcode:1912AnHar..56..165F.
- Van Der Hucht, Karel A.; Conti, Peter S.; Lundström, Ingemar; Stenholm, Björn (1981). "The Sixth Catalogue of galactic Wolf–Rayet stars, their past and present". Space Science Reviews. 28 (3): 227–306. Bibcode:1981SSRv...28..227V. doi:10.1007/BF00173260.
- Van Der Hucht, K. A. (2006). "New Galactic Wolf–Rayet stars, and candidates". Astronomy and Astrophysics. 458 (2): 453–459. arXiv:astro-ph/0609008. Bibcode:2006A&A...458..453V. doi:10.1051/0004-6361:20065819.
- Shara, Michael M.; Faherty, Jacqueline K.; Zurek, David; Moffat, Anthony F. J.; Gerke, Jill; Doyon, René; Artigau, Etienne; Drissen, Laurent (2012). "A Near-Infrared Survey of the Inner Galactic Plane for Wolf–Rayet Stars. Ii. Going Fainter: 71 More New W-R Stars". The Astronomical Journal. 143 (6): 149. arXiv:1106.2196. Bibcode:2012AJ....143..149S. doi:10.1088/0004-6256/143/6/149.
- Rosslowe, C. K.; Crowther, P. A. (2015). "Spatial distribution of Galactic Wolf–Rayet stars and implications for the global population". Monthly Notices of the Royal Astronomical Society. 447 (3): 2322–2347. arXiv:1412.0699. Bibcode:2015MNRAS.447.2322R. doi:10.1093/mnras/stu2525.
- Breysacher, J.; Azzopardi, M.; Testor, G. (1999). "The fourth catalogue of Population I Wolf–Rayet stars in the Large Magellanic Cloud". Astronomy and Astrophysics Supplement Series. 137: 117–145. Bibcode:1999A&AS..137..117B. doi:10.1051/aas:1999240.
- Breysacher, J. (1981). "Spectral Classification of Wolf–Rayet Stars in the Large Magellanic Cloud". Astronomy and Astrophysics Supplement. 43: 203. Bibcode:1981A&AS...43..203B.
- Hainich, R.; Rühling, U.; Todt, H.; Oskinova, L. M.; Liermann, A.; Gräfener, G.; Foellmi, C.; Schnurr, O.; Hamann, W.-R. (2014). "The Wolf–Rayet stars in the Large Magellanic Cloud. A comprehensive analysis of the WN class". Astronomy & Astrophysics. 565: A27. arXiv:1401.5474. Bibcode:2014A&A...565A..27H. doi:10.1051/0004-6361/201322696.
- Azzopardi, M.; Breysacher, J. (1979). "A search for new Wolf–Rayet stars in the Small Magellanic Cloud". Astronomy and Astrophysics. 75: 120. Bibcode:1979A&A....75..120A.
- Massey, Philip; Olsen, K. A. G.; Parker, J. Wm. (2003). "The Discovery of a 12th Wolf‐Rayet Star in the Small Magellanic Cloud". Publications of the Astronomical Society of the Pacific. 115 (813): 1265–1268. arXiv:astro-ph/0308237. Bibcode:2003PASP..115.1265M. doi:10.1086/379024.
- Massey, Philip; Duffy, Alaine S. (2001). "A Search for Wolf‐Rayet Stars in the Small Magellanic Cloud". The Astrophysical Journal. 550 (2): 713–723. arXiv:astro-ph/0010420. Bibcode:2001ApJ...550..713M. doi:10.1086/319818.
- Bonanos, A. Z.; Lennon, D. J.; Köhlinger, F.; Van Loon, J. Th.; Massa, D. L.; Sewilo, M.; Evans, C. J.; Panagia, N.; Babler, B. L.; Block, M.; Bracker, S.; Engelbracht, C. W.; Gordon, K. D.; Hora, J. L.; Indebetouw, R.; Meade, M. R.; Meixner, M.; Misselt, K. A.; Robitaille, T. P.; Shiao, B.; Whitney, B. A. (2010). "Spitzer SAGE-SMC Infrared Photometry of Massive Stars in the Small Magellanic Cloud". The Astronomical Journal. 140 (2): 416–429. arXiv:1004.0949. Bibcode:2010AJ....140..416B. doi:10.1088/0004-6256/140/2/416.
- Shara, Michael M.; Moffat, Anthony F. J.; Gerke, Jill; Zurek, David; Stanonik, Kathryn; Doyon, René; Artigau, Etienne; Drissen, Laurent; Villar-Sbaffi, Alfredo (2009). "A Near-Infrared Survey of the Inner Galactic Plane for Wolf–Rayet Stars. I. Methods and First Results: 41 New Wr Stars". The Astronomical Journal. 138 (2): 402–420. arXiv:0905.1967. Bibcode:2009AJ....138..402S. doi:10.1088/0004-6256/138/2/402.
- Neugent, Kathryn F.; Massey, Philip (2011). "The Wolf–Rayet Content of M33". The Astrophysical Journal. 733 (2): 123. arXiv:1103.5549. Bibcode:2011ApJ...733..123N. doi:10.1088/0004-637X/733/2/123.
- Neugent, Kathryn F.; Massey, Philip; Georgy, Cyril (2012). "The Wolf–Rayet Content of M31". The Astrophysical Journal. 759 (1): 11. arXiv:1209.1177. Bibcode:2012ApJ...759...11N. doi:10.1088/0004-637X/759/1/11.
- Bibby, Joanne; Shara, M. (2012). "A Study of the Wolf–Rayet Population of M101 using the Hubble Space Telescope". American Astronomical Society. 219: #242.13. Bibcode:2012AAS...21924213B.
- Schaerer, Daniel; Vacca, William D. (1998). "New Models for Wolf‐Rayet and O Star Populations in Young Starbursts". The Astrophysical Journal. 497 (2): 618–644. arXiv:astro-ph/9711140. Bibcode:1998ApJ...497..618S. doi:10.1086/305487.
- Hamann, W.-R.; Gräfener, G.; Liermann, A. (2006). "The Galactic WN stars". Astronomy and Astrophysics. 457 (3): 1015–1031. arXiv:astro-ph/0608078. Bibcode:2006A&A...457.1015H. doi:10.1051/0004-6361:20065052.
- Barniske, A.; Hamann, W.-R.; Gräfener, G. (2006). "Wolf–Rayet stars of the carbon sequence". ASP Conference Series. 353: 243. Bibcode:2006ASPC..353..243B.
- Sander, A. A. C.; Hamann, W. -R.; Todt, H.; Hainich, R.; Shenar, T.; Ramachandran, V.; Oskinova, L. M. (2019). "The Galactic WC and WO stars. The impact of revised distances from Gaia DR2 and their role as massive black hole progenitors". Astronomy and Astrophysics. 621: A92. arXiv:1807.04293. Bibcode:2019A&A...621A..92S. doi:10.1051/0004-6361/201833712.
- Tylenda, R.; Acker, A.; Stenholm, B. (1993). "Wolf–Rayet Nuclei of Planetary Nebulae - Observations and Classification". Astronomy and Astrophysics Supplement. 102: 595. Bibcode:1993A&AS..102..595T.
- Hainich, R.; Pasemann, D.; Todt, H.; Shenar, T.; Sander, A.; Hamann, W.-R. (2015). "Wolf–Rayet stars in the Small Magellanic Cloud. I. Analysis of the single WN stars". Astronomy & Astrophysics. 581: A21. arXiv:1507.04000. Bibcode:2015A&A...581A..21H. doi:10.1051/0004-6361/201526241. ISSN 0004-6361.
- Toalá, J. A.; Guerrero, M. A.; Ramos-Larios, G.; Guzmán, V. (2015). "WISE morphological study of Wolf–Rayet nebulae". Astronomy & Astrophysics. 578: A66. arXiv:1503.06878. Bibcode:2015A&A...578A..66T. doi:10.1051/0004-6361/201525706.
- Foellmi, C.; Moffat, A. F. J.; Guerrero, M. A. (2003). "Wolf–Rayet binaries in the Magellanic Clouds and implications for massive-star evolution – I. Small Magellanic Cloud". Monthly Notices of the Royal Astronomical Society. 338 (2): 360–388. Bibcode:2003MNRAS.338..360F. doi:10.1046/j.1365-8711.2003.06052.x.
- Frew, David J.; Parker, Quentin A. (2010). "Planetary Nebulae: Observational Properties, Mimics and Diagnostics". Publications of the Astronomical Society of Australia. 27 (2): 129–148. arXiv:1002.1525. Bibcode:2010PASA...27..129F. doi:10.1071/AS09040.
- Conti, Peter S.; Vacca, William D. (1994). "HST UV Imaging of the Starburst Regions in the Wolf–Rayet Galaxy He 2-10: Newly Formed Globular Clusters?". Astrophysical Journal Letters. 423: L97. Bibcode:1994ApJ...423L..97C. doi:10.1086/187245.
- Leitherer, Claus; Vacca, William D.; Conti, Peter S.; Filippenko, Alexei V.; Robert, Carmelle; Sargent, Wallace L. W. (1996). "Hubble Space Telescope Ultraviolet Imaging and Spectroscopy of the Bright Starburst in the Wolf–Rayet Galaxy NGC 4214". Astrophysical Journal. 465: 717. Bibcode:1996ApJ...465..717L. doi:10.1086/177456.
- Campbell, W. W. (1894). "The Wolf–Rayet stars". Astronomy and Astro-Physics (Formerly the Sidereal Messenger). 13: 448. Bibcode:1894AstAp..13..448C.
- Zanstra, H.; Weenen, J. (1950). "On physical processes in Wolf-Rayet stars. Paper 1: Wolf-Rayet stars and Beals' hypothesis of pure recombination (Errata: 11 357)". Bulletin of the Astronomical Institutes of the Netherlands. 11: 165. Bibcode:1950BAN....11..165Z.
- Limber, D. Nelson (1964). "The Wolf-Rayet Phenomenon". The Astrophysical Journal. 139: 1251. Bibcode:1964ApJ...139.1251L. doi:10.1086/147863.
- Underhill, Anne B. (1968). "The Wolf-Rayet Stars". Annual Review of Astronomy and Astrophysics. 6: 39–78. Bibcode:1968ARA&A...6...39U. doi:10.1146/annurev.aa.06.090168.000351.
- Underhill, Anne B. (1960). "A Study of the Wolf-Rayet Stars H. D. 192103 and H. D. 192163". Publications of the Dominion Astrophysical Observatory Victoria. 11: 209. Bibcode:1960PDAO...11..209U.
- Sahade, J. (1958). "On the nature of the Wolf-Rayet stars". The Observatory. 78: 79. Bibcode:1958Obs....78...79S.
- Westerlund, B. E.; Smith, L. F. (1964). "Wolf-Rayet Stars in the Large Magellanic Cloud". Monthly Notices of the Royal Astronomical Society. 128 (4): 311–325. Bibcode:1964MNRAS.128..311W. doi:10.1093/mnras/128.4.311.
- Abbott, David C.; Conti, Peter S. (1987). "Wolf-Rayet stars". Annual Review of Astronomy and Astrophysics. 25: 113–150. Bibcode:1987ARA&A..25..113A. doi:10.1146/annurev.aa.25.090187.000553.
- Paczyński, B. (1967). "Evolution of Close Binaries. V. The Evolution of Massive Binaries and the Formation of the Wolf-Rayet Stars". Acta Astronomica. 17: 355. Bibcode:1967AcA....17..355P.
- Nugis, T.; Lamers, H. J. G. L. M. (2000). "Mass-loss rates of Wolf-Rayet stars as a function of stellar parameters". Astronomy and Astrophysics. 360: 227. Bibcode:2000A&A...360..227N.
- Humphreys, R. M. (1991). "The Wolf–Rayet Connection - Luminous Blue Variables and Evolved Supergiants (review)". Proceedings of the 143rd Symposium of the International Astronomical Union. 143. p. 485. Bibcode:1991IAUS..143..485H.
- Groh, Jose H.; Meynet, Georges; Georgy, Cyril; Ekström, Sylvia (2013). "Fundamental properties of core-collapse supernova and GRB progenitors: Predicting the look of massive stars before death". Astronomy & Astrophysics. 558: A131. arXiv:1308.4681. Bibcode:2013A&A...558A.131G. doi:10.1051/0004-6361/201321906.
- Georges Meynet; Cyril Georgy; Raphael Hirschi; Andre Maeder; Phil Massey; Norbert Przybilla; M-Fernanda Nieva (2011). "Red Supergiants, Luminous Blue Variables and Wolf–Rayet stars: The single massive star perspective". Bulletin de la Société Royale des Sciences de Liège. 80 (39): 266–278. arXiv:1101.5873. Bibcode:2011BSRSL..80..266M.
- Tramper, Frank (2013). "The nature of WO stars: VLT/X-Shooter spectroscopy of DR1". Massive Stars: From α to Ω: 187. arXiv:1312.1555. Bibcode:2013msao.confE.187T.
- Eldridge, John J.; Fraser, Morgan; Smartt, Stephen J.; Maund, Justyn R.; Crockett, R. Mark (2013). "The death of massive stars - II. Observational constraints on the progenitors of Type Ibc supernovae". Monthly Notices of the Royal Astronomical Society. 436 (1): 774–795. arXiv:1301.1975. Bibcode:2013MNRAS.436..774E. doi:10.1093/mnras/stt1612.
- Groh, Jose; Meynet, Georges; Ekstrom, Sylvia; Georgy, Cyril (2014). "The evolution of massive stars and their spectra I. A non-rotating 60 Msun star from the zero-age main sequence to the pre-supernova stage". Astronomy & Astrophysics. 564: A30. arXiv:1401.7322. Bibcode:2014A&A...564A..30G. doi:10.1051/0004-6361/201322573.
- Oberlack, U.; Wessolowski, U.; Diehl, R.; Bennett, K.; Bloemen, H.; Hermsen, W.; Knödlseder, J.; Morris, D.; Schönfelder, V.; von Ballmoos, P. (2000). "COMPTEL limits on 26Al 1.809 MeV line emission from gamma2 Velorum". Astronomy and Astrophysics. 353: 715. arXiv:astro-ph/9910555. Bibcode:2000A&A...353..715O.
- Banerjee, Sambaran; Kroupa, Pavel; Oh, Seungkyung (2012). "The emergence of super-canonical stars in R136-type starburst clusters". Monthly Notices of the Royal Astronomical Society. 426 (2): 1416–1426. arXiv:1208.0826. Bibcode:2012MNRAS.426.1416B. doi:10.1111/j.1365-2966.2012.21672.x.
- Mauerhan, Jon C.; Smith, Nathan; Van Dyk, Schuyler D.; Morzinski, Katie M.; Close, Laird M.; Hinz, Philip M.; Males, Jared R.; Rodigas, Timothy J. (2015). "Multiwavelength Observations of NaSt1 (WR 122): Equatorial Mass Loss and X-rays from an Interacting Wolf–Rayet Binary". Monthly Notices of the Royal Astronomical Society. 1502 (3): 1794. arXiv:1502.01794. Bibcode:2015MNRAS.450.2551M. doi:10.1093/mnras/stv257.
- Dessart, Luc; Hillier, D. John; Livne, Eli; Yoon, Sung-Chul; Woosley, Stan; Waldman, Roni; Langer, Norbert (2011). "Core-collapse explosions of Wolf–Rayet stars and the connection to Type IIb/Ib/Ic supernovae". Monthly Notices of the Royal Astronomical Society. 414 (4): 2985. arXiv:1102.5160. Bibcode:2011MNRAS.414.2985D. doi:10.1111/j.1365-2966.2011.18598.x.
- Groh, Jose H.; Georgy, Cyril; Ekström, Sylvia (2013). "Progenitors of supernova Ibc: A single Wolf–Rayet star as the possible progenitor of the SN Ib iPTF13bvn". Astronomy & Astrophysics. 558: L1. arXiv:1307.8434. Bibcode:2013A&A...558L...1G. doi:10.1051/0004-6361/201322369.
- Cerda-Duran, Pablo; Elias-Rosa, Nancy (2018). "Neutron Stars Formation and Core Collapse Supernovae". The Physics and Astrophysics of Neutron Stars. Astrophysics and Space Science Library. 457. pp. 1–56. arXiv:1806.07267. doi:10.1007/978-3-319-97616-7_1. ISBN 978-3-319-97615-0.
- Milisavljevic, D. (2013). "The Progenitor Systems and Explosion Mechanisms of Supernovae". New Horizons in Astronomy (Bash 2013): 9. Bibcode:2013nha..confE...9M.
- Kilpatrick, Charles D.; Takaro, Tyler; Foley, Ryan J.; Leibler, Camille N.; Pan, Yen-Chen; Campbell, Randall D.; Jacobson-Galan, Wynn V.; Lewis, Hilton A.; Lyke, James E.; Max, Claire E.; Medallon, Sophia A.; Rest, Armin (2018). "A potential progenitor for the Type Ic supernova 2017ein". Monthly Notices of the Royal Astronomical Society. 480 (2): 2072–2084. arXiv:1808.02989. Bibcode:2018MNRAS.480.2072K. doi:10.1093/mnras/sty2022.
- Acker, A.; Neiner, C. (2003). "Quantitative classification of WR nuclei of planetary nebulae". Astronomy and Astrophysics. 403 (2): 659–673. Bibcode:2003A&A...403..659A. doi:10.1051/0004-6361:20030391.
- Peña, M.; Rechy-García, J. S.; García-Rojas, J. (2013). "Galactic kinematics of Planetary Nebulae with [WC] central star". Revista Mexicana de Astronomía y Astrofísica. 49: 87. arXiv:1301.3657. Bibcode:2013RMxAA..49...87P.
- Tuthill, Peter G.; Monnier, John D.; Danchi, William C.; Turner, Nils H. (2003). "High-resolution near-IR imaging of the WCd(+OB) environments: Pinwheels". Proceedings of the 212th International Union of Astronomy Symposium. 212. p. 121. Bibcode:2003IAUS..212..121T.
- Monnier, J. D.; Tuthill, P. G.; Danchi, W. C. (1999). "Pinwheel Nebula around WR 98a". The Astrophysical Journal. 525 (2): L97–L100. arXiv:astro-ph/9909282. Bibcode:1999ApJ...525L..97M. doi:10.1086/312352. PMID 10525463.
- Dougherty, S. M.; Beasley, A. J.; Claussen, M. J.; Zauderer, B. A.; Bolingbroke, N. J. (2005). "High-Resolution Radio Observations of the Colliding-Wind Binary WR 140". The Astrophysical Journal. 623 (1): 447–459. arXiv:astro-ph/0501391. Bibcode:2005ApJ...623..447D. doi:10.1086/428494.
- Our students experience Computer Science through a series of engaging and challenging project-based units.
- Creativity and computational thinking underpin our Computer Science: we have developed a broad and balanced curriculum in years 7 to 9 that enables our learners to develop their skills in these areas.
- Programming is a key part of our curriculum, but not its sole component. The new curriculum enables learners to explore digital creativity and takes them on a journey through the fundamental concepts of computing.
- Technology-enhanced learning (TEL) and digital literacy play a key role in the modern school: our curriculum helps students become confident and creative with their use of technology in all their subjects.
- We monitor and assess learners' progression through our curriculum using a bespoke progression map and regular digital assessments, supported by a constant feedback dialogue between staff and students.
Our curriculum has the following six strands:
Algorithms and Computational Thinking
- Algorithmic thinking is a way of getting to a solution through a clear definition of the steps. It is needed when similar problems have to be solved over and over again. Learning algorithms for doing multiplication or division is an example: if simple rules are followed precisely, by a computer or a person, the solution to any multiplication can be found (see the sketch after this list). Algorithmic thinking is the ability to think in terms of sequences and rules as a way of solving problems or understanding situations. It is a core skill that our students develop when they learn to write their own computer programs.
- Computational thinking skills help students solve problems through logical reasoning; they enable students to access our subject content and equip them for the study of this subject at GCSE. These skills relate to thinking and problem solving across the whole curriculum and through life in general. We learn to:
- think algorithmically
- think in terms of decomposition
- think in generalisations, identifying and making use of patterns
- think in abstractions, focusing on just the important details
- and think in terms of evaluation.
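As a concrete illustration of the multiplication example above, the following minimal Python sketch (our own, not part of the curriculum materials) shows how one simple rule, followed precisely by a computer or a person, solves any instance of the problem:

```python
def multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers using only repeated addition."""
    total = 0
    for _ in range(b):  # rule: add a to the running total, b times
        total += a
    return total

assert multiply(6, 7) == 42
assert multiply(123, 0) == 0
print(multiply(12, 34))  # 408
```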
Programming & Development
- Through a series of projects that promote resilience, creative computing, teamwork and problem solving students are taught the fundamentals of programming in block and text-based languages.
- They solve a variety of computational problems; make appropriate use of data structures; and design and develop modular programs that use procedures or functions.
- They tinker and experiment, create new programs, test and fix those programs and learn about physical computing with hardware such as the BBC micro:bit.
Data and data representation
- Students understand how instructions are stored and executed within a computer system; they understand how data of various types (including text, sounds and pictures) can be represented and manipulated digitally, in the form of binary digits (see the sketch after this list).
- They are able to select and use appropriate software to work with different types of data.
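A minimal Python illustration of this idea (the message is arbitrary): the same binary digits encode a piece of text, and reversing the process recovers it.

```python
message = "Hi"

# Encode each character as its 8-bit binary value.
bits = " ".join(format(ord(ch), "08b") for ch in message)
print(bits)  # 01001000 01101001

# Decoding the bits recovers the original text.
decoded = "".join(chr(int(b, 2)) for b in bits.split())
assert decoded == message
```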
Hardware and processing
- Students learn to see computer systems as made up of parts each with different functions. They can make the distinction between hardware and software and understand how computers store and process instructions.
- They learn how computers use sensors and actuators through experiencing this with hands-on programming. They develop an understanding of the input, process, output model and apply it to design their own algorithms.
Communication and Networks
- Students are taught about important security issues. They learn to protect themselves personally online and move on to understanding how networks and the internet are kept safe and secure.
- Students are taught basic web design and development using modern technologies.
Computer Science in Society
- Students reflect on the legal, ethical and environmental impact of technology on society and the individual.
- They learn to document and reflect on their work in a professional way and to give constructive feedback to others. |
A budget is a quantitative expression of a financial plan for a defined period of time. It may include planned sales volumes and revenues, resource quantities, costs and expenses, assets, liabilities and cash flows. It expresses strategic plans of business units, organizations, activities or events in measurable terms.
A budget is the sum of money allocated for a particular purpose and the summary of intended expenditures along with proposals for how to meet them.
A budget is an important concept in microeconomics, which uses a budget line to illustrate the trade-offs between two or more goods. In other words, a budget is an organizational plan stated in monetary terms.
A budget aids the planning of actual operations by forcing managers to consider how conditions might change and what steps should be taken now, and by encouraging managers to consider problems before they arise. It also helps co-ordinate the activities of the organization by compelling managers to examine relationships between their own operation and those of other departments. Other purposes of a budget include:
- To control resources
- To communicate plans to various responsibility center managers.
- To motivate managers to strive to achieve budget goals.
- To evaluate the performance of managers
- To provide visibility into the company's performance
- For accountability
In summary, the purposes of budgeting tools are:
- Tools provide a forecast of revenues and expenditures, that is, construct a model of how a business might perform financially if certain strategies, events and plans are carried out.
- Tools enable the actual financial operation of the business to be measured against the forecast (see the sketch after this list).
- Lastly, tools establish the cost constraint for a project, program, or operation.
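As a minimal sketch of the second point, measuring the actual operation against the forecast reduces to a variance per line item; the category names and figures below are hypothetical:

```python
# Hypothetical forecast and actual figures; outflows are negative.
budget = {"sales": 120_000, "materials": -40_000, "labour": -35_000}
actual = {"sales": 112_000, "materials": -43_500, "labour": -34_000}

for item in budget:
    variance = actual[item] - budget[item]  # negative = worse than plan
    print(f"{item:10} budget {budget[item]:>8}  actual {actual[item]:>8}  "
          f"variance {variance:>7}")
```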
The budget of a company is often compiled annually, but may not be. A finished budget, usually requiring considerable effort, is a plan for the short-term future that typically allows hundreds or even thousands of people in various departments (operations, human resources, IT, etc.) to list their expected revenues and expenses in the final budget.
If the actual figures delivered through the budget period come close to the budget, this suggests that the managers understand their business and have been successfully driving it in the intended direction. On the other hand, if the figures diverge wildly from the budget, this sends an 'out of control' signal, and the share price could suffer. Campaign planners incur two types of cost in any campaign: the first is the cost of the human resources necessary to plan and execute the campaign; the second is the hard cost of the campaign itself.
Event management budget
A budget is a fundamental tool for an event director to predict with reasonable accuracy whether the event will result in a profit, a loss, or break even. A budget can also be used as a pricing tool.
There are two basic approaches, or philosophies, when it comes to budgeting. One approach relies on mathematical models, and the other on people.
The first school of thought believes that financial models, if properly constructed, can be used to predict the future. The focus is on variables, inputs and outputs, drivers and the like. Investments of time and money are devoted to perfecting these models, which are typically held in some type of financial spreadsheet application.
The other school of thought holds that it’s not about models, it’s about people. No matter how sophisticated models can get, the best information comes from the people in the business. The focus is therefore on engaging the managers in the business more fully in the budget process, and building accountability for the results. The companies that adhere to this approach have their managers develop their own budgets. While many companies would say that they do both, in reality the investment of time and money falls squarely in one approach or the other.
The budget of a government is a summary or plan of the intended revenues and expenditures of that government. There are three types of government budget: the operating or current budget, the capital or investment budget, and the cash or cash flow budget.
In the United Kingdom, the budget is prepared by the Treasury team led by the Chancellor of the Exchequer and is presented to Parliament by the Chancellor on Budget Day. It is customary for the Chancellor to stand on the steps of Number 11 Downing Street with his or her team for the media to photograph the Red Box, immediately before going to the House of Commons. Once presented in the House of Commons, the budget is debated and then voted on. Minor changes may be made; however, with the budget written and presented by the party holding a majority in the House of Commons (the Government), the Whips will ensure that it is passed as written by the Chancellor.
In the United States, the federal budget is prepared by the Office of Management and Budget and submitted to Congress for consideration. Invariably, Congress makes many and substantial changes. Nearly all American states are required to have balanced budgets, but the federal government is allowed to run deficits.
In India, the budget is prepared annually by the Budget Division of the Department of Economic Affairs in the Ministry of Finance. This includes supplementary excess grants and, when a proclamation by the President as to failure of constitutional machinery is in operation in relation to a State or a Union Territory, the preparation of the budget of that State. The railway budget is presented separately by the Ministry of Railways, so the budget is presented in two categories: the General Budget and the Railway Budget.
The Philippine budget is considered the most complicated in the world, incorporating multiple approaches in one single budget system: line-item (budget execution), performance (budget accountability), and zero-based budgeting. The Department of Budget and Management (DBM) prepares the National Expenditure Program and forwards it to the Committee on Appropriations of the House of Representatives, which produces a General Appropriations Bill (GAB). The GAB goes through budget deliberations and voting; the same process occurs when the GAB is transmitted to the Philippine Senate.
After both houses of Congress approve the GAB, the President signs the bill into a General Appropriations Act (GAA). Alternatively, the President may veto the GAB and return it to the legislative branch, or leave the bill unsigned for 30 days, after which it lapses into law. There are two types of budget bill veto: the line-item veto and the veto of the whole budget.
Personal or family budget
In a personal or family budget all sources of income (inflows) are identified and expenses (outflows) are planned with the intent of matching outflows to inflows (making ends meet). In consumer theory, the equation restricting an individual or household to spend no more than its total resources is often called the budget constraint.
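For the standard two-good case, the budget constraint can be written as

$$p_1 x_1 + p_2 x_2 \le m,$$

where $p_i$ is the price of good $i$, $x_i$ the quantity purchased, and $m$ the total resources (income) available; the budget line used to illustrate trade-offs between goods is the boundary case $p_1 x_1 + p_2 x_2 = m$.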
Elements of a personal or family budget usually include fixed expenses, monthly payments, insurance, entertainment, and savings.
There are many informational sites and software available for use in personal and family budgeting.
Types of Budget
- Sales budget – an estimate of future sales, often broken down into both units and currency. It is used to create company sales goals.
- Production budget – an estimate of the number of units that must be manufactured to meet the sales goals, created by product-oriented companies. The production budget also estimates the various costs involved with manufacturing those units, including labor and material.
- Capital budget – used to determine whether an organization's long-term investments, such as new machinery, replacement machinery, new plants, new products, and research and development projects, are worth pursuing.
- Cash flow/cash budget – a prediction of future cash receipts and expenditures for a particular time period. It usually covers a period in the short-term future. The cash flow budget helps the business determine when income will be sufficient to cover expenses and when the company will need to seek outside financing (see the sketch after this list).
- Marketing budget – an estimate of the funds needed for promotion, advertising, and public relations in order to market the product or service.
- Project budget – a prediction of the costs associated with a particular company project. These costs include labour, materials, and other related expenses. The project budget is often broken down into specific tasks, with task budgets assigned to each. A cost estimate is used to establish a project budget.
- Revenue budget – consists of revenue receipts of government and the expenditure met from these revenues. Tax revenues are made up of taxes and other duties that the government levies.
- Expenditure budget – includes spending data items.
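As a minimal sketch of the cash budget referenced above (all figures hypothetical), projecting the running balance from forecast monthly receipts and payments shows when outside financing would be needed:

```python
receipts = [50_000, 42_000, 38_000, 61_000]  # forecast cash in, by month
payments = [45_000, 47_000, 52_000, 48_000]  # forecast cash out, by month

balance = 10_000  # opening cash
for month, (cash_in, cash_out) in enumerate(zip(receipts, payments), start=1):
    balance += cash_in - cash_out
    flag = "  <- outside financing needed" if balance < 0 else ""
    print(f"month {month}: closing balance {balance:>7}{flag}")
```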
- "CIMA Official Terminilogy" (PDF).
- O'Sullivan, Arthur; Sheffrin, Steven M. (2003). Economics: Principles in Action. Upper Saddle River, New Jersey 07458: Pearson Prentice Hall. p. 502. ISBN 0-13-063085-3.
- Cliche, P. (2012). “Budget,” in L. Côté and J.-F. Savard (eds.), Encyclopedic Dictionary of Public Administration, [online], http://www.dictionnaire.enap.ca/Dictionnaire/en/home.aspx