The reconstruction of Warsaw in 1945 was the first attempt in history to reconstruct not only individual monuments but the entire historical tissue of a city. Many years later, the rebuilt Old Town made it onto the UNESCO World Heritage List, but it was far from obvious in the first days of 1945 that the enterprise would succeed.
In January 1945, Warsaw was a virtual sea of ruins – an appalling sight documented in numerous photographs. The devastation of the city, which had been home to over 1 million people before the war, was almost complete, and the new Communist authorities even considered moving the capital to another Polish city. According to one idea, Warsaw was to be left the way it was – a lunar landscape of ruins – as a war memorial for future generations.
The city was gradually destroyed throughout WW2. 10% of its buildings had already been destroyed by September 1939. The devastation continued in 1941, when the city suffered under Soviet bombings. In 1943, the destruction was brought to an unprecedented level with the liquidation of the Warsaw ghetto. In the aftermath of the Ghetto Uprising, the entire northern district of Warsaw was literally wiped from the surface of the earth. The final stage of destruction came with the Warsaw Uprising, when large parts of the Old Town, Powiśle, the city centre and Wola were destroyed. Whatever was left was methodically looted and then razed to the ground by the German Vernichtungs- and Verbrennungskommando, even as late as mid-January 1945.
See also The Urn in the Library – How the Nazis burned Warsaw's libraries
As a result, the losses to Warsaw's urban architecture at the beginning of 1945 were estimated at around 84% overall, with industrial infrastructure and historic monuments 90% destroyed and residential buildings 72% destroyed. After the Warsaw Uprising, a city which before the war had been home to over 1 million people was almost deserted, with only a few thousand people living in its ruins.
Stalin needs Warsaw in Yalta
All this made the rebuilding of Warsaw highly unlikely. In fact, at the beginning of 1945, the new Communist authorities were even considering moving the capital to Łódź, which still had most of its buildings standing. There were also serious plans to turn Warsaw into a kind of reserve – a quasi-memorial of war.
Why was Warsaw rebuilt, then? Two reasons proved crucial. First, there was the human factor – starting in January 1945, there was a steady influx of people to the city; former residents as well as all kinds of displaced persons flocked into the frozen ruins, virtually starting the reconstruction process on their own. Then there was politics. Stalin, who was preparing for the Yalta conference, needed international recognition, and this meant a Poland with its capital in Warsaw. On February 3, 1945, the National Council passed a resolution that called for the rebuilding of the capital. A couple of days later, on February 14, the Office for the Reconstruction of the Capital (Biuro Odbudowy Stolicy) was formed.
Warsaw 1945: Reconstruction
With the establishment of the BOS, one of the most ambitious projects in human history was initiated. No one had ever attempted to reconstruct the monuments of a war-torn city on such a scale. The decision was also in blatant contrast with the prevailing conservation doctrine of the times. After the war, Germany, the UK, Holland, France and Italy, when faced with rebuilding towns which had been virtually erased from the face of the earth, reconstructed only selected individual historic buildings. The reconstruction of Warsaw followed exactly the opposite tactic. And, as Jerzy S. Majewski and Tomasz Markiewicz, the authors of the book Building a New Home, explain, the credit for much of the exceptional character of Warsaw's reconstruction actually goes to one person:
Seeking to persuade the political authorities, as well as conservators and architects, Professor Jan Zachwatowicz, who was the head of the BOS’s Department of Monumental Architecture at that time, argued that in the case of Polish monuments destroyed by the Germans during the war, and particularly those in the capital city, full reconstruction was uniquely justified.
Zachwatowicz's motivation was indeed patriotic: a nation and its cultural monuments are one entity, he would say. His stance, however, was not shared by all members of the reconstruction team. Over the whole period of reconstruction (which lasted until 1952), the activities of the BOS were marked by a sharp conflict between the 'monumentalists' centred around Zachwatowicz and the 'modernisers' led by the head of the BOS, Roman Piotrowski, and his deputy Józef Sigalin. This division mirrored the architects' political leanings: Zachwatowicz's group was connected with the AK (Home Army) underground, while Piotrowski and Sigalin belonged to the new order.
In fact, Zachwatowicz's idea often meant reconstructing whole buildings and monuments from scratch – based on documentation, memory and whatever other sources there were, like the drawings of Canaletto. It also meant that a large part of the rebuilt city would be basically... a replica.
The initial range of the reconstruction proposed by Zachwatowicz was eventually drastically reduced. Still, thanks to the determination of Zachwatowicz and his team, huge parts of the Old Town and the Royal Route were meticulously reconstructed.
The pioneering and unique reconstruction effort in Warsaw was recognized internationally as early as 1980, when Warsaw's Old Town was inscribed on UNESCO's World Heritage List. And in 2011, the archives of the BOS were recognized as one of the most valuable examples of documentary heritage and inscribed on the Memory of the World Register.
Building the New Socialist Capital
Historical reconstruction was naturally only a part of the rebuilding effort. The city needed new urban planning, new streets and new buildings to accommodate the growing numbers of new Varsovians.
The immediate post-war years saw the launch and completion of several huge projects, like the building of Trasa W-Z (the East-West Route), a huge engineering achievement at its opening in 1949, with its tunnel running under Castle Square. In Mariensztat, the first post-war housing estate in Warsaw, some of the houses (interestingly, stylized after merchant houses typical of the 17th century) went up in record time.
One of the peculiarities of the first years of the rebuilding effort was that architects still enjoyed artistic freedom, at least until the new official style of Socialist Realism was imposed in 1949. Some projects, like the Warsaw Housing Cooperative (WSM) estate in Koło, designed by Helena and Szymon Syrkus and rooted in the Functionalist style of the 1930s, strike one as shockingly modernist. Just like the Moskwa Cinema, opened in 1950, they were designed before the official introduction of Socialist Realism in Poland, which became Warsaw's obligatory style for many years to come, epitomized in such architectural projects as the Muranów housing estate (1948-1953) or the Palace of Culture and Science (1953-56).
The first period of reconstruction ended with the dissolution of the BOS in 1952, but many reconstruction projects continued well into the 1960s and even later (e.g. the reconstruction of the Royal Castle was completed only in 1974).
The Entire Nation is Building its Capital
In this context, it is worth asking how such a great logistical enterprise was possible at all in a country so economically devastated by the war, especially as Poland was not covered by the Marshall Plan or any similar programme. In fact, the sole source of financing was donations made by the people to the Social Fund for the Rebuilding of the Capital (SFOS). Established in 1945, SFOS was the only legitimate state institution dealing with the financing of the reconstruction effort. It was dissolved only in 1965.
Warsaw was truly, as the popular Socialist slogan went, (re-)built by the whole nation, with donations and workers coming from all around Poland, and with a great deal of volunteer work. The widespread enthusiasm, captured on newsreels of the period, cannot be dismissed as mere Communist propaganda. In fact, it may well have been a prerequisite for the whole rebuilding project.
This goes together with another positive aspect of the rebuilding of Warsaw – its indisputable social achievement, consisting in providing many people with a place to live, and a chance at a new start. This concerned whole social classes, formerly excluded from any participation in urban life.
Destroying the Old
Building the new Socialist capital also involved the destruction of the old face of Warsaw, or rather what was left of it. As Jerzy S. Majewski and Tomasz Markiewicz note, the future capital of Poland was conceived of as a model Socialist city in accordance with the ideology imposed by a foreign power. This meant that the land was nationalized, and many buildings that had survived the war were pulled down. This pertains to many late 19th- and early 20th-century tenement houses, which had made up the most important part of Warsaw's character before WW2.
The BOS Emergency Services made controversial decisions regarding the demolition of dozens of 19th-century buildings, even ones which had already been rebuilt. Such decisions were frequently taken in order to prevent the buildings from being returned to their rightful private owners.
For the Communists, turn-of-the-century architecture was quintessentially bourgeois; for Modernist architects, it was an obstacle to building a better urban environment.
Urban planners and architects, who before the war had designed nothing more than groups of buildings, could now let their imaginations run free and create entire districts, with no regard for former division of title rights, explain Majewski and Markiewicz.
The Long Shadow of Nationalization
This rebuilding process, and the way it was carried out, while a great success in its own right, also had inherent drawbacks which are becoming apparent only today. To facilitate the reconstruction effort, the Communist regime introduced the so-called Dekret Bieruta (the Bierut Decree). Declared in November 1945, the decree stated that all land within the pre-war borders of Warsaw was to be nationalized (although this did not formally apply to buildings, in practice the buildings were also subject to nationalization). Today many historians believe that without it, the rebuilding of the capital on this scale would not have been possible.
But this Communist legal instrument, apart from violating the constitutional right to personal property, also laid the ground for future claims by the pre-war owners and their heirs. This re-privatization has been gaining momentum in recent years: since 1990, around 3,700 addresses have been returned to their pre-war owners (their heirs, or simply those who have acquired the legal rights), and another 2,000 are still waiting to be reprivatized. The process is considered problematic for several reasons, most importantly because these claims often concern land and buildings of public utility and pose a threat to the social fabric of the city.
The solution to this problem, rooted deep in Warsaw's post-war history, still lies ahead.
Author: Mikołaj Gliński, February 3, 2015
Since a map is a representation, the original shape of the represented subject must first be defined. Geodesy, a discipline closely tied to cartography, studies the Earth's shape and how it relates to the features on its surface.
Geoid, from the Greek for "Earth-shaped", is the common definition of our world's shape. This recursive description is necessary because no simple geometric shape matches the Earth: it bulges slightly at the Equator, its terrain is irregular, and its mass (and therefore its gravity field) is unevenly distributed.
Taken into account, those factors greatly complicate the cartographer's job; but, depending on the task, some irregularities can be ignored. For instance, although important locally, terrain relief is minuscule on a planetary scale: the tallest land peak stands less than 9 km above sea level, or nearly 1/1440 of the Earth's diameter; the deepest ocean trench is roughly 1/1150 of the diameter.
For maps covering very large areas, especially worldwide, the Earth may be assumed perfectly spherical, since any shape imprecision is dwarfed by unavoidable errors in data and media resolution. This assumption holds for most of this document. Conversely, for very small areas terrain features dominate and measurements can be based on a flat Earth.
For highly precise maps of smaller regions, the basic ellipsoidal shape can not be ignored. A geodetic datum is a set of parameters (including axis lengths and offset from true center of the Earth) defining a reference ellipsoid. For each mapped region, a different datum can be carefully chosen so that it best matches average sea level, therefore terrain features. Thus, data acquisition for a map involves surveying, or measuring heights and distances of reference points as deviations from a specific datum (a delicate task: due to mentioned irregularities, gravity — and therefore plumb bobs and levers — is not always aligned towards the center of the Earth).
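To make the idea of a datum concrete, here is a minimal sketch assuming a simple Python representation; the WGS 84 values below are the commonly published semi-major axis and inverse flattening, and `local_offset_m` is a hypothetical placeholder for a regional datum's offset from the Earth's centre.

```python
from dataclasses import dataclass

@dataclass
class Datum:
    """A reference ellipsoid plus its offset from the Earth's centre."""
    semi_major_m: float                        # equatorial radius a
    inverse_flattening: float                  # 1/f
    local_offset_m: tuple = (0.0, 0.0, 0.0)    # hypothetical shift for a regional datum

    @property
    def semi_minor_m(self) -> float:
        # b = a * (1 - f)
        return self.semi_major_m * (1.0 - 1.0 / self.inverse_flattening)

# WGS 84 is a widely used global datum; these two defining constants are standard.
wgs84 = Datum(semi_major_m=6_378_137.0, inverse_flattening=298.257223563)
print(f"polar semi-axis ~ {wgs84.semi_minor_m:.1f} m")  # ~6356752.3 m
```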
Several standard datums were adopted for regional or national maps. International datums do exist, but may not fit any particular area as well as a local one.
Although the Earth is a three-dimensional object, when it is assumed spherical its surface has a constant radius, so any point on it can be uniquely identified using a two-coordinate polar system.
Given a polar axis (around which the planet rotates daily), an orthogonal plane which divides the globe in halves (i.e., an equatorial plane) and an arbitrary reference axis on it, any surface point determines a latitude, or the smallest angle, measured from the center of the Earth, from it towards the equatorial plane, and a longitude, or the smallest angle from the arbitrary axis to the projection of the point on the Equator determined by the latitude.
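As a quick illustration of the two-coordinate idea, the sketch below (an illustrative example, not part of the original text) converts a latitude/longitude pair into 3-D Cartesian coordinates on a perfectly spherical Earth with a nominal mean radius of 6371 km.

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean radius of a spherical Earth model

def spherical_to_cartesian(lat_deg: float, lon_deg: float, r: float = EARTH_RADIUS_KM):
    """Map latitude/longitude (degrees) on a sphere of radius r to (x, y, z)."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    x = r * math.cos(lat) * math.cos(lon)
    y = r * math.cos(lat) * math.sin(lon)
    z = r * math.sin(lat)
    return x, y, z

# A point on the Equator at the reference meridian, and the North Pole:
print(spherical_to_cartesian(0.0, 0.0))   # (6371.0, 0.0, 0.0)
print(spherical_to_cartesian(90.0, 0.0))  # (~0.0, 0.0, 6371.0)
```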
A graticule is a spherical grid of coordinate lines over the planetary surface, comprising circles on planes normal — i.e., perpendicular — to the north-south axis, called parallels, and semicircular arcs with that axis as chord, called meridians. True to their name, parallels never cross one another, while all meridians meet at each geographic pole. Every parallel crosses every meridian at an angle of 90°. This and other properties help assess map distortion.
Both sets of parallels and meridians are infinite, but of course only a subset can be included in any map. A point's latitude and longitude, both usually measured in degrees, define the crossing of a parallel and a meridian, respectively. Latitudes are north-to-south angles from the equatorial plane, while longitudes express west-to-east angles from the particular meridian defined by the reference axis. Latitudes conventionally range from 90° South to 90° North, while longitudes range from 180° West to 180° East.
A natural reference, the longest parallel divides the Earth into two equal hemispheres, north and south; thus its name, the Equator. Four other important parallels are defined by astronomical constraints. The geographical north-south axis is actually tilted by slightly less than 23.5° from the perpendicular to the plane of the Earth's orbit around the sun. This tilt accounts for the different seasons and the different lengths of day and night throughout the year.
Every year around December 21st, the solar rays fall vertically upon a parallel near 23.5°S. That is the longest day in the southern hemisphere (notice how most of it is exposed to the sun; that date is called the southern summer solstice), but the shortest day in the northern hemisphere (therefore the winter solstice); not only shorter daylight periods but also a shallower angle of incidence of solar rays explain the lower temperatures north of the Equator at that time of year.
Around June 21st, a similar phenomenon happens along the corresponding parallel in the northern hemisphere, near 23.5°N. By definition, these two parallels enclose the torrid or tropical zone; they are named after the zodiacal constellations where the sun appears on those dates, thus the Tropic of Capricorn (south) and the Tropic of Cancer (north). In regions south of the Tropic of Capricorn the sun at noon always appears north of the observer; at the same hour, in places north of the Tropic of Cancer the sun always appears south of the observer, while in tropical regions the sun appears sometimes south, sometimes north, depending on the season.
Subtracting the axial tilt from 90°, we get the latitudes of the Arctic (about 66.5°N) and Antarctic (about 66.5°S) polar circles. Around December 21 the sun does not set at the Antarctic circle for a full day. Going further south, we get even longer consecutive daylight periods, up to six months at the pole, with correspondingly long nights during the Antarctic winter. Of course, the same occurs at the northern latitudes, with a six-month offset.
Points on the same parallel suffer similar rates of exposure to the sun, therefore are prone to similar climates (disregarding other factors like altitude, wind/sea conditions and terrain).
A point's latitude can be inferred from the sun's angle above the horizon at noon, the moment when the sun appears highest in the sky and a vertical stake casts its shortest shadow. Sailors use instruments like the sextant to measure latitude.
All points on a meridian have the same solar, or local, time. Because day lengths vary throughout the year, correction formulas are applied to convert it to a local mean time. Since it would be impractical for nearby regions to keep different time reckonings (one nautical mile, approximately 1853 meters, corresponds to an angle of 0°1' along the Equator, or a time difference of 4 seconds), the world is divided into 24 time zones, each 15° wide. For everyday purposes, every point inside a zone is considered to have the same standard time (actually, a few countries still use solar time). In practice, the time-jumping boundaries seldom follow the meridians, bending (usually at national or regional borderlines) to keep related places conveniently synchronized.
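The arithmetic behind those figures can be checked with a short sketch (the function names are mine, not from the original): the Earth turns 360° in 24 hours, so one degree of longitude corresponds to 4 minutes of solar time and one arcminute to 4 seconds, and a nominal time zone is simply the longitude divided into 15° slices.

```python
def solar_time_offset_seconds(longitude_deg: float) -> float:
    """Solar-time difference from the reference meridian: 360 deg = 24 h."""
    return longitude_deg * (24 * 3600) / 360.0  # 240 s per degree, 4 s per arcminute

def nominal_time_zone(longitude_deg: float) -> int:
    """Nominal zone number, each zone 15 deg wide, centred on its mid-meridian."""
    return round(longitude_deg / 15.0)

print(solar_time_offset_seconds(1 / 60))  # one arcminute -> 4.0 seconds
print(nominal_time_zone(21.0))            # e.g. ~21 deg E falls nominally in zone +1
```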
Unlike the Equator, there is no naturally defined prime or "main" meridian; one was fixed, mainly by political consensus, in 1884 as the meridian passing through the Royal Observatory in Greenwich, near London, UK. This choice's only obvious advantage is that it sets the opposite meridian — near or at both left and right edges of many world maps — away from most inhabited areas. That opposite meridian is the basis of the international date line, which separates the two halves of the world into different calendar days. Again, this line is somewhat irregular in order to keep national territories, mostly Pacific islands, in a single time zone.
Compared to finding a point's latitude, determining its longitude is a much more involved procedure, usually done by comparing the time of local noon at the point in question with the time of noon at the reference meridian.
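A minimal sketch of that procedure, under the simplifying assumption that local apparent noon can be timed exactly against a clock keeping reference-meridian time: every hour of difference corresponds to 15° of longitude.

```python
def longitude_from_noon(reference_noon_h: float, local_noon_h: float) -> float:
    """Longitude (degrees, east positive) from the time of local noon measured
    on a clock set to the reference meridian. One hour of difference = 15 degrees."""
    return (reference_noon_h - local_noon_h) * 15.0

# Local noon observed at 10:00 by the reference clock -> the observer is 30 deg east.
print(longitude_from_noon(12.0, 10.0))  # 30.0
```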
Assignment #1: (Due Feb. 7)
Vectors vs. Pseudovectors
Show that the vector cross product of two pseudovectors is a pseudovector and that the vector cross product of a vector and a pseudovector is a vector. (Recall that a pseudovector is defined as the result of the cross product of two vectors.)
Consider the effect that a mirror operation has on the components of a vector. Describe any differences between a vector and a pseudovector under the mirror operation.
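As a quick numerical illustration (a sketch using NumPy, not part of the assignment), reflect vectors through the x-y mirror plane (z → −z) and compare how an ordinary vector and a cross product transform:

```python
import numpy as np

# Mirror through the x-y plane: z -> -z.
M = np.diag([1.0, 1.0, -1.0])

a = np.array([1.0, 2.0, 3.0])
b = np.array([-2.0, 0.5, 1.0])

# An ordinary (polar) vector just has its components reflected.
a_mirror = M @ a

# A pseudovector such as c = a x b does NOT equal the cross product of the
# mirrored vectors; it picks up an extra overall sign.
c = np.cross(a, b)
c_from_mirrored = np.cross(M @ a, M @ b)

print(M @ c)             # component-wise reflection of c
print(c_from_mirrored)   # equals -(M @ c): the tell-tale pseudovector behaviour
print(np.allclose(c_from_mirrored, -(M @ c)))  # True
```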
Velocity and Acceleration
a. A body moving in a straight line with uniform acceleration passes two consecutive equal spaces, each of width w, in times t1 and t2 respectively. Show that the acceleration is a = 2w(t1 − t2) / [t1 t2 (t1 + t2)].
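A quick numerical check of that formula (a sketch; the values of u, a and w below are arbitrary test numbers): simulate uniform acceleration, solve for the times taken to cross two consecutive stretches of width w, and compare the formula's output with the acceleration that was put in.

```python
import math

def crossing_time(u: float, a: float, w: float) -> float:
    """Time to cover distance w starting at speed u under constant acceleration a."""
    # Solve (a/2) t^2 + u t - w = 0 for the positive root.
    return (-u + math.sqrt(u * u + 2 * a * w)) / a

u, a, w = 3.0, 2.0, 10.0           # arbitrary test values
t1 = crossing_time(u, a, w)        # time over the first stretch
v1 = u + a * t1                    # speed entering the second stretch
t2 = crossing_time(v1, a, w)       # time over the second stretch

a_formula = 2 * w * (t1 - t2) / (t1 * t2 * (t1 + t2))
print(a_formula)                   # ~2.0, matching the acceleration used above
```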
b. A reel of thread whose rim and spindle are of radius a and b respectively rests on a rough horizontal table. The loose end of the thread passes under the spindle and leads off at an angle θ above the horizontal. Show that a slight tension in the thread will in general wind or unwind it according as θ is less than or greater than a certain critical value. When θ has this critical value, show that there will be no motion unless the tension exceeds another critical value.
4. Orbital Motion
A mass approaches the solar system with velocity v0; had it not been attracted toward the sun, it would have missed the sun by a distance b. Use the laws of conservation of energy and angular momentum and the law of gravitation to compute its closest distance of approach a to the sun. Neglect the gravitational attraction of the planets and assume the sun is fixed.
Note: the answer is a = (b² + b*²)^(1/2) − b*, where b* = GM/v0² and M is the mass of the sun.
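A short sketch checking that closed form against the conservation laws it was derived from (the parameter values are arbitrary test numbers, not solar data):

```python
import math

def closest_approach(v0: float, b: float, GM: float) -> float:
    """Closed form: a = sqrt(b^2 + b*^2) - b*, with b* = GM / v0^2."""
    b_star = GM / v0**2
    return math.sqrt(b**2 + b_star**2) - b_star

# Arbitrary test values (not solar data).
v0, b, GM = 2.0, 5.0, 30.0
a = closest_approach(v0, b, GM)

# At closest approach the velocity is purely tangential, so angular-momentum
# conservation gives v = v0 * b / a.  Check that total energy is conserved:
v = v0 * b / a
energy_far   = 0.5 * v0**2            # per unit mass, far from the sun
energy_close = 0.5 * v**2 - GM / a
print(a)
print(math.isclose(energy_far, energy_close))  # True if the formula is right
```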
1. Would the mass density of an object be the same if the object were on the moon rather than on the earth? Would the weight be the same?
2. A steel column in a building has a cross-sectional area of 3500 cm² and supports a weight of 2.5 × 10^5 N. Find the stress on the column.
3. Each vertical steel column of an office building supports a weight of 1.30 × 10^5 N and is compressed by 5.90 × 10^(-3) cm.
(a) Find the compression in each column if a weight of 5.50 × 10^5 N is supported.
(b) If the compression of each steel column is 0.0710 cm, what weight is supported by each column?
4. The specific gravity of an unknown substance is 0.80. Will it float on or sink in gasoline?
5. In a deep dive, a whale is appreciably compressed by the pressure of the surrounding water. What happens to the whale's density?
6. Would it be slightly more difficult to draw soda through a straw at sea level than on top of a very high mountain? Explain.
7. When a steadily flowing gas flows from a larger-diameter pipe to a smaller-diameter pipe, what happens to the spacing between its streamlines?
8. The large piston in a hydraulic lift has an area of 250 cm². What force must be applied to the small piston, with an area of 25 cm², in order to raise a car of mass 1500 kg?
9. Water flows through a 13.0 cm diameter fire hose at a rate of 4.53 m/s.
(a) What is the rate of flow through the hose?
(b) How many liters pass through the hose in 25.0 min?
10. Suppose you cut a small gap in a metal ring. If you heat the ring, will the gap become wider or narrower?
11. The temperature of a 1 m long aluminum rod is 20 °C. If the temperature is increased to 70 °C, what is the length of the rod if the coefficient of linear expansion for aluminum is 2.3 × 10^(-5)/°C?
12. The specific heat capacity of copper is 0.092 cal/(g·°C). Show that the amount of heat needed to raise the temperature of a 10-gram piece of copper from 0 °C to 100 °C is 92 calories. How does this compare with the heat needed to raise the temperature of the same mass of water through the same difference?
13. A liter of water is used to cool electronics. If 1,000 joules of heat is given off by the electronics, by what temperature does the water increase? One liter of water has a mass of 1 kg and a specific heat of 4.184 J/(g·°C).
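A few of these can be checked with a short sketch (my own helper values; g is taken as 9.8 m/s² and the steel columns are assumed to deform linearly, i.e. compression proportional to load):

```python
# Q2: stress = force / area, with the area converted from cm^2 to m^2.
force_n = 2.5e5
area_m2 = 3500 * 1e-4
print(force_n / area_m2)               # ~7.1e5 N/m^2 (Pa)

# Q3(a): assuming linear elasticity, compression scales with the load.
print(5.90e-3 * (5.50e5 / 1.30e5))     # ~2.50e-2 cm

# Q8: Pascal's principle, F_small = F_large * (A_small / A_large).
print(1500 * 9.8 * (25 / 250))         # ~1470 N

# Q11: linear expansion, new length = L + alpha * L * delta_T.
print(1.0 + 2.3e-5 * 1.0 * (70 - 20))  # ~1.00115 m

# Q13: delta_T = Q / (m * c), with m in grams and c in J/(g*degC).
print(1000 / (1000 * 4.184))           # ~0.239 degC
```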
How do you prove something is a theorem?
In order for a theorem to be proved, it must in principle be expressible as a precise, formal statement. However, theorems are usually expressed in natural language rather than in a completely symbolic form—with the presumption that a formal statement can be derived from the informal one.
How do you prove a theorem in logic?
To prove a theorem you must construct a deduction, with no premises, such that its last line contains the theorem (formula). To get the information needed to deduce a theorem (the sentence letters that appear in the theorem) you can use two rules of sentential deduction: EMI and Addition.
How do you prove theorems natural deductions?
In natural deduction, to prove an implication of the form P ⇒ Q, we assume P, then reason under that assumption to try to derive Q. If we are successful, then we can conclude that P ⇒ Q. In a proof, we are always allowed to introduce a new assumption P, then reason under that assumption.
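As a minimal illustration (a sketch in Lean 4, not from the original answer), the implication-introduction pattern described above is exactly how one proves P ⇒ (Q ⇒ P): assume P, then assume Q, then conclude P.

```lean
-- A minimal sketch of implication introduction in Lean 4:
-- assume P, then assume Q, and conclude P from the first assumption.
theorem p_imp_q_imp_p (P Q : Prop) : P → (Q → P) :=
  fun hP => fun _hQ => hP
```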
What is a logic theorem?
A theorem in logic is a statement which can be shown to be the conclusion of a logical argument which depends on no premises except axioms. A sequent which denotes a theorem ϕ is written ⊢ϕ, indicating that there are no premises.
What is an example of a theorem?
A result that has been proved to be true (using operations and facts that were already known). Example: The “Pythagoras Theorem” proved that a² + b² = c² for a right-angled triangle. Lots more!
What is the easiest way to learn theorems?
The steps to understanding and mastering a theorem follow the same lines as the steps to understanding a definition.
- Make sure you understand what the theorem says. …
- Determine how the theorem is used. …
- Find out what the hypotheses are doing there. …
- Memorize the statement of the theorem.
Can one prove invalidity with the natural deduction proof method?
So, using natural deduction, you cannot prove that an invalid argument is invalid. Since we aren't guaranteed a way to prove invalidity, we can't count on natural deduction for that purpose.
How do you solve natural deductions?
To prove an equivalence between A and B, you do the proof both ways: prove from A to B, and then prove from B to A.
What is natural deduction system explain in detail?
Natural Deduction (ND) is a common name for the class of proof systems composed of simple and self-evident inference rules based upon methods of proof and traditional ways of reasoning that have been applied since antiquity in deductive practice.
What are the types of theorem?
For Class 10, some of the most important theorems are:
- Pythagoras Theorem.
- Midpoint Theorem.
- Remainder Theorem.
- Fundamental Theorem of Arithmetic.
- Angle Bisector Theorem.
- Inscribed Angle Theorem.
- Ceva’s Theorem.
- Bayes’ Theorem.
How many theorems are there?
Wikipedia lists 1,123 theorems , but this is not even close to an exhaustive list—it is merely a small collection of results well-known enough that someone thought to include them.
How do you write a theorem in math?
One way to do that is to write a proof showing that all three sides of one triangle are congruent to all three sides of the other triangle.
What are the 3 types of theorem?
What are the 5 theorems?
In particular, he has been credited with proving the following five theorems: (1) a circle is bisected by any diameter; (2) the base angles of an isosceles triangle are equal; (3) the opposite (“vertical”) angles formed by the intersection of two lines are equal; (4) two triangles are congruent (of equal shape and size …
How do you solve a theorem?
For a right triangle with legs of 6 and 8, we can set up the equation 6² + 8² = x². Since 6² = 36 and 8² = 64, x² = 100, so x = 10.
What Pythagoras theorem states?
Pythagorean theorem, the well-known geometric theorem that the sum of the squares on the legs of a right triangle is equal to the square on the hypotenuse (the side opposite the right angle)—or, in familiar algebraic notation, a² + b² = c².
What is Pythagoras theorem Class 10?
Pythagoras theorem states that “In a right-angled triangle, the square of the hypotenuse side is equal to the sum of squares of the other two sides“. The sides of this triangle have been named Perpendicular, Base and Hypotenuse.
Is a theorem always true?
A theorem is a statement having a proof in such a system. Once we have adopted a given proof system that is sound, and the axioms are all necessarily true, then the theorems will also all be necessarily true. In this sense, there can be no contingent theorems.
What is the difference between a theory and a theorem?
A theorem is a result that can be proven to be true from a set of axioms. The term is used especially in mathematics, where the axioms are those of mathematical logic and of the systems in question. A theory is a set of ideas used to explain why something is true, or a set of rules on which a subject is based.
Why is the Pythagorean Theorem a theorem?
The misconception is that the Pythagorean theorem is a statement about the relationship between the lengths of the sides of right triangles found in the real world. It is not. It is a statement about the relationship between the lengths of the sides of a mathematical concept known as a right triangle.
What is difference between theorem and lemma?
Theorem : A statement that has been proven to be true. Proposition : A less important but nonetheless interesting true statement. Lemma: A true statement used in proving other true statements (that is, a less important theorem that is helpful in the proof of other results).
Do I need to prove lemma?
Theorem — a mathematical statement that is proved using rigorous mathematical reasoning. In a mathematical paper, the term theorem is often reserved for the most important results. Lemma — a minor result whose sole purpose is to help in proving a theorem. It is a stepping stone on the path to proving a theorem.
Can a lemma be proved?
A lemma is an easily proved claim which is helpful for proving other propositions and theorems, but is usually not particularly interesting in its own right.
Food fortification or enrichment is the process of adding micronutrients (essential trace elements and vitamins) to food. It can be purely a commercial choice to provide extra nutrients in a food, or sometimes it is a public health policy which aims to reduce numbers of people with dietary deficiencies in a population.
Diets that lack variety can be deficient in certain nutrients. Sometimes the staple foods of a region can lack particular nutrients, due to the soil of a region, or because of the inherent inadequacy of the normal diet. Addition of micronutrients to staples and condiments can prevent large-scale deficiency diseases in these cases.
While it is true that both fortification and enrichment refer to the addition of nutrients to food, the precise definitions do slightly vary. As defined by the World Health Organization (WHO) and the Food and Agriculture Organization of the United Nations (FAO), fortification refers to "the practice of deliberately increasing the content of an essential micronutrient, i.e. vitamins and minerals (including trace elements), in a food irrespective of whether the nutrients were originally in the food before processing or not, so as to improve the nutritional quality of the food supply and to provide a public health benefit with minimal risk to health," whereas enrichment is defined as "synonymous with fortification and refers to the addition of micronutrients to a food which are lost during processing."
Food fortification was identified as the second strategy of four by the WHO and FAO to begin decreasing the incidence of nutrient deficiencies at the global level.
As outlined by the FAO, the most common fortified foods are:
- Cereals and cereal based products
- Milk and Milk products
- Fats and oils
- Accessory food items
- Tea and other beverages
- Infant formulas
Types of Food Fortification
The four main methods of food fortification (named so as to indicate the procedure used to fortify the food) are:
- Biofortification (i.e. breeding crops to increase their nutritional value, which can include both conventional selective breeding, and modern genetic modification)
- Synthetic biology (i.e. addition of probiotic bacteria to foods)
- Commercial and industrial fortification (i.e. flour, rice, oils (common cooking foods))
- Home fortification (e.g. vitamin D drops)
The WHO and FAO, among many other nationally recognized organizations, have recognized that there are over 2 billion people worldwide who suffer from a variety of micronutrient deficiencies. In 1992, 159 countries pledged at the FAO/WHO International Conference on Nutrition to make efforts to help combat these issues of micronutrient deficiencies, highlighting the importance of decreasing the number of those with iodine, vitamin A, and iron deficiencies. A significant statistic that led to these efforts was the discovery that approximately 1 in 3 people worldwide were at risk for either an iodine, vitamin A, or iron deficiency. Although it is recognized that food fortification alone will not combat this deficiency, it is a step towards reducing the prevalence of these deficiencies and their associated health conditions.
In Canada, the Food and Drug Regulations have outlined specific criteria which justify food fortification:
- To replace nutrients which were lost during manufacturing of the product (i.e. the manufacturing of flour)
- To act as a public health intervention
- To ensure the nutritional equivalence of substitute foods (i.e. to make butter and margarine similar in content, soy milk and cow's milk, etc.)
- To ensure the appropriate vitamin and mineral nutrient composition of foods for special dietary purposes (i.e. Boost, gluten-free products, low sodium, or any other products specifically designed for special dietary requirements from an individual).
There are also several advantages to approaching nutrient deficiencies among populations via food fortification as opposed to other methods: it can treat a population without requiring specific dietary interventions or a change in dietary patterns, it delivers the nutrient continuously, it does not require individual compliance, and it can maintain nutrient stores more efficiently if the fortified food is consumed on a regular basis.
Several organizations such as the WHO, FAO, Health Canada, and the Nestlé Research Center acknowledge that there are limitations to food fortification. Any discussion of nutrient deficiencies immediately raises the topic of nutrient toxicities: fortification may deliver toxic amounts of a nutrient to an individual, along with the associated side effects. As seen in the case of fluoride below, the result can be irreversible staining of the teeth. Although this is a relatively minor toxic effect, there are several more severe ones.
The WHO states that limitations to food fortification may include: human rights issues indicating that consumers have the right to choose if they want fortified products or not, the potential for insufficient demand of the fortified product, increased production costs leading to increased retail costs, the potential that the fortified products will still not be a solution to nutrient deficiencies amongst low income populations who may not be able to afford the new product, and children who may not be able to consume adequate amounts thereof.
Food safety worries led to legislation in Denmark in 2004 restricting foods fortified with extra vitamins or minerals. Products banned included Rice Krispies, Shreddies, Horlicks, Ovaltine and Marmite.
The Danes said Kellogg's wanted to include "toxic" doses of vitamins in Corn Flakes, Rice Krispies and Special K which, if eaten regularly, could damage children's livers and kidneys and harm fetuses in pregnant women.
One factor that limits the benefits of food fortification is that isolated nutrients added back into a processed food that has had many of its nutrients removed do not always end up as bioavailable as they would be in the original, whole food. An example is skim milk that has had the fat removed and then had vitamin A and vitamin D added back. Vitamins A and D are both fat-soluble, not water-soluble, so a person consuming skim milk in the absence of fats may not absorb as much of these vitamins as they would from drinking whole milk.
Phytochemicals such as polyphenols can also impact nutrient absorption.
Different forms of micronutrients
There is a concern that micronutrients are legally defined in such a way that does not distinguish between different forms, and that fortified foods often have nutrients in a balance that would not occur naturally. For example, in the U.S., food is fortified with folic acid, which is one of the many naturally-occurring forms of folate, and which only contributes a minor amount to the folates occurring in natural foods. In many cases, such as with folate, it is an open question of whether or not there are any benefits or risks to consuming folic acid in this form.
In many cases, the micronutrients added to foods in fortification are synthetic.
In some cases, certain forms of micronutrients can be actively toxic in a sufficiently high dose, even if other forms are safe at the same or much higher doses. There are examples of such toxicity in both synthetic and naturally-occurring forms of vitamins. Retinol, the active form of Vitamin A, is toxic in a much lower dose than other forms, such as beta carotene. Menadione, a synthetic form of Vitamin K, is also known to be toxic.
There are several main groups of food supplements like:
- Vitamins and co-vitamins
- Essential minerals
- Essential fatty acids
- Essential amino acids
Examples of fortified foods
Many foods and beverages worldwide have been fortified, whether a voluntary action by the product developers or by law. Although some may view these additions as strategic marketing schemes to sell their product, there is a lot of work that must go into a product before simply fortifying it. In order to fortify a product, it must first be proven that the addition of this vitamin or mineral is beneficial to health, safe, and an effective method of delivery. The addition must also abide by all food and labeling regulations and support nutritional rationale. From a food developer's point of view, they also need to consider the costs associated with this new product and whether or not there will be a market to support the change.
Examples of foods and beverages that have been fortified and shown to have positive health effects:
"Iodine deficiency disorder (IDD) is the single greatest cause of preventable mental retardation. Severe deficiencies cause cretinism, stillbirth and miscarriage. But even mild deficiency can significantly affect the learning ability of populations........ Today over 1 billion people in the world suffer from iodine deficiency, and 38 million babies born every year are not protected from brain damage due to IDD."—Kul Gautam, Deputy Executive Director, UNICEF, October 2007
Iodised salt has been used in the United States since before World War II. It was discovered in 1821 that goiters could be treated by the use of iodized salts. However, it was not until 1916 that the use of iodized salts could be tested in a research trial as a preventative measure against goiters. By 1924, it became readily available in the US.
In many industrialized countries, the addition of folic acid to flour has prevented a significant number of neural tube defects (NTDs) in infants. Two common types of NTDs, spina bifida and anencephaly, affect approximately 2500-3000 infants born in the US annually. Research trials have shown that supplementing pregnant mothers with folic acid can reduce the incidence of NTDs by 72%.
Niacin has been added to bread in the USA since 1938 (when voluntary addition started), a programme which substantially reduced the incidence of pellagra. Pellagra, now understood as a niacin deficiency disease, was recognized by doctors as early as 1755, although it did not officially receive its name until 1771. Pellagra was seen amongst poor families who used corn as their main dietary staple. Although corn itself does contain niacin, it is not in a bioavailable form unless it undergoes nixtamalization (treatment with alkali, traditional in Native American cultures), and therefore it did not contribute to the overall intake of niacin. Although pellagra can still be seen in developing countries, fortification of food with niacin played a huge role in eliminating the disease.
Diseases associated with niacin deficiency include pellagra, whose signs and symptoms are known as the three Ds: dermatitis, dementia, and diarrhea. Others may include vascular or gastrointestinal diseases.
Common diseases which present a high frequency of niacin deficiency: alcoholism, anorexia nervosa, HIV infection, gastrectomy, malabsorptive disorders, certain cancers and their associated treatments.
Since vitamin D is a fat-soluble vitamin, it cannot be added to a wide variety of foods; it is commonly added to margarine, vegetable oils and dairy products. During the late 1800s, after cures for scurvy and beriberi had been discovered, researchers investigated whether the disease later known as rickets could also be cured by food. Their results showed that sunlight exposure and cod liver oil were the cure, though it was not until the 1930s that vitamin D was actually linked to curing rickets. This discovery led to the fortification of common foods such as milk, margarine, and breakfast cereals, and took the astonishing statistic of approximately 80-90% of children showing varying degrees of bone deformation due to vitamin D deficiency down to the point where rickets became a very rare condition.
Risk factors for vitamin D deficiencies include:
- In infants, being exclusively or primarily breast-fed
- Dark skin
- Living in cold climates and having little sun exposure
- Being elderly
- Covering all or almost all of one's skin while outdoors
- Liberal use of high-SPF sunscreens
- Fat malabsorption syndromes
- Inflammatory bowel diseases
- Obesity
Diseases associated with a vitamin D deficiency include rickets, osteoporosis, and certain types of cancer (breast, prostate, colon and ovaries). It has also been associated with increased risks for fractures, heart disease, type 2 diabetes, autoimmune and infectious diseases, asthma and other wheezing disorders, myocardial infarction, hypertension, congestive heart failure, and peripheral vascular disease.
Although fluoride is not considered an essential mineral, it is seen as crucial in the prevention of tooth decay and the maintenance of dental health. In the mid-1900s it was discovered that towns with a high level of fluoride in their water supply had residents whose teeth showed both brown spotting and an unusual resistance to dental caries. This led to the fortification of water supplies with fluoride in amounts safe enough to retain the resistance to dental caries while avoiding the staining caused by fluorosis (a condition caused by fluoride toxicity).
The tolerable upper intake level (UL) set for fluoride ranges from 0.7 mg/day for infants aged 0–6 months to 10 mg/day for adults over the age of 19.
Some other examples of fortified foods:
- Calcium is frequently added to fruit juices, carbonated beverages and rice.
- White rice is frequently enriched to replace nutrients lost during milling or to add extra ones.
- "Golden rice" is a variety of rice which has been genetically modified to produce beta carotene.
- Amylase-rich flour is used in food preparation to increase dietary intake.
Fortification for body building
Although it has some scientific basis, the ethics of this practice are controversial: it is the use of foods and food supplements to achieve a defined health goal. A common example is the extent to which bodybuilders use amino acid mixtures, vitamins and phytochemicals to enhance natural hormone production, increase muscle and reduce fat. The literature does not agree on an appropriate method of fortification for bodybuilders, and the practice may not be recommended due to safety concerns.
Fortification for medical treatment
There is interest in the use of food supplements in established medical conditions. This nutritional supplementation using foods as medicine (nutraceuticals) has been effectively used in treating disorders affecting the immune system up to and including cancers. This goes beyond the definition of "food supplement", but should be included for the sake of completeness.
- World Health Organization and Food and Agriculture Organization of the United Nations Guidelines on food fortification with micronutrients. 2006 [cited on 2011 Oct 30].
- Micronutrient Fortification of Food: Technology and Quality Control
- Liyanage, C.; Hettiarachchi, M. (2011). "Food fortification". Ceylon Medical Journal 56 (3): 124–127. doi:10.4038/cmj.v56i3.3607. PMID 22164753.
- Darnton-Hill, E. (1998). Overview: Rationale and elements of a successful food-fortification programme. Food and Nutrition Bulletin. 19(2):92-100
- Food Safety Network. Food Fortification. 2011 [cited 2011 Oct 30]. Available from: http://www.uoguelph.ca/foodsafetynetwork/food-fortification
- Bruno Waterfield (24 May 2011). "Marmite made illegal in Denmark". The Telegraph.
- James Meikle and Luke Harding (12 August 2004). "Denmark bans Kellogg's vitamins". The Guardian.
- A. David Smith, "Folic acid fortification: the good, the bad, and the puzzle of vitamin", American Society for Clinical Nutrition, Vol. 85, No. 1, 3-5. January 2007.
- Higdon (February 2008). "Vitamin K". Linus Pauling Institute, Oregon State University. Retrieved 2008-04-12.
- http://journals.cambridge.org/download.php?file=%2FPNS%2FPNS49_01%2FS0029665190000106a.pdf&code=bff50d94f640323690aecae20ceafeb1[dead link]
- Salt Institute. Iodized Salt. 2011 [cited 2011 Oct 30]. Available from: http://www.saltinstitute.org/Uses-benefits/Salt-in-Food/Essential-nutrient/Iodized-salt
- History of Iodized Salt. ICCID. 2011 [cited 2011 Oct 30] Available from: http://www.iccidd.org/pages/protecting-children/fortifying-salt/history-of-salt-iodization.php
- Linus Pauling Institute Micronutrient Research for Optimum Health. Micronutrient Information Center. 2003 [cited 2011 Oct 30]. Available from: http://lpi.oregonstate.edu/infocenter/minerals/iodine/
- The Ohio State University Extension. Extension Fact Sheet. 2004 [cited 2011 Oct 30]. Available from: http://ohioline.osu.edu/hyg-fact/5000/pdf/5553.pdf
- Honein MA, Paulozzi LJ, Mathews TJ, Erickson JD, Wong LY (2001). "Impact of folic acid fortification of the US food supply on the occurrence of neural tube defects". JAMA 285 (23): 2981–6. doi:10.1001/jama.285.23.2981. PMID 11410096.
- Office of Dietary Supplements National Institute of Health. Dietary Supplement Fact Sheet: Folate. [cited 2011 Oct 30] Available from:http://ods.od.nih.gov/factsheets/folate
- Linus Pauling Institute Micronutrient Research for Optimum Health. Micronutrient Information Center. 2002 [cited 2011 Oct 30]. Available from: http://lpi.oregonstate.edu/infocenter/vitamins/fa/
- Park YK, Sempos CT, Barton CN, Vanderveen JE, Yetley EA (2000). "Effectiveness of food fortification in the United States: the case of pellagra". American Journal of Public Health 90 (5): 727–38. doi:10.2105/AJPH.90.5.727. PMC 1446222. PMID 10800421.
- Prousky, J., Millman, C.G., Kirkland, J.B. Pharmacologic Use of Niacin. Journal of Evidence-Based Complementary & Alternative Medicine. 2001; 16(2): 91-101.
- Linus Pauling Institute Micronutrient Research for Optimum Health. Micronutrient Information Center. 2002 [cited 2011 Oct 30]. Available from: http://lpi.oregonstate.edu/infocenter/vitamins/niacin/
- FAO Agricultural and Consumer Protection. Food Fortification Technology. 1996 [cited 2011 Oct 30]. Available from:http://www.fao.org/docrep/W2840E/w2840e03.htm
- Authors unknown. A dose of vitamin D history. Nature Structural Biology. 2002; 9(2):77.
- Holick, M.F. The Vitamin D Deficiency Pandemic: a Forgotten Hormone Important for Health. Health Reviews. 2010; 32: 267-283.
- Linus Pauling Institute Micronutrient Research for Optimum Health. Micronutrient Information Center. 2004 [cited 2011 Oct 30]. Available from: http://lpi.oregonstate.edu/infocenter/vitamins/vitaminD/
- Linus Pauling Institute Micronutrient Research for Optimum Health. Micronutrient Information Center. 2001 [cited 2011 Oct 30]. Available from: http://lpi.oregonstate.edu/infocenter/minerals/fluoride/
- National Institute of Dental and Craniofacial Research. The Story of Fluoridation. 2011 [cited 2011 Oct 30]. Available from: http://www.nidcr.nih.gov/oralhealth/topics/fluoride/thestoryoffluoridation.htm
- Linus Pauling Institute Micronutrient Research for Optimum Health. Micronutrient Information Center. 2003 [cited 2011 Oct 30]. Available from:http://lpi.oregonstate.edu/infocenter/minerals/calcium/
- USA Rice Federation. Brown and White Rice FAQ's. 2006 [cited 2011 Oct 30]. Available from: http://www.usarice.com/doclib/196/158/3403.pdf
- Dawe,D. Crop Case Study: GMO Golden Rice in Asia with Enhanced Vitamin A Benefits for Consumers. The Journal of Agrobiotechnology Management and Economics. 2007; 10(3): 154-160.
- Hossain, M.I., Wahed, M.A., Ahmed, S. Increased food intake after the addition of amylase-rich flour to supplementary food for malnourished children in rural communities of Bangladesh. Food Nutr Bull. 2005; 26(4):323-9.
- Chromiak, J.A., Antonio, J. Use of amino acids as growth hormone-releasing agents by athletes. Nutrition. 2002; 18(7-8): 657-661
- Agriculture and Agri-Food Canada. Functional Foods and Nutraceuticals. 2011 [cited 2011 Oct 30]. Available from: http://www4.agr.gc.ca/AAFC-AAC/display-afficher.do?id=1170856376710&lang=eng
Cosmic microwave background radiation was first discovered in 1964 by Arno Penzias and Robert Wilson at Bell Telephone Laboratories. This radiation is believed to be the afterglow of the Big Bang, the event that marked the beginning of the universe. The cosmic microwave background radiation is one of the most important pieces of evidence supporting the Big Bang theory and has provided valuable insights into the structure and evolution of the universe. In this article, we will explore the discovery, properties, and significance of this fascinating phenomenon.
Understanding Cosmic Microwave Background Radiation
The universe is vast and full of mysteries. One of these mysteries is the cosmic microwave background radiation (CMBR), which has been studied for decades. The CMBR is a form of radiation that fills the entire universe, and it is believed to be the oldest light in the universe. The CMBR has played an essential role in understanding the universe’s origins and evolution and has led to breakthrough discoveries in cosmology.
What is Cosmic Microwave Background Radiation?
Cosmic microwave background radiation is a form of radiation that is present everywhere in the universe. It was first discovered in 1964 by two scientists, Arno Penzias and Robert Wilson. They were using a radio telescope to study radio waves emitted by stars when they noticed a faint, constant noise that seemed to be coming from all directions. They concluded that this noise was not from any known source and that it must be coming from outer space. This discovery led to the confirmation of the Big Bang theory, which states that the universe began as a singularity and has been expanding ever since.
The Origin of Cosmic Microwave Background Radiation
The CMBR is believed to be the oldest light in the universe, dating back to just 380,000 years after the Big Bang. At this time, the universe was a hot, dense, and opaque plasma. As the universe expanded and cooled, the plasma began to cool and form atoms. This process, known as recombination, released photons that began to travel freely through space. These photons are what we now observe as the CMBR.
Why is Cosmic Microwave Background Radiation Important?
The CMBR has played an essential role in understanding the universe’s origins and evolution. It has led to breakthrough discoveries in cosmology, including the confirmation of the Big Bang theory, the discovery of dark matter, and the measurement of the universe’s age and geometry. The CMBR has also helped to determine the universe’s composition and structure, including the distribution of galaxies and the formation of large-scale structures.
Measuring Cosmic Microwave Background Radiation
The study of cosmic microwave background radiation involves measuring the radiation’s temperature and intensity. Scientists use specialized telescopes and detectors to measure the CMBR’s temperature and intensity and map its distribution across the sky.
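As a rough illustration of what "measuring the temperature" means in practice, the sketch below (an illustrative example, not from the article) evaluates Planck's blackbody law at the measured CMB temperature of about 2.725 K and reports the frequency where the spectrum peaks, which falls in the microwave band:

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
K_B = 1.380649e-23   # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light, m/s

def planck_brightness(freq_hz: float, temp_k: float) -> float:
    """Planck's law: spectral radiance of a blackbody at the given frequency."""
    x = H * freq_hz / (K_B * temp_k)
    return (2.0 * H * freq_hz**3 / C**2) / math.expm1(x)

T_CMB = 2.725  # measured CMB temperature in kelvin

# Scan the microwave band and find the peak frequency of the spectrum.
freqs = [f * 1e9 for f in range(1, 1001)]          # 1 GHz .. 1000 GHz
peak = max(freqs, key=lambda f: planck_brightness(f, T_CMB))
print(f"peak near {peak / 1e9:.0f} GHz")           # ~160 GHz, i.e. ~1.9 mm wavelength
```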
The Cosmic Microwave Background Explorer (COBE)
One of the most significant advancements in the study of CMBR was the launch of the Cosmic Microwave Background Explorer (COBE) satellite in 1989. COBE was designed to measure the CMBR’s temperature and intensity to an unprecedented level of accuracy. The data collected by COBE confirmed the Big Bang theory and provided evidence for the universe’s homogeneity and isotropy.
The Wilkinson Microwave Anisotropy Probe (WMAP)
The Wilkinson Microwave Anisotropy Probe (WMAP) was launched in 2001 and was designed to measure the CMBR’s temperature and intensity with even greater accuracy than COBE. WMAP’s data provided a more detailed map of the CMBR’s distribution across the sky, which allowed scientists to study the universe’s composition and structure in greater detail.
The Planck Satellite
The Planck satellite, launched in 2009, was the most advanced telescope designed to study the CMBR to date. Planck’s data provided the most precise measurements of the CMBR’s temperature and intensity and provided new insights into the universe’s age, composition, and structure.
The Future of Cosmic Microwave Background Radiation Research
The study of cosmic microwave background radiation continues to be a critical area of research in cosmology. Scientists are continually developing new techniques and technologies to study the CMBR and gain a deeper understanding of the universe’s origins and evolution. Some of the most promising areas of research include studying the CMBR’s polarization, which could provide new insights into the early universe’s conditions and the nature of dark matter and dark energy.
Challenges in Studying Cosmic Microwave Background Radiation
Studying cosmic microwave background radiation poses several challenges, including the presence of other sources of radiation that can interfere with measurements and the need for highly sensitive and precise detectors. Overcoming these challenges requires the development of new technologies and techniques, which requires significant investment and collaboration between scientists and institutions worldwide.
FAQs for the topic: cosmic microwave background radiation was first discovered in
Cosmic microwave background radiation (CMB) is a type of electromagnetic radiation that permeates the entire universe. It is the remnant of the thermal energy that was released shortly after the Big Bang, when the universe was just a hot, dense, and opaque plasma. As the universe expanded and cooled down, this radiation began to spread out and cool down as well, eventually becoming faint microwaves that have been detectable since the 1960s.
Who discovered cosmic microwave background radiation?
Cosmic microwave background radiation was first discovered by two Bell Labs scientists, Arno Penzias and Robert Wilson, in 1964. The two scientists were working on a project to study radio waves from the Milky Way galaxy, but they were puzzled by a persistent background of microwave radiation that seemed to be coming from every direction in the sky. They eventually realized that this radiation was not coming from the Milky Way or any other local source, but was actually the relic radiation of the Big Bang.
How was cosmic microwave background radiation first detected?
Cosmic microwave background radiation was first detected using a special type of radio telescope called a horn antenna. Penzias and Wilson were using the horn antenna at Bell Labs to study radio waves from the Milky Way, but they noticed that their instrument was picking up a constant background noise that they could not explain. After ruling out all possible sources of interference, they realized that the noise was actually the cosmic microwave background radiation, which was detected in all directions in the sky.
Why is the discovery of cosmic microwave background radiation important?
The discovery of cosmic microwave background radiation was a crucial piece of evidence for the Big Bang theory, which is the prevailing scientific explanation for the origin and evolution of the universe. The existence of the CMB radiation supports the idea that the universe was once in a hot, dense state, and has been expanding and cooling down ever since. The CMB radiation also provides scientists with valuable information about the structure and composition of the early universe, and has helped to refine our understanding of the universe’s age, size, and composition.
With the passage of the Federal Reserve Act of 1913, the United States established its first permanent central banking institution. Today, this influential central bank – known as the Federal Reserve – is responsible for guiding the course of the U.S. economy by raising and lowering the interest rates borrowers have to pay to lenders. How exactly does the Federal Reserve control interest rates? And why does the interest rate affect the broader economy so much?
Prior to the establishment of the Federal Reserve in 1913, economic panics caused by banking emergencies were common events, as investors would lose confidence in the safety of their bank deposits. To fight this financial instability, the U.S. Congress created 12 regional Federal Reserve Banks authorized to issue Federal Reserve notes (paper money), to adjust the lending rates they could charge borrowers, and to purchase or sell U.S. Treasuries.
Interest Rates and the Economy
Consider the following scenario: Would you be more willing to purchase a house if the interest rate you had to pay every month for borrowing the money from the bank was higher, or lower? Whether they are interested in a car, big-ticket items like refrigerators and dishwashers, or even a house, buyers are more likely to purchase goods and services when interest rates are low. As more goods and services are consumed, the economic growth rate climbs. This growth further fuels demand for more jobs as well as for industrial machinery and raw goods.
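As an illustration of how much the rate matters to a borrower (all numbers here are hypothetical, not from the article), the standard amortized-loan formula shows the monthly payment on the same loan rising sharply with the interest rate:

```python
# Illustrative only (hypothetical loan terms): monthly payment on a 30-year,
# $300,000 fixed-rate loan at two different interest rates.
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard amortized-loan payment: P*r*(1+r)^n / ((1+r)^n - 1)."""
    r = annual_rate / 12        # monthly interest rate
    n = years * 12              # total number of monthly payments
    growth = (1 + r) ** n
    return principal * r * growth / (growth - 1)

for rate in (0.04, 0.07):
    print(f"{rate:.0%}: ${monthly_payment(300_000, rate, 30):,.0f} per month")
# ~$1,432/month at 4% vs ~$1,996/month at 7% -- cheaper borrowing encourages buying.
```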
Interest Rates and Inflation
As the demand for labor increases, workers begin asking for more pay, resulting in higher inflation. As the price of goods increases, the value of money decreases, and buyers cannot purchase as many goods and services as before. This drop in consumer and business demand can cause the economy to contract, resulting in job losses. It is the mandate of the Federal Reserve to maintain financial stability throughout this cycle of growth and contraction by properly adjusting interest rates.
As inflation increases, the value of money decreases and the Federal Reserve counters by increasing the interest rates. During times when job growth is low and the economy is stagnant, the Federal Reserve lowers the interest rates to spur economic growth.
According to the Federal Reserve website, manipulation of the interest rates “triggers a chain of events that affect other short-term interest rates, foreign exchange rates, long-term interest rates, the amount of money and credit, and, ultimately, a range of economic variables including employment, output and prices of goods and services.”
As history has shown, the twin effects of uncertainty and panic resulting from tough financial crises can trigger periods of even greater economic malaise. Due in large part to the stabilizing effects afforded by the Federal Reserve Act, such unstable periods became less common as the U.S. economy experienced periods of rapid economic expansion in the last century.
For more information, check out "The Dangers of an All-Powerful Federal Reserve".
Everyone knows it was a large asteroid striking Earth that led to the demise of the dinosaurs. But how many near misses were there? Modern humans have been around for about 225,000 years, so we must have come close to death by asteroid more than once in our time. We would have had no clue.
Of course, it’s the actual strikes that are cause for concern, not near misses. Efforts to predict asteroid strikes, and to catalogue asteroids that come close to Earth, have reached new levels. NASA’s newest tool in the fight against asteroids is called Scout. Scout is designed to detect asteroids approaching Earth, and it just passed an important test: it gave us five days’ notice of an approaching asteroid.
Here’s how Scout works. A telescope in Hawaii, the Panoramic Survey Telescope & Rapid Response System (Pan-STARRS), detected the asteroid, called 2016 UR36, and then alerted other telescopes. Three other telescopes confirmed 2016 UR36 and were able to narrow down its trajectory. They also determined its size: about 5 to 25 meters across.
After several hours, we knew that 2016 UR36 would come close to us but posed no threat of impacting Earth. It would pass Earth at a distance of about 498,000 km, roughly 1.3 times farther away than the Moon.
The key part of this is that we had five days’ notice, which is a lot more than the few hours we usually get. The approach of 2016 UR36 was the first test for the Scout system, and it passed.
Asteroids that come close to Earth are called Near Earth Objects (NEOs), and finding and tracking them has become a growing concern for NASA. In fact, NASA has about 15,000 NEOs catalogued, and it’s still finding about five more every night.
Not only does NASA have the Scout system, whose primary role is to speed up the confirmation process for approaching asteroids, but they also have the Sentry program. Sentry’s role is a little different.
Sentry’s job is to focus on asteroids that are large enough to wipe out a city and cause widespread destruction. That means NEOs that are larger than about 140 metres. Sentry has over 600 large NEOs catalogued, and astronomers think there are a lot more of them out there.
NASA also has the Planetary Defense Coordination Office (PDCO), which has got to be the greatest name for an office ever. (Can you imagine having that on your business card?) Anyway, the PDCO has the over-arching role of preparing for asteroid impacts. The Office is there to make emergency plans to deal with the impact aftermath.
Five days’ notice for a small asteroid striking Earth is a huge step for preparedness. Resources can be mobilized, critical infrastructure can be protected, and things like nuclear power plants can be shut down if necessary. And, of course, people can be evacuated.
We haven’t always had any notice of approaching asteroids. Look at the Chelyabinsk meteor from 2013. It was a roughly 10,000-tonne meteor that exploded over Chelyabinsk Oblast, injuring about 1,500 people and damaging an estimated 3,000 buildings in six cities. If it had been a little bigger and had reached the surface of the Earth, the damage would have been widespread. Five days’ notice would likely have saved a lot of lives.
Smaller asteroids may be too small to detect when they’re very far away. But larger ones can be detected when they’re still 10, 20, even 30 years away. That’s enough time to figure out how to stop them. And if you can reach them when they’re that far away, you only need to nudge them a little to deflect them away from Earth, and maybe to the Sun to be destroyed.
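A crude way to see why lead time matters so much (this is a sketch that ignores orbital mechanics entirely, so treat the numbers as order-of-magnitude only): even a centimetre-per-second nudge, applied years in advance, shifts an asteroid by thousands of kilometres.

```python
# Back-of-the-envelope sketch (ignoring orbital mechanics): displacement produced
# by a tiny velocity change applied long before a predicted impact.
SECONDS_PER_YEAR = 3.156e7

def displacement_km(delta_v_m_s: float, lead_time_years: float) -> float:
    """Approximate along-track displacement from a constant velocity change."""
    return delta_v_m_s * lead_time_years * SECONDS_PER_YEAR / 1000.0

for years in (10, 20, 30):
    print(f"1 cm/s nudge, {years} yr lead time: ~{displacement_km(0.01, years):,.0f} km")
# ~3,200 km after 10 years -- comparable to Earth's radius (~6,400 km),
# which is why detecting large asteroids decades in advance is so valuable.
```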
Large asteroids with the potential to cause widespread destruction are the attention-getters. Hollywood loves them. But it may be more likely that we face numerous impacts from smaller asteroids, and that they could cause more damage overall. Scout’s ability to detect these smaller asteroids, and give us several days’ notice of their approach, could be a life-saver.
by Dean J. Campbell*
*Bradley University, Peoria, Illinois
Adding solids at a known temperature to water at a different known temperature is the basis of some classic calorimetry experiments.1 In these experiments, the temperature of the solid and of the water converge on some intermediate temperature, and the differences in temperature are combined with some given information such as mass and specific heat to obtain a missing quantity, such as mass or specific heat. These calorimetry experiments rely on stable initial and final temperatures. However, if the temperatures of the components can be monitored as a function of time, the rates of heating or cooling can be explored. The size and/or shape of the solids influences their surface area, and solids with greater surface area have faster heat exchange with water. The simple calorimetry experiment modifications described here are designed to get students thinking about the importance of surface area, in this case, on the rates of temperature change. The rate of temperature change can in turn influence the timing of a chemical process such as a thermochromic color change. Increasing the surface area of reactants or catalysts provides more locations for reactions at surfaces, yielding faster reactions.
In these experiments, samples of glass or iron spheres of uniform sizes were either heated or cooled and then added to water near room temperature. The temperature of the water was measured with a Vernier LabQuest 2 and thermometer at 0.1 second intervals. For the heating experiments, the spheres were heated in a drying oven for at least two hours in polystyrene foam cups. When the cups were removed from the oven, the thermometer inside read about 74 to 71 °C. The spheres were added to samples of about 49 g of water at room temperature (21 °C in that lab) in polystyrene foam cups. For the cooling experiments, the spheres were cooled in a freezer overnight in polystyrene foam cups. When the cups were removed from the freezer, the thermometer inside read about -13 to -8 °C. The spheres were added to samples of about 49 g of water at room temperature (18 °C in that lab) in polystyrene foam cups.
For these experiments, it is desirable to clearly detect differences in the rate of temperature change as sphere sizes change. Materials with higher specific heat produce temperature curves of the surroundings (e.g., water) that have larger changes in the direction of the temperature axis. Materials with lower thermal conductivity transfer heat between the solid and the surroundings more slowly, so it will be easier to detect variations in the rate of temperature change in the direction of the time axis of a temperature curve. The exact compositions of the glass (probably soda-lime glass) and iron or steel in the spheres used in these experiments are not known. The literature shows variation in these values, but there is general agreement that glass has a higher specific heat than iron and conducts heat much more slowly than iron.1 One source lists soda-lime glass as having a specific heat of 0.88 J/(g K) and a thermal conductivity of 0.937 W/(m K).2 Another source lists iron as having a specific heat of 0.412 J/(g K) and a thermal conductivity of 80 W/(m K).3
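As a rough illustration of the underlying calorimetry (a sketch assuming ideal, lossless heat exchange, the specific heat values quoted above, and 4.18 J/(g K) for water, an assumed handbook value), the shared final temperature can be estimated from a simple heat balance. Note that sphere size does not appear anywhere in the calculation, consistent with the observation below that the final temperatures were independent of sphere size.

```python
# Illustrative equilibrium-temperature estimate for the glass-sphere heating trials,
# assuming ideal (lossless) calorimetry and the specific heats quoted in the text.
C_WATER = 4.18   # J/(g K), assumed handbook value for water
C_GLASS = 0.88   # J/(g K), soda-lime glass value cited above

def final_temperature(m_solid, c_solid, t_solid, m_water, t_water, c_water=C_WATER):
    """Temperature at which heat lost by the solid equals heat gained by the water."""
    return (m_solid * c_solid * t_solid + m_water * c_water * t_water) / (
        m_solid * c_solid + m_water * c_water
    )

# 30 g of glass at ~72 C added to 49 g of water at 21 C
print(f"{final_temperature(30, C_GLASS, 72, 49, 21):.1f} C")  # ~26.8 C
```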
Approximately 30 g of glass spheres were used in each trial. The larger glass spheres were marbles with an average diameter of 1.58 cm and a total surface area of 47 cm2. Six were used for each trial. The smaller glass spheres had an average diameter of 0.49 cm and a total surface area of 142 cm2. About 190 were used for each trial. Figure 1 shows the results of twelve trials with glass spheres, with three trials of each combination of heating or cooling with large or small spheres. The temperature curves were all shifted along the x-axis so that they begin changing at the same time. Clearly, heat exchange with the water is faster for samples comprised of many small spheres rather than a few large spheres. It is important to note that thermal curves produced by samples of the same composition and mass had the same initial and final temperatures, independent of sphere size. This means that regardless of the rate of heat transfer, the total quantity of heat transferred (what we focus on in calorimetry measurements) did not change.
Figure 1. Temperature curves of 50 g of water with the addition of 30 g of glass spheres. Variations include heating or cooling with large or small spheres.
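The sphere counts and surface areas quoted above can be roughly reproduced from the total mass and sphere diameter alone. The short sketch below assumes a soda-lime glass density of about 2.5 g/cm3, a typical handbook value that is not given in this article.

```python
# Rough reproduction of the sphere counts and surface areas quoted above,
# assuming soda-lime glass with a density of about 2.5 g/cm^3 (assumed value).
import math

def sphere_stats(total_mass_g, diameter_cm, density_g_cm3=2.5):
    r = diameter_cm / 2
    mass_per_sphere = density_g_cm3 * (4 / 3) * math.pi * r**3
    count = total_mass_g / mass_per_sphere
    total_area = count * math.pi * diameter_cm**2   # surface area = pi*d^2 per sphere
    return count, total_area

for d in (1.58, 0.49):
    n, area = sphere_stats(30, d)
    print(f"d = {d} cm: ~{n:.0f} spheres, total surface area ~{area:.0f} cm^2")
# Close to the ~6 spheres / 47 cm^2 and ~190 spheres / 142 cm^2 reported above.
```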
Approximately 33 g of iron spheres were used in each trial. The iron-based spheres were all deliberately rusted for use in other experiments, using an aqueous solution of sodium chloride, acetic acid, and hydrogen peroxide.4 Although rust is probably not necessary for these experiments, iron tends to rust over time in contact with water and Earth’s oxidizing atmosphere. The larger iron-based spheres had an average diameter of 1.25 cm and a total surface area of 20 cm2. Four were used for each trial. The smaller iron-based spheres were made from BB gun ammo that had its copper cladding removed by abrasion with sand in a rock tumbler. These spheres had an average diameter of 0.43 cm and a total surface area of 58 cm2. About 100 were used for each trial. Figure 2 shows the results of twelve trials with iron spheres, with three trials of each combination of heating or cooling with large or small spheres. The temperature curves were all shifted along the x-axis so that they begin changing at the same time. Heat exchange with the water is still faster for samples comprised of many small spheres rather than a few large spheres, but the difference is harder to see than when glass was used. This is likely because iron’s greater thermal conductivity causes the temperatures to change faster, and the resulting temperature curves are less distinguishable from one another under these experimental conditions. It should also be noted that a small amount of iron oxide comes off the rusty spheres into the water, turning it reddish. As with the glass spheres, thermal curves produced by samples of the same composition and mass had the same initial and final temperatures, independent of sphere size.
Figure 2. Temperature curves of 50 g of water with the addition of 33 g of iron spheres. Variations include heating or cooling with large or small spheres.
To demonstrate the relationship between surface area and heat transfer to an audience, one could display the temperature curves collected by a LabQuest module or similar device. Alternative demonstrations used glass spheres and a pair of thermochromic beverage cups, see Figure 3. The cups were designed to change colors, such as from colorless to red, when cool liquids were added. They likely used leuco dye chemistry, similar to that used by thermochromic paper.5 About 93 g of larger 1.58 cm glass spheres were added to the left side cup, and about 93 g of smaller 0.49 cm glass spheres were added to the right side cup, Figure 3 TOP. In one demonstration, the glass spheres were cooled in a freezer. The colorless cups at room temperature were placed upright on a surface and the cold glass spheres were added. For the cup at the left side of the images in Figure 3 MIDDLE, red spots appeared that corresponded to where the larger spheres touched and cooled the interior surface of the cup, causing the thermochromic reaction to take place. Because the spheres were larger, the relatively few points of contact between their surfaces and the cup interior surface were sufficiently far apart as to make the cooling spots visually distinguishable. For the cup at the right side of the images in Figure 3 MIDDLE, so many more red spots appeared that they became indistinguishable more quickly. The greater surface area of contact between the many smaller spheres and the cup interior caused the entire bottom of the cup to change color from the thermochromic reaction more quickly than when the larger spheres were added. Video 1 shows the demonstration described by Figure 3 MIDDLE.
Another demonstration essentially crosses the color transition in the other direction. The thermochromic cups were first cooled in the freezer to change them from colorless to red and then placed upright on a surface. This time, glass spheres were added that had been heated in a drying oven. For the cup at the left side of the images in Figure 3 BOTTOM, colorless spots appeared that corresponded to where the larger spheres touched and cooled the interior surface of the cup. For the cup at the right side of the images in Figure 3 BOTTOM, so many more colorless spots appeared that they became indistinguishable more quickly. Video 2 shows the demonstration described by Figure 3 BOTTOM.
Figure 3. (TOP) ~93 g of cold 1.58 cm glass spheres added to the left thermochromic cup and ~93 g of 0.49 cm glass spheres added to the right thermochromic cup. (MIDDLE) Glass spheres removed from a freezer and added to room temperature thermochromic plastic cups to turn them from colorless to red. (BOTTOM) Glass spheres removed from a drying oven and added to cold thermochromic plastic cups to turn them from red to colorless.
Video 1: 93 g of cold 1.6 cm (left cup) and 0.5 cm (right cup) glass spheres removed from a freezer and added to colorless thermochromic plastic cups. The many smaller spheres turn more area of the colorless cup to red faster than the fewer larger spheres, because the many smaller spheres make greater contact with the interior of the plastic cup. The spheres themselves are shown at the end of the video. Note that there is also a similar video where warm spheres change the color of cold thermochromic cups. ChemDemos YouTube Channel (accessed 7/25/2022)
Video 2: 93 g of 1.6 cm (left cup) and 0.5 cm (right cup) glass spheres removed from a drying oven and added to colorless thermochromic plastic cups that have been turned red in a freezer. The many smaller spheres turn more area of the red cup to colorless faster than the fewer larger spheres, because the many smaller spheres make greater contact with the interior of the plastic cup. The spheres themselves are shown at the end of the video. ChemDemos YouTube Channel (accessed 7/25/2022)
The “greenness” of these demonstrations can be considered from the perspective of the Twelve Principles of Green Chemistry.6 The greenest aspect of these activities is their reusability. Production of metallic iron and glass is energy-intensive, but these materials can be used repeatedly for these and potentially other activities. The plastic thermochromic cups are likely sourced from non-renewable petrochemicals; however, they can also be used repeatedly. Even the thermochromic reactions inside the cups are reversible. From a safety standpoint, glass spheres can potentially crack due to physical or thermal shock, but no other glassware was used, so there were no issues with hard solid spheres accidentally cracking glass containers.
Energy usage can also be considered. The calorimetry labs at this university have traditionally used boiling water baths over Bunsen burners or hot plates to heat metal samples in test tubes before adding the metal samples to water. The activities described here heated samples using an oven or cooled them using a freezer. It would be an interesting study to figure out which temperature-changing process (oven, hot plate, Bunsen burner, or freezer) would be most energy-efficient and produce the least amount of greenhouse gases. The results of those studies could very much depend on the number of experimental trials run, the number of students running the experiments, or even how often the oven or freezer is opened during its use. Regardless of the process used to heat or cool the solids, thermal curves can be added to calorimetry experiments to introduce the importance of surface area in processes such as heat transfer. The thermochromic cups use chemistry to provide simple but very visible illustrations of surface area concepts.
Safety
Goggles should be used when working with the activities. Glass spheres can potentially crack due to physical or thermal shock. Always wash your hands after working on laboratory activities.
Acknowledgements
This work was supported by Bradley University and the Mund-Lagowski Department of Chemistry and Biochemistry with additional support from the Illinois Heartland Section of the American Chemical Society, the Bradley University BEST Program, and Beyond Benign. The material contained in this document is based upon work supported by a National Aeronautics and Space Administration (NASA) grant or cooperative agreement. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author and do not necessarily reflect the views of NASA. This work was supported through a NASA grant awarded to the Illinois/NASA Space Grant Consortium.
- Ellis, A. B.; Geselbracht, M. J.; Johnson, B. J.; Lisensky, G. C.; Robinson, W. R. Teaching General Chemistry: A Materials Science Companion, 1st ed.; Oxford University Press: Oxford, 1993.
- Continental Trade. Soda-lime glass properties. https://www.continentaltrade.com.pl/soda-lime-glass, there is an excellent data sheet available for download here. (accessed July 22, 2022).
- Nuclear Power. Iron – Specific Heat, Latent Heat of Fusion, Latent Heat of Vaporization. https://www.nuclear-power.com/iron-specific-heat-latent-heat-vaporization-fusion/ (accessed July 22, 2022).
- MrDiyDork. How to rust metal in minutes! https://www.youtube.com/watch?v=RjAPyFQGYp4 (accessed July 22, 2022).
- Campbell, D. J.; Lojpur, B.; Liu, R. “Thermal Paper as a Polarity and Acidity Detector.” ChemEd Exchange. October 28, 2021. https://www.chemedx.org/blog/thermal-paper-polarity-and-acidity-detector (accessed July 23, 2022).
- American Chemical Society. 12 Principles of Green Chemistry. https://www.acs.org/content/acs/en/greenchemistry/principles/12-principles-of-green-chemistry.html (accessed July 23, 2022).
For Laboratory Work: Please refer to the ACS Guidelines for Chemical Laboratory Safety in Secondary Schools (2016).
For Demonstrations: Please refer to the ACS Division of Chemical Education Safety Guidelines for Chemical Demonstrations.
Other Safety resources
RAMP: Recognize hazards; Assess the risks of hazards; Minimize the risks of hazards; Prepare for emergencies
The concept of fairness is crucial to a well-functioning society. We all hope that we will be treated fairly by other people, by institutions, and by systems such as the criminal justice system. When this breaks down, bad publicity or even civil unrest can quickly follow. Responsible AI is a powerful technology, but it too needs to be able to treat people fairly. But what do we mean by fairness in this context?
A good principle is:
A fair AI system should affect similarly situated people in the same way
- A model to help with disease diagnosis should treat people with similar symptoms and medical history in a similar way
- A model to help with loan adjudication should treat people with similar financial circumstances in a similar way
- A model to help with criminal justice sentencing should treat people with similar criminal histories the same way
Put another way, we don’t want a machine learning model to give discriminatory outcomes based on protected characteristics such as:
- Race
- Gender
- Religious affiliation
- Sexual orientation
There are, of course, exceptions – for example gender is an important (and non-controversial) factor in breast cancer screening.
Also, fairness is not a static concept – ideas of fairness change over time. A key example here is race: modern ideas of racial equity are very different to those that were prevalent 200 years ago. And as we see in our daily lives, they are still evolving today. This means that technological solutions need to be combined with social solutions.
Why Can AI Systems Be Unfair?
At its core, the idea behind a machine learning model is quite simple – find patterns between inputs (the data) and outputs (an outcome).
Despite all the hype, the data a machine learning model learns from is chosen by humans, and that data exists in a society that may be inherently unfair. This leads to some major causes of bias in data:
- Appropriate data that covers the entire range of use cases isn’t chosen
- The data contains societal biases, from which faulty inference may be drawn
Let’s consider an example. Say we want to build a model to predict how much someone is likely to get paid in their next job. How might this model be made to be biased? Some ways include:
- Not choosing a representative range of jobs – e.g. choosing only male investment bankers and female retail workers
- There is a known gender pay gap, and even a well-selected set of data is likely to exhibit this if the person building it is not careful.
- People from different cities, with different racial populations, get paid different amounts and the model may use this to make discriminatory inferences based on race
How Can We Make Our Responsible AI System Fairer?
The good news is that a large ongoing R&D effort exists to develop technologies to mitigate bias as much as possible. One example, which we will look at in a future post, is the Fairlearn Python package. Here are just some ways in which data scientists can look to design fairer AI systems:
Analysis of Data
Before building our model, we can perform an analysis across a range of different groupings, such as gender and race, to determine whether there is some undesirable difference in outcome driven by that characteristic. For example, is the average pay of female workers in the dataset systematically lower than that of male workers? The dataset can then be rebalanced to reduce this impact.
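A minimal sketch of this pre-training check (the file and column names below are hypothetical) is to compare average pay across gender groups with pandas before any model is trained:

```python
# Minimal pre-training check (hypothetical file and column names):
# compare salary statistics across gender groups before fitting any model.
import pandas as pd

df = pd.read_csv("salaries.csv")                 # assumed columns: 'gender', 'salary', ...
group_stats = df.groupby("gender")["salary"].agg(["mean", "median", "count"])
print(group_stats)
# A large, unexplained gap here suggests the dataset should be rebalanced
# (e.g., by resampling) or its features reviewed before training.
```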
Analysis of Models
Data scientists will often report on metrics like accuracy and precision for the entire dataset, but you can also do this for subgroups. We should analyze our model after it is built to make sure that the average predicted salary is equal (within error) for both male and female employees, if all other variables are the same. If it fails this validation, we need to take additional mitigation measures.
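A minimal sketch of such a per-subgroup check (again with hypothetical column names), reporting mean absolute prediction error separately for each gender rather than only overall:

```python
# Per-subgroup evaluation sketch (hypothetical columns: 'gender', 'salary',
# and the model's 'predicted_salary' on a held-out test set).
import pandas as pd

test = pd.read_csv("salaries_test.csv")

def mean_abs_error(frame: pd.DataFrame) -> float:
    return (frame["salary"] - frame["predicted_salary"]).abs().mean()

print(f"Overall MAE: {mean_abs_error(test):.0f}")
print(test.groupby("gender").apply(mean_abs_error))  # markedly worse error for one group is a red flag
```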
Debiasing During Training
These techniques are pretty new and on the cutting edge. These algorithms attempt to build active de-biasing into the model training process. However, at this point in time these are not a replacement for good quality dataset construction and model analysis, but can be a powerful complement.
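One much simpler, related idea (a sketch only, not one of the cutting-edge in-training algorithms mentioned above) is to reweight training examples so that each combination of group and outcome contributes proportionally to the loss, and then pass those weights to the model's fit step. The column names below are hypothetical.

```python
# Reweighting sketch (hypothetical columns: 'gender' and a binary 'high_earner' label).
# weight = P(group) * P(label) / P(group, label) up-weights under-represented combinations.
import pandas as pd

train = pd.read_csv("salaries_train.csv")

p_group = train["gender"].value_counts(normalize=True)
p_label = train["high_earner"].value_counts(normalize=True)
p_joint = train.groupby(["gender", "high_earner"]).size() / len(train)

weights = train.apply(
    lambda row: p_group[row["gender"]] * p_label[row["high_earner"]]
    / p_joint[(row["gender"], row["high_earner"])],
    axis=1,
)
# These weights can then be passed as sample_weight to most estimators' fit() methods.
```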
In conclusion, creating fair and equitable AI is a crucial component of a responsible AI strategy. If you’d like to discuss this more, feel free to reach out via social media, or connect with Catapult Systems here.
Depiction of the siege of Kedah, the battle between Beemasenan's Chola naval infantry and the defenders of Kedah fort.
Founded: 3rd century CE
Part of: Chola military
The Chola Navy (Tamil: சோழர் கடற்படை; Cōḻar kadatpadai) comprised the naval forces of the Chola Empire along with several other naval-arms of the country. The Chola navy played a vital role in the expansion of the Chola Empire, including the conquest of the Ceylon islands and naval raids on Sri Vijaya (present-day Indonesia).
The navy grew both in size and status during the Medieval Cholas' reign. The Chola admirals commanded much respect and prestige in society, and navy commanders also acted as diplomats in some instances. From 900 to 1100, the navy grew from a small backwater entity into a potent instrument of power projection and a diplomatic symbol across Asia, but it was gradually reduced in significance as the Cholas fought land battles to subjugate the Chalukyas of the Andhra-Kannada area in South India.
Historians divide the Chola reign into three distinct phases. The first era is the period of the Early Cholas, the second that of the Vijayalaya Cholas, and the final phase of the empire was the Chalukya Chola period.
The Cholas were at the height of their power from the later half of the 9th century through to the early 13th century. Under Rajaraja Chola I and his son Rajendra Chola I, the dynasty became a military, economic and cultural power in Asia. During the period 1010–1200, the Chola territories stretched from the islands of the Maldives in the south to as far north as the banks of the Godavari River in Andhra Pradesh. Rajaraja Chola conquered peninsular South India, annexed parts of Sri Lanka and occupied the islands of the Maldives. Rajendra Chola sent a victorious expedition to North India that touched the river Ganges and defeated the Pala ruler of Pataliputra, Mahipala. He also successfully raided kingdoms of Maritime Southeast Asia.
The earliest Chola kings of whom there is tangible evidence are mentioned in the Sangam literature. Scholars now generally agree that this literature belongs to the first few centuries of the common era. The Sangam literature is full of names of the kings and the princes, and of the poets who extolled them. But although this literature depicts the life and work of these people, it cannot be worked into a connected history.
The earliest record of Chola naval activity by an external source dates to around the 1st century: a Roman report that refers to Kaveripoompattinam (present-day Poombuhar) as Haverpoum and describes how trade vessels were escorted by the king's fleet to the estuary, which formed a natural harbor at the mouth of the river Kaveri.
Little archeological evidence exists of the maritime activities of this era, except some excavated wooden plaques depicting naval engagements in the vicinity of the old city (see Poompuhar for more details). However, much insight into the naval activities of the Cholas has been gathered from the Periplus of the Erythraean Sea. In this work, the unknown merchant describes the activity of escort ships assigned to merchant vessels with valuable cargo. These early naval ships had some sort of rudimentary flame-thrower and/or a catapult-type weapon.
Colandia were the great ships used by the Early Cholas; in these they sailed to the Pacific islands from Kaveripatnam, their centre of operations. At that time, Pattinathu Pillai was the chief of the Chola navy.
Little is known about the transition period of around three centuries from the end of the Sangam age (c. 300) up to the time when the Pandyas and Pallavas dominated the Tamil country (c. 600). An obscure dynasty, the Kalabhras, invaded the Tamil country, displaced the existing kingdoms and ruled for around three centuries. They were displaced by the Pallavas and the Pandyas in the 6th century.
This period from the 3rd century until the 7th century is a blind spot in the maritime tradition of the Cholas. Little is known of the fate of the Cholas during the succeeding three centuries until the accession of Vijayalaya in the second quarter of the 9th century. In the Interregnum, the Cholas were probably reduced to Vassals of Pallavas, though at times they switched sides and allied with Pandyas and tried to dispose their overlords. But, there is no concrete line of kings or court recordings.
However, even during this time the Cholas had maintained a small but potent Naval force based inland in the Kaveri river. During this time they dominated the inland trade in the Kaveri basin and Musuri is their major inland port. Dry-docks built during this period exist to this day.
This phase of the history is the best documented, partly due to the survival of edicts and inscriptions from the time along with reliable foreign narratives. This has enabled historians to interpolate various accounts and come up with a clear account of Chola naval activities of the time.
The Imperial Chola navy took shape in the aftermath of the resurgence of Chola power, with the rise of the Vijayalaya dynasty. From the Pallavas, the Cholas took over not only their territories but also their cultural and socio-economic mantle. Thus, the Medieval Cholas inherited from the Pallavas the will to dominate trade and control the seas.
The evolution of combat ships and naval architecture elsewhere played an important part in the development of the Pallava navy. There were serious efforts in the period of the Pallava king Simavishnu to control piracy in South East Asia and to establish a Tamil-friendly regime in the Malay peninsula. However, this effort was accomplished only three centuries later, by the new naval power of the Cholas.
The three decades of conflict with the Sinhalese King Mahinda V came to a swift end, after Raja Raja Chola I's (985-1014) ascent to the throne and his decisive use of the naval flotilla to subdue the Sinhalese.
This period also marked a departure in thinking from the age-old traditions. Rajaraja commissioned various foreigners (prominently Arabs and Chinese) for the naval building program. These efforts were continued, and the benefits reaped, by his successor, Rajendra Chola I. Rajendra led a successful expedition against the Sri Vijaya kingdom (present-day Indonesia) and subdued Sailendra. Though there had been friendly exchanges between the Sri Vijaya empire and the Chola Empire in preceding times (including the construction of the Chudamani pagoda in Nagapattinam), the raid seems to have been motivated by commercial interests rather than any political motive.
Trade, commerce and diplomacy
The Cholas excelled in foreign trade and maritime activity, extending their influence overseas to China and Southeast Asia. A fragmentary Tamil inscription found in Sumatra cites the name of a merchant guild Nanadesa Tisaiyayirattu Ainnutruvar (literally, "the five hundred from the four countries and the thousand directions"), a famous merchant guild in the Chola country. The inscription is dated 1088, indicating that there was an active overseas trade during the Chola period.
Towards the end of the 9th century, southern India had developed extensive maritime and commercial activity, especially with the Chinese and Arabs. The Cholas, being in possession of parts of both the west and the east coasts of peninsular India, were at the forefront of these ventures. The Tang dynasty of China, the Srivijaya empire in the Malayan archipelago under the Sailendras, and the Abbasid caliphate at Baghdad were the main trading partners.
The trade with the Chinese was a very lucrative enterprise, and trade guilds needed the king's approval and a license from the customs department to embark on overseas trade voyages. The normal trade voyage of those days involved three legs, starting with Indian goods (mainly spices, cotton and gems) being shipped to China; on the return leg, Chinese goods (silk, incense, iron) were brought back to Chola ports. After some materials were used for local consumption, the remaining cargo, along with Indian cargo, was shipped to the Arabs. Traditionally, this involved transferring the cargo to many ships before the ultimate destination was reached.
Combating Piracy in Southeast Asia
The strategic position of Sri Vijaya and Khamboj (modern-day Cambodia) as a midpoint on the trade route between Chinese and Arabian ports was crucial. Up to the 5th century, the Arabs traded with the Chinese directly, using Sri Vijaya as a port of call and replenishment hub. Realizing this potential, the Sri Vijaya empire began to encourage sea piracy in the surrounding area. The benefits were twofold: the loot from piracy was a good bounty, and it ensured their sovereignty and the cooperation of all the trading parties. Piracy also grew stronger due to a conflict of succession in Sri Vijaya, when two princes fought for the throne and, in turn, relied on the loot from sea piracy to fund their civil war.
The pirate menace grew to unprecedented levels. Sea trade with China was virtually impossible without the loss of a third of the convoy on every voyage. Even escorted convoys came under attack, which was a new factor. Repeated diplomatic missions urged the Sri Vijaya empire to curb the piracy, with little effect. With the rise in piracy, and in the absence of Chinese commodities, the Arabs, on whom the Cholas depended for horses for their cavalry corps, began to demand high prices for their trade. This forced a series of reductions in the Chola army. The Chinese were equally infuriated by the piracy menace, as they too were losing revenue.
The culmination of three centuries of combined naval tradition of the Pallavas and Cholas led to the best-known accomplishment of the Chola Navy (or of any Indian power, for that matter): the first expedition of the Chola navy into the Malay peninsula.
Cooperation with the Chinese
Chinese Song Dynasty reports record that an embassy from Chulian (Chola) reached the Chinese court in the year 1077, and that the king of the Chulien at the time was called Ti-hua-kia-lo. It is possible that these syllables denote "Deva Kulo[tunga]" (Kulothunga Chola I). This embassy was a trading venture and was highly profitable to the visitors, who returned with '81,800 strings of copper coins in exchange for articles of tributes, including glass articles, and spices'.
The close diplomatic ties between the Song dynasty of China and the Medieval Cholas allowed many technological innovations to travel both ways. Among the more interesting ones to have reached Chinese shores are:
- The famous Chola ship designs employing independent watertight compartments in the hull of a ship.
- The mariner's compass
- The continuously shooting flamethrowers for naval warfare.
Organization and administration
The ancient Chola navy was based on trade vessel designs carrying little more than boarding implements, though this changed throughout its history. The later-day navy was a specialized force with specially built ships for each type of combat.
The Imperial Navy of the Medieval Cholas was composed of a multitude of forces under its command. In addition to the regular navy (Kappal-Padai), there were many auxiliary forces that could be used in naval combat. The Chola Navy was an autonomous service, unlike many of its contemporaries. The army depended on the naval fleets for transportation and logistics. The navy also had a corps of marines. Even saboteurs, who were trained pearl-fishermen, were used to dive and disable enemy vessels by destroying or damaging the rudder.
The Chola Navy could undertake any of the following combat and non-combat missions:
- Peacetime patrol and interdiction of piracy.
- Escort trade convoys.
- Escort friendly vessels.
- Naval battle close to home ports and at high-seas.
- Establish a beachhead and/or reinforce the army in times of need.
- Denial of passage for allies of the state's enemies.
- Sabotage of enemy vessels
This multi-dimensional force enabled the Cholas to achieve the Military, Political and cultural hegemony over their vast dominion.
The king/emperor was the supreme commander of all the military forces including the navy.
The navy was organized mostly into role-based squadrons and divisions, each containing various types of ships assigned a specific role and home-ported in an associated base or port. This arrangement became necessary especially after the conquest of Ceylon. Normally, a Ganam (fleet-squadron), the largest individual unit, would be commanded by a Ganathipathy (not to be confused with the elephant-headed god Ganapathy).
There were numerous sub-units, created for operational or organizational reasons. Some are presented below:
|Unit Name||Commander||Modern-day equivalent||Composition||Functions/Duties||Notes|
|Kanni - Wartime/special purpose formation||Senior Kalapathy; normally Kalapathy is the rank of a commanding officer of a ship (akin to captain)||Not more than five ships of any role.||Kanni in Tamil means trap.‡1 A tactical formation, it was used to lure enemy combatants to a particular area, where larger bodies of ships (usually a Thalam or two) would ambush the enemy.||During a strategic deployment, the formation would be used many times before engaging in the main combat to decimate the enemy fleet.||Also had a very bad reputation for losses, since high numbers of ships were lost in this role if the friendly ships' arrival was delayed by unfavorable currents.|
|Jalathalam or simply Thalam‡2 - A permanent formation.||Jalathalathipathy - The lord of a Thalam||The smallest self-sustained unit in the naval formation, consisting of 5 main battle vessels, 3 auxiliaries, 2 logistics ships and 1 or 2 privateers. A Thalam could be used for reconnaissance, patrol or interdiction.||Normally, 2-3 Thalams operated in a vicinity on scouting or search-and-destroy missions; while searching a wide area, they could reach each other's aid in a short time.||A fully equipped Chola Thalam is said to have been able to withstand an attack by a force more than twice its size. This is attributed to the superior range of missile weapons in the Chola inventory.|
|Mandalam - A semi-permanent formation. Mostly used in battle/overseas deployment.||Mandalathipathy - The lord of a Mandalam||Roughly equivalent to a task force or battle group||Composed of 48 ships of various roles. (Mandalam in Tamil and various Indian languages is the word for 48.)||They could be used as an individual combat unit, especially during pincer or break-neck maneuvering on the high seas.|
|Ganam - A permanent formation||Ganathipathy - Literally, Athipathy (lord) of the Ganam, equivalent to modern-day rear-admiral||Fleet-Squadron||Composed of 100-150 Ships of various roles. (Ganam in Tamil means volume and three). A ganam comprises three Mandalams.||A self-reliant unit of the force, only smaller than the Fleet. Had combat, reconnaissance, logistics and resupply/repair units.||Normally, this would be the minimum strength/size of the overseas deployment.|
|Ani||Anipathy - lord of an Ani||Taskforce or battle group||Composed of a minimum of 3 Ganams (fleet-divisions), normally consisting of 300-500 ships.||Mainly an expeditionary order rather than a normal formation, but during long deployments they were used (only 2 instances of an Ani being deployed in combat have been documented).|
|Pirivu||Normally headed by a prince or confidante of the King; the title depends on the sea where the fleet is based. For example, the commander of the eastern fleet would be named Keelpirivu-athipathy or Nayagan or Thevan/r, depending on the person.||Fleet||They functioned much like modern fleets. There were two to four fleets in the Chola navy at various times. The principal fleet was based in the east. Later on a second fleet was based on Ceylon/Sri Lanka. During and after Rajendra I, three or four fleets existed.||The rise of Chera naval power caused more than a little loss of revenue, prompting the Cholas to station a fleet permanently in the Malabar and to engage mercenary navies to support Chola strategic designs.|
‡1. Kanni may mean any of the following in Tamil; the applicable meaning depends on the context of usage: (கன்னி) virgin/unmarried girl, a first-timer, or the eastern corner/direction. A trap is also called 'kanni', but that is 'ka NN i' (கண்ணி), a different word, both pronounced and written differently.
‡2. Thalam is the name of a tactical formation in both the army and the navy. Thalapathy means the lord of a Thalam, roughly a division, and the rank is comparable to a modern-day colonel.
The Chola navy used a hybrid rank structure, with dedicated naval ranks as well as army-derived ranks. While some modern-day conventions of rank did apply – for example, an army captain is equal to a lieutenant in the navy and a navy captain is equal to a colonel in the army – others were totally different, so a small comparison is provided below.
- The supreme commander : Chakravarthy - The emperor
- The commander-in-chief of the navy : Jalathipathi - roughly, the admiral of the navy.
- The commander of the fleet : Pirivu+ Athipathy or Devar/n or Nayagan - The equivalent of an admiral
- The commander of the fleet-squadron : Ganathipathy - roughly the equivalent of a rear-admiral
- The commander of a group : Mandalathipathy - the equivalent of a vice-admiral
- The commander of the ship : Kalapathy -The equivalent of a captain in modern navies.
- The officer in-charge of arms in a ship : Kaapu - Roughly the executive officer and weapons officer rolled into one.
- The officer in-charge of the oarsmen/masts : Seevai - roughly the equivalent of the master chief and engineering officer.
- The officer in-charge of boarding party (marines) : Eeitimaar - major or captain in marines.
The auxiliary forces of the Chola Navy
In addition to the standing navy of the state, there were other services which had naval arms of their own. Notable among them were the customs department, the militia and the state monopoly of pearl fisheries. In addition to the state services, small but formidable forces were maintained by various trade-guilds; these guilds were highly regulated and acted as mercenaries and reinforcements in times of need.
Customs and excise
The customs force, called Sungu (SUNGA ILLAKA), was highly organized and unlike anything else in the ancient world. It was under the command of a director-general-like position called Thalai-Thirvai (Thalai - head, Thirvai - duty or customs). It was highly evolved and had various departments. Some are:
|Thirvai (Customs duty and Excise)||This unit employed some of the brilliant merchants of the time and most were professional economists. They deduced and fixed the percentage of the Customs duty of a commodity for a particular season. (trade-voyages were influenced by ocean currents and hence the price changed accordingly)||They normally had boarding officers, boarding crafts and some sea vessels; as most of their duty was inland.|
|Aaivu (Inspection and enforcement)||This unit was the Action arm of the trade law, they inspected ships for contraband, illegal goods, wrong declaring of tonnage, small crimes control and the protection of the Harbors under Chola dominion.||These units employed some of the fast assault and boarding vessels of the time and in more than one reported occasion, the navy had sought its help in intercepting rogue vessels.†|
|Ottru (intelligence corps)||They were the intelligence corps of the territorial waters of the Chola dominion. They normally tailed foreign vessels, performed path-finding for larger forces or convoys and gave periodic updates for the kings and the trade guilds of the happenings in the sea.||They operated highly capable vessels which are noted for stealth and speed, rather than brute force and weapons platforms. Most of the ships they operated were privateers and contained no national markings. We have some understandings of their crafts, which seemed to have been equipped with concealable catapults and napalm throwers (not trebuchets like the ones employed by the naval ships.)|
|Kallarani (pirate squad)||Technically, they were employed by neither the sovereign nor the state; rather, they were pirates themselves who had received the Royal Pardon on pledging their support to the Chola Empire. They were used in more than a few instances to deal with Arab piracy in western waters. They were also used as auxiliary units of the coast guard.||These mercenaries operated anything they could capture and were composed of a multi-national, multi-ethnic corps. Notable among them were the Arabian Amirs, who were highly respected for their oath of allegiance and their fervor in combat.|
|Karaipirivu (Coastal defense)||They performed duties akin to the modern coast guard: search and rescue and coastal patrols. But mainly they were land-based and scattered along the long coastline to provide a seaward defense.||They operated substantially smaller craft and occasionally even catamarans. Nevertheless, they were feared by petty crooks and coastal thieves.|
In the later years of the 1100s, the navy was constantly battling on many fronts to protect Chola commercial, religious and political interests, so the home ports were left virtually undefended. This led to a change in Chola naval strategy: the sturdier and larger vessels were repeatedly called to reinforce the high-sea flotilla, leading to the development of a specialized auxiliary force of fast and heavily armed light ships in large numbers. The erstwhile Karaipirivu was the natural choice for this expansion, and in time they became an autonomous force vested with the duties of protecting the Chola territorial waters and home ports and patrolling newly captured ports and coastal cities.
The state's dependence on overseas trade for much-valued foreign exchange created the powerful trade-guilds, some of which grew more powerful than the regional governors. In the increasingly competitive field of international trade, the state found it difficult to reinforce or rescue stranded merchant ships on the high seas in a timely manner. This led to the establishment of privateer navies. Like their European counterparts, they carried no national markings and employed multi-national crews.
But they were employed by the trade-guilds rather than the Empire, giving the traders an edge at sea. Normally, they performed path-finding, escort and protection duties; but on more than a few occasions these forces were summoned to serve the Empire's interests.
Notable Trade guilds which employed a privateer navy were,
- Nanadesa Tisaiyayirattu Ainnutruvar - literally, "the five hundred from the four countries and the thousand directions"
- Maalainattu Thiribuvana Vaanibar kzhulumam - The merchants from the high-country in three worlds (meaning the 3 domiciles of Chinese, Indian and Arabian empires)
- Maadathu valaingair (or valainzhr) vaanibar Kzhu - The pearl exporters from Kanchipuram
Vessels and weapons
Even before the accounts of the 1st century BCE, there were written accounts of shipbuilding and war-craft at sea. Professor R. C. Majumdar says that there existed a comprehensive book of naval-architecture in India dating back to the 2nd century BCE, if not earlier.
- Dharani - The equivalent of modern-day destroyers designed to take combat to high-seas.
- Loola - The equivalent of modern-day corvettes; designed to perform light combat and escort duties.
- Vajra - Perhaps the equivalent of a frigate: a lightly armored fast attack craft.
- Thirisadai - Probably the battle cruisers or battleships of the day; they are reported to have been heavily armored, could engage more than 2 targets in combat, and relied on their build rather than speed to survive and attack.
Though all ships of the time carried a small marine force (for boarding enemy vessels), this class of ship seems to have had separate cabins and a training area for them. This class is also said to have been able to engage in asymmetrical warfare.
|Dharani||The primary weapons platform with extensive endurance (up to 3 months), they normally engaged in groups and avoided one on one encounters.||Probably equivalent to modern-day Destroyers.|
|Lola||They were lightly armored, fast attack vessels. Normally performed escort duties. They could not perform frontal assaults.||Equivalent to modern-day Corvettes.|
|Vajara||They were highly capable fast attack crafts, typically used to reinforce/rescue a stranded fleet.||Probably equivalent to modern-day Frigates.|
|Thirisadai||The heaviest class known, they had extensive war-fighting capabilities and endurance, with a dedicated marine force of around 400 Marines to board enemy vessels. They are reported to be able to engage three vessels of Dharani class, hence the name Thirisadai, which means, three braids. (Braid was also the time's name for oil-fire.)||This class can be attributed/compared to modern Battle cruisers or Battleships.|
Apart from class definitions, there are names of royal yachts and their architecture, some of which are:
- Akramandham - A royal Yacht with the Royal quarters in the stern.
- Neelamandham - A royal yacht with extensive facilities for conducting courts and accommodation for high officials/ministers.
- Sarpammugam - these were smaller yachts used in the Rivers (with ornamental snake heads)
In addition to these, we find many names of ship classes in the Purananuru and their application in both inland waters and open oceans. Some of them are:
- Yanthiram - Hybrid ship employing both sails and oars, or probably paddle wheels of some type (Yanthiram literally translates to 'mechanical wheel')
- Kalam - Large vessels with 3 masts which can travel in any direction irrespective of winds.
- Punai - Medium-sized vessels that could be used for coastal shipping as well as inland.
- Patri - Large barge-type vessel used for ferrying trade goods.
- Oodam - Small boat with large oars.
- Ambi - Medium-sized boat with a single mast and oars.
- Toni - small boat used in rocky terrain.
Over the 700 years of its documented existence, the Chola Navy was involved in confrontations for probably 500 of them: frequent skirmishes, many pitched battles, and long campaigns and expeditions. The five centuries of conflict between the Pandyas and Cholas for control of the peninsula gave rise to many legends and folktales, not to mention heroes on both sides. The notable campaigns are listed below:
- War of Pandya Succession (1172)
- War of Pandya succession (1167)
- The destruction of the Bali fleet (1148)
- Sea battle of the Kalinga Campaign (1081-1083)
- The second expedition of Sri Vijaya (1031-1034)
- The first expedition of Sri Vijaya (1027-1029)
- The Annexation of Kedah (1024-1025)
- Annexation of the Kamboja (?-996)
- The invasion of Ceylon/Sri Lanka (977-?)
- Skirmishes with Pallava Navy (903-8)
Recruitment and service
The Chola emperors gave the admirals a free hand in recruiting and training sailors, engineers, oarsmen and marines. There was no complicated testing and evaluation process; any citizen, or even a non-citizen, could sign up for naval service, though one did not always end up in the work of his choice. Preference was given to ex-servicemen, their sons and noblemen, but this attitude changed in later days, and many classes of soldiers and sailors distinguished themselves irrespective of rank and class.
Ports and fleets
The most ancient of the ports used by the Cholas was Poompuhar. Later on, they used many more ports and even built some new ones. Some of the famous ports are:
In addition to these sea ports there were many inland ports and dry docks, connected by the rivers Kaveri and Thamarabarani, which served commercial fleets. In times of war, to facilitate mass production, ships were built inland and ferried down the rivers to the ocean.
- Worayur or Urayur
The fleets were normally named after dead monarchs and gods. The most distinguished ones were granted royal prefixes like Theiva-[sovereign's name]-[fleet name]. During the reign of Rajaraja Chola I and Rajendra Chola I, there were 5 fleets, each catering to particular needs. The main fleet was home-ported in Nagapatinam. The other fleets were home-ported in Kadalur, and a small fleet was also based in Kanchipuram.
In addition to the main fleets of warships, there were two fleets of logistics and transport ships to serve the needs of the army, which was involved in a bloody war in Ceylon and later in Southeast Asia.
In the later years these numbers increased drastically and several fleets were created anew. During the late 11th century, there were a total of nine battle fleets, based in various dominions across the vast expanse of the Chola empire, ranging from present-day Aceh and Angkor Wat to the southern reaches of Ceylon/Sri Lanka.
Political, cultural and economic impact
The grand vision and imperial energy of the father-and-son duo Raja Raja Chola I and Rajendra Chola I are undoubtedly the underlying reason for this expansion and prosperity, but it was accomplished through the tireless efforts and pains of the navy. In essence, Raja Raja was the first ruler in the sub-continent to realize the power-projection capabilities of a powerful navy. He and his successors initiated a massive naval buildup, continued supporting it, and used it for more than just wars.
The Chola navy was a potent diplomatic symbol, the carrier of Chola might and prestige. It spread Dravidian culture and its literary and architectural grandeur. For the sake of comparison, it was the equivalent of the "gunboat diplomacy" of modern-day great powers and superpowers.
There is evidence to show that the king of Kambujadesa (modern Cambodia) sent an ornamental chariot to the Chola Emperor, probably to appease him to limit his strategic attention to the Malay peninsula.
The navy's exploits were celebrated in literature, from the Sangam-age poems commemorating the victories of the sovereign of the day to the Kalinga campaign of Kulothunga Chola I immortalized in the Kalingattuparani. A parani is a special type of literary work which, according to the traditions and linguistic rules of Tamizh, can only be composed for a king or general whose forces have killed a thousand elephants in combat.
In modern times, more than a few romances have been inspired by the Chola Navy, mostly in Tamil language and literature.
- Yavana Rani : A historical novel by Sandilyan about the events surrounding Karikala's ascent to the throne.
- Ponniyin Selvan : The crowning glory of Rajaraja is idolized in this novel, set around the assassination of his brother, the crown prince Aditha Karikalan. More than a passing note is given to the navy and its organization in this magnum opus by Kalki Krishnamoorthy.
- Kadal Pura : Another historical novel by Sandilyan, set around the foundation of the Chalukya Chola dynasty in India and the Song dynasty in China. Sandilyan gives more than passing evidence to suggest that the Song emperor and Kulothunga Chola were friends. By far, this work gives the most intricate details of the navies of the day and of naval warfare; in it he describes the various weapons and tactics employed by the Chola and Chinese navies and their combined efforts to overthrow the Sri Vijaya dynasty.
- Kanni Maadam : A historical novel by Sandilyan set in the time of Rajathiraja Chola. The work describes the Pandyas' civil war and elaborates on the proxy war between the Sinhalese and the Cholas. The Pallavas are all but gone, serving both the Cholas and the Pandyas. It features some of the most detailed battlefield maneuvering and highlights the importance of naval power and logistics in an overseas campaign.
- Aayirathil Oruvan (2010 film) : A movie about the search for an exiled Chola prince directed by Selvaraghavan.
Timeline of events
The major events that had a direct, and in some cases profound, impact on the development of Chola naval capability are listed here; the list is by no means comprehensive.
Archaeological evidence: dated excavations
- 3000 BCE - Dugout canoes found at Arikamedu, in what is now Puducherry.
- 2400 BCE - A highly functional port in operation at Lothal, in what is now Gujarat.
- 700 BCE - The first mention of the word Yavana (meaning Greeks or Romans) on pottery found around Korkai.
- 300 BCE - A lodestone compass with Chinese inscriptions found off the coast of Kaaveripoompatnam.
- 100 BCE - A settlement of Tamil/Prakrit-speaking merchants founded in Rome.
- Late 1st century BCE - Roman glass found in the southern coastal regions of Tamil Nadu; see Indo-Roman trade relations.
Literary references and recordings
- 356-321 BCE: The Periplus of Nearchus, an officer of Alexander the Great, describes the Persian coast. Nearchus commissioned thirty oared galleys to transport the troops of Alexander the Great from northwest India back to Mesopotamia, via the Persian Gulf and the Tigris, an established commercial route.
- 334-323 BCE: Eratosthenes, the librarian at Alexandria, drew a map that included Sri Lanka and the mouth of the Ganges, indicating the exchange of traffic and commodities in these regions.
- 1st century BCE: When Vennikkuyithiar writes about Karikala, he mentions several classes of inland vessels by name, among them Kalam, Punai, and Patri.
How the World Works
Recognizing patterns within our changing world can lead to new solutions.
Lines of Inquiry:
Students will inquire into:
~Identifying patterns in Earth’s features
~Cause and effect relationships of Earth’s physical events
~Engineering ethical solutions
Form, Causation, Responsibility
Patterns, Impact, Cause/Effect
Thinker, Knowledgeable, Reflective
weathering, erosion, landforms, fossils, rock layers, earthquake, tsunami, volcano, design, floods, glaciers, cause and effect, continents, claim evidence reason, eruption, plate tectonics
In this next Unit of Inquiry, students will use both a Science and a Social Studies lens to investigate patterns in the Earth’s land features. Students will look at how the Earth is constantly changing in both fast (earthquakes and landslides) and slow ways (plate tectonics). They will identify landforms and look for clues as to how they might have been created.
Students will participate in a range of experiments and inquiry-style science investigations that will get them handling sand, soil, clay and other earth materials, investigating layers in land formations, and watching weathering and erosion in action. They will make earthquake-proof buildings and test them in a simulation, simulate landslides and tsunamis in the classroom, and look at the impacts these types of natural disasters have on a community.
Students will also explore systems that communities have in place to provide early-warning systems for impending disasters, as well as explore the community response after a major natural event.
As we delve deeper into the reader’s and writer’s workshops, our grade 4 text analysts and authors are focusing on more complex works. This includes continuing with fictional texts but also integrating a more detailed use of the nonfiction genre. Students will be working on opinion writing through the lens of research and uncovering information through their nonfiction reading and other multimedia sources.
This unit emphasizes organization, detailed note taking, and providing reasoning through evidence to support our claims in our thesis and topic sentences. In reading, students will review text features and be able to identify and use more sophisticated vocabulary as they use each feature. They will reflect upon strategies they can use to better comprehend informational text as they read. As a whole class they will be reading nonfiction texts based on various physical features, landforms, and patterns present on Earth. These texts will embed literacy within our unit of inquiry to ensure the unit remains transdisciplinary. The nonfiction books and texts we will be reading will be rich with diverse and complex topics, vocabulary, and encourage complex thinking.
Finally, students will continue developing reading and writing strategies that provide a platform for growth and goal-setting as they reflect on the past two units as a whole and compare the growth they see within themselves as progressing readers and writers. Stay tuned for an invitation to view students’ published writing pieces (among other student work) in our next unit.
The focus of math Unit 2 is measurement. Students will use length (km, m, cm), mass (kg, g), and volume (L, mL) in the metric system to convert between units using place value knowledge. We will explore the patterns in the place value system through metric unit conversions to prepare for fraction and decimal operations which come later in the year. This unit will also include time conversions between hours, minutes, and seconds.
In this unit, students will also be introduced to the multiplication model based on the area of a rectangle. This will support conceptual array models. We will practice various ways to model these problems, moving from concrete (using manipulatives) to models (pictures) to abstract symbols (equations). Flexibly thinking about numbers is stressed above memorizing facts. The multiplication algorithm is not addressed until grade 5. During this unit we will solve division problems using what we know about inverse multiplication equations (i.e. 18/3=? will be solved by “using what we know” and thinking about missing factor multiplication problems 3 x ? = 18.)
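If you would like to see these two ideas written out step by step, here is a small, optional Python sketch. It is not part of the class materials; the function names and the "3 m 43 cm" example are chosen only for illustration.

```python
def to_centimeters(meters, centimeters):
    # 1 m = 100 cm, so shift the place value by two and add.
    return meters * 100 + centimeters

def missing_factor(product, known_factor):
    """Solve known_factor x ? = product by counting up,
    the way a student might reason instead of recalling a division fact."""
    answer = 0
    total = 0
    while total < product:
        total += known_factor
        answer += 1
    return answer

print(to_centimeters(3, 43))   # 3 m 43 cm -> 343 cm
print(missing_factor(18, 3))   # 3 x ? = 18 -> 6
```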
Key words and vocabulary will include:
- Length: the measurement of something from end to end
- Kilometer (km), Meter (m), centimeter (cm): units of measure for length
- Weight: the measurement of how heavy something is
- Mass: the measure of the amount of matter in an object
- Kilogram (kg), gram (g): units of measure for mass
- Capacity: the maximum amount that something can contain
- Liter (L), milliliter (mL): unit of measure for liquid volume
- Mixed units: e.g., 3 m 43 cm
- Convert: to express a measurement in a different unit
- =, <, > : equal, less than, greater than
- Estimate: an approximation of the value of a number or quantity
Please see the “I can” statements below:
- I can tell relative size of measurement units (km, m, cm, kg, g, L, mL, hrs, min, sec).
- I can convert larger units of measurement to smaller units.
- I can solve word problems involving distances, intervals of time, liquid volumes, masses of objects, and money.
- I can state the formula for the perimeter of a rectangle: Perimeter = 2L + 2W, or L + L + W + W.
- I can state the formula for the area of a rectangle: Area = L x W.
- I can use the formulas for area and perimeter of a rectangle to solve real world and math problems.
- I can explain multiplication strategies.
- I can use strategies based on place value and the properties of operations to multiply numbers.
- I can explain my answer using written equations.
- I can explain my answer using rectangular arrays and area models.
- I can explain my answer using the relationship between multiplication and division
In order to practice the skills needed for this Unit of Inquiry, students should be reading more nonfiction texts both at school and at home. It is important that students who need more support with their English read more than the daily 20 minutes currently required for homework. Ideally, students will be reading 30-40 minutes a day and keeping track of any new vocabulary they may find.
Please encourage your child’s learning by discussing the content from this Unit of Inquiry in your home language, especially if that is the language both of you are most comfortable with.
In order to familiarize themselves with the vocabulary for this unit, students should use Quizlet (https://quizlet.com/_751euk) for a few minutes each day.
How you can help at home:
- Go on nature walks and talk to your child about the landforms and rock formations you or he/she may notice. Encourage your child to think about how they may have formed.
- Discuss and break down the important vocabulary terms we will be exploring in this unit, like weathering, erosion, landforms, earthquake, tsunami, and eruption, in your home language.
- Use a graphic organizer to compare/contrast solutions to natural disasters like earthquakes, tsunamis, floods, and/or volcanic eruptions and how those responses look different in different places. For example, compare an earthquake in Japan and the engineering solutions applied there to an earthquake in Haiti and the engineering solutions applied there (Venn Diagram, T-chart, and/or checklist).
- Be an active listener when your child reads their opinion essays. Add an encouraging note in their writing journal or on Seesaw.
- Try to read and spark interest in earthquakes, volcanoes, landslides and floods in your home language.
- Talk about major natural disasters you know about from your home country or that you may have experienced. Story telling is one of the best ways to cultivate interest!
How you can help with math:
- Using metric measurement tools, encourage your student to measure objects around the house
- Use measurement tools when baking or cooking.
- Compare items by length, weight, or capacity.
- Take an object and estimate the weight. Then use a scale to determine the exact weight and compare the two amounts.
- Use a ruler to measure objects around the house in centimeters and meters.
- Continue to talk about place value patterns with your child, e.g. how many 10s in 100? How many 100s in 1000?
- Review basic math facts up to 100 (e.g. 3×4= 12). Aim for 3 seconds.
- Review math vocabulary terms like conversion, length, width, height, weight, capacity, and volume.
- Discuss the term conversion and how it works in metric measurements like millimeter, centimeter, meter, and kilometer for length, and milligram, gram, and kilogram for weight.
| Time limit | Memory limit | Submissions | Accepted | Solvers | Acceptance rate |
|---|---|---|---|---|---|
| 1 second | 128 MB | 14 | 0 | 0 | 0.000% |
An arithmetic sequence is one in which there is some first number, and then a series of numbers which are all a fixed number different.
For example, 3, 5, 7, 9 is an arithmetic sequence whose first number is 3. Each term after that is formed by adding 2 to the previous term (the terms differ by 2). The 3 is also called the first term (term 1), and 9 is the 4th term.
Given a starting number, a difference and a value, your program is to work out whether the value could be part of the sequence. If so, output which term that number would be; if not, output the letter X.
Input will consist of a number of lines, where each line has 3 numbers separated by spaces.
The first number is an integer that is the first term in the sequence. The second is the difference - this will be a non-zero integer. The third is the value that you will need to test to determine whether it can be part of the sequence or not.
Input is terminated by a zero value for each of the 3 numbers.
Output will consist of one line for each input line. It will consist of either a number indicating which term it is, or X if the number isn’t part of the sequence
3 2 11
-1 -3 -8
0 0 0
11 is the 5th term
The sequence is -1, -4, -7, -10
(-1 + -3 = -4, -4 + -3 = -7, -7 + -3 = -10)
-8 isn't in the sequence
A team led by Adam Riess, a professor in The Johns Hopkins University's Henry A. Rowland Department of Physics and Astronomy and a Space Telescope Science Institute researcher, found that dark energy was already accelerating the expansion of the universe at least as long as 9 billion years ago. This picture of dark energy would be consistent with Albert Einstein's prediction, nearly a century ago, that a repulsive form of gravity emanates from empty space.
The team will announce these findings in a media teleconference at NASA headquarters in Washington at 1 p.m. EST on Thursday, Nov. 6. (For logistics, see below.) The findings also will be published in the Feb. 10, 2007, issue of Astrophysical Journal.
"Although dark energy accounts for more than 70 percent of the energy of the universe, we know very little about it, so each clue is precious," said Riess, who in 1998 led one of the first studies to reveal the presence of dark energy. "Our latest clue is that the stuff we call dark energy was present as long as 9 billion years ago, when it was starting to make its presence felt."
Hubble's new evidence is important, because it will help astrophysicists start ruling out competing explanations that predict that the strength of dark energy changes over time, Riess said.
In addition, the researchers found that the exploding stars, or supernovae, used as markers to measure the expansion of space today look remarkably similar to those which exploded 9 billion years ago and are just now seen by Hubble. This is an important finding, say researchers, because it gives added credibility to the use of these supernovae as tools for tracking the cosmic expansion over most of the universe's lifetime.
To study the behavior of dark energy long ago, Hubble had to peer far across the universe and back into time to detect ancient supernovae, which can be used to trace the universe's expansion and determine its expansion rate at various times. The method, Riess said, is analogous to watching fireflies on a summer night. Because all fireflies glow with about the same brightness, you can judge how they are distributed throughout the backyard by their comparative apparent faintness or brightness, which depends on their distance from you.
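To make the firefly analogy concrete, here is a small illustrative Python snippet (not from the press release; the numbers are invented). It uses the inverse-square law: for objects of equal intrinsic brightness, estimated distance scales with the square root of the ratio of measured fluxes.

```python
import math

def relative_distance(reference_distance, reference_flux, observed_flux):
    """Standard-candle estimate: if two sources have the same intrinsic
    luminosity, flux falls off as 1/d**2, so
        d_new = d_ref * sqrt(F_ref / F_new)."""
    return reference_distance * math.sqrt(reference_flux / observed_flux)

# Toy numbers: a "firefly" at 10 m with measured flux 1.0, and a second
# one that appears 25 times fainter.
print(relative_distance(10.0, 1.0, 0.04))  # -> 50.0 m, i.e. 5 times farther away
```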
Only Hubble can measure these supernovae because they are too distant, and therefore too faint, to be studied by the largest ground-based telescopes.
Albert Einstein first conceived of the notion of a repulsive force in space in his attempt to balance the universe against the inward pull of its own gravity. If such an opposing force did not exist, he reasoned, gravity would ultimately cause the universe to implode.
But Einstein eventually rejected his own so-called "cosmological constant" idea and it remained a curious hypothesis until 1998, when Riess and the members of the High-Z Supernova Team and the Supernova Cosmology Project used ground-based telescopes and Hubble to first detect the acceleration of the expansion of space from observations of distant supernovae. Astrophysicists came to the realization that Einstein may have been right after all, that there really was a repulsive form of gravity in space. It soon after was dubbed "dark energy."
Over the past eight years, astrophysicists have been trying to uncover two of dark energy's most fundamental properties: its strength and its permanence. The new observations reveal that dark energy was present and obstructing the gravitational pull of the matter in the universe even before it began to win this cosmic "tug of war."
Hubble observations of the most distant supernovae known, reported in 2004 by Riess and colleagues, revealed that the early universe was dominated by matter whose gravity was slowing down the universe's expansion rate, like a ball rolling up a slight incline. The observations also confirmed that the expansion rate of the cosmos began speeding up about 5 billion to 6 billion years ago, like a roller coaster zooming down a track. That is when astronomers believe that dark energy's repulsive force overtook gravity's attractive grip.
The latest results are based on an analysis of the 24 most distant supernovae known, most found within the last two years.
By measuring the universe's relative size over time, astrophysicists have tracked the universe's growth spurts, much as a parent may witness the growth spurts of a child by tracking changes in height on a doorframe. Distant supernovae provide the doorframe markings read by Hubble.
"After we subtract the gravity from the known matter in the universe, we can see the dark energy pushing to get out," said the University of Western Kentucky's Lou Strolger, a supernova hunter on the Riess team.
Further observations are presently underway with Hubble by Riess and his team which should continue to offer new clues to the nature of dark energy.
Voice leading is the term used to describe the linear progression of melodic lines (voices) and their interaction with one another to create harmonies, according to the principles of common-practice harmony and counterpoint.
Voice leading practices can be codified into rules for pedagogical purposes. In these settings, "voice leading" is often synonymous with "part writing," and the "rules" are usually applied in exercises in four-part harmonic writing and in 18th-century counterpoint. David Huron has demonstrated that many of the standard pedagogical rules have a basis in perceptual principles.
A more nuanced view of voice leading principles is found in the theories of Heinrich Schenker. Schenkerian analysis examines how the outer voices work together to establish form in common-practice music. See Linear progression for an example from Beethoven's Sonata op. 109.
Rigorous concern for voice leading in all parts is more a feature of common-practice music, although jazz and pop music also demonstrate attention to voice leading to varying degrees:
- "At the surface level, jazz voice-leading conventions seem more relaxed than they are in common-practice music."
- "[Although it's untrue] that popular music has no voice leading in it, [...] the largest amount of popular music is simply conceived with chords as blocks of information, and melodies are layered on top of the chords."
The score in the following example reproduces the first four measures of Johann Sebastian Bach's Prelude in C major (BWV 846a) from Book 1 of the 1722 keyboard collection The Well-Tempered Clavier. Letter (a) presents the original score, while (b) and (c) present reductions (simplified versions) intended to clarify the harmony and the implied voice leading, respectively.
In (b), the same measures are presented as consisting in four block chords: the first and the fourth ones are the same, a triad of C major (I); the second is a minor 7th chord on D (II), inverted to show C in the bass; the third is a dominant 7th on G (V), inverted to show B in the bass.
In (c), the four measures are presented as formed of five horizontal parts (voices), identified by the direction of the stems, each consisting of only three notes: from top to bottom, (1) E F — E; (2) C D — C; (3) G A G —; (4) E D — E; (5) C — B C. The four chords result from the fact that not every voice moves at the same time. To see this, look at the highest note of each chord (E, F, F, and E), which corresponds to voice (1); the second-highest note of each chord is C, D, D, and C, corresponding to voice (2); and so on.
- All musical technique is derived from two basic ingredients: voice leading and the progression of scale degrees [i.e. of harmonic roots]. Of the two, voice leading is the earlier and the more original element.
- The theory of voice leading is to be presented here as a discipline unified in itself; that is, I shall show how […] it everywhere maintains its inner unity.
Schenker indeed did not present the rules of voice leading merely as contrapuntal rules, but showed how they are inseparable from the rules of harmony and how they form one of the most essential aspects of musical composition. (See Schenkerian analysis: voice leading).
Common-practice conventions and pedagogy
Although pacing a piece's various arrivals is the most important result of voice leading, Western musicians have tended to teach voice leading by focusing on connecting adjacent harmonies because that skill is foundational to meeting larger, structural objectives.
On a chord-to-chord level, common-practice conventions dictate that lines should be smooth (by avoiding leaps and retaining common tones) and independent (by avoiding simultaneous movement of all voices in the same direction and parallel perfect intervals). Contrapuntal conventions likewise consider permitted or forbidden melodic intervals in individual parts, intervals between parts, the direction of the movement of the voices with respect to each other, etc. (See Counterpoint for more details on rules, especially in species counterpoint; see also Contrapuntal motion.) Whether dealing with counterpoint or harmony, these conventions emerge not only from a desire to create easy-to-sing parts but also from the constraints of tonal materials and from the objectives behind writing certain textures. In other words, the practical, technical, and aesthetic considerations surrounding voice leading reinforce one another.
- Move each voice the shortest distance possible. One of the main conventions of common-practice part-writing is that, between successive harmonies, voices should avoid leaps and retain common tones as much as possible (a small illustrative sketch of this idea follows this list). The principle was commonly discussed among 17th- and 18th-century musicians as a rule of thumb. For example, Rameau taught "one cannot pass from one note to another but by that which is closest." In the 19th century, as music pedagogy became a more theoretical discipline in some parts of Europe, the 18th-century rule of thumb became codified into a stricter definition. Johann August Dürrnberger coined the term "rule of the shortest way" for it and delineated that:
- When a chord contains one or more notes that will be reused in the chords immediately following, then these notes should remain, that is retained in the respective parts.
- The parts which do not remain, follow the law of the shortest way (Gesetze des nächsten Weges), that is that each such part names the note of the following chord closest to itself if no forbidden succession arises from this.
- If no note at all is present in a chord which can be reused in the chord immediately following, one must apply contrary motion according to the law of the shortest way, that is, if the root progresses upwards, the accompanying parts must move downwards, or inversely, if the root progresses downwards, the other parts move upwards and, in both cases, to the note of the following chord closest to them.
- This rule was taught by Bruckner to Schoenberg and Schenker, who both had followed his classes in Vienna. Schenker re-conceived the principle as the "rule of melodic fluency":
- "If one wants to avoid the dangers produced by larger intervals [...], the best remedy is simply to interrupt the series of leaps — that is, to prevent a second leap from occurring by continuing with a second or an only slightly larger interval after the first leap; or one may change the direction of the second interval altogether; finally both means can be used in combination. Such procedures yield a kind of wave-like melodic line which as a whole represents an animated entity, and which, with its ascending and descending curves, appears balanced in all its individual component parts. This kind of line manifests what is called melodic fluency [Fließender Gesang]."
- Schenker attributed the rule to Cherubini, but this is the result of a somewhat inexact German translation. Cherubini only said that conjunct movement should be preferred. Franz Stoepel, the German translator, used the expression Fließender Gesang to translate mouvement conjoint. The concept of Fließender Gesang is a common concept of German counterpoint theory. Modern Schenkerians made the concept of "melodic fluency" an important one in their teaching of voice leading.
- Move the soprano and bass in contrary or oblique motion if possible. Common-practice composers preferred contrary and oblique motion because it promoted voice independence.
- Voice crossing should be avoided except to create melodic interest.
- Avoid parallel perfect intervals such as parallel unisons, parallel 5ths and parallel octaves between any two voices to promote voice independence.
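As promised above, here is a rough Python sketch of the "law of the shortest way" taken on its own: each voice simply moves to the nearest tone of the next chord. It ignores the contrary-motion, voice-crossing, and parallel-interval conventions listed here, so it illustrates one principle rather than a part-writing engine; the encoding of chords as pitch classes and voices as MIDI note numbers is an assumption made for the example.

```python
def nearest_chord_tone(pitch, chord):
    """Return the chord tone (MIDI note number) closest to `pitch`,
    considering all octave transpositions of the chord's pitch classes."""
    candidates = [
        pc + 12 * octave
        for pc in chord            # chord given as pitch classes 0-11
        for octave in range(11)    # roughly the MIDI range
    ]
    return min(candidates, key=lambda note: abs(note - pitch))

def lead_voices(voicing, next_chord):
    """Move each voice of `voicing` (MIDI note numbers) the shortest
    distance to a tone of `next_chord` (pitch classes)."""
    return [nearest_chord_tone(note, next_chord) for note in voicing]

# A C major triad voiced E4 G4 C5 moving to a G7 chord (G B D F):
print(lead_voices([64, 67, 72], [7, 11, 2, 5]))  # -> [65, 67, 71]: E->F, G stays, C->B
```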
As the Renaissance gave way to the Baroque era in the 1600s, part writing reflected the increasing stratification of harmonic roles. This differentiation between outer and inner voices was an outgrowth of both tonality and homophony. In this new Baroque style, the outer voices took a commanding role in determining the flow of the music and tended to move more often by leaps. Inner voices tended to move stepwise or repeat common tones.
A Schenkerian analysis perspective on these roles shifts the discussion somewhat from "outer and inner voices" to "upper and bass voices." Although the outer voices still play the dominant, form-defining role in this view, the leading soprano voice is often seen as a composite line that draws on the voice leadings in each of the upper voices of the imaginary continuo. Approaching harmony from a non-Schenkerian perspective, Dmitri Tymoczko nonetheless also demonstrates such "3+1" voice leading as a feature of tonal writing.
Conventions in the 19th century and beyond
Much music that doesn't follow common-practice part-writing conventions nonetheless often follows larger voice leading principles. For instance, Debussy's "Nocturnes" from the 19th century and Morton Feldman's "The Viola in My Life" (pieces as different from each other as they are distinct from common-practice era music) both derive their formal connections from soprano voice leading.
In this sense, the idea of "part-writing rules" can be somewhat misleading. Wise pedagogues understand that such rules aren't there to be "broken," but to help students hone their perception and develop judgment about the larger principles.
Neo-Riemannian theory examines another facet of this principle. That theory decomposes movements from one chord to another into one or several "parsimonious movements" between pitch classes instead of actual pitches (i.e., neglecting octave shifts). Such analysis shows the deeper continuity underneath surface disjunctions, as in the Bach example from BWV 941.
- Clendinning, Jane (2011). The Musician's Guide to Theory and Analysis. Norton. p. A73.
- Huron, David. "Tone and Voice: A Derivation of the Rules of Voice-leading from Perceptual Principles" Music Perception, Vol. 19, No. 1 (2001) pp. 1-64.
- Terefenko, Dariusz (2014). Jazz Theory: From Basic to Advanced Study, p.33. Routledge. ISBN 9781135043018.
- Schonbrun, Marc (2011). The Everything Music Theory Book, pp.174, 149. Adams Media. ISBN 9781440511820.
- Heinrich Schenker, Counterpoint, vol. I, transl. J. Rothgeb and J. Thym, New York, Schirmer, 1987, p. xxv.
- Schenker, Counterpoint, vol. I, transl. (1987), p. xxx.
- "[Schenker's] theory of Auskomponierung ['Elaboration'] shows voice-leading as the means by which the chord, as a harmonic concept, is made to unfold and extend in time. This, indeed, is the essence of music". Oswald Jonas, "Introduction" to Heinrich Schenker, Harmony, transl. by E. Mann Borgese, ed. by Oswald Jonas, Chicago, The University of Chicago Press, 1954, p. ix; "Heinrich Schenker has shown the correct relationship between the horizontal [counterpoint] and the vertical [harmony]. His theory is drawn from a profound understanding of the masterpieces of music [...]. Thus he indicates to us the way: to satisfy the demands of harmony while mastering the task of voice-leading," id., p. xv.
- Miller, Michael (2005). The Complete Idiot's Guide to Music Theory, p.193. Penguin. ISBN 9781592574377.
- Bartlette, Christopher, and Steven G. Laitz (2010). Graduate Review of Tonal Theory. New York: Oxford University Press, pg 47-50. ISBN 978-0-19-537698-2
- Tymoczko, Dmitri (2011). A Geometry of Tonal Music. New York: Oxford University Press. ISBN 978-0-19-533667-2
- Jean-Philippe Rameau, Traité de L'Harmonie Reduite à ses Principes naturels, Paris, 1722, Book 4, pp. 186-7: On ne peut passer d'une Notte à une autre que par celle qui en est la plus voisine. An even earlier version can be found in Charles Masson, Nouveau traité des regles pour la composition de la musique, Paris, Ballard, 1705, p. 47: Quand on jouë sur la Basse pour accompagner, les Parties superieures pratiquent tous les Accords qui peuvent être faits sans quitter la corde où ils se trouvent; ou bien elles doivent prendre ceux qu'on peut faire avec le moindre intervalle, soit en montant soit en descendant.
- Johann August Dürrnberger, Elementar-Lehrbuch der Harmonie- und Generalbass-Lehre, Linz, 1841, p. 53.
- Anton Bruckner, Vorlesungen über Harmonielehre und Kontrapunkt an der Universität Wien, E. Schwanzara ed., Vienna, 1950, p. 129. See Robert W. Wason, Viennese Harmonic Theory from Albrechtsberger to Schenker and Schoenberg, Ann Arbor, London, UMI Research Press, 1985, p. 70. ISBN 0-8357-1586-8
- Schoenberg, Arnold, Theory of Harmony, trans. Roy E. Carter. Belmont Music Publishers, 1983, 1978 (original quote 1911). Page 39. ISBN 0-520-04944-6. Schoenberg writes: "Thus, the voices will follow (as I once heard Bruckner say) the law of the shortest way".
- Heinrich Schenker, Kontrapunkt, vol. I, 1910, p. 133; Counterpoint, J. Rothgeb and J. Thym transl., New York, Schirmer, 1987, p. 94.
- Luigi Cherubini, Cours de Contrepoint et de Fugue, bilingual ed. French/German, Leipzig and Paris, ca 1835, p. 7.
- See Schenkerian analysis.
- See for instance Johann Philipp Kirnberger, Die Kunst des reinen Satzes in der Musik, vol. II, Berlin, Königsberg, 1776, p. 82.
- Allen Cadwallader and David Gagné, Analysis of Tonal Music, 3d ed., Oxford University Press, 2011, p. 17.
- Marvin, Elizabeth West (2011). The Musician's Guide to Theory and Analysis. W.W. Norton. ISBN 9780393930818. OCLC 320193510.
- Cadwallader, Allen; Gagné, David (2010). Analysis of Tonal Music: A Schenkerian Approach. Oxford University Press. ISBN 978-0199732470.
- Tymoczko, Dmitri (2011). A Geometry of Tonal Music. Oxford University Press. pp. 204–207.
- Richard Cohn, "Neo-Riemannian Operations, Parsimonious Trichords, and their 'Tonnetz' Representations", note 4, writes that the term "parsimony" is used in this context in Ottokar Hostinský, Die Lehre von den musikalischen Klangen, Prag, H. Dominicus, 1879, p. 106. Cohn considers the principle of parsimony to be the same thing as the "law of the shortest way", but this is only partly true.
- McAdams, S. and Bregman, A. (1979). "Hearing musical streams", in Computer Music Journal 3(4): 26–44 and in Roads, C. and Strawn, J., eds. (1985). Foundations of Computer Music, p. 658–98. Cambridge, Massachusetts: MIT Press.
- "Voice Leading Overview", Harmony.org.uk.
- Voice Leading: The Science Behind a Musical Art by David Huron, 2016, MIT Press
- Mathematical Musick: The Contrapuntal Formula of Dr. Thomas Campion
How the Central Limit Theorem tutorial fits into the typical statistics course: WISE tutorials are modularized to allow instructors to pick or choose modules that best fit their course needs. Each module is a self-contained lesson that does not depend on any of the other modules, although some specific prerequisite information may be required.
The Central Limit Theorem (CLT) Module was designed with the assumption that students have some familiarity with basic elementary statistics, such as mean, standard deviation, variance, the normal curve, and sampling distributions. You may find it helpful for your students to complete the Sampling Distribution Module before the CLT Module. The CLT Module is intended to prepare students to learn about hypothesis testing and confidence intervals.
When to use the CLT tutorial? Instructors often introduce the Central Limit Theorem after they’ve discussed descriptive statistics and the z-probability distribution and before an introduction to formal hypothesis testing procedures. Some instructors may wish to use Activity 2 of this module for review later in the course. This relatively advanced component emphasizes conditions where it may not be appropriate to assume that sampling distributions are close to normal. This critical concept is relevant to students who have already learned the importance of the normality assumption for parametric hypothesis testing. You may consider having students return to this component later in the course, after t-tests and ANOVA have been introduced.
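Instructors who want a quick offline demonstration alongside the applet could use something like the following Python simulation. It is not part of the WISE materials; the exponential population and the sample sizes are arbitrary choices meant only to show sample means becoming approximately normal as n grows.

```python
import random
import statistics

def sample_means(population_draw, sample_size, num_samples=5000):
    """Draw `num_samples` samples of `sample_size` observations each
    and return the list of their means."""
    return [
        statistics.mean(population_draw() for _ in range(sample_size))
        for _ in range(num_samples)
    ]

def draw():
    # A strongly right-skewed population: exponential with mean 1.
    return random.expovariate(1.0)

for n in (1, 5, 30):
    means = sample_means(draw, n)
    print(f"n={n:2d}: mean of sample means = {statistics.mean(means):.3f}, "
          f"SD = {statistics.stdev(means):.3f}")
# As n grows, the SD of the sample means shrinks toward 1/sqrt(n) and a
# histogram of the means looks increasingly normal despite the skewed population.
```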
Suggestions for Using the CLT Tutorial
- Class demonstration/Lecture aid
- Lab assignment
- Homework assignment
- Review assignment
There are many ways in which the CLT Module can be inserted into your lesson plan. Your choices may depend on students’ level of computer literacy, computer resources available at your school, and class time restrictions. Here are a few suggestions:
1. Pre-lecture Assignment
Assign the module as homework to introduce the Central Limit Theorem to students. This will allow you to use more class time for in-depth discussions and activities instead of a full lecture.
2. Live Demonstration
As part of either a lecture or guided lab assignment, the SDM applet itself may be used by the instructor to demonstrate visually different aspects of the sampling distribution and the Central Limit Theorem. Some instructors may choose to step through parts or all of the tutorial in a demonstration mode. This demonstration may serve as a stimulus for classroom discussion and/or introduction to an assignment for students. See our step-by-step guide for a live demonstration using the applet.
3. Post-lecture Assignment
After your presentation of the Central Limit Theorem material, the module can be used to demonstrate lecture points and give students practice using the concepts. This applet allows students to gain a perspective on the concepts that complements a lecture or other presentations. The more perspectives students are exposed to in the course of instruction, the more likely they are to understand and retain the material.
For more information, see the Introduction to the tutorial.
- Multiple-choice questions – The main portion of the module is designed to give students feedback without evaluating their performance. The multiple-choice questions provide feedback on both correct and incorrect responses. However, no record is kept of student answers.
- Essay questions – There are follow-up questions after the main part of the module. These questions are multiple-choice and short-answer essays and are designed to examine conceptual understanding of the topic. You may want students to complete this portion of the module and hand in their responses for your evaluation. This will give you an opportunity to evaluate what your students have learned. We have not posted answers to these questions.
WISE modules are designed as self-contained lessons that students can use with little, if any, guidance. If you are concerned that students may not feel comfortable using web pages and applets, you may consider using the module as part of an in-class activity. Most students complete the module in 40 – 50 minutes.
We hope this tutorial is helpful for you and your students, and we welcome your feedback on this tutorial and other aspects of the WISE site. Please send your comments to firstname.lastname@example.org.
Gross | Definition & Meaning
Gross refers to some amount (often of money) before applying any deductions. The gross profit of a shop, for example, is the revenue earned from the sale of products without subtracting the cost of those sold goods (business expenses like manufacturing, supply, etc.). The amount left over after deductions is called the net (net profit in this case).
Concepts of Gross
In mathematics, the term “gross” indicates the total or full amount of a quantity before any deductions or adjustments are made. It is often used in accounting and finance to describe revenues, profits, and other financial metrics. For example, gross income is the total amount of money earned before subtracting expenses or taxes.
Gross profit is the simplest form of profit: revenue minus cost. Alternatively, it is the difference between the revenue and the cost of goods sold (COGS). To get the gross margin, we additionally divide this difference by the revenue and multiply it by 100 to get the gross margin as a percentage.
In some cases, “gross” can also refer to a quantity of twelve dozen. For example, one gross equals 12 dozen, or 12 × 12 = 144 items.
Gross typically refers to the full or total amount of a quantity before any deductions or adjustments are made.
What Does Gross Mean?
“Gross” typically refers to the full amount of a quantity before any deductions or adjustments are made. This can include gross domestic product (GDP), the total value of all services and goods produced within a country in a given period. GDP measures the economic activity of a country and is often used as an indicator of the overall health of an economy.
Another important economic measure that uses the term “gross” is gross national product (GNP), which is like GDP but also includes income earned by the citizens of that country regardless of where in the world they live. This is a more comprehensive measure of a country’s economic activity, as it considers both domestic production and income earned abroad by citizens.
In addition, gross investment is the total amount of capital invested in an economy, including both private and government investment. It is a measure of the economy’s capacity to grow and expand.
Gross domestic income (GDI) is the total income generated by the services and goods produced by a country. It measures the value added to the economy by the production of goods and services.
Another use of “gross” in economics is gross margin, which is the difference between revenue and the cost of goods sold, expressed as a percentage of revenue. It measures the profitability of a company or industry.
In accounting, “gross profit” is the difference between revenue and cost of goods sold. It measures how much profit a company makes from its sales before deducting expenses such as overhead costs, taxes, and interest.
The term “gross,“ as opposed to “net,” indicates the total or full amount of a quantity before any deductions or adjustments are made. This can include measures of economic activity such as GDP and GNP, as well as measures of profitability and investment. These figures are important indicators of the overall health and growth of an economy and are closely watched by economists and policymakers.
In the figure above, Gross Domestic Product (GDP) measures a country's economic output and is often used to indicate the overall health of an economy. It is the total value of all services and goods produced in a country over a given period.
In mathematics, “gross profit” is the difference between revenue and cost of goods sold. It measures how much profit a company makes from its sales before deducting expenses such as overhead costs, taxes, and interest. Net profit, though, is the total amount of profit any company makes after subtracting all expenses from revenue.
One of the most common examples of the difference between gross and net involves income. Gross income is the total amount of money a person or company earns before any deductions. Net income, on the other hand, refers to the amount of money left after all deductions and expenses have been subtracted.
These deductions include taxes, overhead costs, and other expenses that are necessary to run a business or maintain a household.
Here in the above figure, the gross income refers to the total money earned without any deductions, whereas the net income refers to the amount of money earned after all deductions.
Quick Summary and Formulae
Gross income, again, shows the raw earnings without any deductions for any expenses. Net income, like net profit, refers to the amount of money earned after all deductions and expenses have been subtracted.
Gross Income = Total Sales – Cost of Goods Sold
Net Income = Gross Income – All Deductions (e.g., taxes and other costs)
Gross Domestic Product (GDP) refers to the country’s economic growth and is often used as an indicator of the overall health of an economy. Gross National Product (GNP) includes the income earned by citizens of a country regardless of where they are located, and it considers both domestic production and income earned abroad by citizens.
GDP = Private Consumption + Government Spending + Gross Private Investment + Government Investment + (Exports – Imports)
GNP = GDP + NIA
Where NIA is net income from abroad:
NIA = money flowing in from foreign countries – money flowing out to foreign countries
Gross investment is basically the total amount of capital invested in an economy, including both private and government investment. Net investment, on the other hand, is the total amount of capital invested in an economy after deducting depreciation.
Gross Investment = Net Working Capital + Fixed Assets + Accumulated Depreciation + Accumulated Amortization
Net Investment = Gross Investment – Capital Depreciation
Gross profit is the difference between revenue and the cost of goods sold. It measures how much profit a company makes from its sales before deducting expenses such as overhead costs, taxes, and interest. Net profit, on the other hand, is the profit that a company earns after all expenses have been subtracted from revenue.
Gross Profit = Revenue – Cost of Goods Sold
Gross Margin (%) = (Gross Profit / Revenue) * 100
Net Profit = Gross Profit – All Deductions and Other Expenses
Examples of Gross in Finance
If a school has 50 teachers, and each teacher earns a salary of \$2,000, the gross salary of all teachers is 50 * \$2,000 = \$100,000. In this case, “gross” refers to the total salary amount before any deductions for taxes or benefits.
Let’s say you sold 60 units of a product for \$5 each. The total revenue generated would be \$300, which is the gross amount, before considering the costs of production or any other expenses that may decrease the net profit.
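To tie the formulas and the sales example together, here is a small illustrative Python snippet; the per-unit cost and the other expenses are invented figures, since the example above only states the revenue.

```python
def gross_profit(revenue, cost_of_goods_sold):
    return revenue - cost_of_goods_sold

def gross_margin_percent(revenue, cost_of_goods_sold):
    return gross_profit(revenue, cost_of_goods_sold) / revenue * 100

def net_profit(revenue, cost_of_goods_sold, other_expenses):
    return gross_profit(revenue, cost_of_goods_sold) - other_expenses

# 60 units sold at $5 each (gross revenue $300); assume each unit cost
# $2 to produce and there were $50 of other expenses (both assumed values).
revenue = 60 * 5
cogs = 60 * 2
print(gross_profit(revenue, cogs))          # 180
print(gross_margin_percent(revenue, cogs))  # 60.0
print(net_profit(revenue, cogs, 50))        # 130
```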
All the figures above were created on GeoGebra.
Ecology of Ecosystems
- Describe the basic ecosystem types
- Explain the methods that ecologists use to study ecosystem structure and dynamics
- Identify the different methods of ecosystem modeling
- Differentiate between food chains and food webs and recognize the importance of each
Life in an ecosystem is often about competition for limited resources, a characteristic of the theory of natural selection. Competition in communities (all living things within specific habitats) is observed both within species and among different species. The resources for which organisms compete include organic material, sunlight, and mineral nutrients, which provide the energy for living processes and the matter to make up organisms’ physical structures. Other critical factors influencing community dynamics are the components of its physical and geographic environment: a habitat’s latitude, amount of rainfall, topography (elevation), and available species. These are all important environmental variables that determine which organisms can exist within a particular area.
An ecosystem is a community of living organisms and their interactions with their abiotic (nonliving) environment. Ecosystems can be small, such as the tide pools found near the rocky shores of many oceans, or large, such as the Amazon Rainforest in Brazil (Figure).
There are three broad categories of ecosystems based on their general environment: freshwater, ocean water, and terrestrial. Within these broad categories are individual ecosystem types based on the organisms present and the type of environmental habitat.
Ocean ecosystems are the most common, comprising over 70 percent of the Earth's surface and consisting of three basic types: shallow ocean, deep ocean water, and deep ocean surfaces (the low depth areas of the deep oceans). The shallow ocean ecosystems include extremely biodiverse coral reef ecosystems, and the deep ocean surface is known for its large numbers of plankton and krill (small crustaceans) that support it. These two environments are especially important to aerobic respirators worldwide as the phytoplankton perform 40 percent of all photosynthesis on Earth. Although not as diverse as the other two, deep ocean ecosystems contain a wide variety of marine organisms. Such ecosystems exist even at the bottom of the ocean where light is unable to penetrate through the water.
Freshwater ecosystems are the rarest, occurring on only 1.8 percent of the Earth's surface. Lakes, rivers, streams, and springs comprise these systems. They are quite diverse, and they support a variety of fish, amphibians, reptiles, insects, phytoplankton, fungi, and bacteria.
Terrestrial ecosystems, also known for their diversity, are grouped into large categories called biomes, such as tropical rain forests, savannas, deserts, coniferous forests, deciduous forests, and tundra. Grouping these ecosystems into just a few biome categories obscures the great diversity of the individual ecosystems within them. For example, there is great variation in desert vegetation: the saguaro cacti and other plant life in the Sonoran Desert, in the United States, are relatively abundant compared to the desolate rocky desert of Boa Vista, an island off the coast of Western Africa (Figure).
Ecosystems are complex with many interacting parts. They are routinely exposed to various disturbances, or changes in the environment that affect their compositions: yearly variations in rainfall and temperature and the slower processes of plant growth, which may take several years. Many of these disturbances result from natural processes. For example, when lightning causes a forest fire and destroys part of a forest ecosystem, the ground is eventually populated by grasses, then by bushes and shrubs, and later by mature trees, restoring the forest to its former state. The impact of environmental disturbances caused by human activities is as important as the changes wrought by natural processes. Human agricultural practices, air pollution, acid rain, global deforestation, overfishing, eutrophication, oil spills, and waste dumping on land and into the ocean are all issues of concern to conservationists.
Equilibrium is the steady state of an ecosystem where all organisms are in balance with their environment and with each other. In ecology, two parameters are used to measure changes in ecosystems: resistance and resilience. Resistance is the ability of an ecosystem to remain at equilibrium in spite of disturbances. Resilience is the speed at which an ecosystem recovers equilibrium after being disturbed. Ecosystem resistance and resilience are especially important when considering human impact. The nature of an ecosystem may change to such a degree that it can lose its resilience entirely. This process can lead to the complete destruction or irreversible altering of the ecosystem.
Food Chains and Food Webs
The term “food chain” is sometimes used metaphorically to describe human social situations. Individuals who are considered successful are seen as being at the top of the food chain, consuming all others for their benefit, whereas the less successful are seen as being at the bottom.
The scientific understanding of a food chain is more precise than in its everyday usage. In ecology, a food chain is a linear sequence of organisms through which nutrients and energy pass: primary producers, primary consumers, and higher-level consumers are used to describe ecosystem structure and dynamics. There is a single path through the chain. Each organism in a food chain occupies what is called a trophic level. Depending on their role as producers or consumers, species or groups of species can be assigned to various trophic levels.
In many ecosystems, the bottom of the food chain consists of photosynthetic organisms (plants and/or phytoplankton), which are called primary producers. The organisms that consume the primary producers are herbivores: the primary consumers. Secondary consumers are usually carnivores that eat the primary consumers. Tertiary consumers are carnivores that eat other carnivores. Higher-level consumers feed on the next lower trophic levels, and so on, up to the organisms at the top of the food chain: the apex consumers. In the Lake Ontario food chain shown in Figure, the Chinook salmon is the apex consumer at the top of this food chain.
One major factor that limits the length of food chains is energy. Energy is lost as heat between each trophic level due to the second law of thermodynamics. Thus, after a limited number of trophic energy transfers, the amount of energy remaining in the food chain may not be great enough to support viable populations at yet a higher trophic level.
The loss of energy between trophic levels is illustrated by the pioneering studies of Howard T. Odum in the Silver Springs, Florida, ecosystem in the 1940s (Figure). The primary producers generated 20,819 kcal/m2/yr (kilocalories per square meter per year), the primary consumers generated 3368 kcal/m2/yr, the secondary consumers generated 383 kcal/m2/yr, and the tertiary consumers only generated 21 kcal/m2/yr. Thus, there is little energy remaining for another level of consumers in this ecosystem.
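As a quick check on the arithmetic (not part of the original text), the trophic transfer efficiencies implied by Odum's Silver Springs figures can be computed directly from the kcal values quoted above.

```python
# Energy flow measured by Odum at Silver Springs (kcal per m^2 per year).
energy = {
    "primary producers": 20_819,
    "primary consumers": 3_368,
    "secondary consumers": 383,
    "tertiary consumers": 21,
}

levels = list(energy.items())
for (lower_name, lower), (upper_name, upper) in zip(levels, levels[1:]):
    efficiency = upper / lower * 100
    print(f"{lower_name} -> {upper_name}: {efficiency:.1f}% of energy transferred")
# Roughly 16%, 11%, and 5.5%; so little energy remains at the top that a
# further trophic level could not be supported.
```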
There is one problem when using food chains to accurately describe most ecosystems. Even when all organisms are grouped into appropriate trophic levels, some of these organisms can feed on species from more than one trophic level; likewise, some of these organisms can be eaten by species from multiple trophic levels. In other words, the linear model of ecosystems, the food chain, is not completely descriptive of ecosystem structure. A holistic model—which accounts for all the interactions between different species and their complex interconnected relationships with each other and with the environment—is a more accurate and descriptive model for ecosystems. A food web is a graphic representation of a holistic, nonlinear web of primary producers, primary consumers, and higher-level consumers used to describe ecosystem structure and dynamics (Figure).
A comparison of the two types of structural ecosystem models shows strengths in both. Food chains are more flexible for analytical modeling, are easier to follow, and are easier to experiment with, whereas food web models more accurately represent ecosystem structure and dynamics, and data can be directly used as input for simulation modeling.
Link to Learning
Head to this online interactive simulator to investigate food web function. In the Interactive Labs box, under Food Web, click Step 1. Read the instructions first, and then click Step 2 for additional instructions. When you are ready to create a simulation, in the upper-right corner of the Interactive Labs box, click OPEN SIMULATOR.
Two general types of food webs are often shown interacting within a single ecosystem. A grazing food web (such as the Lake Ontario food web in Figure) has plants or other photosynthetic organisms at its base, followed by herbivores and various carnivores. A detrital food web consists of a base of organisms that feed on decaying organic matter (dead organisms), called decomposers or detritivores. These organisms are usually bacteria or fungi that recycle organic material back into the biotic part of the ecosystem as they themselves are consumed by other organisms. As all ecosystems require a method to recycle material from dead organisms, most grazing food webs have an associated detrital food web. For example, in a meadow ecosystem, plants may support a grazing food web of different organisms, primary and other levels of consumers, while at the same time supporting a detrital food web of bacteria, fungi, and detritivorous invertebrates feeding off dead plants and animals.
Three-spined Stickleback
It is well established by the theory of natural selection that changes in the environment play a major role in the evolution of species within an ecosystem. However, little is known about how the evolution of species within an ecosystem can alter the ecosystem environment. In 2009, Dr. Luke Harmon, from the University of Idaho, published a paper in Nature (Vol. 458, April 1, 2009) that for the first time showed that the evolution of organisms into subspecies can have direct effects on their ecosystem environment.
The three-spined stickleback (Gasterosteus aculeatus) is a freshwater fish that evolved from a saltwater fish to live in freshwater lakes about 10,000 years ago, which is considered a recent development in evolutionary time (Figure). Over the last 10,000 years, these freshwater fish then became isolated from each other in different lakes. Depending on which lake population was studied, findings showed that these sticklebacks then either remained as one species or evolved into two species. The divergence of species was made possible by their use of different areas of the pond for feeding, called micro niches.
Dr. Harmon and his team created artificial pond microcosms in 250-gallon tanks and added muck from freshwater ponds as a source of zooplankton and other invertebrates to sustain the fish. In different experimental tanks they introduced one species of stickleback from either a single-species or double-species lake.
Over time, the team observed that some of the tanks bloomed with algae while others did not. This puzzled the scientists, and they decided to measure the water's dissolved organic carbon (DOC), which consists of mostly large molecules of decaying organic matter that give pond-water its slightly brownish color. It turned out that the water from the tanks with two-species fish contained larger particles of DOC (and hence darker water) than water with single-species fish. This increase in DOC blocked the sunlight and prevented algal blooming. Conversely, the water from the single-species tank contained smaller DOC particles, allowing more sunlight penetration to fuel the algal blooms.
This change in the environment, which is due to the different feeding habits of the stickleback species in each lake type, probably has a great impact on the survival of other species in these ecosystems, especially other photosynthetic organisms. Thus, the study shows that, at least in these ecosystems, the environment and the evolution of populations have reciprocal effects that may now be factored into simulation models.
Research into Ecosystem Dynamics: Ecosystem Experimentation and Modeling
The study of the changes in ecosystem structure caused by changes in the environment (disturbances) or by internal forces is called ecosystem dynamics. Ecosystems are characterized using a variety of research methodologies. Some ecologists study ecosystems using controlled experimental systems, while some study entire ecosystems in their natural state, and others use both approaches.
A holistic ecosystem model attempts to quantify the composition, interaction, and dynamics of entire ecosystems; it is the most representative of the ecosystem in its natural state. A food web is an example of a holistic ecosystem model. However, this type of study is limited by time and expense, as well as the fact that it is neither feasible nor ethical to do experiments on large natural ecosystems. It is difficult to quantify all different species in an ecosystem and the dynamics in their habitat, especially when studying large habitats such as the Amazon Rainforest.
For these reasons, scientists study ecosystems under more controlled conditions. Experimental systems usually involve either partitioning a part of a natural ecosystem that can be used for experiments, termed a mesocosm, or by recreating an ecosystem entirely in an indoor or outdoor laboratory environment, which is referred to as a microcosm. A major limitation to these approaches is that removing individual organisms from their natural ecosystem or altering a natural ecosystem through partitioning may change the dynamics of the ecosystem. These changes are often due to differences in species numbers and diversity and also to environment alterations caused by partitioning (mesocosm) or recreating (microcosm) the natural habitat. Thus, these types of experiments are not totally predictive of changes that would occur in the ecosystem from which they were gathered.
As both of these approaches have their limitations, some ecologists suggest that results from these experimental systems should be used only in conjunction with holistic ecosystem studies to obtain the most representative data about ecosystem structure, function, and dynamics.
Scientists use the data generated by these experimental studies to develop ecosystem models that demonstrate the structure and dynamics of ecosystems. They use three basic types of ecosystem modeling in research and ecosystem management: a conceptual model, an analytical model, and a simulation model. A conceptual model is an ecosystem model that consists of flow charts to show interactions of different compartments of the living and nonliving components of the ecosystem. A conceptual model describes ecosystem structure and dynamics and shows how environmental disturbances affect the ecosystem; however, its ability to predict the effects of these disturbances is limited. Analytical and simulation models, in contrast, are mathematical methods of describing ecosystems that are indeed capable of predicting the effects of potential environmental changes without direct experimentation, although with some limitations as to accuracy. An analytical model is an ecosystem model that is created using simple mathematical formulas to predict the effects of environmental disturbances on ecosystem structure and dynamics. A simulation model is an ecosystem model that is created using complex computer algorithms to holistically model ecosystems and to predict the effects of environmental disturbances on ecosystem structure and dynamics. Ideally, these models are accurate enough to determine which components of the ecosystem are particularly sensitive to disturbances, and they can serve as a guide to ecosystem managers (such as conservation ecologists or fisheries biologists) in the practical maintenance of ecosystem health.
Conceptual models are useful for describing ecosystem structure and dynamics and for demonstrating the relationships between different organisms in a community and their environment. Conceptual models are usually depicted graphically as flow charts. The organisms and their resources are grouped into specific compartments with arrows showing the relationship and transfer of energy or nutrients between them. Thus, these diagrams are sometimes called compartment models.
To model the cycling of mineral nutrients, organic and inorganic nutrients are subdivided into those that are bioavailable (ready to be incorporated into biological macromolecules) and those that are not. For example, in a terrestrial ecosystem near a deposit of coal, carbon will be available to the plants of this ecosystem as carbon dioxide gas in a short-term period, not from the carbon-rich coal itself. However, over a longer period, microorganisms capable of digesting coal will incorporate its carbon or release it as natural gas (methane, CH4), changing this unavailable organic source into an available one. This conversion is greatly accelerated by the combustion of fossil fuels by humans, which releases large amounts of carbon dioxide into the atmosphere. This is thought to be a major factor in the rise of the atmospheric carbon dioxide levels in the industrial age. The carbon dioxide released from burning fossil fuels is produced faster than photosynthetic organisms can use it. This process is intensified by the reduction of photosynthetic trees because of worldwide deforestation. Most scientists agree that high atmospheric carbon dioxide is a major cause of global climate change.
Conceptual models are also used to show the flow of energy through particular ecosystems. Figure is based on Howard T. Odum’s classical study of the Silver Springs, Florida, holistic ecosystem in the mid-twentieth century.Howard T. Odum, “Trophic Structure and Productivity of Silver Springs, Florida,” Ecological Monographs 27, no. 1 (1957): 47–112. This study shows the energy content and transfer between various ecosystem compartments.
Why do you think the value for gross productivity of the primary producers is the same as the value for total heat and respiration (20,810 kcal/m2/yr)?
Analytical and Simulation Models
The major limitation of conceptual models is their inability to predict the consequences of changes in ecosystem species and/or environment. Ecosystems are dynamic entities and subject to a variety of abiotic and biotic disturbances caused by natural forces and/or human activity. Ecosystems altered from their initial equilibrium state can often recover from such disturbances and return to a state of equilibrium. As most ecosystems are subject to periodic disturbances and are often in a state of change, they are usually either moving toward or away from their equilibrium state. There are many of these equilibrium states among the various components of an ecosystem, which affects the ecosystem overall. Furthermore, as humans have the ability to greatly and rapidly alter the species content and habitat of an ecosystem, the need for predictive models that enable understanding of how ecosystems respond to these changes becomes more crucial.
Analytical models often use simple, linear components of ecosystems, such as food chains, and are known to be complex mathematically; therefore, they require a significant amount of mathematical knowledge and expertise. Although analytical models have great potential, their simplification of complex ecosystems is thought to limit their accuracy. Simulation models that use computer programs are better able to deal with the complexities of ecosystem structure.
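To make the contrast concrete, a classic analytical starting point is a pair of coupled equations such as the Lotka-Volterra predator-prey model, and a simulation model simply steps such equations forward in time on a computer. The sketch below (plain Python) is only an illustration of that idea; the parameter values are invented and do not describe any real ecosystem.

# Minimal predator-prey simulation: Euler time steps of the Lotka-Volterra
# equations. All parameter values are illustrative assumptions.
def simulate(prey=40.0, predators=9.0, years=20, dt=0.01,
             birth=1.0, predation=0.05, efficiency=0.02, death=0.5):
    history = []
    steps = int(years / dt)
    for step in range(steps):
        d_prey = (birth * prey - predation * prey * predators) * dt
        d_predators = (efficiency * prey * predators - death * predators) * dt
        prey += d_prey
        predators += d_predators
        if step % int(1 / dt) == 0:          # record once per simulated year
            history.append((step * dt, prey, predators))
    return history

for year, prey, predators in simulate():
    print("year {:4.1f}: prey {:7.1f}  predators {:6.1f}".format(year, prey, predators))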
A recent development in simulation modeling uses supercomputers to create and run individual-based simulations, which accounts for the behavior of individual organisms and their effects on the ecosystem as a whole. These simulations are considered to be the most accurate and predictive of the complex responses of ecosystems to disturbances.
Link to Learning
Visit The Darwin Project to view a variety of ecosystem models.
Ecosystems exist on land, at sea, in the air, and underground. Different ways of modeling ecosystems are necessary to understand how environmental disturbances will affect ecosystem structure and dynamics. Conceptual models are useful to show the general relationships between organisms and the flow of materials or energy between them. Analytical models are used to describe linear food chains, and simulation models work best with holistic food webs.
Figure: Why do you think the value for gross productivity of the primary producers is the same as the value for total heat and respiration (20,810 kcal/m2/yr)?
Figure: According to the first law of thermodynamics, energy can neither be created nor destroyed. Eventually, all energy consumed by living systems is lost as heat or used for respiration, and the total energy output of the system must equal the energy that went into it.
The ability of an ecosystem to return to its equilibrium state after an environmental disturbance is called ________.
A re-created ecosystem in a laboratory environment is known as a ________.
Decomposers are associated with which class of food web?
The primary producers in an ocean grazing food web are usually ________.
What term describes the use of mathematical equations in the modeling of linear aspects of ecosystems?
- analytical modeling
- simulation modeling
- conceptual modeling
- individual-based modeling
The position of an organism along a food chain is known as its ________.
- trophic level
The loss of an apex consumer would impact which trophic level of a food web?
- primary producers
- primary consumers
- secondary consumers
- all of the above
A food chain would be a better resource than a food web to answer which question?
- How does energy move from an organism in one trophic level to an organism on the next trophic level?
- How does energy move within a trophic level?
- What preys on grasses?
- How is organic matter recycled in a forest?
Compare and contrast food chains and food webs. What are the strengths of each concept in describing ecosystems?
Food webs show interacting groups of different species and their many interconnections with each other and the environment. Food chains are linear aspects of food webs that describe the succession of organisms consuming one another at defined trophic levels. Food webs are a more accurate representation of the structure and dynamics of an ecosystem. Food chains are easier to model and use for experimental studies.
Describe freshwater, ocean, and terrestrial ecosystems.
Freshwater ecosystems are the rarest, but have great diversity of freshwater fish and other aquatic life. Ocean ecosystems are the most common and are responsible for much of the photosynthesis that occurs on Earth. Terrestrial ecosystems are very diverse; they are grouped based on their species and environment (biome), which includes forests, deserts, and tundras.
Compare grazing and detrital food webs. Why would they both be present in the same ecosystem?
Grazing food webs have a primary producer at their base, which is either a plant for terrestrial ecosystems or a phytoplankton for aquatic ecosystems. The producers pass their energy to the various trophic levels of consumers. At the base of detrital food webs are the decomposers, which pass this energy to a variety of other consumers. Detrital food webs are important for the health of many grazing food webs because they eliminate dead and decaying organic material, thus, clearing space for new organisms and removing potential causes of disease. By breaking down dead organic matter, decomposers also make mineral nutrients available to primary producers; this process is a vital link in nutrient cycling.
How does the microcosm modeling approach differ from utilizing a holistic model for ecological research?
In a microcosm model, an ecologist recreates an ecosystem in a controlled environment. Since the ecologist is populating the environment, they can control the variables and the different species involved in the study to ask specific questions.
How do conceptual and analytical models of ecosystems complement each other?
Conceptual models allow ecologists to see the “big picture” of how different components of the ecosystem interact with each other, energy sources, and resources. However, this approach is more descriptive than quantitative, so it is difficult to make conclusions about the resistance or resilience of a system. Analytical modeling creates a model that can predict how the ecosystem’s relationships will change in response to disturbances, but does not convey the complexity of the relationships seen with conceptual modeling. |
NASA’s Spitzer Space Telescope has gathered surprising new details about a supersized and superheated version of Earth called 55 Cancri e. According to Spitzer data, the exoplanet is less dense than previously thought, a finding which profoundly changes the portrait of this exotic world. Instead of a dense rock scorched dry by its sun, 55 Cancri e likely has water vapor and other gases steaming from its molten surface.
Spitzer measured the extraordinarily small amount of light 55 Cancri e blocked when the planet crossed in front of its star. These mini-eclipses, called transits, allow astronomers to accurately determine a planet’s size and calculate its density. Promisingly, the results show how astronomers can use Spitzer, operating in “warm” mode since depleting its liquid coolant in May 2009, to probe the properties of strange alien worlds…
“This work demonstrates that ‘warm’ Spitzer can measure an extremely faint eclipse caused by exoplanets’ transits with very high precision,” said Brice-Olivier Demory, a post-doctoral associate in Professor Sara Seager’s group in the Earth, Atmospheric and Planetary Sciences department at the Massachusetts Institute of Technology (MIT). Demory, who is lead author of a paper accepted for publication in Astronomy & Astrophysics, said that the study “emphasizes the important role Spitzer still has to play for the detection and characterization of transiting planets.”
Blazing Hot and on the Move
Astronomers first discovered 55 Cancri e in 2004, and continued investigation of the exoplanet has shown it to be a truly bizarre place. The world revolves around its sunlike star in the shortest time period of all known exoplanets – just 17 hours and 40 minutes. (In other words, a year on 55 Cancri e lasts less than 18 hours.) The exoplanet orbits about 26 times closer to its star than Mercury, the most Sun-kissed planet in our solar system. Such proximity means that 55 Cancri e’s surface roasts at a minimum of 3,200 degrees Fahrenheit (1,760 degrees Celsius).
The new observations with Spitzer reveal 55 Cancri e to have a mass 7.8 times and a radius just over twice that of Earth. Those properties place 55 Cancri e in the “super-Earth” class of exoplanets, a few dozen of which have been found. Only a handful of known super-Earths, however, cross the face of their stars as viewed from our vantage point in the cosmos. At just 40 light years away, 55 Cancri e stands as the smallest transiting super-Earth in our stellar neighborhood. In fact, 55 Cancri is so bright and close that it can be seen with the naked eye on a clear, dark night.
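The revised density follows directly from the mass and radius reported here. In the short calculation below (plain Python), the radius factor of 2.1 is an assumed reading of "just over twice" Earth's radius, and Earth's mean density of 5.51 g/cm3 is used for the comparison:

EARTH_DENSITY = 5.51     # g/cm^3, Earth's mean density
mass_ratio = 7.8         # 55 Cancri e mass in Earth masses (from the article)
radius_ratio = 2.1       # "just over twice" Earth's radius (assumed value)

# Density scales as mass / radius^3 when both are expressed relative to Earth.
density = EARTH_DENSITY * mass_ratio / radius_ratio ** 3
print("Estimated density: {:.1f} g/cm^3".format(density))            # about 4.6 g/cm^3
print("Relative to Earth: {:.2f} of Earth's density".format(density / EARTH_DENSITY))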
Based on the precise Spitzer data, Demory and his colleagues came up with a revised, lower density for 55 Cancri e. Coupled with its tight orbit, 55 Cancri e possesses a unique combination of super-Earth traits. Its low density is similar to that of a cooler super-Earth called GJ1214b, discovered in 2009 orbiting a tiny, dim star. Yet 55 Cancri e’s orbit is more like that of the denser, inferno worlds CoRoT-7b and Kepler-10b. “What makes 55 Cancri e so remarkable is that despite its high temperature, the planet has a low density,” said Demory.
Previously, a separate international team of astronomers had made observations of 55 Cancri e in visible light with Canada’s MOST telescope. Initially, their evidence implied that 55 Cancri e’s diameter was smaller by 25 percent, leading to reports of 55 Cancri e as actually the densest planet known. Refinements to those observations, however, now agree with the new Spitzer findings, which rely on a transit seen in longer-wavelength infrared light.
Exoplanetary Origins and Future Demise
No longer looking like a dense planet of solid rock, 55 Cancri e instead appears to be an unprecedented world with an intriguing history. The Spitzer results suggest that about a fifth of the planet’s mass must be made of light elements and compounds, including water. In the intense heat of 55 Cancri e’s terribly close sun, those light materials would exist in a “supercritical” state, between that of a liquid and a gas, and might sizzle out of the planet’s surface.
New developments in planetary formation and evolution theory will probably be necessary to explain 55 Cancri e’s back story. According to our models of the birth of solar systems, for example, 55 Cancri e could not have formed so near its star. Maybe it started out as a more distant planet with a large gaseous atmosphere. As worlds took shape in the 55 Cancri solar system, gravitational interactions amongst the system’s five known planets could have prodded a young 55 Cancri e to migrate in toward its sun. In the process, the Neptune-like exoplanet might have lost most of its atmosphere, exposing a core that sputters with the venting of heated chemicals.
It seems certain that 55 Cancri e is on a “death spiral,” soon to be devoured or ripped apart by its host star. But for now, the world’s serendipitous placement in our sky will allow Spitzer and other instruments to study 55 Cancri e in further detail, expanding our knowledge of how exoplanets work.
“55 Cancri e orbits a very bright star thus enabling the possibility of obtaining a wealth of observations with space-based facilities at various wavelengths,” said study co-author Michael Gillon of the University of Liege in Belgium and principal investigator for the warm Spitzer program aimed at detecting transiting low-mass exoplanets. “This fact will make 55 Cancri e a landmark for our understanding of the planetary interior and atmospheric composition of super-Earths.”
Other authors of the paper are Diana Valencia, Sara Seager and Bjorn Benneke of MIT; Drake Deming of the University of Maryland; Christophe Lovis, Michel Mayor, Francesco Pepe, Didier Queloz, Damien Ségransan, and Stéphane Udry of the University of Geneva; and Patricio Cubillos, Joseph Harrington, and Kevin B. Stevenson of the University of Central Florida. |
This tutorial is the Typography chapter from Processing: A Programming Handbook for Visual Designers and Artists, Second Edition, published by MIT Press. © 2014 MIT Press. If you see any errors or have comments, please let us know.
The evolution of typographic reproduction and display technologies has impacted, and continues to impact, human culture. Early printing techniques developed by Johannes Gutenberg in fifteenth-century Germany using letters cast from lead provided a catalyst for increased literacy and the scientific revolution. Automated typesetting machines, such as the Linotype invented in the nineteenth century, changed the way information was produced, distributed, and consumed. In the digital era, the way we consume text has changed drastically since the proliferation of personal computers in the 1980s and the rapid growth of the Internet in the 1990s. Text from emails, websites, and instant messages fills computer screens, and while many of the typographic rules of the past apply, type on screen requires additional considerations for enhanced communication and legibility.
Letters on screen are created by setting the color of pixels. The quality of the typography is constrained by the resolution of the screen. Because, historically, screens have a low resolution in comparison to paper, techniques have been developed to enhance the appearance of type on screen. The fonts on the earliest Apple Macintosh computers comprised small bitmap images created at specific sizes like 10, 12, and 24 points. Using this technology, a variation of each font was designed for each size of a particular typeface. For example, the character A in the San Francisco typeface used a different image to display the character at sizes 12 and 18. When the LaserWriter printer was introduced in 1985, Postscript technology defined fonts with a mathematical description of each character’s outline. This allowed type on screen to scale to large sizes and still look smooth. Apple and Microsoft later developed TrueType, another outline font format. More recently, these technologies were merged into the OpenType format. In the meantime, methods to smooth text on screen were introduced. These anti-aliasing techniques use gray pixels at the edge of characters to compensate for low screen resolution.
The proliferation of personal computers in the mid-1980s spawned a period of rapid typographic experimentation. Digital typefaces are software, and the old rules of metal and photo type no longer apply. The Dutch typographers known as LettError explain, “The industrial methods of producing typography meant that all letters had to be identical… Typography is now produced with sophisticated equipment that doesn’t impose such rules. The only limitations are in our expectations.”1 LettError expanded the possibilities of typography with their typeface Beowolf (p. 131). It printed every letter differently so that each time an A is printed, for example, it will have a different shape. During this time, typographers such as Zuzana Licko and Barry Deck created innovative typefaces with the assistance of new software tools. The flexibility of software has enabled extensive font revivals and historic homages such as Adobe Garamond from Robert Slimbach and The Proteus Project from Jonathan Hoefler. Typographic nuances such as ligatures—connections between letter pairs such as fi and æ—made impractical by modern mechanized typography are flourishing again through software font tools.
The text() function is used to draw letters, words, and paragraphs to the screen. In the simplest use, the first parameter can be a String, char, int, or float. The second and third parameters set the position of the text. By default, the second parameter defines the distance from the left edge of the window; the third parameter defines the distance from the text’s baseline to the top of the window. The textSize() function defines the size the letters will draw in units of pixels. The number used to define the text size will not be the precise height of each letter; the difference depends on the design of each font. For instance, the statement textSize(30) won’t necessarily draw a capital H at 30 pixels high. The fill() function controls the color and transparency of text. This function affects text the same way it affects shapes such as rect() and ellipse(), but text is not affected by stroke().
textSize(32) # Set text size to 32
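The fragment above comes from a larger example; a minimal, self-contained sketch in the Python Mode style used by this chapter's fragments might look like the following (the words and coordinates are arbitrary):

def setup():
    size(240, 120)
    textSize(32)            # letters will draw at roughly 32 pixels
    fill(0)                 # text color is set with fill(), like shapes

def draw():
    background(204)
    text("LAX", 20, 60)     # x-coordinate 20, baseline at y = 60
    text("38007", 20, 100)  # numbers can also be passed to text()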
Another version of text() draws the characters inside a rectangle. In this use, the second and third parameters define the position of the upper-left corner of the box and the fourth and fifth parameters define the width and height of the box. If the length of the text exceeds the dimensions of the defined box, the text will not display.
s = "Five hexing wizard bots jump quickly."
s = "Five hexing wizard bots jump quickly."
The examples in this chapter are the first to load external media into a sketch. Up to now, all examples have used only graphics generated within Processing through drawing functions such as line() and ellipse(). Processing is capable of loading and displaying other media, including fonts, images, vector files, formatted data, and sounds. While this chapter focuses on loading fonts and other chapters discuss specific information about other media types, there are a few things about loading media that apply to all categories. These similarities are discussed here.
Before external media can be used in a Processing sketch, it needs to be loaded each time the program is run. Media can be loaded directly from a sketch’s folder, another location on the computer, or through the Internet. Most typically, the media is loaded directly from the sketch’s folder. Media is usually placed into a folder called data; there are three ways to get media into this folder:
To confirm the file was added correctly, select “Show Sketch Folder” from the Sketch menu. The file will be inside the data folder. With the image file in the right place, it’s ready to load. Be sure to include the file format extension as a part of the name and to put the entire name in quotes (e.g., “pup.gif”, “kat.jpg”, “ignatz.png”). When loading the file, be careful to use the correct capitalization when writing the file name. If the file is arch.jpg, trying to load Arch.jpg or arch.JPG will create an error. Also, avoid the use of spaces in file names, which can cause problems.
To make media files accessible from anywhere in a program, they are typically declared as globally available variables outside of setup() and draw(). Files are usually loaded inside setup() because they need only be loaded once and because it takes time to load them. Loading a file inside draw() reduces the frame rate of a program because it causes the file to reload each frame. Once a file is loaded in setup(), it may be utilized anywhere in the program. In most Processing programs, all files are loaded when the program starts.
To work with fonts other than the default, more functions are needed to prepare a font to be used with Processing. The createFont() function is used to convert a TrueType font (.ttf) or OpenType font (.otf) so that it can be displayed through text(). The textFont() function is used to define the current font to display. Any compatible font installed on the computer running Processing or stored in the sketch’s data folder may be used. The following short program is used to print the list of the available installed fonts to the console:
String[] fontList = PFont.list();
printArray(fontList);
The printArray() function (p. 420) is used to write each font on a new line. The first few options printed to the console are general typographic classifications such as Serif, SansSerif, and Monospaced. Use these options to define a style, but not a specific font. When the list is generated on the computer used to write this book, a list of 573 font options is printed to the console. Your computer will produce different results depending on the operating system and custom fonts installed. The list starts with general font categories that will work across platforms, then continues with specific font names. A short excerpt from our list follows:
Before a font is used in a program, it must be converted and set as the current font. Processing has a unique data type called PFont to store font data. Make a variable and use the createFont() function to convert the font. The first parameter to createFont() is the name of the font to convert and the second parameter defines the base size of the font. (Optional third and fourth parameters are defined in the Reference.) The textFont() function must then be used to set the current font. On our development computer, to work with Ziggurat Black, list option 571 above, the following code is run:
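A minimal stand-in for that code, written in the Python Mode style used by the other fragments in this chapter, is sketched below; the font name on line 5 is only a placeholder for one from your own list:

def setup():
    size(240, 120)
    fill(0)
    # convert an installed font so that it can be drawn with text()
    font = createFont("Ziggurat-Black", 32)   # line 5: substitute a font from your own list
    textFont(font)                            # make the converted font the current font

def draw():
    background(204)
    text("LAX", 20, 60)
    text("AMS", 20, 100)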
To make this program work on your computer, you will likely need to modify line 5 to work with a font on your machine. This program is similar to code 12-01, but notice the differences in the letters in the Ziggurat font in relation to the default font.
To ensure a font will load on all computers, regardless of whether the font is installed, add the file to the sketch’s data folder. (Fonts in the data folder don’t print in the console list as demonstrated in code 12-07.) Follow the instructions on page 10 to add a font to the data folder. When fonts inside the data folder are used, the complete file name, including the data type extension, needs to be written as the parameter to createFont(). The following example is similar to the prior example, but it uses an OpenType font inside the data folder. It uses Source Code Pro, an open source typeface that can be found online and downloaded through a web browser.
To use two fonts in one program, create two global variables and use the textFont() function to change the current font. Based on the prior two examples, the Ziggurat-Black font loads from its location on the local computer and Source Code Pro loads from the data folder.
global zigBlack, sourceLight
Processing can also work with fonts that it converts into small image textures. These fonts aren’t as flexible and crisp as fonts converted for Processing with createFont() and used with the default renderer, but they are more optimized for use with the P2D and P3D renderers. The difference between renderers is discussed on page 547. The pixel font format used by Processing was developed at the MIT Media Lab in the mid 1990s in the Visual Language Workshop (VLW). The VLW format stores each alphanumeric character as a grid of pixels. It is a quick way to render text and makes it possible to include a font with a sketch without including the vector data.
To convert a font to the VLW format, select the “Create Font” option from the Tools menu. A window opens and displays the names of the fonts installed on your computer that are compatible. Select a font from the list and click “OK.” The font is generated and copied into the current sketch’s data folder. To make sure the font is there, click on the Sketch menu and select “Show Sketch Folder.” The Create Font tool offers the option to set the size of the font and to select whether it will have smooth, antialiased edges. This tool also offers the option to export “All Characters,” which means every character in the font will be included and will therefore increase the file size.
The following example uses the same font as the prior createFont() example. The only difference is the replacement of that function with loadFont(). To run these examples, first use the “Create Font” tool to turn a font into a VLW file. Change the name of the parameter to loadFont() to match the name of the VLW file created.
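A minimal version of such a sketch, with a hypothetical file name standing in for the VLW file generated by the Create Font tool, might be:

def setup():
    size(240, 120)
    fill(0)
    # load a font previously converted with the "Create Font" tool;
    # replace the file name with the .vlw file in your own data folder
    font = loadFont("SourceCodePro-Light-32.vlw")
    textFont(font)

def draw():
    background(204)
    text("LAX", 20, 60)
    text("AMS", 20, 100)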
When the font is drawn at a different size from the size at which it was created, it is scaled and therefore does not always look as crisp and smooth. For example, if a font is created at 12 pixels and is displayed at 96 pixels, it will appear blurry.
For the best results, draw a font at the size at which it was created. If the same font needs to be used at multiple sizes, consider rendering and loading it at these precise sizes. When VLW fonts are used in 3D, letters with different z-coordinates can sometimes occlude other letters. This can be corrected with a hint, see page 547.
Processing includes functions to control the leading (the spacing between lines of text) and alignment. Processing can also calculate the width of any character or group of characters, a useful technique for arranging shapes and typographic elements. The textLeading() function sets the spacing between lines of text. It has one parameter that defines this space in units of pixels.
lines = "L1 L2 L3"
Letters and words can be drawn from their center, left, and right edges. The textAlign() function sets the alignment for drawing text through its parameter, which can be LEFT, CENTER, or RIGHT. It sets the display characteristics of the letters in relation to the x-coordinate stated in the text() function.
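A small sketch (Python Mode style, arbitrary coordinates) makes the three settings visible against a reference line:

def setup():
    size(240, 120)
    textSize(16)

def draw():
    background(204)
    line(120, 0, 120, height)    # reference line at x = 120
    fill(0)
    textAlign(LEFT)              # text begins at the x-coordinate
    text("Left", 120, 30)
    textAlign(CENTER)            # text is centered on the x-coordinate
    text("Center", 120, 60)
    textAlign(RIGHT)             # text ends at the x-coordinate
    text("Right", 120, 90)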
The settings for textSize(), textLeading(), and textAlign() will be used for all subsequent calls to the text() function. However, note that the textSize() function will reset the text leading, and the textFont() function will reset both the size and the leading.
The textWidth() function calculates and returns the pixel width of any character or text string. This number is calculated from the current font and size as defined by the textFont() and textSize() functions. Because the letters of every font are a different size and letters within many fonts have different widths, this function is the only way to know how wide a string or character is when displayed on screen. For this reason, always use textWidth() to position elements relative to text, rather than hard-coding them into your program.
s = "AEIOU"
Drawing letters to the screen becomes more engaging when used in combination with the keyboard. The keyPressed() event function introduced on page 97 can be used to record each letter as it is typed. The following two examples use this function to read and analyze input from the keyboard by using the String methods introduced in the Text chapter (p. 143). In both, the String variable letters starts empty. Each key typed is added to the end of the string. The first example displays the string as it grows as keys are pressed and removes letters from the end when backspace is pressed. The second example builds on the first—when the Return or Enter key is pressed, the program checks if the word “gray” or “black” was typed. If one of these words was input, the background changes to that value.
letters = ""
letters = ""
Many people spend hours a day inputting letters into computers, but this action is very constrained. What features could be added to a text editor to make it more responsive to the typist? For example, the speed of typing could decrease the size of the letters, or a long pause in typing could add many spaces, mimicking a person’s pause while speaking. What if the keyboard could register how hard a person is typing (the way a piano plays a soft note when a key is pressed gently) and could automatically assign attributes such as italics for soft presses and bold for forceful presses? These analogies suggest how conservatively current software treats typography and typing.
Many artists and designers are fascinated with type and have created unique ways of exploring letterforms with the mouse, keyboard, and more exotic input devices. A minimal yet engaging example is John Maeda’s Type, Tap, Write software, created in 1998 as homage to manual typewriters. This software uses the keyboard as the input to a black-and-white screen representation of a keyboard. Pressing the number keys cause the software to cycle through different modes, each revealing a playful interpretation of keyboard data. In Jeffrey Shaw and Dirk Groeneveld’s The Legible City (1989–91), buildings are replaced with three-dimensional letters to create a city of typography that conforms to the streets of a real place. In the Manhattan version, for instance, texts from the mayor, a taxi driver, and Frank Lloyd Wright comprise the city. The image is presented on a projection screen, and the user navigates by pedaling and steering a stationary bicycle situated in front of the projected image. Projects such as these demonstrate that software presents an extraordinary opportunity to extend the way we read and write.
Typographic elements can be assigned behaviors that define a personality in relation to the mouse or keyboard. A word can express aggression by moving quickly toward the mouse, or moving away slowly can express timidity. The following examples demonstrate basic applications of this area. In the first, the word avoid stays away from the mouse because its position is set to the inverse of the cursor position. In the second, the word tickle jitters when the cursor hovers over its position.
x, y = 33, 60
global x, y |
Lesson Plan - Get It!
8 ducklings were in the pond when 7 more ducklings joined them. How many ducklings are in the pond? If 4 fly away, how many are left? Don't "quack up" trying to figure out the answers; we'll show you how to solve fact problems!
Addition is combining two or more groups into one group.
The answer to an addition problem is called the sum. An addition number sentence can be written vertically or horizontally. A number sentence uses numbers and symbols instead of words.
Two ways to show addition are shown below. When we add two numbers, the order doesn’t matter. Either number can be written first.
5 + 3 = 8
3 + 5 = 8
Subtraction is separating one group into two groups. The answer to a subtraction problem is called the difference. A subtraction number sentence can also be written vertically or horizontally. However, the order of numbers matters in subtraction. The number with the greater value is written first when solving most subtraction problems.
8 - 3 = 5
8 - 5 = 3
A subtraction problem can be checked by adding the difference to the number subtracted. Think of doing the problem “in reverse.”
4 + 6 = 10, so 10 - 6 = 4
6 + 4 = 10, so 10 - 4 = 6
Addition and subtraction facts form fact families, groups of three numbers that are arranged to form number sentences. Fact families help us learn facts and allow us to improve at solving math problems.
As you watch a Discovery Education video about Fact Families, see if you can find the answer to this question:
How are addition and subtraction fact families related? Discuss with a parent or teacher.
Create an addition and subtraction fact family from the numbers 4, 6, and 10, and share with a parent or teacher.
In the Got It? section, you will practice adding and subtracting numbers as you play an online game and complete an interactive quiz. |
Rational choice theory refers to a set of guidelines that help understand economic and social behaviour. The theory originated in the eighteenth century and can be traced back to political economist and philosopher, Adam Smith. The theory postulates that an individual will perform a cost-benefit analysis to determine whether an option is right for them. It also suggests that an individual's self-driven rational actions will help better the overall economy. Rational choice theory looks at three concepts: rational actors, self interest and the invisible hand.
Rationality can be used as an assumption for the behaviour of individuals in a wide range of contexts outside of economics. It is also used in political science, sociology, and philosophy.
The basic premise of rational choice theory is that the decisions made by individual actors will collectively produce aggregate social behaviour. The theory also assumes that individuals have preferences out of available choice alternatives. These preferences are assumed to be complete and transitive. Completeness refers to the individual being able to say which of the options they prefer (i.e. the individual prefers A over B, B over A, or is indifferent between them). Alternatively, transitivity is where the individual weakly prefers option A over B and weakly prefers option B over C, leading to the conclusion that the individual weakly prefers A over C. The rational agent will then perform their own cost-benefit analysis using a variety of criteria to arrive at their self-determined best choice of action.
One version of rationality is instrumental rationality, which involves achieving a goal using the most cost effective method without reflecting on the worthiness of that goal. Duncan Snidal emphasises that the goals are not restricted to self-regarding, selfish, or material interests. They also include other-regarding, altruistic, as well as normative or ideational goals.
Rational choice theory does not claim to describe the choice process, but rather it helps predict the outcome and pattern of choice. It is consequently assumed that the individual is self-interested or being homo economicus. Here, the individual comes to a decision that maximizes personal advantage by balancing costs and benefits. Proponents of such models, particularly those associated with the Chicago school of economics, do not claim that a model's assumptions are an accurate description of reality, only that they help formulate clear and falsifiable hypotheses. In this view, the only way to judge the success of a hypothesis is empirical tests. To use an example from Milton Friedman, if a theory that says that the behavior of the leaves of a tree is explained by their rationality passes the empirical test, it is seen as successful.
Without explicitly dictating the goal or preferences of the individual, it may be impossible to empirically test or invalidate the rationality assumption. However, the predictions made by a specific version of the theory are testable. In recent years, the most prevalent version of rational choice theory, expected utility theory, has been challenged by the experimental results of behavioral economics. Economists are learning from other fields, such as psychology, and are enriching their theories of choice in order to get a more accurate view of human decision-making. For example, the behavioral economist and experimental psychologist Daniel Kahneman won the Nobel Memorial Prize in Economic Sciences in 2002 for his work in this field.
Rational choice theory proposes that choice proceeds in two steps. First, the feasible region is determined: the set of all actions available to the agent, given the financial, legal, social, physical, or emotional restrictions the agent faces. Second, a choice is made from within this feasible region according to the agent's preference ordering.
The concept of rationality used in rational choice theory is different from the colloquial and most philosophical use of the word. In this sense, "rational" behaviour can refer to "sensible", "predictable", or "in a thoughtful, clear-headed manner." Rational choice theory uses a much more narrow definition of rationality. At its most basic level, behavior is rational if it is goal-oriented, reflective (evaluative), and consistent (across time and different choice situations). This contrasts with behavior that is random, impulsive, conditioned, or adopted by (unevaluative) imitation.
Early neoclassical economists writing about rational choice, including William Stanley Jevons, assumed that agents make consumption choices so as to maximize their happiness, or utility. Contemporary theory bases rational choice on a set of choice axioms that need to be satisfied, and typically does not specify where the goal (preferences, desires) comes from. It mandates just a consistent ranking of the alternatives. Individuals choose the best action according to their personal preferences and the constraints facing them. For example, there is nothing irrational in preferring fish to meat the first time, but there is something irrational in preferring fish to meat in one instant and preferring meat to fish in another, without anything else having changed.
Each individual thus makes a decision based on their own preferences and the constraints (or choice set) they face.
Rational choice theory can be viewed in different contexts. At an individual level, the theory suggests that the agent will decide on the action (or outcome) they most prefer. If the actions (or outcomes) are evaluated in terms of costs and benefits, the choice with the maximum net benefit will be chosen by the rational individual. Rational behaviour is not solely driven by monetary gain, but can also be driven by emotional motives.
The theory can be applied to general settings outside of those identified by costs and benefits. In general, rational decision making entails choosing among all available alternatives the alternative that the individual most prefers. The "alternatives" can be a set of actions ("what to do?") or a set of objects ("what to choose/buy"). In the case of actions, what the individual really cares about are the outcomes that result from each possible action. Actions, in this case, are only an instrument for obtaining a particular outcome.
The available alternatives are often expressed as a set of objects, for example a set of j exhaustive and exclusive actions: A = {a_1, a_2, ..., a_j}.
For example, if a person can choose to vote for either Roger or Sara or to abstain, their set of possible alternatives is: A = {Vote for Roger, Vote for Sara, Abstain}.
The theory makes two technical assumptions about individuals' preferences over alternatives:
- Completeness: for any two alternatives a_i and a_j in the set, the individual either prefers a_i to a_j, prefers a_j to a_i, or is indifferent between them.
- Transitivity: if the individual weakly prefers a_i to a_j and weakly prefers a_j to a_k, then the individual weakly prefers a_i to a_k.
Together these two assumptions imply that given a set of exhaustive and exclusive actions to choose from, an individual can rank the elements of this set in terms of his preferences in an internally consistent way (the ranking constitutes a total ordering, minus some assumptions), and the set has at least one maximal element.
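The two axioms are straightforward to state as checks over pairwise preferences. The sketch below (plain Python) uses a hypothetical ranking over the three voting alternatives introduced above; it is only an illustration of the definitions:

from itertools import product

# Hypothetical ranking over the alternatives introduced above; higher is better.
alternatives = ["Vote for Roger", "Vote for Sara", "Abstain"]
ranking = {"Vote for Sara": 2, "Vote for Roger": 1, "Abstain": 0}

# weakly_prefers[(a, b)] is True when a is at least as good as b.
weakly_prefers = {(a, b): ranking[a] >= ranking[b]
                  for a, b in product(alternatives, repeat=2)}

def is_complete(prefers, alts):
    # For every pair, at least one direction of weak preference must hold.
    return all(prefers[(a, b)] or prefers[(b, a)]
               for a, b in product(alts, repeat=2))

def is_transitive(prefers, alts):
    # If a is weakly preferred to b and b to c, then a must be weakly preferred to c.
    return all(prefers[(a, c)] or not (prefers[(a, b)] and prefers[(b, c)])
               for a, b, c in product(alts, repeat=3))

print(is_complete(weakly_prefers, alternatives))    # True
print(is_transitive(weakly_prefers, alternatives))  # True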
The preference between two alternatives can be:
- Strict preference, which occurs when the individual prefers a_1 to a_2 and does not view them as equally preferred.
- Weak preference, in which the individual either strictly prefers a_1 to a_2 or is indifferent between them.
- Indifference, which occurs when the individual neither prefers a_1 to a_2 nor a_2 to a_1; having compared the two, the individual views neither as preferable to the other.
Research that took off in the 1980s sought to develop models that drop these assumptions and argue that such behaviour could still be rational (Anand 1993). This work, often conducted by economic theorists and analytical philosophers, suggests ultimately that the assumptions or axioms above are not completely general and might at best be regarded as approximations.
Alternative theories of human action include such components as Amos Tversky and Daniel Kahneman's prospect theory, which reflects the empirical finding that, contrary to standard preferences assumed under neoclassical economics, individuals attach extra value to items that they already own compared to similar items owned by others. Under standard preferences, the amount that an individual is willing to pay for an item (such as a drinking mug) is assumed to equal the amount they are willing to be paid in order to part with it. In experiments, the latter price is sometimes significantly higher than the former (but see Plott and Zeiler 2005, Plott and Zeiler 2007 and Klass and Zeiler, 2013). Tversky and Kahneman do not characterize loss aversion as irrational. Behavioral economics includes a large number of other amendments to its picture of human behavior that go against neoclassical assumptions.
Often preferences are described by their utility function or payoff function. This is an ordinal number, written u(a_i), that an individual assigns to each of the available actions.
The individual's preferences are then expressed as the relation between these ordinal assignments. For example, if an individual prefers the candidate Sara over Roger over abstaining, their preferences would have the relation: u(Vote for Sara) > u(Vote for Roger) > u(Abstain).
A preference relation that as above satisfies completeness, transitivity, and, in addition, continuity, can be equivalently represented by a utility function.
The rational choice approach allows preferences to be represented as real-valued utility functions. Economic decision making then becomes a problem of maximizing this utility function, subject to constraints (e.g. a budget). This has many advantages. It provides a compact theory that makes empirical predictions with a relatively sparse model - just a description of the agent's objectives and constraints. Furthermore, optimization theory is a well-developed field of mathematics. These two factors make rational choice models tractable compared to other approaches to choice. Most importantly, this approach is strikingly general. It has been used to analyze not only personal and household choices about traditional economic matters like consumption and savings, but also choices about education, marriage, child-bearing, migration, crime and so on, as well as business decisions about output, investment, hiring, entry, exit, etc. with varying degrees of success.
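In its simplest textbook form this is a constrained optimization problem. The brute-force sketch below (plain Python) uses invented prices, income, and a Cobb-Douglas utility function purely to show the structure of such a model:

# Choose quantities of two goods to maximize utility subject to a budget.
# Prices, income, and the utility function are illustrative assumptions.
price_x, price_y, income = 2.0, 5.0, 100.0

def utility(x, y):
    return (x ** 0.6) * (y ** 0.4)      # Cobb-Douglas preferences (assumed)

affordable = ((x, y) for x in range(51) for y in range(21)
              if price_x * x + price_y * y <= income)
best = max(affordable, key=lambda bundle: utility(*bundle))
print("Chosen bundle:", best)                          # (30, 8) with these numbers
print("Utility: {:.2f}".format(utility(*best)))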
In the field of political science, rational choice theory has been used to help predict human decision making and to model future behaviour; it is therefore useful in creating effective public policy and enables governments to develop solutions quickly and efficiently.
Despite the empirical shortcomings of rational choice theory, the flexibility and tractability of rational choice models (and the lack of equally powerful alternatives) lead to them still being widely used.
Rational choice theory has become increasingly employed in social sciences other than economics, such as sociology, evolutionary theory and political science in recent decades. It has had far-reaching impacts on the study of political science, especially in fields like the study of interest groups, elections, behaviour in legislatures, coalitions, and bureaucracy. In these fields, the use of the rational choice theory to explain broad social phenomena is the subject of controversy.
Rational choice theory provides a framework to explain why groups of rational individuals can arrive at collectively irrational decisions. Even when a group of people share common interests, aggregating their individually rational preferences can produce group-level outcomes that fail to accomplish any one individual's preferred objectives. Rational choice theory describes outcomes like this as the product of rational agents each performing their own cost-benefit analysis to maximize their self-interest, a process that does not always align with the group's preferences. A well-known illustration is Arrow's impossibility theorem, which shows that no ranked voting system can aggregate individual preferences into a collective ranking while satisfying a small set of reasonable fairness conditions in all circumstances.
A further example of this can be shown by some of the world's most troubling problems, such as the climate crisis. Nation states can be seen as rational as they fulfil their own interests of economic growth; however, this economic growth often leads to pollution, as increasing a nation's factors of production takes a toll on the environment. It is irrational for a state to forego this economic growth because the cost of pollution does not fall entirely on it: one state's carbon emissions do not affect that state alone but are felt elsewhere as well. The benefit of the economic growth therefore outweighs the cost of pollution, according to rational choice theory. However, if all countries made this rational calculation, it would lead to a massive amount of pollution, making the outcome of each state's rational choice a collectively irrational one.
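The structure described here is that of a social dilemma, and it can be laid out with a toy payoff table. The numbers in the sketch below (plain Python) are invented; they only illustrate how "grow and pollute" can dominate for each state individually while leaving every state worse off than mutual restraint:

# Payoff to one state, given its own choice and the other state's choice.
# The numbers are invented purely to illustrate the structure of the dilemma.
payoff = {
    ("restrain", "restrain"): 3,   # both limit emissions
    ("pollute",  "restrain"): 4,   # grow while the other bears the restraint cost
    ("restrain", "pollute"):  1,   # bear the cost while the other grows
    ("pollute",  "pollute"):  2,   # both grow, both suffer the pollution
}

for others_choice in ("restrain", "pollute"):
    best = max(("restrain", "pollute"),
               key=lambda mine: payoff[(mine, others_choice)])
    print("If the other state chooses {}, the rational choice is to {}".format(
        others_choice, best))

# Each state rationally pollutes, yet (pollute, pollute) pays each of them
# less than (restrain, restrain) would have.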
Rational choice theory is also used to explain shifts in voter behaviour, which are most pronounced in times of economic trouble. In an example from economic policy, the economist Anthony Downs concluded that a high-income voter ‘votes for whatever party he believes would provide him with the highest utility income from government action’, using rational choice theory to explain a person's income as the basis for their preferred tax rate.
Downs' work also provides a rational choice account of the decision to vote itself. He argues that an individual votes if it is in their rational interest to do so, modelling the decision as the inequality B + D > C, where B is the benefit of the voter's preferred side winning, D is the satisfaction derived from voting, and C is the cost of voting. It is from this reasoning that parties are predicted to move their policy outlook toward the centre in order to maximise the number of voters supporting them, and from this simple framework more complex adjustments can be made to describe the success of politicians as an outcome of their ability or failure to satisfy the utility functions of individual voters.
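The inequality is simple enough to express directly. The sketch below (plain Python) encodes the rule B + D > C as stated above, with hypothetical values for B, D, and C:

def will_vote(benefit, satisfaction, cost):
    """Decision rule described above: vote when B + D > C."""
    return benefit + satisfaction > cost

# Hypothetical voters, each described by (B, D, C).
voters = [(0.5, 2.0, 1.0),   # expressive satisfaction outweighs the cost of voting
          (0.2, 0.1, 1.0)]   # voting costs more than it is worth
for b, d, c in voters:
    decision = "votes" if will_vote(b, d, c) else "abstains"
    print("B={}, D={}, C={} -> the voter {}".format(b, d, c, decision))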
Rational choice theory has become one of the major tools used to study international relations. Proponents of its use in this field typically assume that states and the policies crafted at the national level are the outcome of self-interested, politically shrewd actors including, but not limited to, politicians, lobbyists, businesspeople, activists, regular voters and any other individual in the national audience. The use of rational choice theory as a framework to predict political behavior has led to a rich literature that describes the trajectory of policy to varying degrees of success. For example, some scholars have examined how states can make credible threats to deter other states from a (nuclear) attack. Others have explored under what conditions states wage war against each other. Yet others have investigated under what circumstances the threat and imposition of international economic sanctions tend to succeed and when they are likely to fail.
Rational Choice Theory and Social Exchange Theory both involve looking at all social relations in terms of costs and rewards, both tangible and intangible.
According to Abell, Rational Choice Theory is "understanding individual actors... as acting, or more likely interacting, in a manner such that they can be deemed to be doing the best they can for themselves, given their objectives, resources, circumstances, as they see them". Rational Choice Theory has been used to comprehend complex social phenomena, which derive from the actions and motivations of individuals. Individuals are often highly motivated by their wants and needs.
Making calculative decisions is considered rational action: individuals weigh up the pros and cons of an action taken towards another person in a social situation. The decision to act on a rational calculation also depends on the unforeseen benefits of the relationship. Homans argued that human actions are motivated by punishment or reward, and this reinforcement determines the course of action a person takes in a social situation. Individuals are motivated by mutual reinforcement and, fundamentally, by the approval of others. Approval, like money, serves as a generalised means of exchange in both social and economic exchanges: economic exchange involves goods or services, while social exchange involves approval and certain other valued behaviours.
Rational choice theory in this context places heavy emphasis on the individual's interest as the starting point for social decisions. Despite differing viewpoints about rational choice theory, they all treat the individual as the basic unit of analysis: even though sharing, cooperation and cultural norms emerge, they all stem from an individual's initial concern for the self.
G. S. Becker offers an example of how rational choice can be applied to personal decisions, specifically the reasoning behind decisions to marry or divorce. Given the self-interested drive from which the theory of rational choice is derived, Becker concludes that people marry if the expected utility of marriage exceeds the utility of remaining single, and in the same way couples separate when the utility of staying together is lower than expected and provides less (economic) benefit than separation would. Since the theory holds that individuals take the course of action that best serves their personal interests, it assumes that they display the same mentality in relationships, owing to deep-rooted, self-interested aspects of human nature.
Social exchange theory and rational choice theory both come down to an individual's efforts to meet their own needs and interests through the choices they make. Even when an action is done sincerely for the welfare of others at that point in time, both theories point to the benefits received in return, whether those returns are immediate or future, tangible or not.
Coleman discussed a number of concepts to elaborate on the premises and promises of rational choice theory. One of these is trust: "individuals place trust, in both judgement and performance of others, based on rational considerations of what is best, given the alternatives they confront". In a social situation there has to be a level of trust among individuals, and Coleman noted that this level of trust is something an individual weighs before deciding on a rational action towards another person; it shapes the situation as one navigates the risks and benefits of an action. By assessing the possible outcomes or alternatives of an action involving another individual, the person is making a calculated decision. The same logic applies to making a bet: you calculate the possible loss and how much can be won, and if the expected winnings exceed the expected loss, the rational decision is to place the bet. The decision to place trust in another individual therefore involves the same rational calculations as the decision to place a bet.
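A minimal sketch of this bet analogy, assuming simple expected-value reasoning; the probabilities and stakes below are hypothetical and only illustrate the calculation, they are not taken from Coleman.

```python
# Trust-as-a-bet sketch: trust is extended only when the expected gain is positive.

def expected_gain(p_win: float, amount_won: float, amount_lost: float) -> float:
    """Expected value of placing the 'bet' (i.e. trusting the other person)."""
    return p_win * amount_won - (1.0 - p_win) * amount_lost

def place_trust(p_win: float, amount_won: float, amount_lost: float) -> bool:
    """Place trust only when the expected gain exceeds zero."""
    return expected_gain(p_win, amount_won, amount_lost) > 0

# Example: a 70% chance the other person honours the arrangement.
print(place_trust(p_win=0.7, amount_won=100.0, amount_lost=150.0))  # True  (EV = +25)
print(place_trust(p_win=0.3, amount_won=100.0, amount_lost=150.0))  # False (EV = -75)
```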
Although rational choice theory is used in both economic and social settings, there are similarities and differences between the two applications. The concept of reward parallels reinforcement, and the concept of cost parallels punishment; however, the underlying assumptions differ. In a social setting, the focus is often on current or past reinforcements rather than the future, even though there is no guarantee of immediate tangible or intangible returns from another individual. In economics, decisions are made with heavier emphasis on future rewards.
Although the two perspectives differ in focus, both reflect how individuals make different rational decisions depending on whether immediate or long-term circumstances weigh on their decision making.
Both the assumptions and the behavioral predictions of rational choice theory have sparked criticism from various camps.
As mentioned above, some economists, such as Herbert Simon, have developed models of bounded rationality, which aim to be more psychologically plausible without completely abandoning the idea that reason underlies decision-making processes. Simon argues that factors such as imperfect information, uncertainty and time constraints all affect and limit our rationality, and therefore our decision-making. Furthermore, his distinction between 'satisficing' and 'optimizing' suggests that, because of these factors, we sometimes settle for a decision that is good enough rather than the best one. Other economists have developed further theories of human decision-making that allow for the roles of uncertainty, institutions, and the determination of individual tastes by their socioeconomic environment (cf. Fernandez-Huerga, 2008).
Martin Hollis and Edward J. Nell's 1975 book offers both a philosophical critique of neo-classical economics and an innovation in the field of economic methodology. Further, they outlined an alternative vision to neo-classicism based on a rationalist theory of knowledge. Within neo-classicism, the authors addressed consumer behaviour (in the form of indifference curves and simple versions of revealed preference theory) and marginalist producer behaviour in both product and factor markets; both are based on rational optimizing behaviour. They consider imperfect as well as perfect markets, since neo-classical thinking embraces many market varieties and possesses a whole system for their classification. However, the authors believe that the issues arising from basic maximizing models have extensive implications for econometric methodology (Hollis and Nell, 1975, p. 2). In particular, it is this class of models – rational behaviour as maximizing behaviour – that provides support for specification and identification, and this, they argue, is where the flaw is to be found. Hollis and Nell (1975) argued that positivism (broadly conceived) has provided neo-classicism with important support, which they then show to be unfounded. They base their critique of neo-classicism not only on their critique of positivism but also on the alternative they propose, rationalism. Indeed, they argue that rationality is central to neo-classical economics – as rational choice – and that this conception of rationality is misused: demands are made of it that it cannot fulfil. Ultimately, individuals do not always act rationally or conduct themselves in a utility-maximising manner.
Duncan K. Foley (2003, p. 1) has also provided an important criticism of the concept of rationality and its role in economics. He argued that
“Rationality” has played a central role in shaping and establishing the hegemony of contemporary mainstream economics. As the specific claims of robust neoclassicism fade into the history of economic thought, an orientation toward situating explanations of economic phenomena in relation to rationality has increasingly become the touchstone by which mainstream economists identify themselves and recognize each other. This is not so much a question of adherence to any particular conception of rationality, but of taking rationality of individual behavior as the unquestioned starting point of economic analysis.
Foley (2003, p. 9) went on to argue that
The concept of rationality, to use Hegelian language, represents the relations of modern capitalist society one-sidedly. The burden of rational-actor theory is the assertion that ‘naturally’ constituted individuals facing existential conflicts over scarce resources would rationally impose on themselves the institutional structures of modern capitalist society, or something approximating them. But this way of looking at matters systematically neglects the ways in which modern capitalist society and its social relations in fact constitute the ‘rational’, calculating individual. The well-known limitations of rational-actor theory, its static quality, its logical antinomies, its vulnerability to arguments of infinite regress, its failure to develop a progressive concrete research program, can all be traced to this starting-point.
More recently Edward J. Nell and Karim Errouaki (2011, Ch. 1) argued that:
The DNA of neoclassical economics is defective. Neither the induction problem nor the problems of methodological individualism can be solved within the framework of neoclassical assumptions. The neoclassical approach is to call on rational economic man to solve both. Economic relationships that reflect rational choice should be ‘projectible’. But that attributes a deductive power to ‘rational’ that it cannot have consistently with positivist (or even pragmatist) assumptions (which require deductions to be simply analytic). To make rational calculations projectible, the agents may be assumed to have idealized abilities, especially foresight; but then the induction problem is out of reach because the agents of the world do not resemble those of the model. The agents of the model can be abstract, but they cannot be endowed with powers actual agents could not have. This also undermines methodological individualism; if behaviour cannot be reliably predicted on the basis of the ‘rational choices of agents’, a social order cannot reliably follow from the choices of agents.
In their 1994 work, Pathologies of Rational Choice Theory, Donald P. Green and Ian Shapiro argue that the empirical output of rational choice theory has been limited. They contend that much of the applicable literature, at least in political science, relied on weak statistical methods, and that when these are corrected many of the empirical results no longer hold. Seen in this light, rational choice theory has contributed very little to the overall understanding of political interaction, and certainly disproportionately little relative to its prominence in the literature. Yet they concede that cutting-edge research by scholars well versed in the general scholarship of their fields (such as work on the U.S. Congress by Keith Krehbiel, Gary Cox, and Mathew McCubbins) has generated valuable scientific progress.
Schram and Caterino (2006) contains a fundamental methodological criticism of rational choice theory for promoting the view that the natural science model is the only appropriate methodology in social science and that political science should follow this model, with its emphasis on quantification and mathematization. Schram and Caterino argue instead for methodological pluralism. The same argument is made by William E. Connolly, who in his work Neuropolitics shows that advances in neuroscience further illuminate some of the problematic practices of rational choice theory.
Pierre Bourdieu fiercely opposed rational choice theory as grounded in a misunderstanding of how social agents operate. Bourdieu argued that social agents do not continuously calculate according to explicit rational and economic criteria. According to Bourdieu, social agents operate according to an implicit practical logic—a practical sense—and bodily dispositions. Social agents act according to their "feel for the game" (the "feel" being, roughly, habitus, and the "game" being the field).
Other social scientists, inspired in part by Bourdieu's thinking, have expressed concern about the inappropriate use of economic metaphors in other contexts, suggesting that this may have political implications. Their argument is that by treating everything as a kind of "economy", such accounts make a particular vision of the way an economy works seem more natural. Thus, they suggest, rational choice is as much ideological as it is scientific.
Rational choice theorists discuss individual values and structural elements as equally important determinants of outcomes. However, for methodological reasons, in empirical applications more emphasis is usually placed on social structural determinants. Therefore, in line with structural functionalism and social network analysis perspectives, rational choice explanations are considered mainstream in sociology.
Some of the scepticism among sociologists regarding rational choice stems from what they see as a lack of realism in its assumptions. Social research has shown that social agents often act on habit, impulse or emotion rather than explicit calculation. Social agents anticipate the expected consequences of options, for example in stock markets and economic crises, and choose among them through collective "emotional drives", implying social forces rather than "rational" choices.
However, in its critique, sociology commonly misunderstands rational choice. Rational choice theory does not try to explain what a rational person would do in a given situation; that question falls under decision theory. Rational choice theory focuses on social outcomes rather than individual outcomes, identifying social outcomes as stable equilibria in which individuals have no incentive to deviate from their course of action. Such social outcomes may be unintended or undesirable, and the conclusions generated in such cases are therefore often relegated to the "study of irrational behaviour".
An evolutionary psychology perspective suggests that many of the seeming contradictions and biases regarding rational choice can be explained as being rational in the context of maximizing biological fitness in the ancestral environment, though not necessarily in the current one. Thus, when living at subsistence level, where a reduction of resources may have meant death, it may have been rational to place a greater value on losses than on gains. Proponents argue this may also explain differences between groups.
Proponents of emotional choice theory criticize the rational choice paradigm by drawing on new findings from emotion research in psychology and neuroscience. They point out that rational choice theory is generally based on the assumption that decision-making is a conscious and reflective process based on thoughts and beliefs. It presumes that people decide on the basis of calculation and deliberation. However, cumulative research in neuroscience suggests that only a small part of the brain's activities operate at the level of conscious reflection. The vast majority of its activities consist of unconscious appraisals and emotions. The significance of emotions in decision-making has generally been ignored by rational choice theory, according to these critics. Moreover, emotional choice theorists contend that the rational choice paradigm has difficulty incorporating emotions into its models, because it cannot account for the social nature of emotions. Even though emotions are felt by individuals, psychologists and sociologists have shown that emotions cannot be isolated from the social environment in which they arise. Emotions are inextricably intertwined with people's social norms and identities, which are typically outside the scope of standard rational choice models. Emotional choice theory seeks to capture not only the social but also the physiological and dynamic character of emotions. It represents a unitary action model to organize, explain, and predict the ways in which emotions shape decision-making.
Herbert Gintis has also provided an important criticism of rational choice theory. He argues that rationality differs between the public and private spheres: the public sphere is what you do in collective action, and the private sphere is what you do in your private life. Gintis argues that this is because "models of rational choice in the private sphere treat agents' choices as instrumental", whereas "behaviour in the public sphere, by contrast, is largely non-instrumental because it is non-consequential": individuals make no difference to the outcome, "much as single molecules make no difference to the properties of the gas" (Gintis). This exposes a weakness of rational choice theory, since in situations such as voting in an election the rational decision for the individual would be not to vote, as their vote makes no difference to the outcome; yet if everyone acted this way, democratic society would collapse because no one would vote. Rational choice theory therefore does not describe everything about how the economic and political world works, and other factors of human behaviour are at play.
What is "bad" and "good" cholesterol ?
Cholesterol is a peculiar molecule. It is frequently called a lipid or a fat; chemically, however, a molecule such as cholesterol is classified as an alcohol, although it does not behave like ordinary alcohol.
Numerous carbon and hydrogen atoms are assembled into a complex three-dimensional network that is impossible to dissolve in water.
This cleverly designed structure allows cholesterol to be built into cell walls, making the cells waterproof. This means that cells can regulate their internal environment without being disturbed by changes in their external environment.
It is a vital mechanism for proper cell function.
The fact that cells are waterproof is particularly critical for the normal functioning of nerves and nerve cells. Accordingly, the highest concentration of cholesterol in the body is generally found in the brain and other parts of the nervous system. Because cholesterol is insoluble in water, it is also insoluble in blood.
To be transported in the blood, it is therefore packaged together with fats, other lipids and proteins into particles called lipoproteins.
Lipoproteins dissolve easily in water because their outer shell consists mainly of water-soluble proteins, while their interior consists of lipids, creating space for water-insoluble molecules such as cholesterol.
Lipoproteins thus function as the body's transport vehicles, carrying cholesterol through the blood's fluid environment from one part of the body to another.
That is why we can compare them to "submarines". These lipoproteins have different names depending on their density.
The most well-known are HDL (high density lipoprotein) and LDL (low density lipoprotein).
The main task of HDL is to transfer cholesterol from the peripheral tissues, including the artery walls, to the liver. There it is excreted in the bile or used for other purposes, for example as a starting material for the production of important hormones.
LDL mainly carries cholesterol in the opposite direction: from the liver, where most of the body's cholesterol is produced, to the peripheral tissues, including the vascular walls.
All cells can produce cholesterol, but if they need more than they are able to make themselves, they call on the LDL "submarines", which then carry the extra cholesterol into the cells. The bulk of the cholesterol in the blood, between 60% and 80%, is transported by LDL and is called "bad cholesterol".
Only 15-20% is transported by HDL and is called "good cholesterol".
In addition, a small portion of the circulating cholesterol is carried by other lipoproteins.
What is "Bad" and "Good" Cholesterol
The question is why a natural substance in our blood, one with important biological functions, is called "bad" when it is transported from the liver to the peripheral tissues by LDL and "good" when it travels in the opposite direction with HDL.
The reason is, as several follow-up studies have shown, that:
a lower than normal HDL-cholesterol level and a higher than normal LDL-cholesterol level are associated with a higher risk of heart attack;
a higher than normal HDL-cholesterol level and a lower than normal LDL-cholesterol level are associated with a lower risk;
thus a low HDL/LDL ratio is a risk factor for coronary artery disease.
However, a risk factor is not necessarily the same as the cause. Many factors are known to affect this ratio.
Something can cause a heart attack and at the same time reduce the HDL / LDL ratio.
Is it bad to be obese, to smoke, to not exercise, to have high blood pressure and stress, or is it bad to have elevated "bad" cholesterol, or all of these together?
Is it good to be lean, not to smoke, to exercise, to keep one's blood pressure at normal levels and to be emotionally calm, or is it good to have elevated "good" cholesterol, or all of these together?
In conclusion, the risk of heart attack is greater than normal for people with high LDL cholesterol, but also for obese people, smokers, those with hypertension and those experiencing intense stress.
Of course, it is known that these individuals usually have elevated LDL cholesterol levels, but it is impossible to know whether the increased risk is due to the aforementioned risk factors or high LDL.
The calculation of the risk of high LDL, ignoring other risk factors, is called univariate analysis.
But to prove that high LDL is an independent risk factor, it is necessary to ask whether people who are not obese, who do not smoke, who do not have hypertension and who do not experience intense stress, but who have high LDL cholesterol, are at greater risk of coronary artery disease than similar people with low or normal LDL cholesterol.
By using complex statistical formulas, it is possible to make such comparisons in a population of individuals with different grades of risk factors and varying levels of LDL.
This is called multivariate analysis.
If, for example, an analysis of the prognostic value of LDL cholesterol also takes body weight into account, the analysis is said to be adjusted for body weight.
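A small simulation can make the univariate-versus-multivariate point concrete. The sketch below uses entirely synthetic data and hypothetical coefficients (it is not based on any real study): obesity is made to drive up both LDL and heart-attack risk, so an unadjusted model attributes risk to LDL that largely disappears once obesity is taken into account.

```python
# Synthetic illustration of confounding: univariate vs. multivariate (adjusted) analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
obese = rng.binomial(1, 0.3, n)                      # the confounder
ldl = 120 + 30 * obese + rng.normal(0, 15, n)        # LDL is higher in obese individuals
risk = 1 / (1 + np.exp(-(-4 + 2.0 * obese)))         # true risk depends on obesity only
event = rng.binomial(1, risk)                        # heart attack: yes/no

univariate = LogisticRegression(max_iter=1000).fit(ldl.reshape(-1, 1), event)
adjusted = LogisticRegression(max_iter=1000).fit(np.column_stack([ldl, obese]), event)

print("univariate LDL coefficient:  ", round(univariate.coef_[0][0], 4))  # spuriously positive
print("adjusted LDL coefficient:    ", round(adjusted.coef_[0][0], 4))    # close to zero
print("adjusted obesity coefficient:", round(adjusted.coef_[0][1], 4))    # carries the real effect
```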
A major problem with such calculations is that the results generated by them, and by other complex statistical methods, are almost impossible for most people, including most doctors, to understand.
For many years, researchers in this field have presented not primary data, simple means or simple correlations, but vaguely defined terms, relative risks and p-values, not to mention obscure concepts such as the standardized logistic regression coefficient or the combined risk ratio.
Instead of serving as an aid to science, statistics are used to impress people and to cover up the fact that the scientific findings are insignificant and of no practical importance. In conclusion, a high LDL is not necessarily bad, and this has been shown by many studies.
Indeed, in a recent meta-analysis of 19 studies following nearly 70,000 people for varying periods after their LDL-cholesterol was measured, the authors observed that those with the highest LDL values lived the longest, even longer than those who were treated with statins.
Several types of angles are employed in surveying, including azimuths and bearings. Whole-circle bearings are called azimuths. The two differ in how the angle is measured and in how its value is expressed: azimuths are written as purely numerical values, while bearings are written in alphanumeric form.
Azimuth Vs. Bearing
Azimuth uses angles from 0 to 360 degrees, whereas bearing uses only angles from 0 to 90 degrees. This is the primary distinction between azimuth and bearing. Bearing can measure angles in both the clockwise and counterclockwise directions, but azimuth measures angles only in the clockwise direction.
Whole-circle bearings are called azimuths. An azimuth is used to determine a point's position in relation to the horizon and is, in effect, a type of bearing, known as the whole-circle bearing system. It is used in numerous domains, surveying among them, and readings conventionally begin from north. Astronomers and the military have tended to measure azimuths from the south.
A bearing is a measurement of the angle between a given line and the reference meridian, and it is represented quite differently from an azimuth: the degree value is written between a leading north or south and a trailing east or west. The angle itself never exceeds 90 degrees. The compass needle is used to determine the magnetic meridian.
Originally derived from the Arabic word for direction, azimuth is now generally used to refer to a horizontal angle measured clockwise from the reference meridian. Azimuths are most commonly used in surveys.
Azimuths assist in compass and plane-table surveying. Readings are normally taken from north, although measuring conventions used by the military and by astronomers traditionally begin from the south.
The lines always take values in the range from 0 to 360 degrees. Azimuths come in several varieties: they may be astronomical, geodetic, or magnetic in nature.
The reference meridian should be stated at the start of the survey in order to prevent errors in measurement. Measurements are made using forward and backward azimuths.
A forward azimuth indicates the direction of travel, while a back azimuth denotes the opposite direction. The method is increasingly used for tasks such as control surveys and topographic surveys.
An azimuth quantifies angular divergence from north or south: it is a horizontal angle measured from a fixed reference direction toward the direction of the object. Angle measurements in celestial navigation are made using azimuths.
A bearing, by contrast, is an acute angle measured between the given line and the reference meridian. The measurement starts from north or south and turns toward east or west, and the angle never exceeds 90 degrees. It is written in alphanumeric form.
Bearings come in several varieties. Magnetic bearings are measured from an area's local magnetic meridian, geodetic bearings from a geodetic meridian, and grid bearings from a grid meridian.
The magnetic meridian is found with the compass needle. Bearings can be written in two units, mils and degrees, and they serve a variety of purposes: like azimuths, they are used to measure angles.
Surveys are carried out using bearings, which are expressed with the cardinal directions: a bearing begins with north or south and ends with east or west, the east/west letter indicating the quadrant. Every bearing is measured relative to a north or south polar axis.
The first and third quadrants (north-east and south-west) are always measured clockwise, while the second and fourth quadrants (south-east and north-west) are measured counterclockwise.
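The quadrant rules above translate directly into a conversion routine. The sketch below (a minimal Python illustration, not part of any surveying standard library) turns a whole-circle azimuth, measured clockwise from north, into a quadrant bearing of the form "N 30.0° E".

```python
# Convert a whole-circle azimuth (0-360 degrees, clockwise from north) to a quadrant bearing.

def azimuth_to_bearing(azimuth: float) -> str:
    az = azimuth % 360.0
    if az <= 90.0:                        # first quadrant: north-east, clockwise from north
        return f"N {az:.1f}\u00b0 E"
    elif az <= 180.0:                     # second quadrant: south-east, counterclockwise from south
        return f"S {180.0 - az:.1f}\u00b0 E"
    elif az <= 270.0:                     # third quadrant: south-west, clockwise from south
        return f"S {az - 180.0:.1f}\u00b0 W"
    else:                                 # fourth quadrant: north-west, counterclockwise from north
        return f"N {360.0 - az:.1f}\u00b0 W"

for az in (30.0, 150.0, 210.0, 330.0):
    print(az, "->", azimuth_to_bearing(az))
# 30.0 -> N 30.0° E, 150.0 -> S 30.0° E, 210.0 -> S 30.0° W, 330.0 -> N 30.0° W
```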
Difference Between Azimuth and Bearing
Azimuth measures angles between 0 and 360 degrees, whereas bearing measures angles between 0 and 90 degrees.
Azimuth is expressed purely numerically, whereas bearing is expressed alphanumerically.
Azimuth is always measured clockwise, while bearing may be measured clockwise or counterclockwise depending on the quadrant.
Azimuth is measured from a single reference direction (north, or in some conventions south), whereas bearing is measured from either north or south toward east or west.
Azimuth is used to measure direction in the horizontal plane, whereas bearing is used to measure the acute angle a line makes with the meridian.
Azimuth is utilized for critical and sensitive surveys, while bearing is employed for more particular purposes.
In surveying, azimuth and bearing are two common measurements of angle. Depending on the plane on which the measurement is made, the results will be different.
Bearings and azimuths are used to guide measurements. Numerous types of surveys make use of the azimuth, with readings beginning from north.
Azimuth lines always take values from 0 to 360 degrees, while a bearing measures an angle in degrees from 0 to 90. Azimuths are also used by astronomers and the military.
A forward azimuth indicates the direction of travel, while a back azimuth denotes the opposite direction. A bearing may be measured clockwise or counterclockwise, starting from north or south and turning toward east or west.
A bearing's angle never exceeds 90 degrees, and bearings are used for a variety of purposes. Azimuth is sometimes referred to as the whole-circle bearing system.
A circle of directions begins with north or south and ends with east or west; the east/west axis indicates the quadrant. Magnetic bearings are measured from an area's local magnetic meridian.
Age 7-9 Math Worksheets
In this section, you can view all of our math worksheets and resources that are suitable for 7 to 9-year-olds.
We add dozens of new worksheets and materials for math teachers and homeschool parents every month. Below are the latest age 7-9 worksheets added to the site.
7-9 Years Of Age Math Learning Objectives & Standards:
- This age bracket is the entry level for the last two fundamental math operations: multiplication and division. Learners are expected to attain mastery of addition and subtraction of whole numbers up to the thousands and to start digging into the concepts of multiplication and division within 100.
- They must recognise the interconnectedness of the four fundamental math operations by associating multiplication with repeated addition and division with repeated subtraction, along with their properties. They should also become familiar with the parts of a multiplication or division sentence and represent them as number bonds and fact families. More importantly, they should be able to express a real-life problem that can be solved using multiplication or division, such as money transactions, the number of bottles in a box, the total number of chairs arranged in rows and columns, or equal and fair sharing. Solving mathematical sentences that involve two or more of the fundamental operations is also given importance at this level.
- Learners continue to develop broader concepts of fractions by comparing, composing, and expressing equivalence of like fractions (fractions that have the same denominator). In addition, they will add and subtract fractions and arrange them on a number line.
- They will also begin to engage with illustrations and names of decimals, beginning with tenths and hundredths; for example, they should be able to express 0.34 as three tenths and four hundredths.
- In geometry and measurement, learners will explore shape transformations (e.g., rotation, reflection, translation) and the idea of area, primarily using square grids.
Grazing lambs on pastures regrown after wildfires did not significantly alter metal content in meat and wool
California Agriculture 76(4):141-147. https://doi.org/10.3733/ca.2022a0016
Published online February 09, 2023
Wildfires can drastically change rangeland by depositing ash contaminated with metals that are not part of normal diets. This can pose health threats to humans and animals. This risk, along with alterations of essential minerals in livestock grazing on regrowth on burnt lands, is not well known. To better understand this, our study investigated metal concentrations in water, soil, plant forage, and meat and wool of sheep grazing on the regrowth of burned lands. We compared metal concentrations in sheep grazed on regrowth to stored meat samples from grazing sheep a year prior to the wildfire. Lead, mercury, arsenic, molybdenum, cadmium, beryllium, cobalt and nickel were not detected above reporting limits in meat, wool or water samples. Contamination from chromium and thallium was detected in three of 26 meat samples from sheep grazed on regrowth. These metals were not detected in 22 stored meat samples from sheep the year before. Copper concentrations found in the meat of animals grazing regrowth were lower than in animals grazing unburned pastures; it is important to monitor copper concentrations in grazing animals to avoid diseases associated with copper deficiency.
Fire has been used to manage grazing lands, control pests, and stimulate new plant growth for centuries. Prescribed grazing is also used for fire prevention. Forage quality and palatability on rangelands may improve following recovery from fires. For example, a four-fold increase has been seen in crude protein concentrations in burned versus unburned regrowth of tall grass on prairies (Allred et al. 2011). Livestock and wildlife may be drawn to graze on the regrowth in post-burn plots of land, because of the improved palatability of new growth forage (Allred et al. 2011). However, increased pressures at the urban-wildland interface, rangeland and woodland management practices, livestock production and other agricultural activities, and structure construction have changed land and thus distorted natural and agricultural burning practices. Globally, human activity has contributed to climate change with longer, hotter, drier fire seasons (Intergovernmental Panel on Climate Change 2014). In 2018, and in subsequent years, California experienced its most destructive wildfire seasons, with unprecedented damage (Bates 2019). Experts anticipate this trend in California will continue (NASA 2021).
Ewes and lambs graze in February 2021 on a Hopland Research and Extension Center pasture that was burned in the 2018 River Fire. UC Davis researchers analyzed meat, wool, soil, plant and water samples to assess the risk of metal contamination in sheep grazed on recently burned pasture regrowth. Photo: Valerie Eviner.
The character and type of ash is a product of what burned and at what temperature (Amiro et al. 1996; Jensen et al. 2017; Panichev et al. 2008; Qi et al. 2017). Lands that have not recently burned might have high concentrations of essential and non-essential metals, particularly mercury, sequestered into vegetation through natural deposits or pollution deposition over decades, and these metals may accumulate in ash after vegetation burns (Giesler et al. 2017), and contaminate surface waters (Abraham et al. 2017). Nonessential metals in the ash and water runoff may be inadvertently ingested by livestock and accumulated in the carcass, and thus represent a potential risk to the health of animals or humans consuming animal-derived foods. Mercury is of particular concern due to its known accumulation in plant biomass, as well as in the muscle tissue of contaminated animals (Castro-González and Méndez-Armenta 2008; Giesler et al. 2017; Jensen et al. 2017; Qi et al. 2017). However, there is a paucity of literature providing evidence-based recommendations regarding the risk of metal contamination in the meat of animals grazed on recently burned lands.
The objective of this study was to investigate non-essential metal contamination and changes in essential trace mineral content in the meat and wool of lambs grazed on recently burned pasture regrowth, compared to samples obtained from animals not grazed on burn regrowth. A secondary objective was to assess the usefulness of wool sampling to estimate meat concentrations of non-essential metals, which could potentially provide a minimally invasive way to test animals for non-essential metal contamination prior to slaughter. Hair analysis has been studied previously as an indicator of non-essential metal contamination in humans, some grazing species, and wildlife, with variable results (Combs 1987; Liang et al. 2017; Roug et al. 2015; Weiss-Penzias et al. 2019).
On July 27, 2018, the River Fire burned approximately two-thirds of the lands at the University of California Agriculture and Natural Resources Hopland Research and Extension Center (UC ANR HREC), including pastures used for grazing approximately 500 cross-bred ewes and their lambs. Hopland's ecosystems include oak woodland, grassland, chaparral and riparian areas, with sheep grazing largely concentrated on grasslands and low-density oak woodlands. We used this natural exposure to compare muscle tissue from lambs that grazed on fire regrowth pastures and were slaughtered in the spring of 2019 to frozen samples from the previous year's 2018 lamb crop, grazed on the same property prior to the wildfire. Additionally, the relationship between metal concentrations in meat and wool samples was evaluated.
We hypothesized that lambs grazed on the first season's regrowth from burned plots of land had greater concentrations of metals in their meat samples compared to stored meat samples obtained from lambs that were not exposed to fire regrowth, which had grazed on the same property the previous year. We also hypothesized that metal concentrations in wool samples from lambs grazed on burn regrowth were correlated with concentrations in meat from matched samples. There is limited data describing metal concentrations in ruminant tissues associated with grazing burn regrowth. Our study aims to generate initial data for further investigations into metal concentrations in grazing ruminants.
Sampling from animals and land
The non-essential metal of greatest concern for bio-accumulation was mercury; therefore, calculations were based on estimations of mercury contamination. Meat samples from lambs not exposed to burn regrowth are estimated to have mercury concentrations of 0.01 milligram per kilogram (mg/kg) or less on a wet weight basis (Sell et al. 1975), while samples from animals exposed to recent burn regrowth are estimated to contain 0.025 mg/kg or more (a relative risk of 2.5). To obtain 80% power to detect such a difference at a 95% confidence level, at least 20 lambs per group were required. To account for an estimated dropout rate of 25% due to predation, other causes of mortality, or loss of samples at slaughter, a minimum of 25 lambs were enrolled. Commercial statistical software was used to calculate the sample size (JMP Pro v16, SAS Institute, Cary, N.C.).
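For readers who want to reproduce this kind of calculation, the sketch below shows one way to arrive at roughly 20 animals per group using a generic two-sample power analysis. The paper does not report the assumed variability, so the standard deviation here is a purely hypothetical value chosen for illustration, and the authors used JMP rather than the Python tools shown.

```python
# Hedged sketch of a two-sample power calculation (statsmodels), with an assumed SD.
from statsmodels.stats.power import TTestIndPower

mean_pre, mean_post = 0.010, 0.025     # assumed Hg concentrations, mg/kg wet weight
assumed_sd = 0.016                     # hypothetical common standard deviation (not from the paper)
effect_size = (mean_post - mean_pre) / assumed_sd   # Cohen's d of roughly 0.94

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"animals per group: {n_per_group:.1f}")          # ~19 before the dropout margin
print(f"with 25% dropout margin: {n_per_group / 0.75:.1f}")
```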
Frozen neck meat samples from 22 cross-bred lambs that were born in February 2018 and raised at the HREC until routine slaughter were available for analysis as the pre-fire regrowth grazing group (PRE). Neck meat and wool samples from 26 cross-bred lambs born in February 2019 and raised at the HREC until routine slaughter were obtained at the time of slaughter as the post-fire regrowth grazing group (POST). The study was approved by the UC Davis Institutional Animal Care and Use Committee (#21015).
All samples obtained from the PRE group were from lambs grazed together in one group on the same pastures throughout the 2018 grazing season. The PRE group were grazed on the HREC property, prior to any recent burning, finished on a concentrate feed for the final six weeks prior to slaughter, and slaughtered at a U.S. Department of Agriculture (USDA)-approved facility prior to the 2018 River Fire.
All POST lambs and their ewes were turned out to pasture when growth in recently burned pastures was sufficient to graze sheep in late spring 2019. The animals grazing in 2019 were exposed to pastures burned in the 2018 River Fire, as well as prescribed burning that occurred approximately one month prior to the River Fire. Ewe-lamb pairs were grazed in small groups on a combination of pastures, including recently burned as well as non-burned pastures. Each pasture was grazed until the vegetation no longer supported grazing, at which time the animals were moved to the next pasture, as is standard for this grazing operation. The total days of grazing on each pasture were recorded for each animal; burn exposure for each pasture was available for review. All animals were confined in pens and fed a similar type of supplemental concentrate feed from the same mill as the PRE group for the final six weeks prior to slaughter at the same facility in September 2019.
Neck meat from each lamb in both the PRE and POST groups was used for sampling, due to availability of neck meat in the PRE group. This also ensured that each carcass was sampled only once, and from the same anatomic site. Neck meat was obtained after routine slaughter in a USDA-approved sheep slaughter facility. The proximal cervical vertebrae with attached musculature was identified in all frozen PRE and POST samples, and submitted for elemental metal analysis. Both the PRE and POST groups were slaughtered as a single group in their respective years.
A minimum of 5 grams (g) wool sample was obtained from each lamb of the POST group by clipping from the flank region just prior to exposure to grazing on burn regrowth pastures. A second wool sample was obtained at the time of slaughter by clipping wool from an approximately 10 centimeter (cm)-square section of the hide.
Twenty-eight water samples were obtained after completion of 2019 grazing, from all animal drinking water sources available (including natural and man-made) for each pasture grazed by the POST lambs. Water was collected by dipping sterile polypropylene plastic containers directly from the water source where it was available to the sheep, and samples were immediately frozen at −4°F (−20°C) to minimize changes in water content due to biologic activity. Water samples remained frozen until submission for analysis.
A view of Hopland Research and Extension Center in October 2018, after the River Fire, before pasture regrowth. Photo: Jennie Lane.
Stored environmental samples of soil and aboveground grassland biomass were available for mineral testing from nine plots within or adjacent to grazing pastures at the Hopland site. These samples were collected after the fire, during the study grazing season. Focal study plots were 50 meters (m) by 20 m, running lengthwise (50 m) downslope to upslope. Soil samples were collected in March 2019 with a 7-cm-diameter auger, to a depth of 20 cm. Two samples were taken per plot (one in the bottom third of the plot, one in the top third of the plot) and bulked. Soil samples were air-dried after collection, and stored at room temperature until analysis. Aboveground plant biomass samples were collected in June 2019 in three locations per plot (bottom third, middle third, top third) and bulked. Each biomass sample was collected from plants rooted within a 15-cm-diameter ring, cut to within 1 cm of the ground surface. Biomass samples were dried at 122°F (50°C) for one week after collection, and stored at room temperature until analysis.
Analyzing metal content
All samples were analyzed at the California Animal Health and Food Safety Laboratory System (CAHFS) for elemental metal analysis, including lead (Pb), mercury (Hg), arsenic (As), thallium (Tl), molybdenum (Mo), copper (Cu), cadmium (Cd), beryllium (Be), cobalt (Co), chromium (Cr), nickel (Ni), manganese (Mn), iron (Fe), zinc (Zn), barium (Ba) and vanadium (V). The method of analysis was inductive coupled plasma optical emission spectrometry (ICP-OES) (iCAP 6500, Thermo Electron North America, Madison, Wis.). Meat samples were also analyzed for water content for dry weight conversion. Preparation of wool samples prior to analysis included filling a 50-milliliter (mL) centrifuge tube with the wool, followed by addition of acetone up to the 40 mL mark. The tube was then capped and was shaken with a tissue grinder (2010 Geno/Grinder, SPEX SamplePrep, Metuchen, N.J.) for 5 minutes. The acetone with residue was then decanted. This washing step was then repeated two more times with acetone and three more times with 18 MΩ water. The cleaned wool was then dried at 185°F (85°C) overnight. For analysis of metals, 1 g of tissue or 0.5 g of wool, soil or biomass were digested with 3 mL of nitric acid at 374°F (190°C). After the digestion was completed, 2 mL of hydrochloric acid was added, and the sample was brought to 10 mL with 18 MΩ water. The sample was then analyzed by ICP-OES. To ensure data quality, a method blank, laboratory control spike, sample over-spike, and a CRM (certified reference material from the National Research Council of Canada) was digested and analyzed with each batch. For every 10 samples, a drift check was also run to ensure the instrument stability throughout the analysis.
Descriptive statistics for grazing data, metal concentrations in the POST group's meat and wool samples, water and environmental samples collected during the 2019 grazing season, and PRE group stored meat samples, were calculated. Metal concentration data for meat, wool and environmental samples were tested for normality using a Shapiro-Wilk test. Mean and standard deviation were reported when data were normally distributed, whereas median (range) was reported when data were not normally distributed. Metal concentrations between PRE and POST in meat samples, or between meat and wool (POST group only), were compared using multivariate analyses of variance (MANOVA). In the MANOVA, group assignment (PRE vs. POST, or meat vs. wool for POST only) was considered the predictor variable and the concentrations of the metals were considered outcome variables. Correlations among metal concentrations were determined using Pearson's (r) or Spearman's correlation (rho) coefficient. For the POST group only, a Wilcoxon rank-sum test was used to determine differences in the metal concentrations in the wool before and after grazing regrowth pastures. For all analyses, commercial statistical software was used (JMP Pro v16, SAS Institute, Cary, N.C.). P < 0.05 was considered significant.
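As a rough guide to how such an analysis can be assembled, the sketch below strings together open-source equivalents (SciPy and statsmodels) of the steps described above. The data frame is filled with placeholder values, not the study's measurements, and the commercial software the authors actually used may differ in defaults and output.

```python
# Condensed sketch of the statistical workflow, on placeholder data.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": ["PRE"] * 22 + ["POST"] * 26,
    "Fe": rng.normal(55, 8, 48),        # made-up concentrations, ppm
    "Mn": rng.normal(0.25, 0.05, 48),
    "Cu": rng.normal(2.0, 0.6, 48),
})

# 1. Normality check for each metal (Shapiro-Wilk).
for metal in ["Fe", "Mn", "Cu"]:
    print(metal, "Shapiro-Wilk p =", round(stats.shapiro(df[metal]).pvalue, 3))

# 2. MANOVA: group membership as predictor, metal concentrations as outcomes.
print(MANOVA.from_formula("Fe + Mn + Cu ~ group", data=df).mv_test())

# 3. Correlations among metals (Pearson here; Spearman if data are non-normal).
print("Fe-Mn Pearson r =", round(stats.pearsonr(df["Fe"], df["Mn"])[0], 3))

# 4. Pre- vs. post-grazing wool comparisons would use a rank-based test such as
#    stats.mannwhitneyu (rank-sum) on the two sets of wool concentrations.
```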
Results of the study
A total of 22 frozen neck meat samples were available from the PRE group of lambs for analysis. A total of 26 neck meat samples, with matching wool samples obtained prior to grazing on burn regrowth pastures, as well as at the time of slaughter, were available for the POST group lambs. Reporting limits are provided in A-table 1 in the online technical appendix.
Grazing data for both the PRE and POST grazing groups is depicted in table 1, demonstrating that the POST group spent 147–158 total days grazing, with 24–46 of those days grazed on pastures burned by either wildfire or prescribed fire.
A total of 28 water samples were obtained, and the metals Mn, Fe, Zn, Ba and V were identified in 7, 5, 2, 23 and 5 of 28 water samples, respectively. No Pb, Hg, As, Mo, Cu, Cd, Be, Co, Cr, Ni or Tl were detected above the reporting limits in any water samples (table 2).
Concentration data for Mn, Fe, Zn, Cu, Ba, Cr, Tl and V in meat and wool are depicted in table 1; no Pb, Hg, As, Mo, Cd, Be, Co or Ni were detected above reporting limits in any meat or wool samples.
Differences in metal concentrations in the PRE and POST meat samples were detected (P < 0.0001). The POST group had higher concentrations of Mn and Fe compared to the PRE group sheep, whereas the PRE group sheep had higher concentrations of Cu compared to the POST group. There was no difference in Zn, Ba or V in meat samples between the two groups. Positive correlations were detected in concentrations between Fe and Mn, as well as Mn and V. In contrast, Mn and Zn concentrations were negatively correlated. No Pb, Hg, As, Mo, Cd, Be, Co or Ni were detected above reporting limits in meat samples. Chromium was detected in one meat sample (0.78 parts per million [ppm]) and Tl was detected in two meat samples (1.4 and 1.3 ppm). All three of these Cr and Tl detections were in the POST group; however, due to the low number of samples testing positive for Cr and Tl, statistical comparisons were not determined between the groups. No V was detected in wool samples obtained prior to release on burn regrowth, so V could not be compared between groups. Ba concentrations in wool were higher (P = 0.008) in post-grazing samples compared to pre-grazing samples. Wool concentrations for Mn (P = 0.147), Fe (P = 0.503), Zn (P = 0.129) and Cu (P = 0.105) were not different between pre-grazing and post-grazing time points.
The type of sample (meat or wool) was a significant predictor of metal concentrations (P < 0.0001). Concentrations of Fe, Zn and Cu were higher in wool compared to meat samples. Mn concentrations were lower in wool compared to meat samples. There was no difference detected in Ba concentrations between meat and wool samples. The Cr and Tl detected in three meat samples were not detected in any wool samples. Tl, Cr, V and Mo were not statistically compared between meat and wool due to lack of consistent detection in both biologic matrices.
Four study plots were on land that remained unburned in recent prescribed or wild fire, and five study plots were on land that had regrown from recent prescription (n = 3) or wildfire (n = 2) burning. Concentration data for Pb, Mn, Fe, As, Zn, Cu, Cd, Ba, Be, Co, Cr, Ni and V from nine soil samples and nine biomass samples from the same nine study plot sites are depicted in table 3. No Hg, Mo or Tl were detected above reporting limits in any soil or plant biomass samples. Additionally, no As, Cd, Be or Co were detected in any plant biomass samples.
Interpretation of findings
The primary objective of this study was to investigate whether non-essential metal contamination occurs in the meat of sheep grazing on pastures on recent regrowth of burnt lands. The essential metals Mn, Fe, Zn, Cu and V were consistently detected in meat and wool samples; this finding is not surprising because these metals have important biological roles in mammalian tissues (Radostits et al. 2007; Rehder 2015). However, differences in these elements between the PRE and POST fire groups were limited to increased Fe and Mn, and decreased Cu in the meat of the POST grazing group.
The decrease in copper in the POST group is not of toxicological concern, although copper deficiency can have deleterious health effects in ruminants. The meat Cu concentrations reported herein (2.75 ppm PRE and 1.2 ppm POST) are both within ranges previously published for sheep (Coleman et al. 1992; Pereira et al. 2021). A summary of Cu concentrations in sheep meat over the last 30 years reported a range of study means of 0.75 to 5.9 mg/kg (ppm), with the only U.S. study reporting a mean of 2.32 mg/kg (ppm) (Pereira et al. 2021). However, muscle Cu concentrations are a poor reflection of total body Cu storage in ruminants, with liver being a more appropriate tissue to monitor deficiencies or excess of Cu. Further investigation into the effects of pasture burning on animal tissue Cu concentrations may be warranted, and attention to Cu concentration screening and species-appropriate supplementation is suggested for grazing livestock.
Burn regrowth at the Hopland Research and Extension Center, December 2018. Researchers did not detect lead, mercury, arsenic, molybdenum, cadmium, beryllium, cobalt or nickel above reporting limits in any meat or wool samples. Photo: Sarah Depenbrock.
Watching for toxic metals
Metals of particular toxicological concern, which are not expected to be present in ruminant tissues, include Pb, Hg, As, Cd, Be, Co, Ni, Cr and Tl. The absence of detection of Pb, Hg, As, Mo, Cd, Be, Co or Ni in any of the meat or wool samples obtained in the PRE or POST groups suggests that contamination from these metals did not occur following exposure to burn regrowth for a range of 24–46 of 156 days grazing on this site. However, three meat samples from the POST group contained detectable Cr or Tl. Although there were insufficient numbers of samples in which these metals were detected above reporting limits to analyze differences between the PRE and POST groups, the detection of these potentially toxic metals only in the POST group may suggest that grazing burn regrowth exposes some grazing animals to Cr or Tl. Or, it could be that the exposure to these metals was an unidentified, unrelated event that occurred only in the POST group. Detection of Cr in meat samples from grazing animals has been previously reported (Hassan et al. 2012; Ribeiro et al. 2020). In reindeer, mean Cr concentration reported was at 1.7 μg/100 g (0.017 ppm) wet weight (Hassan et al. 2012). In three sheep breeds on varying diets, mean concentrations of Cr ranged between 1.66 and 2.42 mg/kg, on a dry matter basis (approximately 0.45–0.65 ppm on a wet weight basis if moisture content was similar to our study, at approximately 73%) (Ribeiro et al. 2020). The specific toxicological risk of the concentration of Cr found in our study is unknown and depends on the specific form of Cr. However, a Cr concentration of 0.78 ppm likely would exceed values reported for adequate intake for humans (25 to 35 μg/day) if consumers eat more than approximately 50 g of lamb per day (Trumbo et al. 2001). There is a paucity of literature documenting the detection of Tl in meat of grazing animals; a single review cites a typical value of 0.74 ng/g (0.00074 ppm) in muscle tissue of cattle used as analytical reference material (Karbowska 2016). There is no safe Tl limit published for meat; however, limits for Tl in edible plants range from 0.03 to 0.3 mg/kg (ppm). The concentrations found in our study of 1.3 and 1.4 ppm exceed Tl limits for edible plants, and likely exceed the oral reference dose of 0.056 mg per day if more than approximately 40 g are consumed (Karbowska 2016).
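The intake arithmetic behind these comparisons is simple enough to check directly; the sketch below just restates the numbers quoted above (it is a back-of-the-envelope illustration, not a toxicological assessment).

```python
# Daily intake from meat: ppm (mg/kg) equals micrograms per gram, so intake scales with grams eaten.

def daily_intake_ug(concentration_ppm: float, grams_eaten: float) -> float:
    """Intake in micrograms per day for a given meat concentration and daily portion."""
    return concentration_ppm * grams_eaten

# Chromium: 0.78 ppm in meat, ~50 g of lamb per day vs. a 25-35 ug/day adequate-intake range.
print(daily_intake_ug(0.78, 50))    # 39.0 ug/day -> above the 25-35 ug/day range

# Thallium: 1.4 ppm in meat, ~40 g per day vs. a 0.056 mg (56 ug) oral reference dose.
print(daily_intake_ug(1.4, 40))     # 56.0 ug/day -> at the oral reference dose
```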
The source of Tl exposure was not identified in our study; no Tl was detected above reporting limits in soil, biomass or water sampled at the site after the fire. However, chromium was identified in soil samples and in some biomass from the site after the fire. Further investigation, specifically into Cr and Tl exposure on grazing lands, and the effect of pasture burning on contamination with these metals, is warranted.
Hg was hypothesized to be the metal most likely to be bio-accumulated and deposited on grazing lands after burning. However, no Hg was detected in any substrate sampled. This is an interesting finding after fire converted much of the nearby biomass, including mature oak trees, into ash, which was distributed across the entire site. However, the pastures are largely dominated by annual herbaceous species with scattered trees, which may not accumulate heavy metals to the extent of woody tissues.
Does wool predict metal in meat?
For all metals evaluated, only Ba had similar concentrations between meat and wool; however, the clinical utility of ante-mortem Ba testing is unknown, because Ba toxicosis is considered an unlikely foodborne risk. Due to the lack of samples with detectable Pb, Hg, As, Mo, Cd, Be, Co and Ni, the correlation between these metal concentrations in wool and meat could not be evaluated. Although not evaluated statistically, the detection of two metals of potential toxicological concern (Cr or Tl) in three meat samples without corresponding detection in any wool samples suggests that wool may not be an appropriate matrix to use for ante-mortem detection of Cr or Tl.
Was water contaminated?
Water samples contained only essential minerals, with no non-essential minerals or minerals of potential toxicological concern. This finding suggests that water contamination with metals of potential toxicological concern from wildfire was below detectable concentrations, or did not remain in water sources throughout the following grazing season on the study premise. These findings likewise suggest that water sources were not a likely source of Cr or Tl contamination. However, the single sampling time point, obtained after the grazing period, may have been insufficient to detect transient water contamination associated with the fire and subsequent runoff.
Room for future studies
Our study was limited to a single wildfire event, and was a longitudinal, semi-prospective study design, with limited sample types and numbers available from the PRE group. Due to animal management needs, there was a lack of prospective grazing on the regrowth of grazing lands from different burn intensities. Therefore, inferences about the effects of grazing pastures regrown from prescribed burn compared to wildfire burn, or regrowth from different burn intensities, could not be made. A full toxicological investigation into the source of Cr and Tl contamination was outside the scope of this study; the source of contamination was not determined. Analysis of all feed and forage was also outside the scope of this study, which limits conclusions based on feed history. Potential confounders when comparing meat and wool samples include the relative dilution of the wool for analysis (0.5 g wool vs. 1 g meat per 10 mL final diluent) and the time delay represented in wool growth relative to meat sampling; mature wool fiber samples inherently represent mineral incorporation during wool development before it grows out enough to sample, whereas concentrations in meat represent the most recent physiologic concentration in tissues. Future investigations would benefit from controlled, prospective, contemporaneously matched grazing assignments on regrowth from different burn intensities and environments, and could be expanded by more robust toxicological investigation.
A-Table 1: Reporting limits for neck meat, wool and water in ppm.
KINDER: MODULE 4 - Number Pairs, Addition and Subtraction to 10
GFletchy Counting Progression: Addition and Subtraction
K.ATO.1 Model situations that involve addition and subtraction within 10 using objects, fingers, mental images, drawings, acting out situations, verbal explanations, expressions, and equations.
K.ATO.2 Solve real-world/story problems using objects and drawings to find sums up to 10 and differences within 10.
K.ATO.3 Compose and decompose numbers up to 10 using objects, drawings, and equations.
K.ATO.4 Create a sum of 10 using objects and drawings when given one of two addends 1 – 9. (CCSS: K.OA.4 For any number from 1 to 9, find the number that makes 10 when added to the given number, e.g., by using objects or drawings, and record the answer with a drawing or equation.)
GRADE 1: MODULE 3 - Ordering and Comparing Length Measurements as Numbers
How Big Is a Foot? (read aloud)
1.ATO.1 Solve real-world/story problems using addition (as a joining action and as a part-part-whole action) and subtraction (as a separation action, finding parts of the whole, and as a comparison) through 20 with unknowns in all positions.
1.MDA.1 Order three objects by length using indirect comparison.
1.MDA.2 Use nonstandard physical models to show the length of an object as the number of same size units of length with no gaps or overlaps.
1.MDA.4 Collect, organize, and represent data with up to 3 categories using object graphs, picture graphs, t-charts and tallies.
GRADE 2: MODULE 8 - Time, Shapes, and Fractions as Equal Parts of Shapes
GFletchy Counting Progression: Fraction Comparison and Equivalence
2.MDA.6 Use analog and digital clocks to tell and record time to the nearest five-minute interval using a.m. and p.m.
2.G.1 Identify triangles, quadrilaterals, hexagons, (and cubes). Recognize and draw shapes having specified attributes, such as a given number of angles or a given number of equal faces.
2.G.3 Partition squares, rectangles and circles into two or four equal parts, and describe the parts using the words halves, fourths, a half of, and a fourth of. Understand that when partitioning a square, rectangle or circle into two or four equal parts, the parts become smaller as the number of parts increases.
The law of supply and demand, which dictates that a product's availability and appeal impacts its price, had several discoverers. But the principle, one of the best-known in economics, was noticed in the marketplace long before it was mentioned in a published work – or even given its name.
Philosopher John Locke is credited with one of the earliest written descriptions of this economic principle in his 1691 publication, Some Considerations on the Consequences of the Lowering of Interest and the Raising of the Value of Money. Locke addressed the concept of supply and demand as part of a discussion about interest rates in 17th-century England. Many merchants wanted the government to lower the cap on interest rates charged by private lenders so that people could borrow more money and thus purchase more goods. Locke argued that the free-market economy should set rates because government regulation could have unintended consequences. If the lending industry were left alone, interest rates would regulate themselves, Locke wrote: "The price of any commodity rises or falls, by the proportion of the number of buyers and sellers."
Sir James Steuart
Locke did not actually use the term "supply and demand," however. Its first appearance in print came in 1767, with Sir James Steuart's Inquiry into the Principles of Political Economy. When Steuart wrote his treatise on political economy, one of his main concerns was the impact of supply and demand on laborers. Steuart noted that when supply levels were higher than demand, prices were significantly reduced, lowering the profits realized by merchants. When merchants made less money, they could not afford to pay workers, resulting in high unemployment.
Adam Smith dealt extensively with the topic in his 1776 epic economic work, The Wealth of Nations. Smith, often referred to as the Father of Economics, explained the concept of supply and demand as an "invisible hand" that naturally guides the economy. Smith described a society where bakers and butchers provide products that individuals need and want, providing a supply that meets demand and developing an economy that benefits everyone.
After Smith's 1776 publication, the field of economics developed rapidly, and refinements were made to the law of supply and demand. In 1890, Alfred Marshall's Principles of Economics developed a supply-and-demand curve that is still used to demonstrate the point at which the market is in equilibrium.
One of Marshall's most important contributions to microeconomics was his introduction of the concept of price elasticity of demand, which examines how price changes affect demand. In theory, people buy less of a particular product if the price increases, but Marshall noted that in real life this behavior was not always true. The prices of some goods can increase without reducing demand, which means their prices are inelastic. Inelastic goods tend to be items, such as medication or food, that consumers deem crucial to daily life. Marshall argued that supply and demand, costs of production, and price elasticity all work together.
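For readers who want the formula behind this idea, price elasticity of demand is conventionally defined as the ratio of the percentage change in quantity demanded to the percentage change in price (a standard textbook definition, not Marshall's original notation):

\[ E_d \;=\; \frac{\%\,\Delta Q_d}{\%\,\Delta P} \;=\; \frac{\Delta Q_d / Q_d}{\Delta P / P} \]

A good is described as inelastic when the magnitude of this ratio is less than one, meaning a price rise reduces the quantity demanded proportionally less than the price increased, which matches Marshall's observation about essentials such as food and medication.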
There are many different telescope types, but the term "telescope" in fact covers two optical systems, reflectors and refractors, each with its strengths and weaknesses depending on your needs. In each case, the optical system consists of an objective and an eyepiece, although the eyepiece can sometimes be replaced by a regular camera or an astrophotography camera.
The objective captures the light and concentrates it in a point called the focus. It is there that the eyepiece is located, an optical device with a series of lenses. The observer looks into the eyepiece, which enlarges the image.
The larger the diameter of the objective, the more light it can capture, and as a result, the better an observer will be able to see fainter objects. On the other hand, the further the focus is positioned from the objective, the more the image is magnified. The distance that separates the objective from the focus is called the instrument's focal length. The greater the instrument's focal length, the stronger its magnification will be. Unfortunately, the longer the focal length, the darker the image will appear. If you want to observe a galaxy that is far away and diffuse, first and foremost you will need an objective with a substantial diameter. Conversely, if you want to observe the details of a luminous object, for example the craters on the surface of the Moon or the rings of Saturn, you will benefit from having a long focal length.
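These two trade-offs can be summarized with the standard formulas from geometrical optics (the focal lengths and diameter here are generic symbols, not values tied to any particular instrument):

\[ \text{magnification} = \frac{f_{\text{objective}}}{f_{\text{eyepiece}}}, \qquad \text{focal ratio} = \frac{f_{\text{objective}}}{D_{\text{objective}}} \]

For example, a 1200 mm objective used with a 10 mm eyepiece gives a magnification of 120x, and with a 150 mm aperture its focal ratio is f/8; the higher the focal ratio, the dimmer the image of an extended object such as a galaxy.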
Read also: Telescopes for Beginners
Refracting telescopes all function on the same principle, but the quality of the lenses varies and, consequently, so does the cost of the instrument. The refractor is essentially a spyglass, a concept invented in the Netherlands in 1608. It was the Italian mathematician Galileo who first had the idea of using it as an instrument for observing the sky. He improved upon the Dutch spyglass and used it to make discoveries that revolutionized astronomy.
In a refracting telescope, the objective is a lens, and in the most sophisticated instruments, many lenses stacked together. The objective is situated at one extremity of the tube, and the focus at the other.
In a reflecting telescope, the objective is not a lens but a mirror. It is not a flat mirror, but a curved mirror that fulfills exactly the same function as a lens, that is to say, it concentrates the light into a focal point. With few exceptions, before reaching this point, the light is diverted by a secondary mirror (which can be flat, but doesn't have to be), so that the eyepiece, and consequently the observer, don't block the path of the light. In contrast to refracting telescopes, there are several different types of reflectors. The first of its kind was invented by Sir Isaac Newton in 1668 and bears his name: the Newtonian telescope. It is a simple and effective instrument: the primary mirror sits at the back of a tube, and the secondary mirror, which is flat, at the opening of the tube.
This secondary mirror reflects the light at a 90˚ angle, so the eyepiece is placed on the side of the tube. Newtonian telescopes are very popular among amateur astronomers because of their good optical quality and low manufacturing cost compared to other types of telescopes. Schmidt-Cassegrain telescopes are also very popular among amateur astronomers, although they are generally more expensive. The Schmidt-Cassegrain optical system is more complex than that of the Newtonian.
The primary mirror has a spherical curvature; such a mirror is easy to make, but it causes what is called spherical aberration: a distortion of the image that arises because this geometry does not concentrate all the rays of light at exactly the same point, so there is no single, well-defined focus. To correct this optical aberration, a lens is placed at the entrance of the tube. At first glance it might look like a thin glass plate, but it is actually a lens, precisely cut to match the mirror with which it is paired. This optical component is called a corrector plate. Thanks to this plate, the image in a Schmidt-Cassegrain telescope is clear. However, if the corrector plate is damaged, the telescope becomes unusable; because it was cut for that particular mirror, it cannot simply be replaced with another one without having a new plate custom-made, which would be very expensive.
Finally, the last type of reflector that we will discuss in this article is the Maksutov telescope. It resembles the Schmidt-Cassegrain in principle, except that its corrector lens, called a meniscus corrector, has a curve that is easily visible to the naked eye. The Schmidt-Cassegrains and the Maksutovs are roughly equivalent optical systems. The advantage of the Schmidt-Cassegrain is that its large-diameter optics are easier to manufacture, whereas Maksutov prices rise quickly as the diameter increases.
In any case, these two types of telescopes have the advantage of folding a long focal length into a compact tube: the light captured by the primary mirror is reflected by the secondary mirror back toward the center of the primary mirror, which is pierced in the middle, and the focus is formed at the back of the telescope. On the one hand, the light therefore travels back and forth within the tube, virtually doubling its length; on the other hand, the secondary mirror is not flat but convex, and thus further extends the instrument's focal length. If your curiosity has been piqued enough to browse an astronomical equipment seller's offerings, you may notice some hybrid models. You will see, for example, some Newtonian telescopes equipped with a corrector plate. This is because different optical systems have their advantages and disadvantages, and some are better suited to certain uses than to others. Manufacturers therefore make choices and attempt to propose solutions that meet a variety of needs.
Integrated circuit design, or IC design, is a subset of electronics engineering, encompassing the particular logic and circuit design techniques required to design integrated circuits, or ICs. ICs consist of miniaturized electronic components built into an electrical network on a monolithic semiconductor substrate by photolithography.
The process of circuit design can cover systems ranging from complex electronic systems all the way down to the individual transistors within an integrated circuit. For simple circuits the design process can often be done by one person without needing a planned or structured design process, but for more complex designs, teams of designers following a systematic approach with intelligently guided computer simulation are becoming increasingly common. In integrated circuit design automation, the term "circuit design" often refers to the step of the design cycle which outputs the schematics of the integrated circuit. Typically this is the step between logic design and physical design.
An electronic component is any basic discrete device or physical entity in an electronic system used to affect electrons or their associated fields. Electronic components are mostly industrial products, available in a singular form and are not to be confused with electrical elements, which are conceptual abstractions representing idealized electronic components.
An electrical network is an interconnection of electrical components or a model of such an interconnection, consisting of electrical elements. An electrical circuit is a network consisting of a closed loop, giving a return path for the current. Linear electrical networks, a special type consisting only of sources, linear lumped elements, and linear distributed elements, have the property that signals are linearly superimposable. They are thus more easily analyzed, using powerful frequency domain methods such as Laplace transforms, to determine DC response, AC response, and transient response.
IC design can be divided into the broad categories of digital and analog IC design. Digital IC design produces components such as microprocessors, FPGAs, memories (RAM, ROM, and flash) and digital ASICs. Digital design focuses on logical correctness, maximizing circuit density, and placing circuits so that clock and timing signals are routed efficiently. Analog IC design also has specializations in power IC design and RF IC design. Analog IC design is used in the design of op-amps, linear regulators, phase-locked loops, oscillators and active filters. Analog design is more concerned with the physics of the semiconductor devices, such as gain, matching, power dissipation, and resistance. Fidelity of analog signal amplification and filtering is usually critical, and as a result analog ICs use larger-area active devices than digital designs and are usually less dense in circuitry.
Digital data, in information theory and information systems, is the discrete, discontinuous representation of information or works. Numbers and letters are commonly used representations.
Random-access memory is a form of computer data storage that stores data and machine code currently being used. A random-access memory device allows data items to be read or written in almost the same amount of time irrespective of the physical location of data inside the memory. In contrast, with other direct-access data storage media such as hard disks, CD-RWs, DVD-RWs and the older magnetic tapes and drum memory, the time required to read and write data items varies significantly depending on their physical locations on the recording medium, due to mechanical limitations such as media rotation speeds and arm movement.
Read-only memory (ROM) is a type of non-volatile memory used in computers and other electronic devices. Data stored in ROM can only be modified slowly, with difficulty, or not at all, so it is mainly used to store firmware or application software in plug-in cartridges.
Modern ICs are enormously complicated. An average desktop computer chip, as of 2015, has over 1 billion transistors. The rules for what can and cannot be manufactured are also extremely complex. Common IC processes of 2015 have more than 500 rules. Furthermore, since the manufacturing process itself is not completely predictable, designers must account for its statistical nature. The complexity of modern IC design, as well as market pressure to produce designs rapidly, has led to the extensive use of automated design tools in the IC design process. In short, the design of an IC using EDA software is the design, test, and verification of the instructions that the IC is to carry out.
Statistics is a branch of mathematics dealing with data collection, organization, analysis, interpretation and presentation. In applying statistics to, for example, a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model of the process to be studied. Populations can be diverse topics such as "all people living in a country" or "every atom composing a crystal". Statistics deals with all aspects of data, including the planning of data collection in terms of the design of surveys and experiments. See glossary of probability and statistics.
Electronic design automation (EDA), also referred to as electronic computer-aided design (ECAD), is a category of software tools for designing electronic systems such as integrated circuits and printed circuit boards. The tools work together in a design flow that chip designers use to design and analyze entire semiconductor chips. Since a modern semiconductor chip can have billions of components, EDA tools are essential for their design.
Integrated circuit design involves the creation of electronic components, such as transistors, resistors, capacitors and the interconnect of these components onto a piece of semiconductor, typically silicon. A method to isolate the individual components formed in the substrate is necessary since the substrate silicon is conductive and often forms an active region of the individual components. The two common methods are p-n junction isolation and dielectric isolation. Attention must be given to power dissipation of transistors and interconnect resistances and current density of the interconnect, contacts and vias since ICs contain very tiny devices compared to discrete components, where such concerns are less of an issue. Electromigration in metallic interconnect and ESD damage to the tiny components are also of concern. Finally, the physical layout of certain circuit subblocks is typically critical, in order to achieve the desired speed of operation, to segregate noisy portions of an IC from quiet portions, to balance the effects of heat generation across the IC, or to facilitate the placement of connections to circuitry outside the IC.
In integrated circuits (ICs), interconnects are structures that connect two or more circuit elements together electrically. The design and layout of interconnects on an IC is vital to its proper function, performance, power efficiency, reliability, and fabrication yield. The material interconnects are made from depends on many factors. Chemical and mechanical compatibility with the semiconductor substrate, and the dielectric in between the levels of interconnect is necessary, otherwise barrier layers are needed. Suitability for fabrication is also required; some chemistries and processes prevent integration of materials and unit processes into a larger technology (recipe) for IC fabrication. In fabrication, interconnects are formed during the back-end-of-line after the fabrication of the transistors on the substrate.
Silicon is a chemical element with symbol Si and atomic number 14. It is a hard and brittle crystalline solid with a blue-grey metallic lustre; and it is a tetravalent metalloid and semiconductor. It is a member of group 14 in the periodic table: carbon is above it; and germanium, tin, and lead are below it. It is relatively unreactive. Because of its high chemical affinity for oxygen, it was not until 1823 that Jöns Jakob Berzelius was first able to prepare it and characterize it in pure form. Its melting and boiling points of 1414 °C and 3265 °C respectively are the second-highest among all the metalloids and nonmetals, being only surpassed by boron. Silicon is the eighth most common element in the universe by mass, but very rarely occurs as the pure element in the Earth's crust. It is most widely distributed in dusts, sands, planetoids, and planets as various forms of silicon dioxide (silica) or silicates. More than 90% of the Earth's crust is composed of silicate minerals, making silicon the second most abundant element in the Earth's crust after oxygen.
A wafer, also called a slice or substrate, is a thin slice of semiconductor material, such as a crystalline silicon, used in electronics for the fabrication of integrated circuits and in photovoltaics for conventional, wafer-based solar cells. The wafer serves as the substrate for microelectronic devices built in and over the wafer and undergoes many microfabrication process steps such as doping or ion implantation, etching, deposition of various materials, and photolithographic patterning. Finally, the individual microcircuits are separated (dicing) and packaged.
A typical IC design cycle involves several steps:
Roughly speaking, digital IC design can be divided into three parts.
C is a general-purpose, imperative computer programming language, supporting structured programming, lexical variable scope and recursion, while a static type system prevents many unintended operations. By design, C provides constructs that map efficiently to typical machine instructions, and therefore it has found lasting use in applications that had formerly been coded in assembly language, including operating systems, as well as various application software for computers ranging from supercomputers to embedded systems.
C++ is a general-purpose programming language. It has imperative, object-oriented and generic programming features, while also providing facilities for low-level memory manipulation.
SystemC is a set of C++ classes and macros which provide an event-driven simulation interface. These facilities enable a designer to simulate concurrent processes, each described using plain C++ syntax. SystemC processes can communicate in a simulated real-time environment, using signals of all the datatypes offered by C++, some additional ones offered by the SystemC library, as well as user defined. In certain respects, SystemC deliberately mimics the hardware description languages VHDL and Verilog, but is more aptly described as a system-level modeling language.
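To make the description above concrete, here is a minimal SystemC sketch of a clocked 8-bit counter. It is only an illustration of the event-driven, C++-based style the paragraph describes: the module and signal names are invented for the example, and it assumes a standard SystemC installation providing the <systemc.h> header.

```cpp
#include <systemc.h>

// Behavioural model of a clocked 8-bit counter with synchronous reset.
SC_MODULE(Counter8) {
    sc_in<bool>        clk;    // clock input
    sc_in<bool>        reset;  // synchronous reset, active high
    sc_out<sc_uint<8>> count;  // current counter value

    void tick() {
        if (reset.read())
            value = 0;
        else
            ++value;
        count.write(value);
    }

    SC_CTOR(Counter8) : value(0) {
        SC_METHOD(tick);        // register tick() as a simulation process
        sensitive << clk.pos(); // run it on every rising clock edge
    }

  private:
    sc_uint<8> value;
};
```

A testbench would instantiate this module, bind sc_signal channels to its ports and call sc_start() to run the simulation; the same source can later serve as a reference model when the design is rewritten in Verilog or VHDL.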
Note that the second step, RTL design, is responsible for the chip doing the right thing. The third step, physical design, does not affect the functionality at all (if done correctly) but determines how fast the chip operates and how much it costs.
The integrated circuit (IC) development process starts with defining product requirements, progresses through architectural definition, implementation, bringup and finally productization. The various phases of the integrated circuit development process are described below. Although the phases are presented here in a straightforward fashion, in reality there is iteration and these steps may occur multiple times.
An integrated circuit or monolithic integrated circuit is a set of electronic circuits on one small flat piece of semiconductor material that is normally silicon. The integration of large numbers of tiny transistors into a small chip results in circuits that are orders of magnitude smaller, cheaper, and faster than those constructed of discrete electronic components. The IC's mass production capability, reliability and building-block approach to circuit design has ensured the rapid adoption of standardized ICs in place of designs using discrete transistors. ICs are now used in virtually all electronic equipment and have revolutionized the world of electronics. Computers, mobile phones, and other digital home appliances are now inextricable parts of the structure of modern societies, made possible by the small size and low cost of ICs.
Iteration is the repetition of a process in order to generate a sequence of outcomes. The sequence will approach some end point or end value. Each repetition of the process is a single iteration, and the outcome of each iteration is then the starting point of the next iteration.
Before an architecture can be defined some high level product goals must be defined. The requirements are usually generated by a cross functional team that addresses market opportunity, customer needs, feasibility and much more. This phase should result in a product requirements document.
The architecture defines the fundamental structure, goals and principles of the product. It defines high level concepts and the intrinsic value proposition of the product. Architecture teams take into account many variables and interface with many groups. People creating the architecture generally have a significant amount of experience dealing with systems in the area for which the architecture is being created. The work product of the architecture phase is an architectural specification.
The micro-architecture is a step closer to the hardware. It implements the architecture and defines specific mechanisms and structures for achieving that implementation. The result of the micro-architecture phase is a micro-architecture specification which describes the methods used to implement the architecture.
In the implementation phase the design itself is created using the micro-architectural specification as the starting point. This involves low level definition and partitioning, writing code, entering schematics and verification. This phase ends with a design reaching tapeout.
After a design is created, taped-out and manufactured, actual hardware, 'first silicon', is received which is taken into the lab where it goes through bringup. Bringup is the process of powering, testing and characterizing the design in the lab. Numerous tests are performed starting from very simple tests such as ensuring that the device will power on to much more complicated tests which try to stress the part in various ways. The result of the bringup phase is documentation of characterization data (how well the part performs to spec) and errata (unexpected behavior).
Productization is the task of taking a design from engineering into mass production manufacturing. Although a design may have successfully met the specifications of the product in the lab during the bringup phase there are many challenges that face product engineers when trying to mass-produce those designs. The IC must be ramped up to production volumes with an acceptable yield. The goal of the productization phase is to reach mass production volumes at an acceptable cost.
Once a design is mature and has reached mass production it must be sustained. The process must be continually monitored and problems dealt with quickly to avoid a significant impact on production volumes. The goal of sustaining is to maintain production volumes and continually reduce costs until the product reaches end of life.
The initial chip design process begins with system-level design and microarchitecture planning. Within IC design companies, management and often analytics will draft a proposal for a design team to start the design of a new chip to fit into an industry segment. Upper-level designers will meet at this stage to decide how the chip will operate functionally. This step is where an IC's functionality and design are decided. IC designers will map out the functional requirements, verification testbenches, and testing methodologies for the whole project, and will then turn the preliminary design into a system-level specification that can be simulated with simple models using languages like C++ and MATLAB and emulation tools. For pure and new designs, the system design stage is where an instruction set and operation is planned out, and in most chips existing instruction sets are modified for newer functionality. Design at this stage is often captured in statements such as "encodes in the MP3 format" or "implements IEEE floating-point arithmetic". At later stages in the design process, each of these innocent-looking statements expands into hundreds of pages of textual documentation.
Upon agreement of a system design, RTL designers then implement the functional models in a hardware description language like Verilog, SystemVerilog, or VHDL. Using digital design components like adders, shifters, and state machines as well as computer architecture concepts like pipelining, superscalar execution, and branch prediction, RTL designers will break a functional description into hardware models of components on the chip working together. Each of the simple statements described in the system design can easily turn into thousands of lines of RTL code, which is why it is extremely difficult to verify that the RTL will do the right thing in all the possible cases that the user may throw at it.
To reduce the number of functionality bugs, a separate hardware verification group will take the RTL and design testbenches and systems to check that the RTL actually is performing the same steps under many different conditions, classified as the domain of functional verification. Many techniques are used, none of them perfect but all of them useful – extensive logic simulation, formal methods, hardware emulation, lint-like code checking, code coverage, and so on.
A tiny error here can make the whole chip useless, or worse. The famous Pentium FDIV bug caused the results of some divisions to be wrong by at most 61 parts per million, in cases that occurred very infrequently. No one even noticed it until the chip had been in production for months. Yet Intel was forced to offer to replace, for free, every chip sold until it could fix the bug, at a cost of $475 million (US).
RTL is only a behavioral model of the functionality that the chip is supposed to provide. It says nothing about the physical side of how the chip will operate in real life, at the level of materials, device physics, and electrical engineering. For this reason, the next step in the IC design process, the physical design stage, is to map the RTL into actual geometric representations of all the electronic devices, such as capacitors, resistors, logic gates, and transistors, that will go on the chip.
The main steps of physical design are listed below. In practice there is not a straightforward progression - considerable iteration is required to ensure all objectives are met simultaneously. This is a difficult problem in its own right, called design closure.
Before the advent of the microprocessor and software based design tools, analog ICs were designed using hand calculations and process kit parts. These ICs were low complexity circuits, for example, op-amps, usually involving no more than ten transistors and few connections. An iterative trial-and-error process and "overengineering" of device size was often necessary to achieve a manufacturable IC. Reuse of proven designs allowed progressively more complicated ICs to be built upon prior knowledge. When inexpensive computer processing became available in the 1970s, computer programs were written to simulate circuit designs with greater accuracy than practical by hand calculation. The first circuit simulator for analog ICs was called SPICE (Simulation Program with Integrated Circuits Emphasis). Computerized circuit simulation tools enable greater IC design complexity than hand calculations can achieve, making the design of analog ASICs practical. The computerized circuit simulators also enable mistakes to be found early in the design cycle before a physical device is fabricated. Additionally, a computerized circuit simulator can implement more sophisticated device models and circuit analysis too tedious for hand calculations, permitting Monte Carlo analysis and process sensitivity analysis to be practical. The effects of parameters such as temperature variation, doping concentration variation and statistical process variations can be simulated easily to determine if an IC design is manufacturable. Overall, computerized circuit simulation enables a higher degree of confidence that the circuit will work as expected upon manufacture.
A challenge most critical to analog IC design involves the variability of the individual devices built on the semiconductor chip. Unlike board-level circuit design, which permits the designer to select devices that have each been tested and binned according to value, the device values on an IC can vary widely and cannot be controlled by the designer. For example, some IC resistors can vary ±20% and the β of an integrated BJT can vary from 20 to 100. In the latest CMOS processes, the β of vertical PNP transistors can even go below 1. To add to the design challenge, device properties often vary between each processed semiconductor wafer. Device properties can even vary significantly across each individual IC due to doping gradients. The underlying cause of this variability is that many semiconductor devices are highly sensitive to uncontrollable random variances in the process. Slight changes to the amount of diffusion time, uneven doping levels, etc. can have large effects on device properties.
Some design techniques used to reduce the effects of the device variation are:
The three largest companies selling electronic design automation tools are Synopsys, Cadence, and Mentor Graphics.
Electronics comprises the physics, engineering, technology and applications that deal with the emission, flow and control of electrons in vacuum and matter. The identification of the electron in 1897, along with the invention of the vacuum tube, which could amplify and rectify small electrical signals, inaugurated the field of electronics and the electron age.
Very-large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by combining hundreds of thousands of transistors or devices into a single chip. VLSI began in the 1970s when complex semiconductor and communication technologies were being developed. The microprocessor is a VLSI device. Before the introduction of VLSI technology most ICs had a limited set of functions they could perform. An electronic circuit might consist of a CPU, ROM, RAM and other glue logic. VLSI lets IC designers add all of these into one chip.
Digital electronics or digital (electronic) circuits are electronics that operate on digital signals. In contrast, analog circuits manipulate analog signals whose performance is more subject to manufacturing tolerance, signal attenuation and noise. Digital techniques are helpful because it is a lot easier to get an electronic device to switch into one of a number of known states than to accurately reproduce a continuous range of values.
Transistor–transistor logic (TTL) is a logic family built from bipolar junction transistors. Its name signifies that transistors perform both the logic function and the amplifying function; it is the same naming convention used in resistor–transistor logic (RTL) and diode–transistor logic (DTL).
Complementary metal–oxide–semiconductor (CMOS) is a technology for constructing integrated circuits. CMOS technology is used in microprocessors, microcontrollers, static RAM, and other digital logic circuits. CMOS technology is also used for several analog circuits such as image sensors, data converters, and highly integrated transceivers for many types of communication. Frank Wanlass patented CMOS in 1963 while working for Fairchild Semiconductor.
A system on a chip or system on chip is an integrated circuit that integrates all components of a computer or other electronic system. These components typically include a central processing unit (CPU), memory, input/output ports and secondary storage – all on a single substrate. It may contain digital, analog, mixed-signal, and often radio frequency signal processing functions, depending on the application. As they are integrated on a single electronic substrate, SoCs consume much less power and take up much less area than multi-chip designs with equivalent functionality. Because of this, SoCs are very common in the mobile computing and edge computing markets. Systems on chip are commonly used in embedded systems and the Internet of Things.
An application-specific integrated circuit is an integrated circuit (IC) customized for a particular use, rather than intended for general-purpose use. For example, a chip designed to run in a digital voice recorder or a high-efficiency bitcoin miner is an ASIC. Application-specific standard products (ASSPs) are intermediate between ASICs and industry standard integrated circuits like the 7400 series or the 4000 series.
Resistor–transistor logic (RTL) is a class of digital circuits built using resistors as the input network and bipolar junction transistors (BJTs) as switching devices. RTL is the earliest class of transistorized digital logic circuit used; other classes include diode–transistor logic (DTL) and transistor–transistor logic (TTL). RTL circuits were first constructed with discrete components, but in 1961 it became the first digital logic family to be produced as a monolithic integrated circuit. RTL integrated circuits were used in the Apollo Guidance Computer, whose design was begun in 1961 and which first flew in 1966.
Silvaco, Inc. is a privately owned provider of electronic design automation (EDA) software and TCAD process and device simulation software. Silvaco was founded in 1984 and is headquartered in Santa Clara, California, and in 2006 the company had about 250 employees worldwide.
In computer engineering, a logic family may refer to one of two related concepts. A logic family of monolithic digital integrated circuit devices is a group of electronic logic gates constructed using one of several different designs, usually with compatible logic levels and power supply characteristics within a family. Many logic families were produced as individual components, each containing one or a few related basic logical functions, which could be used as "building-blocks" to create systems or as so-called "glue" to interconnect more complex integrated circuits. A "logic family" may also refer to a set of techniques used to implement logic within VLSI integrated circuits such as central processors, memories, or other complex functions. Some such logic families use static techniques to minimize design complexity. Other such logic families, such as domino logic, use clocked dynamic techniques to minimize size, power consumption and delay.
A mixed-signal integrated circuit is any integrated circuit that has both analog circuits and digital circuits on a single semiconductor die. In real-life applications mixed-signal designs are everywhere, for example, a smart mobile phone. However, it is more accurate to call them mixed-signal systems. Mixed-signal ICs also process both analog and digital signals together. For example, an analog-to-digital converter is a mixed-signal circuit. Mixed-signal circuits or systems are typically cost-effective solutions for building any modern consumer electronics applications.
In electronic design a semiconductor intellectual property core, IP core, or IP block is a reusable unit of logic, cell, or integrated circuit layout design that is the intellectual property of one party. IP cores may be licensed to another party or can be owned and used by a single party alone. The term is derived from the licensing of the patent and/or source code copyright that exist in the design. IP cores can be used as building blocks within application-specific integrated circuit (ASIC) designs or field-programmable gate array (FPGA) logic designs.
A hybrid integrated circuit (HIC), hybrid microcircuit, hybrid circuit or simply hybrid is a miniaturized electronic circuit constructed of individual devices, such as semiconductor devices and passive components, bonded to a substrate or printed circuit board (PCB). A PCB having components on a Printed Wiring Board (PWB) is not considered a hybrid circuit according to the definition of MIL-PRF-38534.
In VLSI semiconductor manufacturing, the process of Design Closure is a part of the development workflow by which an integrated circuit design is modified from its initial description to meet a growing list of design constraints and objectives.
An electronic circuit is composed of individual electronic components, such as resistors, transistors, capacitors, inductors and diodes, connected by conductive wires or traces through which electric current can flow. To be referred to as electronic, rather than electrical, generally at least one active component must be present. The combination of components and wires allows various simple and complex operations to be performed: signals can be amplified, computations can be performed, and data can be moved from one place to another.
CircuitLogix is a software electronic circuit simulator which uses PSpice to simulate thousands of electronic devices, models, and circuits. CircuitLogix supports analog, digital, and mixed-signal circuits, and its SPICE simulation gives accurate real-world results. The graphic user interface allows students to quickly and easily draw, modify and combine analog and digital circuit diagrams. CircuitLogix was first launched in 2005, and its popularity has grown quickly since that time. In 2012, it reached the milestone of 250,000 licensed users, and became the first electronics simulation product to have a global installed base of a quarter-million customers in over 100 countries.
Electronic circuit simulation uses mathematical models to replicate the behavior of an actual electronic device or circuit. Simulation software allows for modeling of circuit operation and is an invaluable analysis tool. Due to its highly accurate modeling capability, many colleges and universities use this type of software for the teaching of electronics technician and electronics engineering programs. Electronics simulation software engages the user by integrating him or her into the learning experience. These kinds of interactions actively engage learners to analyze, synthesize, organize, and evaluate content and result in learners constructing their own knowledge.
In microelectronics, a three-dimensional integrated circuit is an integrated circuit manufactured by stacking silicon wafers or dies and interconnecting them vertically using, for instance, through-silicon vias (TSVs) or Cu-Cu connections, so that they behave as a single device to achieve performance improvements at reduced power and smaller footprint than conventional two-dimensional processes. 3D IC is just one of a host of 3D integration schemes that exploit the z-direction to achieve electrical performance benefits.
The Transmission Control Protocol (TCP) is one of those things that pretty much everyone should know about – yet very few people actually do.
People should know more about it because Transmission Control Protocol is essentially the backbone of the modern-day internet.
Jump to section:
What is TCP?
Often referred to together with the Internet Protocol (IP) as TCP/IP, or as part of the Internet Protocol Suite, the Transmission Control Protocol is a widely used protocol that governs how computers talk to each other when exchanging data. However, TCP's sheer ubiquity doesn't mean it's the only data transfer protocol out there.
Other standards – such as User Datagram Protocol (UDP) or Open Systems Interconnection (OSI) – are also used in various circumstances.
But how does TCP work? And what’s it used for?
Fast, Secure, and Reliable TCP Transfers
Send terabytes of data over the internet with MASV.
How Does Transmission Control Protocol Work?
Being one of the main data transfer protocols on the internet, TCP's job is relatively simple:
It’s there to ensure all data sent by one computer to another is received successfully, without errors or glitches, and in the correct order.
That means whenever you browse a webpage with all the information right-side up, or read an email that's not complete gibberish, you can thank TCP.
On the other hand, how it accomplishes this task isn't quite so rudimentary. Because it's a connection-oriented protocol, TCP must first establish a session between the two computers before doing any communicating.
Here’s how TCP establishes a connection between two computers (a process known as a “three-way handshake”):
- One computer (the sender) sends an initial message to the receiving computer to formally request that a connection be established. This is known as a SYN message (short for synchronize).
- The receiving computer must then send an acknowledgement of the SYN (known as a SYN-ACK message).
- Finally, the sender must acknowledge the acknowledgement (by sending an ACK message).
After these three steps have successfully completed, data transfer can begin.
If you think that’s a lot of steps simply to establish a communication channel, you’re right. It’s one reason why TCP connections are generally slower than UDP-based connections. They simply have to go through more steps before communicating.
TCP can also be combined with other protocols, such as Microsoft's Server Message Block (SMB), for connections to remote servers.
Ultra-Fast Transfer of Large Files
Use MASV to deliver large amounts of data anywhere in the world.
The Four Layers of TCP
The TCP/IP model is composed of four different layers: application, transport, internet, and network access. Let's go through them:
- Application layer. This is the layer of TCP that applications, such as web browsers, interact with (the application layer includes further protocols such as HTTPS and SMTP).
- Transport layer. After the application layer receives data from an application such as a web browser, it hands the data to the transport layer via a port – port 80 in the case of web traffic. The transport layer then slices and dices the data into individual packets, each of which takes the fastest route to the destination. Each packet also carries a header with instructions about how to deliver the packet payload (i.e. the data being sent).
- Internet layer. Packets are next pushed to this layer, which uses the Internet Protocol to tag each packet with origin and destination IP addresses.
- Network access layer. Finally! This is the layer in which the data is converted into electrical impulses and sent out into the world. The network access layer handles information such as media access control (MAC) addresses, which ensures each packet goes to the right computer.
Why is TCP used?
It’s probably obvious by now, but TCP is used in instances when all transmitted data absolutely must arrive (and with no errors). Indeed, the inherent value of TCP is that it guarantees the integrity of all data delivered. If there’s an error, TCP resends the data.
That’s why other high-level protocols that require perfection – such as Secure Shell (SSH), File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), Internet Message Access Protocol (IMAP), and HTTP – all use TCP.
Some large file transfer solutions, such as MASV, also use accelerated TCP-based technology because it delivers all your data in order and doesn’t require firewall changes.
TCP vs. UDP
One issue with TCP is latency, especially over the public internet. This is largely due to all those steps I mentioned above, as well as data retransmissions and packet reordering.
That’s why a different protocol, UDP, exists. It is often used for real-time online gaming, streaming, voice over IP (VoIP), and other applications that require fast speeds but can live with some data being incomplete or missing.
However, UDP is not a connection-oriented protocol. Unlike TCP, it doesn’t establish a session between computers or guarantee the integrity of data delivered. So, dropped packets can be a common occurrence. Each data packet sent via UDP contains less header information, and if packets are lost in transit, they’re gone forever.
Why MASV is TCP-based and Not UDP-Based
Although some popular file transfer solutions such as Aspera, Signiant, and File Catalyst are UDP-based (either point-to-point using on-premises servers or over the cloud), many use TCP technology.
MASV uses TCP technology by choice, for several reasons. It allows our service to be much easier to set up and run, since no firewall changes are required. Our TCP-based transfers also guarantee that files and folder trees arrive in exactly the same structure as they're sent.
Although TCP is slower than UDP, MASV gets around this by using an accelerated private network of more than 150 servers in all corners of the world. That means your file packages only need to travel a short distance before they start riding our accelerated network. And they always arrive at your client’s or partner’s machines exactly how they were sent.
Interested in giving MASV a try, with no commitment and zero hassle? Sign up in seconds and send up to 100 GB for free today.
MASV File Transfer
Get 100 GB to use with the fastest large file transfer service available today, MASV.
A chemical reaction is a process that leads to the transformation of one set of chemical substances to another. Classically, chemical reactions encompass changes that only involve the positions of electrons in the forming and breaking of chemical bonds between atoms, with no change to the nuclei (no change to the elements present), and can often be described by a chemical equation. Nuclear chemistry is a sub-discipline of chemistry that involves the chemical reactions of unstable and radioactive elements where both electronic and nuclear changes may occur.
The substance (or substances) initially involved in a chemical reaction are called reactants or reagents. Chemical reactions are usually characterized by a chemical change, and they yield one or more products, which usually have properties different from the reactants. Reactions often consist of a sequence of individual sub-steps, the so-called elementary reactions, and the information on the precise course of action is part of the reaction mechanism. Chemical reactions are described with chemical equations, which symbolically present the starting materials, end products, and sometimes intermediate products and reaction conditions.
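A standard textbook example of such an equation is the complete combustion of methane, with reactants on the left, products on the right, and coefficients balancing each element:

\[ \mathrm{CH_4} + 2\,\mathrm{O_2} \;\rightarrow\; \mathrm{CO_2} + 2\,\mathrm{H_2O} \]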
Chemical reactions happen at a characteristic reaction rate at a given temperature and chemical concentration. Typically, reaction rates increase with increasing temperature because there is more thermal energy available to reach the activation energy necessary for breaking bonds between atoms.
Reactions may proceed in the forward or reverse direction until they go to completion or reach equilibrium. Reactions that proceed in the forward direction to approach equilibrium are often described as spontaneous, requiring no input of free energy to go forward. Non-spontaneous reactions require input of free energy to go forward (examples include charging a battery by applying an external electrical power source, or photosynthesis driven by absorption of electromagnetic radiation in the form of sunlight).
Different chemical reactions are used in combinations during chemical synthesis in order to obtain a desired product. In biochemistry, a consecutive series of chemical reactions (where the product of one reaction is the reactant of the next reaction) form metabolic pathways. These reactions are often catalyzed by protein enzymes. Enzymes increase the rates of biochemical reactions, so that metabolic syntheses and decompositions impossible under ordinary conditions can occur at the temperatures and concentrations present within a cell.
The general concept of a chemical reaction has been extended to reactions between entities smaller than atoms, including nuclear reactions, radioactive decays, and reactions between elementary particles as described by quantum field theory.
Chemical Reaction (song)
"Chemical Reaction'" is a song by German recording artist Sasha. It was written by Sasha, Pete Boyd Smith, Michael Kersting, and Stephan Baader for his second studio album ...You (2000), while production was overseen by the latter two. Released as the album's second single, it reached number seven in the Flemish portion of Belgium and the top forty in Austria, Germany and Switzerland.
Longman Dictionary of Contemporary English
n. (chemistry) a process in which one or more substances are changed into others; "there was a chemical reaction of the lime with the ground water" [syn: reaction]
n. (chemistry) A process, typically involving the breaking or making of interatomic bonds, in which one or more substances are changed into others.
Usage examples of "chemical reaction".
Furthermore, any change imposed on the insulin molecule by chemical reaction from without, unless the change is a rather trifling one that does not seriously affect the complexity of the molecule, produces loss of activity.
They had an instrument that expelled metal pellets at high speed by means of a controlled explosive chemical reaction.
However much the substance resulting from the chemical reaction of others might differ from these, its weight always proved to be the same as their total weight.
You can try to eliminate them with another chemical reaction, but that's going to have outputs also.
What if the compound deteriorates, and sets off a chemical reaction that causes it to explode?
There were hundreds of thousands, perhaps millions, of enzymes, each existing solely to aid a single chemical reaction.
Type:Interactive, Lesson Plan
This resource has been contributed by Winpossible, and can also be accessed on their website by clicking here.
Inductive vs. Deductive reasoning
In this mini-lesson you'll learn how to tell the difference between inductive and deductive reasoning. Inductive reasoning moves from specific observations to broader generalizations and theories. It is also known as the "bottom-up" approach. It begins with specific observations and ends with a conclusion that goes beyond any of the observations that led up to it. It is used to find answers to problems like "What is the missing number in the sequence 19, 23, 31, __, 35?". Deductive reasoning, on the other hand, starts from general statements rather than specific observations. It is also called the "top-down" approach, and is the opposite of inductive reasoning. For example: We know that all men are mortal. We also know that John is a man. Therefore, by deductive reasoning, we can say that John is mortal.
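The syllogism about John can also be written compactly in logical notation (shown here only as an optional formal rendering of the same argument):

\[ \forall x\,\big(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)\big),\;\; \mathrm{Man}(\mathrm{John}) \;\vdash\; \mathrm{Mortal}(\mathrm{John}) \]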
This FREE mini-lesson is a part of Winpossible's online course that covers all topics within Algebra I. Click on the video below to go through it. If you like it, you can buy our online course in Algebra I by clicking here.
- Mathematics > General
- Mathematics > Algebra
- Education > General
- Grade 9
- Grade 10
- Grade 11
- Grade 12
Keywords:logical video inductive deductive reasoning argument specific theory generalization "bottom-up" approach "top-down" approach practice questions quizzes
License Deed: Creative Commons Attribution 3.0
The number of days of extreme heat per year, when temperatures reach 50 degrees Celsius, has doubled since 1980, according to a BBC study.
Those temperatures are also being recorded in more and more areas of the world, posing an unprecedented challenge to our health and the way we live.
The total number of days above 50ºC increased in each of the past four decades.
Between 1980 and 2009, temperatures exceeded 50º C about 14 days a year on average, a figure that increased to 26 days per year between 2010 and 2019.
In the same period, temperatures of 45°C or more were recorded for an extra two weeks per year, on average.
“The increase can be attributed 100% to the burning of fossil fuels,” says Dr. Friederike Otto, a leading climate scientist.
As the entire planet warms, extreme temperatures become more likely and more intense.
High temperatures can be deadly to humans and nature, and cause major problems in buildings, roads, and power systems.
Temperatures of 50°C occur predominantly in the Middle East and the Gulf region.
And after record temperatures of 48.8°C in Italy and 49.6°C in Canada this summer, scientists have warned that temperatures above 50°C will be experienced elsewhere unless we cut fossil fuel emissions.
“We need to act quickly. The faster we reduce our emissions, the better off we will all be,” says climate researcher Sihan Li.
“With continued emissions and inaction, these extreme heat events will not only become more severe and frequent, but the emergency response and recovery will be more demanding,” warns Dr. Li.
The BBC analysis also found that in the past decade, maximum temperatures increased by 0.5°C compared with the longer-term average recorded between 1980 and 2009.
But these increases have not been felt equally around the world: in Eastern Europe, southern Africa and Brazil some maximum temperatures increased by more than 1°C, and parts of the Arctic and the Middle East saw increases of more than 2°C.
The scientists called for urgent action from the world leaders who will meet at the UN climate summit in November, where governments will be asked to commit to further emissions cuts to limit the rise in global temperature.
Impact of extreme heat
This BBC analysis is accompanied by a documentary series, Life at 50°C, which investigates how extreme heat is affecting people around the world.
Even below 50°C, high temperatures and humidity can create serious health risks.
Up to 1.2 billion people worldwide could face heat stress conditions by 2100 if current levels of global warming continue, according to a Rutgers University study published last year. That is at least four times more than the number affected today.
People also face difficult decisions as the landscape around them changes, as extreme heat increases the likelihood of droughts and wildfires.
Sheikh Kazem Al Kaabi grows wheat in a village in central Iraq that experiences extreme temperatures almost every year.
The land around him was once fertile enough to support him and his neighbors, but it has gradually become dry and barren.
“All this land was green, but all that is gone. Now it is a desert.”
Almost all the people in his village have moved to look for work in other provinces.
“I lost my brother, my dear friends and loyal neighbors. They shared everything with me, even my laughter. Now nobody shares anything with me, I’m just face to face with this empty land.”
Temperatures in my area exceeded 50°C, so why doesn't it appear?
Record temperature reports generally come from measurements taken at a particular weather station. But the data we have studied represents areas larger than those covered by a single station.
For example, Death Valley National Park in Southern California is one of the hottest places on Earth. Temperatures in certain parts of the park regularly exceed 50°C in summer.
But when maximum temperatures are averaged across the wider area around it, using different sources, the resulting figure falls below 50°C.
Where does the data come from?
This analysis uses maximum daily temperatures from the ERA5 dataset, produced by the Copernicus Climate Change Service.
The ERA5 combines meteorological observations from many sources, such as stations and satellites, with data from weather forecast models.
This process fills in the gaps left by poor coverage due to the lack of weather stations in many parts of the world.
What do we analyze?
Using the maximum temperature for every day from 1980 to 2020, we identified how often temperatures exceeded 50°C.
We counted the number of days and locations with a maximum temperature of 50°C or more for each year to determine the trend over time.
We also looked at the change in maximum temperatures, calculating the difference between the average maximum temperature over land and sea during the most recent decade (2010-2019) and the previous 30 years (1980-2009).
Averages for at least 30 consecutive years are known as climatologies. The 30-year climatologies are used to show how recent periods compare to a climatic average.
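As a rough illustration of that counting and comparison (a minimal sketch, not the BBC's actual pipeline; the tiny DataFrame below stands in for the real ERA5 data), the code counts 50°C days per year and compares the two climatological means.

```python
import pandas as pd

# Hypothetical input: one row per day and grid cell, with the daily maximum temperature in °C.
df = pd.DataFrame({
    "date": pd.to_datetime(["1985-07-01", "1985-07-02", "2015-07-01", "2015-07-02"]),
    "cell_id": [1, 1, 1, 1],
    "t_max": [49.2, 50.3, 50.6, 51.1],
})
df["year"] = df["date"].dt.year

# Count days (across all cells) where the maximum reached 50°C or more, per year.
days_above_50 = df[df["t_max"] >= 50.0].groupby("year").size()

# Compare the most recent decade's mean maximum with the 1980-2009 climatology.
baseline = df[df["year"].between(1980, 2009)]["t_max"].mean()
recent = df[df["year"].between(2010, 2019)]["t_max"].mean()

print(days_above_50)
print(f"Change in mean maximum: {recent - baseline:+.2f} °C")
```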
What do we mean by “place”?
Each location is a grid cell roughly 25 kilometres across, about 27-28 kilometres at the equator (0.25 degrees is roughly 0.25 × 111 km ≈ 28 km there). These cells can cover large areas and contain different types of landscape.
The grids in this dataset are 0.25 degrees latitude by 0.25 degrees longitude.
Methodology developed with the support of Dr Sihan Li, University of Oxford School of Geography and Environment, and Dr Zeke Hausfather, Berkeley Earth and The Breakthrough Institute. External review by the European Centre for Medium-Range Weather Forecasts (ECMWF). Special thanks to Professor Ed Hawkins of the University of Reading, and to Professor Richard Betts and Dr John Caesar of the UK's national weather service, the Met Office.
Data analysis and journalism by Nassos Stylianou and Becky Dale. Design by Prina Shah, Sana Jasemi and Joy Roxas. Development by Catriona Morrison, Becky Rush and Scott Jarvis. Data engineering by Alison Benjamin. Case studies by Namak Khoshnaw and Stephanie Stafford. Interview with Dr Otto by Monica Garnsey.
Weather stripe visualization courtesy of Professor Ed Hawkins and the University of Reading.
|
Computer memory is what stores information, both permanently and temporarily. Computer memory, in the form of integrated circuits, refers to the storage space located in the computer hardware. It's the part of a computer that stores the data (to be processed) and instructions (for data processing). Since it's storage we're talking about, it's important to know how much information a particular memory unit can hold, and the only effective way to know that is to learn how computer memory is measured. Maybe you've already guessed that this article is all about computer memory measurement.
Where the human brain is the biological version of memory, computer memory is the technical one, storing information through two distinct mechanisms: ROM (Read-Only Memory) and RAM (Random Access Memory). These devices are built from integrated circuits, and operating systems, hardware, and software applications all rely on them to function. Have you worked out yet how computer memory is measured? Well, you will soon…
But before you get to learn the measurement techniques, you need to gather some insights into the entire ‘memory’ thing that a computer runs on and users benefit from.
Computer memory that we’re familiar with can be volatile and non-volatile, or RAM and ROM respectively, just in case you’re interested in knowing what they’re called in technical terms.
Volatile memory, or RAM, relies on the power that a computer runs on: this kind of memory loses what's stored the moment a device loses power. You might have seen some of your work become unavailable once the computer reboots or shuts down. Non-volatile memory (NVRAM is one form of it) keeps its contents through power outages and sudden reboots. Ever heard of EPROM? It's a physical form of non-volatile memory. Now, on to how computer memory is actually measured.
It all starts with the units, which aren't quite like the units we use in the everyday world. Here are the units of memory measurement.
- Binary Digit (Bit)
- Byte (B)
- Kilobyte (KB)
- Megabyte (MB)
- Gigabyte (GB)
- Terabyte (TB)
- Petabyte (PB)
- Exabyte (EB)
- Zettabyte (ZB)
- Yottabyte (YB)
About Binary Digit (Bit) and Byte
The bit is the smallest unit of measurement and has a binary value of 0 or 1. Four bits grouped together make one nibble, while eight bits make a byte. Computers are designed to manipulate bits in fixed-size groups which, in technical jargon, are known as words. Users barely work with one bit of information at a time, as it's the smallest increment of computer data. A byte, on the other hand, contains usable information, such as a single ASCII character.
It's easy for users to get confused by the notations for bits and bytes because of their similar abbreviations and names. Let's clear that confusion up.
Numbers of bits are abbreviated with a lowercase "b", never an uppercase one; bytes take the uppercase "B". As you've learned, one byte equals eight bits, and keeping this in mind is crucial to making sense of the notation.
Consider the advertisement for a broadband package. If your connection is advertised with a download speed of 3.0 Mbps, that means 3.0 megabits per second, which works out to 0.375 megabytes per second, written as 0.375 MBps.
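The conversion in that broadband example is just a division by eight; a throwaway Python check (the 3.0 Mbps figure is simply the example above) makes the point:

```python
# Advertised download speed in megabits per second (lowercase "b" = bits).
speed_mbps = 3.0

# One byte is eight bits, so divide by 8 to get megabytes per second (uppercase "B" = bytes).
speed_MBps = speed_mbps / 8
print(f"{speed_mbps} Mbps = {speed_MBps} MBps")  # 3.0 Mbps = 0.375 MBps
```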
Understanding the Distinction between Binary and Decimal Systems
Because computers use the binary (base two) method which is unlike the decimal (base ten) method, a kilobyte (KB) contains 1,024 bytes, not 1,000 bytes as many might have mistakenly regarded so far. So, 1 MB contains 1,048,576 bytes or 1,024 kilobytes instead of 1,000,000 bytes or 1,000 kilobytes. Similarly, 1 gigabyte contains 1,024 megabytes, or 1,073,741,824 bytes instead of 1,000 megabytes or 1,000,000,000 bytes. Here’s the mathematical calculation.
Our decimal method is based on 10
10^1 = 10; 10^2 = 10 x 10 = 100; 10^3 = 10 x 10 x 10 = 1,000; 10^6 = 1,000,000
Computer’s Binary Digit System is based on 2
2^1 = 2; 2^2 = 2 x 2 = 4; 2^3 = 2 x 2 x 2 = 8; 2^10 = 1,024; 2^20 = 1,048,576
How Do Manufacturers and Computers Define Memory Capacity?
Hard drive manufacturers take a different route, avoiding binary units altogether. Using the decimal system, they define the space a particular hard drive comes with: 1 MB is described as one million bytes, 1 GB as one billion bytes, and so on. That's why there's always a difference between the capacity printed in the manufacturer's specifications and the capacity reported by your computer.
For example, a drive sold as 10 GB stores 10,000,000,000 bytes under the decimal definition. A computer counting in binary would expect a full 10 GB to be 10,737,418,240 bytes, so it reports the drive's 10,000,000,000 bytes as only about 9.31 GB. A beginner might take this as a sign of malfunction, but it isn't; it's simply the same capacity defined from two different angles.
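A short sketch makes the discrepancy explicit; the 10 GB figure is the article's own example, and the 1,024-based divisor reflects how operating systems have traditionally reported capacity.

```python
# A drive marketed as 10 GB uses the decimal definition: 1 GB = 1,000,000,000 bytes.
marketed_bytes = 10 * 1000**3

# The operating system divides by the binary gigabyte: 1,073,741,824 bytes.
reported_gb = marketed_bytes / 1024**3
print(f"{marketed_bytes:,} bytes is reported as about {reported_gb:.2f} GB")  # ~9.31 GB
```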
A Table of Computer Memory Units
Here's a table of all the units, each expressed in terms of the next smaller one.
|Unit + Amount||Equivalent Unit|
|8 Bits||1 Byte|
|1024 Bytes||1 Kilobyte|
|1024 Kilobytes||1 Megabyte|
|1024 Megabytes||1 Gigabyte|
|1024 Gigabytes||1 Terabyte|
|1024 Terabytes||1 Petabyte|
|1024 Petabytes||1 Exabyte|
|1024 Exabytes||1 Zettabyte|
|1024 Zettabytes||1 Yottabyte|
Here's another table expressing the larger units in their byte equivalents.
|1 KB||1,024 Bytes|
|1 MB||1,048,576 Bytes|
|1 GB||1,073,741,824 Bytes|
|1 TB||1,099,511,627,776 Bytes|
|1 PB||1,125,899,906,842,624 Bytes|
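The first table is essentially repeated multiplication by 1,024, so a small helper function can walk up those units to print any byte count in a readable form. This is a generic sketch, not tied to any particular operating system or tool:

```python
# Units from the table above, each 1,024 times larger than the previous one.
UNITS = ["Bytes", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"]

def human_readable(num_bytes):
    value = float(num_bytes)
    for unit in UNITS:
        if value < 1024 or unit == UNITS[-1]:
            return f"{value:.2f} {unit}"
        value /= 1024

print(human_readable(1_048_576))          # 1.00 MB
print(human_readable(1_099_511_627_776))  # 1.00 TB
```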
A Few Practical Examples of Computer Memory
Some of the most commonly used units of computer memory are megabytes and gigabytes. So, in terms of practical applications, we often see the following things.
- 1 megabyte provides adequate space for a medium-sized novel.
- 1 terabyte (1,024 gigabytes) provides space for the books and documents of a typical large library, roughly the amount of data that as many as 1,610 CDs can store.
- 1 petabyte contains 1,024 terabytes, roughly equal to the data on about 223,100 DVDs (if written to disc).
It may sound rather astonishing that the number of DVDs mentioned here makes a stack which can be as tall as 878 feet. In addition, if CDs are used in place of DVDs, the stack would run as tall as 1 mile.
So now you're quite familiar with how computer memory is measured. This knowledge should come in handy when you're planning to get a storage device for your particular needs, or when you're simply curious. |
We're being asked to identify which ion has the greater O-N-O bond angles. Since we don't know the Lewis structures for the nitrite ion (NO2-) or the nitrate ion (NO3-), we need to work through the following steps:
Step 1: Determine the central atom in this molecule.
Step 2: Calculate the total number of valence electrons present.
Step 3: Draw the Lewis structure for the molecule.
Step 4: Determine the number of electron groups around the indicated atom.
Step 5: Determine the electron geometry and bond angle using this:
|Electron Regions||Electronic Geometry||Bond Angles|
|2||linear||180˚|
|3||trigonal planar||120˚|
|4||tetrahedral||109.5˚|
|5||trigonal bipyramidal||90˚, 120˚, and 180˚|
|6||octahedral||90˚ and 180˚|
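For readers who like to see the table as data, here is a tiny Python lookup (purely illustrative, not part of the original problem) mapping the number of electron regions to the electronic geometry and its ideal bond angles:

```python
# Electron regions -> (electronic geometry, ideal bond angles in degrees)
GEOMETRY = {
    2: ("linear", [180]),
    3: ("trigonal planar", [120]),
    4: ("tetrahedral", [109.5]),
    5: ("trigonal bipyramidal", [90, 120, 180]),
    6: ("octahedral", [90, 180]),
}

shape, angles = GEOMETRY[3]  # e.g. three electron regions around the central N in NO3-
print(shape, angles)         # trigonal planar [120]
```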
Are the O-N-O bond angles greater in the nitrite ion (NO2-) or the nitrate ion (NO3-)?
Please select the answer that best explains your conclusion.
a. Nitrite has the greater bond angle because a trigonal planar bond angle is greater than a tetrahedral bond angle
b. Nitrite has the greater bond angle because a linear bond angle is greater than a trigonal planar bond angle
c. Nitrate has the greater bond angle because nitrite’s lone pairs take up more space than bonds
d. Nitrate has the greater bond angle because double bonds take up more space than single bonds
Frequently Asked Questions
What scientific concept do you need to know in order to solve this problem?
Our tutors have indicated that to solve this problem you will need to apply the Bond Angles concept. You can view video lessons to learn Bond Angles. Or if you need more Bond Angles practice, you can also practice Bond Angles practice problems. |
History of Spain
The history of Spain dates back to the Early Middle Ages. In 1516, Habsburg Spain unified a number of disparate predecessor kingdoms; its modern form of a constitutional monarchy was introduced in 1813, and the current democratic constitution dates to 1978.
After the completion of the Reconquista, the kingdoms of Spain were united under Habsburg rule in 1516. At the same time, the Spanish Empire began to expand to the New World across the ocean, marking the beginning of the Golden Age of Spain, during which, from the early 1500s to the 1650s, Habsburg Spain was among the most powerful states in the world.
During this period, Spain was involved in all major European wars, including the Italian Wars, the Eighty Years' War, the Thirty Years' War, and the Franco-Spanish War. In the later 17th century, however, Spanish power began to decline, and after the death of the last Habsburg ruler, the War of the Spanish Succession ended with the relegation of Spain, now under Bourbon rule, to the status of a second-rate power with a reduced influence in European affairs. The so-called Bourbon Reforms attempted the renewal of state institutions, with some success, but as the century ended, instability set in with the French Revolution and the Peninsular War, so that Spain never regained its former strength.
Fragmented by the war, Spain was destabilised at the beginning of the 19th century, as political parties representing "liberal", "reactionary", and "moderate" groups fought for, and won, short-lived control throughout the remainder of the century, none of them strong enough to bring about lasting stability. The former Spanish Empire overseas quickly disintegrated with the Latin American wars of independence and eventually the loss of what old colonies remained in the Spanish–American War of 1898.
A tenuous balance between liberal and conservative forces was struck in the establishment of constitutional monarchy during 1874–1931 but brought no lasting solution, and Spain descended into Civil War between the Republican and the Nationalist factions.
The war ended in a nationalist dictatorship, led by Francisco Franco, which controlled the Spanish government until 1975. The post-war decades were relatively stable (with the notable exception of an armed independence movement in the Basque Country), and the country experienced rapid economic growth in the 1960s and early 1970s.
Only with the death of Franco in 1975 did Spain return to Bourbon constitutional monarchy headed by Prince Juan Carlos and to democracy. Spain entered the European Economic Community in 1986 (transformed into the European Union with the Maastricht Treaty of 1992), and the Eurozone in 1999. The financial crisis of 2007–08 ended a decade of economic boom and Spain entered a recession and debt crisis and remains plagued by very high unemployment and a weak economy.
Spain is ranked as a middle power able to exert regional influence but unlike other powers with similar status (such as Germany, Italy and Japan) it is not part of the G8 and participates in the G20 only as a guest. Spain is part of the G6.
The earliest record of hominids living in Western Europe has been found in the Spanish cave of Atapuerca; a flint tool found there dates from 1.4 million years ago, and early human fossils date to roughly 1.2 million years ago. Modern humans in the form of Cro-Magnons began arriving in the Iberian Peninsula from north of the Pyrenees some 35,000 years ago. The most conspicuous sign of prehistoric human settlements are the famous paintings in the northern Spanish cave of Altamira, which were done c. 15,000 BC and are regarded as paramount instances of cave art.
Furthermore, archeological evidence in places like Los Millares and El Argar, both in the province of Almería, and La Almoloya near Murcia suggests developed cultures existed in the eastern part of the Iberian Peninsula during the late Neolithic and the Bronze Age.
Spanish prehistory extends to the pre-Roman Iron Age cultures that controlled most of Iberia: those of the Iberians, Celtiberians, Tartessians, Lusitanians, and Vascones and trading settlements of Phoenicians, Carthaginians, and Greeks on the Mediterranean coast.
Early history of the Iberian Peninsula
Before the Roman conquest the major cultures along the Mediterranean coast were the Iberians, the Celts in the interior and north-west, the Lusitanians in the west, and the Tartessians in the southwest. The seafaring Phoenicians, Carthaginians, and Greeks successively established trading settlements along the eastern and southern coast. The first Greek colonies, such as Emporion (modern Empúries), were founded along the northeast coast in the 9th century BC, leaving the south coast to the Phoenicians.
The Greeks are responsible for the name Iberia, apparently after the river Iber (Ebro). In the 6th century BC, the Carthaginians arrived in Iberia, struggling first with the Greeks, and shortly after, with the newly arriving Romans for control of the Western Mediterranean. Their most important colony was Carthago Nova (Latin name of modern-day Cartagena).
The peoples whom the Romans met at the time of their invasion in what is now known as Spain were the Iberians, inhabiting an area stretching from the northeast part of the Iberian Peninsula through the southeast. The Celts mostly inhabited the inner and north-west part of the peninsula. In the inner part of the peninsula, where both groups were in contact, a mixed culture arose, the Celtiberians. The Celtiberian Wars were fought between the advancing legions of the Roman Republic and the Celtiberian tribes of Hispania Citerior from 181 to 133 BC. The Roman conquest of the peninsula was completed in 19 BC.
Hispania was the name used for the Iberian Peninsula under Roman rule from the 2nd century BC. The populations of the peninsula were gradually culturally Romanized, and local leaders were admitted into the Roman aristocratic class.
The Romans improved existing cities, such as Tarragona (Tarraco), and established others like Zaragoza (Caesaraugusta), Mérida (Augusta Emerita), Valencia (Valentia), León ("Legio Septima"), Badajoz ("Pax Augusta"), and Palencia. The peninsula's economy expanded under Roman tutelage. Hispania supplied Rome with food, olive oil, wine and metal. The emperors Trajan, Hadrian, and Theodosius I, the philosopher Seneca, and the poets Martial, Quintilian, and Lucan were born in Hispania. Hispanic bishops held the Council of Elvira around 306.
The collapse of the Western Roman Empire did not lead to the same wholesale destruction of Western classical society as happened in areas like Roman Britain, Gaul and Germania Inferior during the Early Middle Ages, although the institutions and infrastructure did decline. Spain's present languages, its religion, and the basis of its laws originate from this period. The centuries of uninterrupted Roman rule and settlement left a deep and enduring imprint upon the culture of Spain.
Gothic Hispania (5th–8th centuries)
The first Germanic tribes to invade Hispania arrived in the 5th century, as the Roman Empire decayed. The Visigoths, Suebi, Vandals and Alans arrived in Spain by crossing the Pyrenees mountain range, leading to the establishment of the Suebi Kingdom in Gallaecia, in the northwest, the Vandal Kingdom of Vandalusia (Andalusia), and the Visigothic Kingdom in Toledo. The Romanized Visigoths entered Hispania in 415. After the conversion of their monarchy to Roman Catholicism and after conquering the disordered Suebic territories in the northwest and Byzantine territories in the southeast, the Visigothic Kingdom eventually encompassed a great part of the Iberian Peninsula.
As the Roman Empire declined, Germanic tribes invaded the former empire. Some were foederati, tribes enlisted to serve in Roman armies, and given land within the empire as payment, while others, such as the Vandals, took advantage of the empire's weakening defenses to seek plunder within its borders. Those tribes that survived took over existing Roman institutions, and created successor-kingdoms to the Romans in various parts of Europe. Iberia was taken over by the Visigoths after 410.
At the same time, there was a process of "Romanization" of the Germanic and Hunnic tribes settled on both sides of the limes (the fortified frontier of the Empire along the Rhine and Danube rivers). The Visigoths, for example, were converted to Arian Christianity around 360, even before they were pushed into imperial territory by the expansion of the Huns.
In the winter of 406, taking advantage of the frozen Rhine, refugees from (Germanic) Vandals and Sueves, and the (Sarmatian) Alans, fleeing the advancing Huns, invaded the empire in force. Three years later they crossed the Pyrenees into Iberia and divided the Western parts, roughly corresponding to modern Portugal and western Spain as far as Madrid, between them.
The Visigoths, having sacked Rome two years earlier, arrived in the region in 412, founding the Visigothic kingdom of Toulouse (in the south of modern France) and gradually expanded their influence into the Iberian peninsula at the expense of the Vandals and Alans, who moved on into North Africa without leaving much permanent mark on Hispanic culture. The Visigothic Kingdom shifted its capital to Toledo and reached a high point during the reign of Leovigild.
The Visigothic Kingdom conquered all of Hispania and ruled it until the early 8th century, when the peninsula fell to the Muslim conquests. The Muslim state in Iberia came to be known as Al-Andalus. After a period of Muslim dominance, the medieval history of Spain is dominated by the long Christian Reconquista or "reconquest" of the Iberian Peninsula from Muslim rule. The Reconquista gathered momentum during the 12th century, leading to the establishment of the Christian kingdoms of Portugal, Aragon, Castile and Navarre and by 1250, had reduced Muslim control to the Emirate of Granada in the south-east of the peninsula. Muslim rule in Granada survived until 1492, when it fell to the Catholic Monarchs.
Importantly, Spain never saw a decline in interest in classical culture to the degree observable in Britain, Gaul, Lombardy and Germany. The Visigoths, having assimilated Roman culture during their tenure as foederati, tended to maintain more of the old Roman institutions, and they had a unique respect for legal codes that resulted in continuous frameworks and historical records for most of the period between 415, when Visigothic rule in Spain began, and 711, when it is traditionally said to end. However, during the Visigothic dominion the cultural efforts made by the Franks and other Germanic tribes were not felt in the peninsula, nor achieved in the lesser kingdoms that emerged after the Muslim conquest.
The proximity of the Visigothic kingdoms to the Mediterranean and the continuity of western Mediterranean trade, though in reduced quantity, supported Visigothic culture. Arian Visigothic nobility kept apart from the local Catholic population. The Visigothic ruling class looked to Constantinople for style and technology while the rivals of Visigothic power and culture were the Catholic bishops – and a brief incursion of Byzantine power in Córdoba.
Spanish Catholic religion also coalesced during this time. The period of rule by the Visigothic Kingdom saw the spread of Arianism briefly in Spain. The Councils of Toledo debated creed and liturgy in orthodox Catholicism, and the Council of Lerida in 546 constrained the clergy and extended the power of law over them under the blessings of Rome. In 587, the Visigothic king at Toledo, Reccared, converted to Catholicism and launched a movement in Spain to unify the various religious doctrines that existed in the land. This put an end to dissension on the question of Arianism. (For additional information about this period, see the History of Roman Catholicism in Spain.)
The Visigoths inherited from Late Antiquity a sort of feudal system in Spain, based in the south on the Roman villa system and in the north drawing on their vassals to supply troops in exchange for protection. The bulk of the Visigothic army was composed of slaves, raised from the countryside. The loose council of nobles that advised Spain's Visigothic kings and legitimized their rule was responsible for raising the army, and only upon its consent was the king able to summon soldiers.
The impact of Visigothic rule was not widely felt on society at large, and certainly not compared to the vast bureaucracy of the Roman Empire; they tended to rule as barbarians of a mild sort, uninterested in the events of the nation and economy, working for personal benefit, and little literature remains to us from the period. They did not, until the period of Muslim rule, merge with the Spanish population, preferring to remain separate, and indeed the Visigothic language left only the faintest mark on the modern languages of Iberia.
The most visible effect was the depopulation of the cities as their inhabitants moved to the countryside. Even while the country enjoyed a degree of prosperity when compared to the famines of France and Germany in this period, the Visigoths felt little reason to contribute to the welfare, permanency, and infrastructure of their people and state. This contributed to their downfall, as they could not count on the loyalty of their subjects when the Moors arrived in the 8th century.
Islamic al-Andalus and the Christian Reconquest (8th–15th centuries)
The Arab Islamic conquest dominated most of North Africa by 710 AD. In 711 an Islamic Berber raiding party, led by Tariq ibn Ziyad, was sent to Iberia to intervene in a civil war in the Visigothic Kingdom. Tariq's army contained about 7,000 Berber horsemen, and Musa bin Nusayr is said to have sent an additional 5,000 reinforcements after the conquest. Crossing the Strait of Gibraltar, they won a decisive victory in the summer of 711 when the Visigothic King Roderic was defeated and killed on July 19 at the Battle of Guadalete.
Tariq's commander, Musa, quickly crossed with Arab reinforcements, and by 718 the Muslims were in control of nearly the whole Iberian Peninsula. The advance into Western Europe was only stopped in what is now north-central France by the West Germanic Franks under Charles Martel at the Battle of Tours in 732.
A decisive victory for the Christians took place at Covadonga, in the north of the Iberian Peninsula, in the summer of 722. In a minor battle known as the Battle of Covadonga, a Muslim force sent to put down the Christian rebels in the northern mountains was defeated by Pelagius of Asturias, who established the monarchy of the Christian Kingdom of Asturias. In 739, a rebellion in Galicia, assisted by the Asturians, drove out Muslim forces and it joined the Asturian kingdom. The Kingdom of Asturias became the main base for Christian resistance to Islamic rule in the Iberian Peninsula for several centuries.
Caliph Al-Walid I had paid great attention to the expansion of an organized military, building the strongest navy in the Umayyad Caliphate era (the second major Arab dynasty after Mohammad and the first Arab dynasty of Al-Andalus). It was this tactic that supported the ultimate expansion to Spain. Caliph Al-Walid I's reign is considered as the apex of Islamic power, though Islamic power in Spain specifically climaxed in the 10th century under Abd-ar-Rahman III.
Abbasids overthrow the Umayyad Caliphate
The rulers of Al-Andalus had been granted the rank of Emir by the Umayyad Caliph Al-Walid I in Damascus. When the Umayyad Caliphate, which had originated in the Hejaz on the Arabian peninsula, was overthrown by the Abbasid Caliphate (the second major Arab dynasty), some of the surviving Umayyad leaders escaped to Al-Andalus; Emir Abd al-Rahman I challenged the Abbasids and declared Córdoba an independent emirate. Al-Andalus was rife with internal conflict between the Islamic Umayyad rulers and people and the Christian Visigoth-Roman leaders and people.
In the 10th century Abd-ar-Rahman III declared the Caliphate of Córdoba, effectively breaking all ties with the Egyptian and Syrian caliphs. The Caliphate was mostly concerned with maintaining its power base in North Africa, but these possessions eventually dwindled to the Ceuta province. The first navy of the Caliph of Córdoba or Emir was built after the humiliating Viking ascent of the Guadalquivir in 844 when they sacked Seville.
In 942, pagan Magyars raided northern Spain. Meanwhile, a slow but steady migration of Christian subjects to the northern kingdoms in Christian Hispania was slowly increasing the latter's power. Even so, Al-Andalus remained vastly superior to all the northern kingdoms combined in population, economy and military might; and internal conflict between the Christian kingdoms contributed to keep them relatively harmless.
Al-Andalus coincided with La Convivencia, an era of relative religious tolerance, and with the Golden age of Jewish culture in the Iberian Peninsula. (See: Emir Abd-ar-Rahman III 912; the Granada massacre 1066).
Warfare between Muslims and Christians
Muslim interest in the peninsula returned in force around the year 1000 when Al-Mansur (also known as Almanzor) sacked Barcelona in 985. Under his son, other Christian cities were subjected to numerous raids. After his son's death, the caliphate plunged into a civil war and splintered into the so-called "Taifa Kingdoms". The Taifa kings competed against each other not only in war but also in the protection of the arts, and culture enjoyed a brief upswing.
Medieval Spain was the scene of almost constant warfare between Muslims and Christians. The Almohads, who had taken control of the Almoravids' Maghribi and al-Andalus territories by 1147, surpassed the Almoravides in fundamentalist Islamic outlook, and they treated the non-believer dhimmis harshly. Faced with the choice of death, conversion, or emigration, many Jews and Christians left.
By the mid-13th century Emirate of Granada was the only independent Muslim realm in Spain, which would last until 1492. Despite the decline in Muslim-controlled kingdoms, it is important to note the lasting effects exerted on the peninsula by Muslims in technology, culture, and society.
The Taifa kingdoms lost ground to the Christian realms in the north. After the loss of Toledo in 1085, the Muslim rulers reluctantly invited the Almoravides, who invaded Al-Andalus from North Africa and established an empire. In the 12th century the Almoravid empire broke up again, only to be taken over by the Almohad invasion, who were defeated by an alliance of the Christian kingdoms in the decisive battle of Las Navas de Tolosa in 1212. By 1250, nearly all of Iberia was back under Christian rule with the exception of the Muslim kingdom of Granada.
The Kings of Aragón ruled territories that consisted of not only the present administrative region of Aragon but also Catalonia, and later the Balearic Islands, Valencia, Sicily, Naples and Sardinia (see Crown of Aragon). Considered by most to have been the first mercenary company in Western Europe, the Catalan Company proceeded to occupy the Duchy of Athens, which they placed under the protection of a prince of the House of Aragon and ruled until 1379.
The Spanish language and universities
In the 13th century, many languages were spoken in the Christian kingdoms of Iberia. These were the Latin-based Romance languages of Castilian, Aragonese, Catalan, Galician, Aranese, Asturian and Leonese, and the ancient language isolate of Basque. Throughout the century, Castilian (what is also known today as Spanish) gained a growing prominence in the Kingdom of Castile as the language of culture and communication, at the expense of Leonese and of other close dialects.
One example of this is the epic poem (cantar) written about the military leader El Cid. In the last years of the reign of Ferdinand III of Castile, Castilian began to be used for certain types of documents, and it was during the reign of Alfonso X that it became the official language. Henceforth all public documents were written in Castilian; likewise all translations were made into Castilian instead of Latin.
At the same time, Catalan and Galician became the standard languages in their respective territories, developing important literary traditions and being the normal languages in which public and private documents were issued: Galician from the 13th to the 16th century in Galicia and nearby regions of Asturias and Leon, and Catalan from the 12th to the 18th century in Catalonia, the Balearic Islands and Valencia, where it was known as Valencian. Both languages were later displaced from their official status by Castilian Spanish, a situation that lasted until the 20th century.
In the 13th century many universities were founded in León and in Castile. Some, such as the Leonese Salamanca and the Castilian Palencia, were among the earliest universities in Europe.
Early Modern Spain
In the 15th century, the most important among all of the separate Christian kingdoms that made up the old Hispania were the Kingdom of Castile (occupying northern and central portions of the Iberian Peninsula), the Kingdom of Aragon (occupying northeastern portions of the peninsula), and the Kingdom of Portugal occupying the far western Iberian Peninsula. The rulers of the kingdoms of Castile and Aragon were allied with dynastic families in Portugal, France, and other neighboring kingdoms.
The death of King Henry IV of Castile in 1474 set off a struggle for power called the War of the Castilian Succession (1475–79). Contenders for the throne of Castile were Henry's one-time heir Joanna la Beltraneja, supported by Portugal and France, and Henry's half-sister Queen Isabella I of Castile, supported by the Kingdom of Aragon and by the Castilian nobility.
Isabella retained the throne and ruled jointly with her husband, King Ferdinand II. Isabella and Ferdinand had married in 1469 in Valladolid. Their marriage united both crowns and set the stage for the creation of the Kingdom of Spain, at the dawn of the modern era. That union, however, was a union in title only, as each region retained its own political and judicial structure. Pursuant to an agreement signed by Isabella and Ferdinand on January 15, 1474, Isabella held more authority over the newly unified Spain than her husband, although their rule was shared. Together, Isabella of Castile and Ferdinand of Aragon were known as the "Catholic Monarchs" (Spanish: los Reyes Católicos), a title bestowed on them by Pope Alexander VI.
Conclusion of the Reconquista and start of the Spanish Inquisition
The monarchs oversaw the final stages of the Reconquista of Iberian territory from the Moors with the conquest of Granada, conquered the Canary Islands, and expelled the Jews from Spain under the Alhambra Decree. Although until the 13th century religious minorities (Jews and Muslims) had enjoyed considerable tolerance in Castilla and Aragon – the only Christian kingdoms where Jews were not restricted from any professional occupation – the situation of the Jews collapsed over the 14th century, reaching a climax in 1391 with large scale massacres in every major city except Ávila.
Over the next century, half of the estimated 80,000 Spanish Jews converted to Christianity (becoming "conversos"). The final step was taken by the Catholic Monarchs, who, in 1492, ordered the remaining Jews to convert or face expulsion from Spain. Depending on different sources, the number of Jews actually expelled, traditionally estimated at 120,000 people, is now believed to have numbered about 40,000.
Over the following decades, Muslims faced the same fate; and about 60 years after the Jews, they were also compelled to convert ("Moriscos") or be expelled. However, sufficient numbers of Moriscos stayed that Muslim culture remained influential in Spain. Jews and Muslims were not the only people to be persecuted during this time period. All Roma (Gitano, Gypsy) males between the ages of 18 and 26 were forced to serve in galleys – which was equivalent to a death sentence – but the majority managed to hide and avoid arrest.
Isabella and Ferdinand authorized the 1492 expedition of Christopher Columbus, who became the first known European to reach the New World since Leif Ericson. This and subsequent expeditions led to an influx of wealth into Spain, supplementing income from within Castile for the state that would prove to be a dominant power of Europe for the next two centuries.
Isabella ensured long-term political stability in Spain by arranging strategic marriages for each of her five children. Her firstborn, a daughter named Isabella, married Afonso of Portugal, forging important ties between these two neighboring countries and hopefully ensuring future alliance, but Isabella soon died before giving birth to an heir. Juana, Isabella's second daughter, married into the Habsburg dynasty when she wed Philip the Fair, the son of Maximilian I of Austria, Holy Roman Emperor, making Philip a likely heir to the crown of the Holy Roman Empire.
This ensured an alliance with the Habsburgs and the Holy Roman Empire, a powerful, far-reaching territory that assured Spain's future political security. Isabella's only son, Juan, married Margaret of Austria, further strengthening ties with the Habsburg dynasty. Isabella's fourth child, Maria, married Manuel I of Portugal, strengthening the link forged by her older sister's marriage. Her fifth child, Catherine, married King Henry VIII of England and was mother to Queen Mary I of England.
The Spanish Empire was one of the first modern global empires. It was also one of the largest empires in world history. In the 16th century, Spain and Portugal were in the vanguard of European global exploration and colonial expansion. The two kingdoms of the Iberian Peninsula competed with each other in conquest and in the opening of trade routes across the oceans. Spanish imperial conquest and colonization began with two Castilian expeditions. The first was an expedition to the Canary Islands in 1312 by a Castilian fleet led by a Genoese, Lancelotto Malocello. The second was another expedition to the Canaries in 1402 led by French adventurers, Jean de Béthencourt, Lord of Grainville in Normandy and Gadifer de la Salle of Poitou, which began the Castilian conquest of the Canary Islands, completed in 1495.
In the 15th and 16th centuries, trade flourished across the Atlantic between Spain and the Americas and across the Pacific between East Asia and Mexico via the Philippines. Conquistadors deposed the Aztec, Inca and Maya governments with extensive help from local factions and laid claim to vast stretches of land in North and South America.
This New World empire was at first a disappointment, as the natives had little to trade, though settlement did encourage trade. Diseases such as smallpox and measles that arrived with the colonizers devastated the native populations, especially in the densely populated regions of the Aztec, Maya and Inca civilizations, and this reduced the economic potential of conquered areas.
In the 1520s, large-scale extraction of silver from the rich deposits of Mexico's Guanajuato began to be greatly augmented by the silver mines in Mexico's Zacatecas and Bolivia's Potosí from 1546. These silver shipments re-oriented the Spanish economy, leading to the importation of luxuries and grain. The resource-rich colonies of Spain thus caused large cash inflows for the country. They also became indispensable in financing the military capability of Habsburg Spain in its long series of European and North African wars, though, with the exception of a few years in the 17th century, Spain itself (Castile in particular) was by far the most important source of revenue.
Spain enjoyed a cultural golden age in the 16th and 17th centuries. For a time, the Spanish Empire dominated the oceans with its experienced navy and ruled the European battlefield with its fearsome and well trained infantry, the famous tercios, in the words of the prominent French historian Pierre Vilar, "enacting the most extraordinary epic in human history".
The financial burden within the peninsula fell on the backs of the peasant class while the nobility enjoyed an increasingly lavish lifestyle. From the incorporation of the Portuguese Empire in 1580 (lost in 1640) until the loss of its North and South American colonies in the 19th century, Spain maintained the largest empire in the world even though it suffered fluctuating military and economic fortunes from the 1640s.
Confronted by the new experiences, difficulties and suffering created by empire-building, Spanish thinkers formulated some of the first modern thoughts on natural law, sovereignty, international law, war, and economics; there were even questions about the legitimacy of imperialism – in related schools of thought referred to collectively as the School of Salamanca. Despite these innovations, many motives for the empire were rooted in the Middle Ages. Religion played a very strong role in the spread of the Spanish empire. The thought that Spain could bring Christianity to the New World certainly played a strong role in the expansion of Spain's empire.
Spanish Kingdoms under the Habsburgs (16th–17th centuries)
Spain's world empire reached its greatest territorial extent in the late 18th century, but it was under the Habsburg dynasty, in the 16th and 17th centuries, that it reached the peak of its power and then began its decline. When Spain's first Habsburg ruler Charles I became king of Spain in 1516, Spain became central to the dynastic struggles of Europe. After he became king of Spain, Charles also became Charles V, Holy Roman Emperor, and because of his widely scattered domains was not often in Spain. As he approached the end of his life he made provision for the division of the Habsburg inheritance into two parts. On the one hand was Spain, its possessions in Europe, North Africa, the Americas and the Netherlands; on the other hand was the Holy Roman Empire. This was to create enormous difficulties for his son Philip II of Spain.
Philip II became king on Charles I's abdication in 1556. Spain largely escaped the religious conflicts that were raging throughout the rest of Europe and remained firmly Roman Catholic. Philip saw himself as a champion of Catholicism, both against the Muslim Ottoman Empire and the Protestant heretics.
In the 1560s, plans to consolidate control of the Netherlands led to unrest, which gradually led to the Calvinist leadership of the revolt and the Eighty Years' War. This conflict consumed much Spanish expenditure during the later 16th century. Conflicts included an attempt to conquer England – a cautious supporter of the Dutch – in the unsuccessful Spanish Armada, an early battle in the Anglo-Spanish War (1585–1604), and war with France (1590–98).
Despite these problems, the growing inflow of New World silver from mid-16th century, the justified military reputation of the Spanish infantry and even the navy quickly recovering from its Armada disaster, made Spain the leading European power, a novel situation of which its citizens were only just becoming aware. The Iberian Union with Portugal in 1580 not only unified the peninsula, but added that country's worldwide resources to the Spanish crown.
However, economic and administrative problems multiplied in Castile, and the weakness of the native economy became evident in the following century. Rising inflation, financially draining wars in Europe, the ongoing aftermath of the expulsion of the Jews and Moors from Spain, and Spain's growing dependency on the gold and silver imports, combined to cause several bankruptcies that caused economic crisis in the country, especially in heavily burdened Castile.
Barbary pirates from North Africa became an increasing problem. The coastal villages of Spain and of the Balearic Islands were frequently attacked. Formentera was even temporarily abandoned by its population. This occurred also along long stretches of the Spanish and Italian coasts, a relatively short distance across a calm sea from the pirates in their North African lairs. The most famous corsair was the Turkish Barbarossa ("Redbeard"). According to Robert Davis between 1 million and 1.25 million Europeans were captured by North African pirates and sold as slaves in North Africa and Ottoman Empire between the 16th and 19th centuries. This was gradually alleviated as Spain and other Christian powers began to check Muslim naval dominance in the Mediterranean after the 1571 victory at Lepanto, but it would be a scourge that continued to afflict the country even in the next century.
The great plague of 1596–1602 killed 600,000 to 700,000 people, or about 10% of the population. Altogether more than 1,250,000 deaths resulted from the extreme incidence of plague in 17th-century Spain. Economically, the plague destroyed the labor force as well as creating a psychological blow to an already problematic Spain.
Philip II died in 1598, and was succeeded by his son Philip III. In his reign (1598–1621) a ten-year truce with the Dutch was overshadowed in 1618 by Spain's involvement in the European-wide Thirty Years' War. Government policy was dominated by favorites, but it was also the period in which the geniuses of Cervantes and El Greco flourished.
Philip III was succeeded in 1621 by his son Philip IV of Spain (reigned 1621–65). Much of the policy was conducted by the minister Gaspar de Guzmán, Count-Duke of Olivares. In 1640, with the war in central Europe having no clear winner except the French, both Portugal and Catalonia rebelled. Portugal was lost to the crown for good; in Italy and most of Catalonia, French forces were expelled and Catalonia's independence was suppressed.
The Habsburg dynasty became extinct in Spain with Charles II's death in 1700, and the War of the Spanish Succession ensued in which the other European powers tried to assume control of the Spanish monarchy. King Louis XIV of France eventually lost the War of the Spanish Succession, but because the victors' (Great Britain, the Dutch Republic and Austria) candidate for the Spanish throne (Archduke Charles of Austria) became Holy Roman Emperor, control of Spain was allowed to pass to the Bourbon dynasty. However, the peace deals that followed included relinquishing the right to unite the French and Spanish thrones and the partitioning of Spain's European empire.
The Golden Age (Siglo de Oro)
The Spanish Golden Age (in Spanish, Siglo de Oro) was a period of flourishing arts and letters in the Spanish Empire (now Spain and the Spanish-speaking countries of Latin America), coinciding with the political decline and fall of the Habsburgs (Philip III, Philip IV and Charles II). It is interesting to note how arts during the Golden Age flourished despite the decline of the empire in the 17th century. The last great writer of the age, Sor Juana Inés de la Cruz, died in New Spain in 1695.
The Habsburgs, both in Spain and Austria, were great patrons of art in their countries. El Escorial, the great royal monastery built by King Philip II, invited the attention of some of Europe's greatest architects and painters. Diego Velázquez, regarded as one of the most influential painters of European history and a greatly respected artist in his own time, cultivated a relationship with King Philip IV and his chief minister, the Count-Duke of Olivares, leaving us several portraits that demonstrate his style and skill. El Greco, a respected Greek artist from the period, settled in Spain, and infused Spanish art with the styles of the Italian renaissance and helped create a uniquely Spanish style of painting.
Some of Spain's greatest music is regarded as having been written in the period. Such composers as Tomás Luis de Victoria, Luis de Milán and Alonso Lobo helped to shape Renaissance music and the styles of counterpoint and polychoral music, and their influence lasted far into the Baroque period.
Spanish literature blossomed as well, most famously demonstrated in the work of Miguel de Cervantes, the author of Don Quixote de la Mancha. Spain's most prolific playwright, Lope de Vega, wrote possibly as many as one thousand plays over his lifetime, over four hundred of which survive to the present day.
Decline in the 17th century
The Spanish "Golden Age" politically ends no later than 1659, with the Treaty of the Pyrenees, ratified between France and Habsburg Spain. Spain had experienced severe financial difficulties in the later 16th century, which caused the Spanish Crown to declare bankruptcy four times (1557, 1560, 1576 and 1596). However, the constant financial strain did not prevent the rise of Spanish power throughout the 16th century.
Many different factors, including excessive warfare, inefficient taxation, a succession of weak kings in the 17th century, and power struggles in the Spanish court, contributed to the decline of Habsburg Spain in the second half of the 17th century.
During the long regency for Charles II, the last of the Spanish Habsburgs, favouritism milked Spain's treasury, and Spain's government operated principally as a dispenser of patronage. Plague, famine, floods, drought, and renewed war with France wasted the country. The Peace of the Pyrenees (1659) had ended fifty years of warfare with France, whose king, Louis XIV, found the temptation to exploit a weakened Spain too great. Louis instigated the War of Devolution (1667–68) to acquire the Spanish Netherlands.
By the 17th century, the Catholic Church and Spain had formed a close bond with one another, attesting to the fact that Spain was virtually free of Protestantism during the 16th century. In 1620 there were 100,000 Spaniards in the clergy; by 1660 there were about 200,000, and the Church owned 20% of all the land in Spain. The Spanish bureaucracy in this period was highly centralized, and totally reliant on the king for its efficient functioning. Under Charles II, the councils became the sinecures of wealthy aristocrats despite various attempts at reform. Political commentators in Spain, known as arbitristas, proposed a number of measures to reverse the decline of the Spanish economy, with limited success. In rural areas of Spain, heavy taxation of peasants reduced agricultural output as peasants in the countryside migrated to the cities. The influx of silver from the Americas has been cited as the cause of inflation, although only one fifth of the precious metal actually went into Spain. A prominent internal factor was the Spanish economy's dependence on the export of luxurious Merino wool, which had its markets in northern Europe reduced by war and growing competition from cheaper textiles.
Spain under the Bourbons (18th century)
Charles II, having no direct heir, was succeeded by his great-nephew Philippe d'Anjou, a French prince, in 1700. Concern among other European powers that Spain and France united under a single Bourbon monarch would upset the balance of power led to the War of the Spanish Succession between 1701 and 1714. It pitted powerful France and fairly strong Spain against the Grand Alliance of England, Portugal, Savoy, the Netherlands and Austria.
After many battles, especially in Spain, the treaty of Utrecht recognised Philip, Duke of Anjou, Louis XIV's grandson, as King of Spain (as Philip V), thus confirming the succession stipulated in the will of the Charles II of Spain. However, Philip was compelled to renounce for himself and his descendants any right to the French throne, despite some doubts as to the lawfulness of such an act. Spain's Italian territories were apportioned.
Philip V signed the Decreto de Nueva Planta in 1715. This new law revoked most of the historical rights and privileges of the different kingdoms that formed the Spanish Crown, especially the Crown of Aragon, unifying them under the laws of Castile, where the Castilian Cortes Generales had been more receptive to the royal wish. Spain became culturally and politically a follower of absolutist France. Lynch says Philip V advanced the government only marginally over that of his predecessors and was more of a liability than the incapacitated Charles II; when a conflict came up between the interests of Spain and France, he usually favored France.
Philip made reforms in government, and strengthened the central authorities relative to the provinces. Merit became more important, although most senior positions still went to the landed aristocracy. Below the elite level, inefficiency and corruption were as widespread as ever.
The reforms started by Philip V culminated in much more important reforms of Charles III. However Israel argues that King Charles III cared little for the Enlightenment and his ministers paid little attention to the Enlightenment ideas influential elsewhere on the Continent. Israel says, "Only a few ministers and officials were seriously committed to enlightened aims. Most were first and foremost absolutists and their objective was always to reinforce monarchy, empire, aristocracy...and ecclesiastical control and authority over education."
The rule of the Spanish Bourbons continued under Ferdinand VI (1746–59) and Charles III (1759–88). Elisabeth of Parma, Philip V's widow, exerted great influence on Spain's foreign policy. Her principal aim was to have Spain's lost territories in Italy restored. She eventually received Franco-British support for this after the Congress of Soissons (1728–29).
Under the rule of Charles III and his ministers – Leopoldo de Gregorio, Marquis of Esquilache and José Moñino, Count of Floridablanca – the economy improved. Fearing that Britain's victory over France in the Seven Years' War (1756–63) threatened the European balance of power, Spain allied itself to France but suffered a series of military defeats and ended up having to cede Florida to the British at the Treaty of Paris (1763) while gaining Louisiana from France. Spain regained Florida with the Treaty of Paris (1783), which ended the American Revolutionary War (1775–83), and gained an improved international standing.
However, there were no reforming impulses in the reign of Charles IV (1788 to abdication in 1808), seen by some as mentally handicapped. Dominated by his wife's lover, Manuel de Godoy, Charles IV embarked on policies that overturned much of Charles III's reforms. After briefly opposing Revolutionary France early in the French Revolutionary Wars, Spain was cajoled into an uneasy alliance with its northern neighbor, only to be blockaded by the British. Charles IV's vacillation, culminating in his failure to honour the alliance by neglecting to enforce the Continental System led to Napoleon I, Emperor of the French, invading Spain in 1808, thereby triggering the Peninsular War, with enormous human and property losses, and loss of control over most of the overseas empire.
During most of the 18th century Spain had arrested its relative decline of the latter part of the 17th century. But despite the progress, it continued to lag in the political and mercantile developments then transforming other parts of Europe, most notably in Great Britain, the Low Countries, and France. The chaos unleashed by the Peninsular War caused this gap to widen greatly and Spain would not have an Industrial Revolution.
The Age of Enlightenment reached Spain in attenuated form about 1750. Attention focused on medicine and physics, with some philosophy. French and Italian visitors were influential but there was little challenge to Catholicism or the Church such as characterized the French philosophes. The leading Spanish figure was Benito Feijóo (1676–1764), a Benedictine monk and professor. He was a successful popularizer noted for encouraging scientific and empirical thought in an effort to debunk myths and superstitions. By the 1770s the conservatives had launched a counterattack and used censorship and the Inquisition to suppress Enlightenment ideas.
At the top of the social structure of Spain in the 1780s stood the nobility and the church. A few hundred families dominated the aristocracy, with another 500,000 holding noble status. There were 200,000 church men and women, half of them in heavily endowed monasteries that controlled much of the land not owned by the nobles. Most people were on farms, either as landless peons or as holders of small properties. The small urban middle class was growing, but was distrusted by the landowners and peasants alike.
19th century Spain
War of Spanish Independence (1808–14)
In the late 18th century, Bourbon-ruled Spain had an alliance with Bourbon-ruled France, and therefore did not have to fear a land war. Its only serious enemy was Britain, which had a powerful Royal Navy; Spain therefore concentrated its resources on its navy. When the French Revolution overthrew the Bourbons, a land war with France became a threat which the king tried to avoid. The Spanish army was ill-prepared. The officer corps was selected primarily on the basis of royal patronage rather than merit. About a third of the junior officers had been promoted from the ranks, and they did have talent, but they had few opportunities for promotion or leadership. The rank-and-file were poorly trained peasants. Elite units included foreign regiments of Irishmen, Italians, Swiss, and Walloons, in addition to elite artillery and engineering units. Equipment was old-fashioned and in disrepair. The army lacked its own horses, oxen and mules for transportation, so these services were provided by civilians, who might run away if conditions looked bad. In combat, small units fought well, but their old-fashioned tactics were of little use against the Napoleonic forces, despite repeated desperate efforts at last-minute reform. When war broke out with France in 1808, the army was deeply unpopular. Leading generals were assassinated, and the army proved incompetent at command-and-control. Junior officers from peasant families deserted and went over to the insurgents; many units disintegrated. Spain was unable to mobilize its artillery or cavalry. In the war, there was one victory at the Battle of Bailén, and many humiliating defeats. Conditions steadily worsened, as the insurgents increasingly took control of Spain's battle against Napoleon. Napoleon ridiculed the army as "the worst in Europe"; the British who had to work with it agreed. It was not the army that defeated Napoleon, but the insurgent peasants whom Napoleon ridiculed as packs of "bandits led by monks" (they in turn believed Napoleon was the devil). By 1812, the army controlled only scattered enclaves, and could only harass the French with occasional raids. The morale of the army had reached a nadir, and reformers stripped the aristocratic officers of most of their legal privileges.
Spain initially sided against France in the Napoleonic Wars, but the defeat of her army early in the war led to Charles IV's pragmatic decision to align with revolutionary France. Spain was put under a British blockade, and her colonies began to trade independently with Britain, but it was the defeat of the British invasions of the Río de la Plata in South America (1806 and 1807) that emboldened independence and revolutionary hopes in Spain's North and South American colonies. A major Franco-Spanish fleet had been lost at the Battle of Trafalgar in 1805, prompting the vacillating king of Spain to reconsider his difficult alliance with Napoleon. Spain temporarily broke off from the Continental System, and Napoleon – aggravated with the Bourbon kings of Spain – invaded Spain in 1808 and deposed Ferdinand VII, who had been on the throne for only forty-eight days following his father's abdication in March 1808. On July 20, 1808, Joseph Bonaparte, eldest brother of Napoleon Bonaparte, entered Madrid and established a government by which he became King of Spain, serving as a surrogate for Napoleon.
The former Spanish king was dethroned by Napoleon, who put his own brother on the throne. Spaniards revolted. Thompson says the Spanish revolt was "a reaction against new institutions and ideas, a movement for loyalty to the old order: to the hereditary crown of the Most Catholic kings, which Napoleon, an excommunicated enemy of the Pope, had put on the head of a Frenchman; to the Catholic Church persecuted by republicans who had desecrated churches, murdered priests, and enforced a 'loi des cultes'; and to local and provincial rights and privileges threatened by an efficiently centralized government." Juntas were formed all across Spain that pronounced themselves in favor of Ferdinand VII. On September 26, 1808, a Central Junta was formed in the town of Aranjuez to coordinate the nationwide struggle against the French. Initially, the Central Junta declared support for Ferdinand VII, and convened a "General and Extraordinary Cortes" for all the kingdoms of the Spanish Monarchy. On February 22 and 23, 1809, a popular insurrection against the French occupation broke out all over Spain.
The peninsular campaign was a disaster for France. Napoleon did well when he was in direct command, but the campaign brought severe losses even so, and after he left in 1809 conditions grew worse for France. Vicious reprisals, famously portrayed by Goya in "The Disasters of War", only made the Spanish guerrillas angrier and more active; the war in Spain proved to be a major, long-term drain on French money, manpower and prestige.
In March 1812, the Cádiz Cortes created the first modern Spanish constitution, the Constitution of 1812 (informally named La Pepa). This constitution provided for a separation of the powers of the executive and the legislative branches of government. The Cortes was to be elected by universal suffrage, albeit by an indirect method. Each member of the Cortes was to represent 70,000 people. Members of the Cortes were to meet in annual sessions. The King was prevented from either convening or proroguing the Cortes. Members of the Cortes were to serve single two-year terms. They could not serve consecutive terms; a member could serve a second term only by allowing someone else to serve a single intervening term in office. This attempt at the development of a modern constitutional government lasted from 1808 until 1814. Leaders of the liberal or reformist forces during this revolution were José Moñino, Count of Floridablanca, Gaspar Melchor de Jovellanos and Pedro Rodríguez, Conde de Campomanes. Born in 1728, Floridablanca was eighty years of age at the time of the revolutionary outbreak in 1808. He had served as Prime Minister under King Charles III of Spain from 1777 until 1792; however, he tended to be suspicious of popular spontaneity and resisted revolution. Born in 1744, Jovellanos was somewhat younger than Floridablanca. A writer and follower of the philosophers of the Enlightenment tradition of the previous century, Jovellanos had served as Minister of Justice from 1797 to 1798 and now commanded a substantial and influential group within the Central Junta. However, Jovellanos had been imprisoned by Manuel de Godoy, Duke of Alcudia, who had served as prime minister, virtually running the country as a dictator from 1792 until 1798 and from 1801 until 1808. Accordingly, even Jovellanos tended to be rather cautious in his approach to the revolutionary upsurge that was sweeping Spain in 1808.
The Spanish army was stretched thin as it fought Napoleon's forces, suffering from a lack of supplies and too many untrained recruits, but at Bailén in June 1808 it inflicted the first major defeat suffered by a Napoleonic army; this resulted in the collapse of French power in Spain. Napoleon took personal charge and with fresh forces reconquered Spain in a matter of months, defeating the Spanish and British armies in a brilliant campaign of encirclement. After this the Spanish armies lost every battle they fought against the French imperial forces but were never annihilated; after battles they would retreat into the mountains to regroup and launch new attacks and raids. Guerrilla forces sprang up all over the country and, together with the army, tied down huge numbers of Napoleon's troops, making it difficult for the French to sustain concentrated attacks on enemy forces. The attacks and raids of the Spanish army and guerrillas became a massive drain on Napoleon's military and economic resources. In this war, Spain was aided by the British and Portuguese, led by the Duke of Wellington, who fought Napoleon's forces in the Peninsular War, with Joseph Bonaparte playing a minor role as king at Madrid. The brutal war was one of the first guerrilla wars in modern Western history. French supply lines stretching across Spain were mauled repeatedly by the Spanish armies and guerrilla forces; thereafter, Napoleon's armies were never able to control much of the country. The war fluctuated, with Wellington spending several years behind his fortresses in Portugal while launching occasional campaigns into Spain.
After Napoleon's disastrous 1812 campaign in Russia, Napoleon began to recall his forces for the defence of France against the advancing Russian and other coalition forces, leaving his forces in Spain increasingly undermanned and on the defensive against the advancing Spanish, British and Portuguese armies. At the Battle of Vitoria in 1813, an allied army under the Duke of Wellington decisively defeated the French and in 1814 Ferdinand VII was restored as King of Spain.
Loss of North and South American colonies
Spain lost all of its North and South American colonies, except Cuba and Puerto Rico, in a complex series of revolts between 1808 and 1826. Spain was at war with Britain from 1798 to 1808, and the British Navy cut off its ties to its colonies, so trade was handled by American and Dutch traders. The colonies had thus achieved economic independence from Spain, and set up temporary governments or juntas which were generally out of touch with the mother country. After 1814, as Napoleon was defeated and Ferdinand VII was back on the throne, the king sent armies to regain control and reimpose autocratic rule. In the first phase, 1809–16, Spain defeated all the uprisings. A second round of revolts, 1816–25, succeeded and drove the Spanish out of all of their mainland holdings. Spain had no help from European powers; indeed, Britain (and the United States) worked against it. Cut off from Spain, the colonies saw a struggle for power between Spaniards who were born in Spain (called "peninsulares") and those of Spanish descent born in the Americas (called "creoles"). The creoles were the activists for independence, and multiple revolutions enabled the colonies to break free of the mother country. In 1824 the armies of generals José de San Martín of Argentina and Simón Bolívar of Venezuela defeated the last Spanish forces; the final defeat came at the Battle of Ayacucho in southern Peru. After that Spain played a minor role in international affairs. Business and trade in the ex-colonies were under British control. Spain kept only Cuba and Puerto Rico in the New World.
Reaction and change (1814–73)
Although the juntas that had forced the French to leave Spain had sworn by the liberal Constitution of 1812, Ferdinand VII had the support of conservatives and rejected it, ruling in the authoritarian fashion of his forebears.
The government, nearly bankrupt, was unable to pay its soldiers. There were few settlers or soldiers in Florida, so it was sold to the United States for 5 million dollars. In 1820, an expedition intended for the colonies revolted in Cádiz. When armies throughout Spain pronounced themselves in sympathy with the rebels, led by Rafael del Riego, Ferdinand relented and was forced to accept the liberal Constitution of 1812. This was the start of the second bourgeois revolution in Spain, which lasted from 1820 to 1823. Ferdinand himself was placed under effective house arrest for the duration of the liberal experiment.
The tumultuous three years of liberal rule that followed (1820–23) were marked by various absolutist conspiracies. The liberal government, which reminded European statesmen entirely too much of the governments of the French Revolution, was viewed with hostility by the Congress of Verona in 1822, and France was authorized to intervene. France crushed the liberal government with massive force in the so-called "Hundred Thousand Sons of Saint Louis" expedition, and Ferdinand was restored as absolute monarch in 1823. In Spain proper, this marked the end of the second Spanish bourgeois revolution.
In Spain, the failure of the second bourgeois revolution was followed by a decade of uneasy peace. Since Ferdinand had only a female heir, it appeared that he would be succeeded by his brother, Infante Carlos of Spain. While Ferdinand aligned with the conservatives, fearing another national insurrection, he did not view Carlos's reactionary policies as a viable option. Ferdinand – resisting the wishes of his brother – decreed the Pragmatic Sanction of 1830, enabling his daughter Isabella to become Queen. Carlos, who made known his intent to resist the sanction, fled to Portugal.
Ferdinand's death in 1833 and the accession of Isabella II as Queen of Spain sparked the First Carlist War (1833–39). Isabella was only three years old at the time, so her mother, Maria Cristina of Bourbon-Two Sicilies, was named regent until her daughter came of age. Carlos invaded the Basque country in the north of Spain and attracted support from absolutist reactionaries and conservatives; these forces were known as the "Carlist" forces. The supporters of reform and of limitations on the absolutist rule of the Spanish throne rallied behind Isabella and the regent, Maria Cristina; these reformists were called "Cristinos". Though the Carlist insurrection seemed to have been overcome by the end of 1833, when Maria Cristina's forces drove the Carlist armies from most of the Basque country, Carlos appointed the Basque general Tomás de Zumalacárregui as his commander-in-chief. Zumalacárregui resuscitated the Carlist cause, and by 1835 had driven the Cristino armies to the Ebro River and transformed the Carlist army from a demoralized band into a professional army of 30,000, superior in quality to the government forces. Zumalacárregui's death in 1835 changed the Carlists' fortunes. The Cristinos found a capable general in Baldomero Espartero. His victory at the Battle of Luchana (1836) turned the tide of the war, and in 1839 the Convention of Vergara put an end to the first Carlist insurrection.
The progressive General Espartero, exploiting his popularity as a war hero and his sobriquet "Pacifier of Spain", demanded liberal reforms from Maria Cristina. The Queen Regent, who resisted any such idea, preferred to resign and let Espartero become regent instead in 1840. Espartero's liberal reforms were then opposed by moderates, and the former general's heavy-handedness caused a series of sporadic uprisings throughout the country from various quarters, all of which were bloodily suppressed. He was overthrown as regent in 1843 by Ramón María Narváez, a moderate, who was in turn perceived as too reactionary. Another Carlist uprising, the Matiners' War, was launched in 1846 in Catalonia, but it was poorly organized and suppressed by 1849.
Isabella II of Spain took a more active role in government after coming of age, but she was immensely unpopular throughout her reign (1833–68). She was viewed as beholden to whoever was closest to her at court, and the people of Spain believed that she cared little for them. As a result, there was another insurrection in 1854, led by General Domingo Dulce y Garay and General Leopoldo O'Donnell. Their coup overthrew the dictatorship of Luis José Sartorius, 1st Count of San Luis. As a result of the popular insurrection, the Partido Progresista (Progressive Party) obtained widespread support in Spain and came to power in 1854. In 1856, Isabella attempted to form the Liberal Union, a pan-national coalition under the leadership of O'Donnell, who had already marched on Madrid that year and deposed another Espartero ministry. Isabella's plan failed and cost her further prestige and favor with the people. In 1860, Isabella launched a successful war against Morocco, waged by generals O'Donnell and Juan Prim, which shored up her popularity in Spain. However, a campaign against Peru and Chile during the Chincha Islands War (1864–66) proved disastrous and Spain suffered defeat at the hands of the determined South American powers.
In 1866, a revolt led by Juan Prim was suppressed, but in 1868 there was a further revolt, known as the Glorious Revolution. The progresista generals Francisco Serrano and Juan Prim revolted against Isabella and defeated her moderado generals at the Battle of Alcolea (1868). Isabella was driven into exile in Paris.
Two years of revolution and anarchy followed, until in 1870 the Cortes declared that Spain would again have a king. Amadeus of Savoy, the second son of King Victor Emmanuel II of Italy, was selected and duly crowned King of Spain early the following year. Amadeus – a liberal who swore by the liberal constitution the Cortes promulgated – was faced immediately with the incredible task of bringing the disparate political ideologies of Spain to one table. The country was plagued by internecine strife, not merely between Spaniards but within Spanish parties.
First Spanish Republic (1873–74)
Following the Hidalgo affair and an army rebellion, Amadeus famously declared the people of Spain to be ungovernable, abdicated the throne, and left the country (11 February 1873).
In his absence, a government of radicals and Republicans was formed that declared Spain a republic. The First Spanish Republic (1873–74) was immediately under siege from all quarters. The Carlists were the most immediate threat, launching a violent insurrection after their poor showing in the 1872 elections. There were also calls for socialist revolution from the International Workingmen's Association, revolts and unrest in Navarre and Catalonia, and pressure from the Catholic Church against the fledgling republic.
The Restoration (1874–1931)
Alfonso XII of Spain was duly crowned on 28 December 1874 after returning from exile. After the tumult of the First Spanish Republic, Spaniards were willing to accept a return to stability under Bourbon rule. The Republican armies in Spain – which were resisting a Carlist insurrection – pronounced their allegiance to Alfonso in the winter of 1874–75, led by Brigadier General Martínez-Campos. The Republic was dissolved and Antonio Cánovas del Castillo, a trusted advisor to the king, was named Prime Minister on New Year's Eve, 1874. The Carlist insurrection was put down vigorously by the new king, who took an active role in the war and rapidly gained the support of most of his countrymen. A system of turnos was established, in which the liberals, led by Práxedes Mateo Sagasta, and the conservatives, led by Antonio Cánovas del Castillo, alternated in control of the government. A modicum of stability and economic progress was restored to Spain during Alfonso XII's rule (1874–85), although progress was cut short by his sudden death at the age of 28.
Constitutional monarchy continued under King Alfonso XIII. Alfonso XIII was born after his father's death and was proclaimed king upon his birth. However, the government had become destabilized by Alfonso XII's unexpected death in 1885, followed by the assassination of prime minister Antonio Cánovas del Castillo in 1897. The reign of Alfonso XIII (1886–1931) saw the Spanish–American War of 1898, culminating in the loss of the Philippines plus Spain's last colonies in the Americas, Cuba and Puerto Rico; the "Great War" in Europe (now known as World War I, 1914–18), although Spain maintained neutrality throughout the conflict; the influenza pandemic nicknamed the Spanish Flu (1918–19); and the Rif War in Morocco (1920–26). His reign also saw the rise to dictatorship of General Miguel Primo de Rivera, who seized control of the government by military coup in 1923 and ruled as a dictator – with the monarch's support – for seven years (1923–30). The worldwide recession, marked first by the Wall Street Crash of 1929, caused deepening economic hardships in Spain and the resignation of Primo de Rivera's government in 1930. General elections were held in 1931 to replace the government, with Republican and anticlerical candidates winning the majority of votes. Alfonso XIII left the country in response to the proclamation of the Second Spanish Republic, although he never abdicated.
Disaster of 1898
Cuba rebelled against Spain in the Ten Years' War beginning in 1868, resulting in the abolition of slavery in Spain's colonies in the New World. American business interests in the island, coupled with concerns for the people of Cuba, aggravated relations between the two countries. The explosion of the USS Maine launched the Spanish–American War in 1898, in which Spain fared disastrously. Cuba gained its independence and Spain lost its remaining New World colony, Puerto Rico, which together with Guam and the Philippines were ceded to the United States for 20 million dollars. In 1899, Spain sold its remaining Pacific islands – the Northern Mariana Islands, Caroline Islands and Palau – to Germany and Spanish colonial possessions were reduced to Spanish Morocco, Spanish Sahara and Spanish Guinea, all in Africa.
The "disaster" of 1898 created the Generation of '98, a group of statesmen and intellectuals who demanded liberal change from the new government. However both Anarchism on the left and fascism on the right grew rapidly in Spain in the early 20th century. A revolt in 1909 in Catalonia was bloodily suppressed. Jensen (1999) argues that the defeat of 1898 led many military officers to abandon the liberalism that had been strong in the officer corps and turn to the right. They interpreted the American victory in 1898 as well as the Japanese victory against Russia in 1905 as proof of the superior value of willpower and moral values over technology. Over the next three decades, Jensen argues, these values shaped the outlook of Francisco Franco and other Falangists.
20th century Spain
Spain's neutrality in World War I allowed it to become a supplier of material for both sides to its great advantage, prompting an economic boom in Spain. The outbreak of Spanish influenza in Spain and elsewhere, along with a major economic slowdown in the postwar period, hit Spain particularly hard, and the country went into debt. A major workers' strike was suppressed in 1919.
Spanish colonial policies in Spanish Morocco led to an uprising known as the Rif War; by 1921 the rebels had taken control of most of the area except for the enclaves of Ceuta and Melilla. King Alfonso XIII decided to support the dictatorship of General Miguel Primo de Rivera in 1923. As Prime Minister, Primo de Rivera promised to reform the country quickly and to restore elections soon. He deeply believed that it was the politicians who had ruined Spain and that by governing without them he could regenerate the nation. His slogan was "Country, Religion, Monarchy."
The late 1920s were prosperous until the worldwide Great Depression hit in 1929. In early 1930, bankruptcy and massive unpopularity forced the king to remove Primo de Rivera. Historians depict him as an idealistic but inept dictator who did not understand government, lacked clear ideas and showed very little political acumen. He consulted no one, had a weak staff, and made frequent strange pronouncements. He started with very broad support but lost every element of it until only the army was left. His projects ran large deficits which he kept hidden. His repeated mistakes discredited the king and ruined the monarchy, while heightening social tensions that led in 1936 to a full-scale civil war. Urban voters had lost faith in the king, and voted for republican parties in the municipal elections of April 1931. The king fled the country without abdicating, and a republic was established.
Second Spanish Republic (1931–36)
Political ideologies were intensely polarized, as both right and left saw vast evil conspiracies on the other side that had to be stopped. The central issue was the role of the Catholic Church, which the left saw as the major enemy of modernity and the Spanish people, and the right saw as the invaluable protector of Spanish values.
Power seesawed back and forth, 1931–36, as the monarchy was overthrown and complex coalitions formed and fell apart. The end came in a devastating civil war, 1936–39, which was won by the conservative, pro-church, Army-backed “Nationalist” forces supported by Nazi Germany and Italy. The Nationalists, led by General Francisco Franco, defeated the Republican coalition of liberals, socialists, anarchists, and communists, which was backed by the Soviet Union.
The first governments of the Republic were center-left, headed by Niceto Alcalá-Zamora and Manuel Azaña. Economic turmoil, substantial debt, and fractious, rapidly changing governing coalitions led to escalating political violence and attempted coups by right and left.
In 1933, the right-wing Spanish Confederation of the Autonomous Right (CEDA), based on the Catholic vote, won power. An armed rising of workers in October 1934, which reached its greatest intensity in Asturias and Catalonia, was forcefully put down by the CEDA government. This in turn energized political movements across the spectrum in Spain, including a revived anarchist movement and new reactionary and fascist groups, including the Falange and a revived Carlist movement.
Spanish Civil War (1936–39)
The Spanish Civil War was marked by numerous small battles and sieges, and many atrocities, until the rebels (the "Nationalists"), led by Francisco Franco, won in 1939. There was military intervention as Italy sent land forces, and Germany sent smaller elite air force and armored units to the rebel side (the Nationalists). The Soviet Union sold armaments to the "Loyalists" ("Republicans"), while the Communist parties in numerous countries sent soldiers to the "International Brigades." The civil war did not escalate into a larger conflict, but did become a worldwide ideological battleground that pitted the left and many liberals against Catholics and conservatives. Britain, France and the United States remained neutral and refused to sell military supplies. Worldwide there was a decline in pacifism and a growing sense that another world war was imminent, and that it would be worth fighting for.
Political and military balance
In the 1930s, Spanish politics were polarized at the left and right extremes of the political spectrum. The left wing favored class struggle, land reform to overthrow the landowners, autonomy for the regions, and the destruction of the Catholic Church. The right-wing groups, the largest of which was CEDA, a Catholic coalition, believed in tradition, stability and hierarchy. Religion was the main dividing line between right and left, but there were regional variations: the Basques were devoutly Catholic but put a high priority on regional autonomy, and since the Left offered them a better deal they fought for the Republicans in 1936–37 before pulling out of the war in 1937.
The Spanish Republican government moved to Valencia to escape Madrid, which was under siege by the Nationalists. It had some military strength in the air force and navy, but it had lost nearly all of the regular army. After opening the arsenals to give rifles, machine guns and artillery to local militias, it had little control over the Loyalist ground forces. Republican diplomacy proved ineffective, with only two useful allies, the Soviet Union and Mexico. Britain, France and 27 other countries had agreed on an arms embargo against Spain, and the United States went along. Nazi Germany and Fascist Italy both signed that agreement but ignored it, sending supplies and vital help to the Nationalists, including a powerful air force under German command, the Condor Legion. Tens of thousands of Italian troops arrived under Italian command. Portugal supported the Nationalists and allowed the trans-shipment of supplies to Franco's forces. The Soviets sold tanks and other armaments for Spanish gold, and sent well-trained officers and political commissars; they also organized the mobilization of tens of thousands of mostly communist volunteers from around the world, who formed the International Brigades.
In 1936, the Left united in the Popular Front and was elected to power. However, this coalition, dominated by the centre-left, was undermined both by revolutionary groups such as the anarchist Confederación Nacional del Trabajo (CNT) and Federación Anarquista Ibérica (FAI) and by anti-democratic far-right groups such as the Falange and the Carlists. The political violence of previous years began again. There were gunfights over strikes, landless labourers began to seize land, church officials were killed and churches burnt. On the other side, right-wing militias (such as the Falange) and gunmen hired by employers assassinated left-wing activists. The Republican democracy never generated the consensus or mutual trust between the various political groups that it needed to function peacefully. As a result, the country slid into civil war. The right wing of the country and high-ranking figures in the army began to plan a coup, and when the monarchist politician José Calvo Sotelo was shot by Republican police, they used his killing as the signal to act while the Republican leadership was confused and inert.
The Nationalists under Franco won the war, and historians continue to debate the reasons. The Nationalists were much better unified and led than the Republicans, who squabbled and fought amongst themselves endlessly and had no clear military strategy. The Army went over to the Nationalists, but it was very poorly equipped – there were no tanks or modern airplanes. The small navy supported the Republicans, but their armies were made up of raw recruits and they lacked both equipment and skilled officers and sergeants. Nationalist senior officers were much better trained and more familiar with modern tactics than the Republicans.
On 17 July 1936, General Francisco Franco brought the colonial army stationed in Morocco to the mainland, while another force under General Mola moved south from Navarre. Another conspirator, General Sanjurjo, who was in exile in Portugal, was killed in a plane crash while being brought to join the other military leaders. Military units were also mobilised elsewhere to take over government institutions. Franco intended to seize power immediately, but successful resistance by Republicans in the key centers of Madrid, Barcelona, Valencia, the Basque country and other areas meant that Spain faced a prolonged civil war. By 1937 much of the south and west was under the control of the Nationalists, whose Army of Africa was the most professional force available to either side. Both sides received foreign military aid: the Nationalists from Nazi Germany and Italy, and the Republicans from the Soviet Union and from organised far-left volunteers from many countries.
The Siege of the Alcázar at Toledo early in the war was a turning point, with the Nationalists winning after a long siege. The Republicans managed to hold out in Madrid, despite a Nationalist assault in November 1936, and frustrated subsequent offensives against the capital at Jarama and Guadalajara in 1937. Soon, though, the Nationalists began to erode the Republicans' territory, starving Madrid and making inroads into the east. The north, including the Basque country, fell in late 1937, and the Aragon front collapsed shortly afterwards. The bombing of Guernica on the afternoon of 26 April 1937 – a mission used as a testing ground for the German Luftwaffe's Condor Legion – was probably the most infamous event of the war and inspired Picasso's painting Guernica. The Battle of the Ebro in July–November 1938 was the final desperate attempt by the Republicans to turn the tide. When this failed and Barcelona fell to the Nationalists in early 1939, it was clear the war was over. The remaining Republican fronts collapsed as infighting broke out on the Left, with the Republicans suppressing the Communists. Madrid fell in March 1939.
The war cost between 300,000 and 1,000,000 lives. It ended with the total collapse of the Republic and the accession of Francisco Franco as dictator of Spain. Franco amalgamated all the right-wing parties into a reconstituted fascist party, the Falange, and banned the left-wing and Republican parties and trade unions. The Church was more powerful than it had been in centuries.
The conduct of the war was brutal on both sides, with widespread massacres of civilians and prisoners. After the war, many thousands of Republicans were imprisoned and up to 150,000 were executed between 1939 and 1943. Some 500,000 refugees escaped to France; many remained in exile for years or decades.
The dictatorship of Francisco Franco (1936–75)
The Francoist regime resulted in deaths and arrests of hundreds of thousands of people who were either supporters of the previous Second Republic of Spain or potential threats to Franco's state. They were executed, sent to prisons or concentration camps. According to Gabriel Jackson, the number of victims of the White Terror (executions and hunger or illness in prisons) just between 1939 and 1943 was 200,000.
During Franco's rule, Spain was officially neutral in World War II and remained largely economically and culturally isolated from the outside world. Under a military dictatorship, Spain saw its political parties banned, except for the official party (Falange). Labor unions were banned and all political activity using violence or intimidation to achieve its goals was forbidden.
Under Franco, Spain actively sought the return of Gibraltar by the United Kingdom, and gained some support for its cause at the United Nations. During the 1960s, Spain began imposing restrictions on Gibraltar, culminating in the closure of the border in 1969. It was not fully reopened until 1985.
The Spanish protectorate in Morocco ended in 1956. Though militarily victorious in the 1957–58 Moroccan invasion of Spanish West Africa, Spain gradually relinquished its remaining African colonies. Spanish Guinea was granted independence as Equatorial Guinea in 1968, and the enclave of Ifni was ceded to Morocco in 1969. Two cities in Africa, Ceuta and Melilla, remain under Spanish rule and sovereignty.
The latter years of Franco's rule saw some economic and political liberalization, the Spanish miracle, including the birth of a tourism industry. Spain began to catch up economically with its European neighbors.
Franco ruled until his death on 20 November 1975, when power passed to King Juan Carlos. In the last few months before Franco's death, the Spanish state fell into paralysis. This was capitalized upon by King Hassan II of Morocco, who ordered the 'Green March' into Western Sahara, Spain's last colonial possession.
Transition to democracy
The Spanish transition to democracy or new Bourbon restoration was the era when Spain moved from the dictatorship of Francisco Franco to a liberal democratic state. The transition is usually said to have begun with Franco's death on 20 November 1975, while its completion is marked by the electoral victory of the socialist PSOE on 28 October 1982.
Under its current (1978) constitution, Spain is a constitutional monarchy. It comprises 17 autonomous communities (Andalusia, Aragon, Asturias, Balearic Islands, Canary Islands, Cantabria, Castile and León, Castile–La Mancha, Catalonia, Extremadura, Galicia, La Rioja, Community of Madrid, Region of Murcia, Basque Country, Valencian Community, Navarre) and 2 autonomous cities (Ceuta and Melilla).
Between 1978 and 1982, Spain was governed by the Unión del Centro Democrático. In 1981 the 23-F coup d'état attempt took place: on 23 February, Antonio Tejero and members of the Guardia Civil entered the Congress of Deputies and stopped the session in which Leopoldo Calvo-Sotelo was about to be named prime minister. Officially, the coup failed thanks to the intervention of King Juan Carlos. Spain joined NATO before Calvo-Sotelo left office. Along with political change came radical change in Spanish society. Spanish society had been extremely conservative under Franco, but the transition to democracy also began a liberalization of values and social mores.
From 1982 until 1996, the social democratic PSOE governed the country, with Felipe González as prime minister. In 1986, Spain joined the European Economic Community (EEC, now European Union), and the country hosted the 1992 Summer Olympics in Barcelona and Seville Expo '92.
Spain within the European Union (1993 to present)
In 1996, the centre-right Partido Popular came to power, led by José María Aznar. On 1 January 1999, Spain adopted the new euro currency, although the peseta continued to be used for cash transactions until 1 January 2002. On 11 March 2004, bombs planted by Islamic extremists linked to Al-Qaeda exploded on busy commuter trains in Madrid, killing 191 people and injuring thousands. The general election, held three days after the attacks, was won by the PSOE, and José Luis Rodríguez Zapatero replaced Aznar as prime minister. Because José María Aznar and his ministers at first blamed ETA for the atrocity, it has been argued that the attacks influenced the outcome of the election.
In the wake of joining the EEC, Spain experienced an economic boom lasting two decades, cut painfully short by the financial crisis of 2008. During the boom years, Spain attracted large numbers of immigrants, especially from the United Kingdom, along with unknown but substantial illegal immigration, mostly from Latin America, eastern Europe and north Africa. Spain had the fourth-largest economy in the eurozone, but after 2008 the global recession hit the country hard: the housing bubble burst, unemployment rose above 25%, and sharp budget cutbacks were needed to stay in the eurozone. GDP shrank by 1.2% in 2012. Although interest rates were historically low, entrepreneurs were not sufficiently encouraged to invest, and losses were especially heavy in real estate, banking, and construction. Economists concluded in early 2013 that, "Where once Spain's problems were acute, now they are chronic: entrenched unemployment, a large mass of small and medium-sized enterprises with low productivity, and, above all, a constriction in credit." With the financial crisis and high unemployment, Spain experienced a combination of continued illegal immigration and massive emigration of workers forced to seek employment elsewhere under the EU's freedom of movement, with an estimated 700,000 people, or 1.5% of the total population, leaving the country between 2008 and 2013.
Spanish statehood and secessionism
Although it had been used in treaties as far back as the seventeenth century, it was not until the constitution of 1812 that the name "Españas" became the official name for the Spanish kingdom and "King of the Spains" became the official title of the head of state. It was not until the constitution of 1876 that the singular form of the name, "España" (Spain), became the official name of the Spanish state.
Although the expression "King of Spain" or "King of the Spains" was already in widespread colloquial and literary use, and although the Aragonese and Castilian crowns were held by the same monarch, and although successive kings shared the long-term intention of uniting the peninsula under a single kingdom to restore the Visigothic unity, the Spanish realms were never officially proclaimed a single kingdom until the enactment of the Spanish Constitution of 1812. Portugal was also ruled by the House of Habsburg together with Castile and Aragon, but this came to an end with a revolt after sixty years.
The statehood of Spain is generally accepted by the population, as the Spanish Constitution of 1978 was overwhelmingly approved by referendum. The vigor of the constitutional regime and the tacit support of the Spanish population have been repeatedly confirmed ever since through periodic national elections to the Spanish Parliament, the bicameral body that represents all the Spanish territories and people and in which national sovereignty is vested.
Still, there are nationalist movements and political parties of regional scope (for example in Aragon, the Canaries, Catalonia, Euskadi and Galicia), mostly with ideologies born in the late 19th century, some enjoying significant though fluctuating support from the local population. The claims of the traditional nationalist parties range from greater transfers of competencies and new financing and tax arrangements with the central government to sovereign rights and secession from Spain.
Spain is ranked among the best democracies in the world by reputable independent analysts. Since the Spanish constitutional framework guarantees civil rights, including freedom of speech, some of these regional nationalist parties have openly promoted and pursued secession from Spain, arguing chiefly on linguistic, cultural and historical grounds and, in some cases, on alleged ethnic ones.
Economic arguments are also recurrent among separatists. The ongoing Catalan campaign for independence includes the motto "Spain is robbing us" ("España nos roba"), an argument many dispute as simple propaganda serving nationalist and secessionist interests. Secessionists claim that an independent Catalan state, released from its financial contribution to the rest of Spain, would prosper and solve the difficulties currently faced by the autonomous region – already a largely self-governing economy – in particular local unemployment and Catalan public debt.
In parallel to the democratic arena and political activism, some terrorist groups – such as Terra Lliure (Catalan for "Free Land") and ETA (the Basque acronym for "Basque Homeland and Freedom") – engaged in criminal activities, including assassinations, indiscriminate bomb attacks on civilians, extortion and kidnappings, in an attempt to reach their secessionist goals. More recently, increasing extremism has been noted in Catalonia in the form of attacks, boycotts and even death threats against those who do not support the secessionist movement and events such as the so-called consultation on independence organized by the Catalan government and some civil organisations in November 2014, despite the Spanish Constitutional Court having previously declared the process illegal. Some analysts believe this extremism could lead some current secessionist groups and individuals to undertake terrorist activities.
The Spanish Constitution establishes a modern democratic system with its own procedures for creating, modifying and repealing any law, including the Constitution itself, or even for adopting a completely new one should the people of Spain so decide. Any such initiative must comply with the corresponding legal procedures set out in the Constitution. The integrity and unity of Spanish territory are therefore not immutable principles, and secession would be possible, but subject to the law and to the sovereignty of the whole Spanish population, as the constitutionalists maintain.
The Spanish Constitution of 1978, in its second article, recognizes "nationalities" (a carefully chosen word intended to avoid the more politically charged "nations") and "regions" within the context of "the Spanish nation". Taking account of this rich variety of cultures, Spain has established one of the most decentralized systems in Europe and worldwide in terms of decision-making power, its autonomous regions enjoying some of the highest levels of both political and fiscal competencies from a comparative-law viewpoint.
- "Spain". Encarta Online Encyclopedia. 2007. Archived from the original on 2009-10-31. See also: "'First west Europe tooth' found". BBC. 30 June 2007. Archived from the original on 2009-10-31. Retrieved 2008-08-09.
- "Spain - History - Pre-Roman Spain - Prehistory". Britannica Online Encyclopedia. 2008.
- Robert Chapman, Emerging Complexity: The Later Prehistory of South-East Spain, Iberia and the West Mediterranean (2009)
- "Spain - History - Pre-Roman Spain - Phoenicians". Britannica Online Encyclopedia. 2008.
- Grout, James (2007). "The Celtiberian War". Encyclopaedia Romana. University of Chicago. Retrieved 2008-06-08.
- "Major Phases in Roman History". Rome in the Mediterranean World. University of Toronto. Retrieved 2008-06-08.
- Great estates, the Latifundia (sing., latifundium), controlled by a land owning aristocracy, were superimposed on the existing Iberian landholding system.
- Rinehart, Robert; Seeley, Jo Ann Browning (1998). "A Country Study: Spain - Hispania". Library of Congress Country Series. Retrieved 2008-08-09.
- The Roman provinces of Hispania included Provincia Hispania Ulterior Baetica (Hispania Baetica), whose capital was Corduba, presently Córdoba, Provincia Hispania Ulterior Lusitania (Hispania Lusitania), whose capital was Emerita Augusta (now Mérida), Provincia Hispania Citerior, whose capital was Tarraco (Tarragona), Provincia Hispania Nova, whose capital was Tingis (Tánger in present Morocco), Provincia Hispania Nova Citerior and Asturiae-Calleciae (these latter two provinces were created and then dissolved in the 3rd century AD).
- Payne, Stanley G. (1973). "A History of Spain and Portugal; Ch. 1 Ancient Hispania". The Library of Iberian Resources Online. Retrieved 2008-08-09.
- Roger Collins, Visigothic Spain 409–711 (2006)
- Karen Eva Carr, Vandals to Visigoths: Rural Settlement Patterns in Early Medieval Spain (2002)
- Rhea Marsh Smith (June 1965), Spain: A Modern History, University of Michigan Press, p. 25.
- Rhea Marsh Smith, Spain: A Modern History, p. 14.
- Rhea Marsh Smith, Spain: A Modern History, pp. 16-17.
- Collins, Visigothic Spain 409–711 (2006)
- Akhbār majmūa, p. 21 of Spanish translation, p. 6 of Arabic text.
- Fletcher, Richard (2006). Moorish Spain. Los Angeles, California: University of California Press. p. 53. ISBN 0-520-24840-6.
- Timelines - Vikings, Saracens, Magyars
- Granada by Richard Gottheil, Meyer Kayserling, Jewish Encyclopedia. 1906 ed.
- Ransoming Captives in Crusader Spain: The Order of Merced on the Christian-Islamic Frontier
- The Almohads
- Catalan Company (1302-1388 AD)
- Ramón Mariño Paz (1999). Historia da lingua galega. Sotelo Blanco Edicións. pp. 182–194. ISBN 978-84-7824-333-4. Retrieved 19 August 2013.
- Hugh Thomas, Rivers of Gold (Random House: New York, 2003) p. 18.
- Hugh Thomas, Rivers of Gold, p. 21.
- Hugh Thomas, Rivers of Gold, p. 58.
- There is simply no consensus as to the extent, with estimates varying by many orders of magnitude, but that it occurred is not doubted - See Population history of indigenous peoples of the Americas.
- Baten, Jörg (2016). A History of the Global Economy. From 1500 to the Present. Cambridge University Press. p. 159. ISBN 9781107507180.
- James Patrick (2007). Renaissance and Reformation. Marshall Cavendish. p. 207. ISBN 978-0-7614-7651-1. Retrieved 19 August 2013.
- When Europeans were slaves: Research suggests white slavery was much more common than previously believed
- The Seventeenth-Century Decline
- J.H. Elliott, "Imperial Spain: 1469–1716", Penguin Books, 1970, p.298
- Hugh Thomas. The Golden Age: The Spanish Empire of Charles V (2010)
- John B. Wolf, The Emergence of the Great Powers: 1685–1715 (1962)
- Henry Kamen, Philip V of Spain (2001)
- John Lynch, Bourbon Spain: 1700–1808 (1989) pp 67-115
- Payne says Charles III "was probably the most successful European ruler of his generation." Stanley G. Payne, History of Spain and Portugal (1973) 2:71
- Jonathan Israel (2011). Democratic Enlightenment: Philosophy, Revolution, and Human Rights 1750–1790. Oxford University Press. p. 374.
- Earl J. Hamilton, "Money and Economic Recovery in Spain under the First Bourbon, 1701–1746", Journal of Modern History Vol. 15, No. 3 (Sep., 1943), pp. 192-206 in JSTOR
- Simms p.211
- Payne, History of Spain and Portugal (1973) 2:367-71
- Franklin Ford, Europe, 1780-1830 (1970) p 32
- Charles J. Esdaile, The Spanish Army in the Peninsular War (1988)
- Philip Haythornthwaite; Christa Hook (2013). Corunna 1809: Sir John Moore's Fighting Retreat. Osprey. pp. 17–18.
- Russell Crandall (2014). America's Dirty Wars: Irregular Warfare from 1776 to the War on Terror. Cambridge UP. p. 21.
- Otto Pivka, Spanish Armies of the Napoleonic Wars (Osprey Men-at-Arms, 1975)
- Julia Ortiz Griffin; William D. Griffin (2007). Spain and Portugal: A Reference Guide from the Renaissance to the Present. Infobase Publishing. p. 241.
- J. M. Thompson, Napoleon Bonaparte: His Rise and Fall (1951) 244-45
- Richard Herr (1971). Modern Spain: An Historical Essay. U. of California Press. pp. 72–3.
- David Gates, The Spanish Ulcer: A History of the Peninsular War (1986)
- Jon Cowans (2003). Modern Spain: A Documentary History. U. of Pennsylvania Press. pp. 26–27. ISBN 0-8122-1846-9.
- Jesus Cruz (2004). Gentlemen, Bourgeois, and Revolutionaries: Political Change and Cultural Persistence among the Spanish Dominant Groups, 1750-1850. Cambridge U.P. pp. 216–18.
- George F. Nafziger (2002). Historical Dictionary of the Napoleonic Era. Scarecrow Press. p. 158.
- David G. Chandler (1973). The Campaigns of Napoleon. Simon and Schuster. p. 659.
- Todd Fisher (2004). The Napoleonic Wars: The Rise And Fall Of An Empire. Osprey Publishing. p. 222.
- Ian Fletcher (2012). Vittoria 1813: Wellington Sweeps the French from Spain. Osprey Publishing.
- John Michael Francis (2006). Iberia and the Americas: Culture, Politics, and History. ABC-CLIO. p. 905.
- John Lynch, The Spanish American Revolutions 1808-1826 (2nd ed. 1986)
- John Lynch, ed. Latin American revolutions, 1808-1826: old and new world origins (1994), scholarly essays.
- Raymond Carr, Spain, 1808-1975 (2nd ed., 1982) pp 101-5, 122-23, 143-46, 306-9, 379-88
- David R. Ringrose (1998). Spain, Europe, and the 'Spanish Miracle', 1700-1900. Cambridge U.P. p. 325.
- Charles S. Esdaile, Spain in the Liberal Age: From Constitution to Civil War, 1808–1939 (2000)
- Carl Cavanagh Hodge (2008). Encyclopedia of the age of imperialism: 1800-1914. A - K. Greenwood. p. 138. Retrieved 13 December 2012.
- Stanley G. Payne (1967). Politics and the Military in Modern Spain: Stanley G. Payne. Stanford University Press. p. 26.
- William James Callahan (1984). Church, Politics, and Society in Spain, 1750-1874. Harvard U.P. p. 250.
- Spencer Tucker (20 May 2009). The Encyclopedia of the Spanish-American and Philippine-American Wars: A Political, Social, and Military History. ABC-CLIO. p. 12.
- Joseph A. Brandt, Toward the New Spain: The Spanish Revolution of 1868 and the First Republic (1977)
- Earl Ray Beck, Time of Triumph & Sorrow: Spanish Politics during the Reign of Alfonso XII, 1874–1885 (1979)
- Beck, Time of Triumph & Sorrow: Spanish Politics during the Reign of Alfonso XII, 1874–1885 (1979)
- John L. Offner, Unwanted War: The Diplomacy of the United States & Spain over Cuba, 1895–1898 (1992)
- H. Ramsden, "The Spanish 'Generation of 1898': The History of a Concept", Bulletin of the John Rylands University Library of Manchester, 1974, Vol. 56 Issue 2, pp 443-462
- Geoffrey Jensen, "Moral Strength Through Material Defeat? The Consequences of 1898 for Spanish Military Culture", War & Society, Oct 1999, Vol. 17 Issue 2, pp 25-39
- James A. Chandler, "Spain and Her Moroccan Protectorate 1898 - 1927," Journal of Contemporary History (1975) 10#2 pp. 301-322 in JSTOR
- Douglas Porch, "Spain's African Nightmare," MHQ: Quarterly Journal of Military History (2006) 18#2 pp 28-37.
- Raymond Carr, Spain, 1808-1975 (2nd ed 1982) pp 564-91
- Richard Herr, An Historical Essay on Modern Spain (1974) pp 162-3
- Herr, An Historical Essay on Modern Spain (1974) pp 154-87
- Stanley G. Payne, The Spanish Revolution (1970) pp 262-76
- Antony Beevor, The Spanish Civil War (1982), pp. 49-50
- Stanley G. Payne (2004). Spanish Civil War, the Soviet Union, and Communism. Yale University Press. p. 106.
- Michael Alpert, "The Clash of Spanish Armies: Contrasting Ways of War in Spain, 1936–1939," War in History (1999) 6#3 pp 331-351.
- Paul Preston, The Spanish Civil War: Reaction, Revolution, and Revenge (2nd ed. 2007) pp 266-300
- Preston, The Spanish Civil War: Reaction, Revolution, and Revenge (2007) pp 301-318
- Jackson, Gabriel. The Spanish Republic and the Civil War, 1931-1939. Princeton University Press. 1967. Princeton. p.539
- Stanley G. Payne, Franco and Hitler: Spain, Germany, and World War II (2009)
- Jean Grugel and Tim Rees, Franco's Spain (1997)
- Giles Tremlett, Spain attracts record levels of immigrants seeking jobs and sun The Guardian, Wednesday 26 July 2006
- Moran Zhang, "Spanish Economy Sinks Further Into Recession, Q4 GDP Down 0.6% Quarterly: Bank of Spain," International Business Times Jan 23, 2013
- Baten, Jörg (2016). A History of the Global Economy. From 1500 to the Present. Cambridge University Press. p. 66. ISBN 9781107507180.
- "Spain's Economy: Rajoy unconfined?" The Economist Feb. 13. 2013
- La nueva emigración española. Lo que sabemos y lo que no Fundación Alternativas Nº: 2013/18
- Constitución política de la Monarquía Española Promulgada en Cádiz a 19 de Marzo de 1812
- Estado y territorio en España, 1820-1930: la formación del paisaje nacional pg 25-26
- Felipe IV: el hombre y el reinado pg 137
- José Manuel Nieto Soria (2007). "Conceptos de España en tiempos de los Reyes Catolicos" (PDF). Norba. Nueva Revista de Historia. Universidad de Extremadura. 19: 105–123. ISSN 0213-375X.
- Peña,Lorenzo. Un puente jurídico entre Iberoamérica y Europa:la Constitución española de 1812. Instituto de Filosofía del CSIC
It is often said that, for the most part, the Cortes of Cádiz created a new state, the Spanish state. This is neither totally true nor totally false. The Spanish monarchy had never stopped being officially a mere juxtaposition of kingdoms and crowns converging on the person of the sovereign. Of course this purely paper vision reflected neither the authentic political reality nor the social culture, and not even fully the juridical, which operated against a background of de facto unity. The fact remains, however, that ... there had never been a proclamation of a Kingdom of Spain, so that difficulties always arose over the legal meaning of the very frequent references to 'Spain' in the legal texts of the 16th, 17th and 18th centuries. The Spanish sovereigns had always refused the advice... in the sense of establishing a United Kingdom of Spain, preferring to see themselves as vertices of converging scattered kingdoms, at least in theory. Even the Napoleonic Bayonne Constitution of 1808 did not proclaim a kingdom of Spain, but a 'Crown of Spain and the Indies'. On the other hand, 'Spain' was merely a geographical name, a simple Romance version of 'Hispania', whereby its use, in principle, should not have to go beyond the Latin designations 'Gallia' and 'Germania'. Except that, of course, there was in fact a political union of most of that Hispania, and under it there were the very similar Romance languages of the spanned territories, in addition to very close historic, cultural and commercial links.
- Nationalisms and regionalisms of Spain
- List of active separatist movements in Europe#Spain
- Sabino Arana
- es:Pompeyo Gener
- Ethnic nationalism
- Terra Lliure
- Spanish Constitution of 1978
- Autonomous communities of Spain
- "Until the end of the dictatorship of Franco, Spain had a very centralized political system. In 1978, a decentralization process started after the creation of the current constitution. The constitution established a complex framework that combines the concept of Spain as a single political nation with the existence of autonomy statutes granted to all seventeen regions. The degree of autonomy for a number of regions is fairly high, these are the ‘historical’ regions. In 1983, all seventeen autonomous communities had adopted a statute. Although differences exist in the level of autonomy between ‘historical’ and ‘ordinary’ regions, all communities have experienced an increase in their level of autonomy. The group of the ‘historical’ communities consists of Catalonia, the Basque Country and Galicia. This group was joined later by Andalusia. The group of ‘ordinary’ regions consists of the rest of the autonomous communities (Aragon, Asturias, Balearic Islands, the Canary Islands, Cantabria, Castilla de La Mancha, Castilla-Leon, Extremadura, Madrid, Murcia, Navarra, La Rioja, and Valencia). The autonomous communities have wide legislative and executive autonomy, with their own parliaments and regional governments. The distribution of powers is different for every community, as laid out in the autonomy statutes. The ‘ordinary’ regions, which always had fewer powers, have slowly caught up with the ‘historical’ regions. In 1992, for example, the regional autonomy pact extended the power of the autonomous communities in areas of education and health, especially for the ‘ordinary’ autonomous communities. Decentralization in Spain can be characterized as asymmetrical devolution." http://www.fnp.nl/downloads/decentrilization_and_economic_growth_per_capita_in_europe.pdf
- Spain ranks 8 according to the research paper http://www.urv.cat/creip/media/upload/arxius/wp/WP2012/DT.15-2012-850-DIAZ%20i%20MEIX.pdf
- "Kingdom of Spain: People". US Department of State. Retrieved 13 August 2008.
- Barton, Simon. A History of Spain (2009) excerpt and text search
- Carr, Raymond. Spain, 1808-1975 (2nd ed 1982), a standard scholarly survey
- Carr, Raymond, ed. Spain: A History (2001) excerpt and text search
- Casey, James. Early Modern Spain: A Social History (1999) excerpt and text search
- Edwards, John. The Spain of the Catholic Monarchs 1474–1520 (2001) excerpt and text search
- Esdaile, Charles J. Spain in the Liberal Age: From Constitution to Civil War, 1808–1939 (2000) excerpt and text search
- Gerli, E. Michael, ed. Medieval Iberia: an encyclopedia. New York 2005. ISBN 0-415-93918-6
- Herr, Richard. An Historical Essay on Modern Spain (1974)
- Kamen, Henry. Spain. A Society of Conflict (3rd ed.) London and New York: Pearson Longman 2005. ISBN
- Lynch, John. The Hispanic World in Crisis and Change: 1598–1700 (1994) excerpt and text search
- O'Callaghan, Joseph F. A History of Medieval Spain (1983) excerpt and text search
- Payne, Stanley G. A History of Spain and Portugal (2 vol 1973) full text online vol 1 before 1700; full text online vol 2 after 1700; a standard scholarly history
- Payne, Stanley G. Spain: A Unique History (University of Wisconsin Press; 2011) 304 pages; history since the Visigothic era.
- Payne, Stanley G. Politics and Society in Twentieth-Century Spain (2012)
- Philips, William D., Jr., and Carla Rahn Phillips. A Concise History of Spain (2010) excerpt and text search
- Pierson, Peter. The History of Spain (2nd ed. 2008) excerpt and text search
- Preston, Paul. The Spanish Civil War: Reaction, Revolution, and Revenge (2nd ed. 2007)
- Shubert, Adrian. A Social History of Modern Spain (1990) excerpt and text search
- Tusell, Javier. Spain: From Dictatorship to Democracy, 1939 to the Present (2007) excerpt and text search
- Boyd, Kelly, ed. (1999). Encyclopedia of Historians and Historical Writing vol 2. Taylor & Francis. pp. 1124–36.
- Feros, Antonio. "Spain and America: All is One”: Historiography of the Conquest and Colonization of the Americas and National Mythology in Spain c. 1892–c. 1992." in Christopher Schmidt-Nowara and John M. Nieto Phillips, eds. Interpreting Spanish Colonialism: Empires, Nations, and Legends (2005).
- Herzberger, David K. Narrating the past: fiction and historiography in postwar Spain (Duke University Press, 1995).
- Herzberger, David K. "Narrating the past: History and the Novel of Memory in Postwar Spain." Publications of the Modern Language Association of America (1991): 34-45. in JSTOR
- Linehan, Peter. History and the historians of medieval Spain (Oxford UP, 1993)
- Viñao, Antonio. "From dictatorship to democracy: history of education in Spain." Paedagogica Historica 50#6 (2014): 830-843..
- History of Spain: Primary Documents
- Spanish History Sources & Documents
- Stanley G. Payne The Seventeenth-Century Decline
- Henry Kamen, "The Decline of Spain: A Historical Myth?", Past and Present, (Explains the complexities of this subject)
- WWW-VL "Spanish History Index
- Carmen Pereira-Muro. Culturas de España. Boston and New York: Houghton Mifflin Company 2003. ISBN |
In mechanical engineering, a gear ratio is a direct measure of the ratio of the rotational speeds of two or more interlocking gears. As a general rule, when dealing with two gears, if the driving gear (the one directly receiving rotational force from the engine, motor, etc.) is bigger than the driven gear, the latter will turn more quickly, and vice versa. We can express this basic concept with the formula Gear ratio = T2/T1, where T1 is the number of teeth on the first gear and T2 is the number of teeth on the second.
Finding the Gear Ratio of a Gear Train
1. Start with a two-gear train. To be able to determine a gear ratio, you must have at least two gears engaged with each other — this is called a "gear train." Usually, the first gear is a "drive gear" attached to the motor shaft and the second is a "driven gear" attached to the load shaft. There may also be any number of gears between these two to transmit power from the drive gear to the driven gear: these are called "idler gears."
- For now, let's look at a gear train with only two gears in it. To be able to find a gear ratio, these gears have to be interacting with each other — in other words, their teeth need to be meshed and one should be turning the other. For example purposes, let's say that you have one small drive gear (gear 1) turning a larger driven gear (gear 2).
2. Count the number of teeth on the drive gear. One simple way to find the gear ratio between two interlocking gears is to compare the number of teeth (the little peg-like protrusions at the edge of the wheel) that they both have. Start by determining how many teeth are on the drive gear. You can do this by counting manually or, sometimes, by checking for this information labeled on the gear itself.
- For example purposes, let's say that the smaller drive gear in our system has 20 teeth.
3. Count the number of teeth on the driven gear. Next, determine how many teeth are on the driven gear exactly as you did before for the drive gear.
- Let's say that, in our example, the driven gear has 30 teeth.
4. Divide one tooth count by the other. Now that you know how many teeth are on each gear, you can find the gear ratio relatively simply. Divide the driven gear's teeth by the drive gear's teeth. Depending on your assignment, you may write your answer as a decimal, a fraction, or in ratio form (i.e., x : y). A short code sketch at the end of this section reproduces this arithmetic.
- In our example, dividing the 30 teeth of the driven gear by the 20 teeth of the drive gear gets us 30/20 = 1.5. We can also write this as 3/2 or 1.5 : 1, etc.
- What this gear ratio means is that the smaller driver gear must turn one and a half times to get the larger driven gear to make one complete turn. This makes sense — since the driven gear is bigger, it will turn more slowly.
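The same arithmetic is easy to script. Below is a minimal Python sketch (the function name gear_ratio and the teeth counts are only illustrative) that reproduces the 20-tooth/30-tooth example:

```python
def gear_ratio(drive_teeth, driven_teeth):
    """Return the gear ratio of two meshed gears: driven teeth / drive teeth."""
    return driven_teeth / drive_teeth

# The worked example above: a 20-tooth drive gear turning a 30-tooth driven gear.
print(gear_ratio(drive_teeth=20, driven_teeth=30))  # 1.5, i.e. 1.5 : 1
```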
More than Two Gears
1. Start with a gear train of more than two gears. As its name suggests, a "gear train" can also be made from a long sequence of gears — not just a single driver gear and a single driven gear. In these cases, the first gear remains the driver gear, the last gear remains the driven gear, and the ones in the middle become "idler gears." These are often used to change the direction of rotation or to connect two gears when direct gearing would make them unwieldy or not readily available.
- Let's say for example purposes that the two-gear train described above is now driven by a small seven-toothed gear. In this case, the 30-toothed gear remains the driven gear and the 20-toothed gear (which was the driver before) is now an idler gear.
2. Divide the teeth numbers of the drive and driven gears. The important thing to remember when dealing with gear trains with more than two gears is that only the driver and driven gears (usually the first and last ones) matter. In other words, the idler gears don't affect the gear ratio of the overall train at all. When you've identified your driver gear and your driven gear, you can find the gear ratio exactly as before.
- In our example, we would find the gear ratio by dividing the thirty teeth of the driven gear by the seven teeth of our new driver. 30/7 = about 4.3 (or 4.3 : 1, etc.) This means that the driver gear has to turn about 4.3 times to get the much larger driven gear to turn once.
3. If desired, find the gear ratios for the intermediate gears. You can find the gear ratios involving the idler gears as well, and you may want to in certain situations. In these cases, start from the drive gear and work toward the load gear. Treat the preceding gear as if it were the drive gear as far as the next gear is concerned. Divide the number of teeth on each "driven" gear by the number of teeth on the "drive" gear for each interlocking set of gears to calculate the intermediate gear ratios (the code sketch at the end of this section works these out as well).
- In our example, the intermediate gear ratios are 20/7 = 2.9 and 30/20 = 1.5. Note that neither of these are equal to the gear ratio for the entire train, 4.3.
- However, note also that (20/7) × (30/20) = 4.3. In general, the intermediate gear ratios of a gear train will multiply together to equal the overall gear ratio.
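For longer trains, the same idea extends directly. The sketch below (again illustrative Python, using the 7-, 20- and 30-tooth example from this section) computes the overall ratio from the first and last gears and checks that the pairwise ratios multiply back to it:

```python
from math import isclose, prod

def train_ratio(teeth):
    """Overall ratio of a gear train, given teeth counts listed from driver to driven gear."""
    return teeth[-1] / teeth[0]

def pair_ratios(teeth):
    """Ratio of each meshed pair along the train (driven teeth / driving teeth)."""
    return [b / a for a, b in zip(teeth, teeth[1:])]

train = [7, 20, 30]              # driver, idler, driven (the example above)
overall = train_ratio(train)     # 30 / 7, about 4.29
steps = pair_ratios(train)       # [20/7 (about 2.86), 30/20 (1.5)]

# The idler drops out: the pairwise ratios multiply back to the overall ratio.
assert isclose(prod(steps), overall)
print(overall, steps)
```

Running it prints roughly 4.29 and [2.857..., 1.5], matching the hand calculation above.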
Making Ratio/Speed Calculations
1. Find the rotational speed of your drive gear. Using the idea of gear ratios, it's easy to figure out how quickly a driven gear is rotating based on the "input" speed of the drive gear. To start, find the rotational speed of your drive gear. In most gear calculations, this is given in rotations per minute (rpm), though other units of rotational speed will also work.
- For example, let's say that in the example gear train above with a seven-toothed driver gear and a 30-toothed driven gear, the drive gear is rotating at 130 rpms. With this information, we'll find the speed of the driven gear in the next few steps.
2. Plug your information into the formula S1 × T1 = S2 × T2. In this formula, S1 refers to the rotational speed of the drive gear, T1 refers to the teeth in the drive gear, and S2 and T2 to the speed and teeth of the driven gear. Fill in the variables until you have only one left undefined.
- Often, in these sorts of problems, you'll be solving for S2, though it's perfectly possible to solve for any of the variables. In our example, plugging in the information we have, we get this:
- 130 rpms × 7 = S2 × 30
3. Solve. Finding your remaining variable is a matter of basic algebra. Just simplify the rest of the equation and isolate the variable on one side of the equals sign and you will have your answer. Don't forget to label it with the correct units; you can lose points for this in schoolwork. (The short code sketch after this list automates the rearrangement.)
- In our example, we can solve like this:
- 130 rpms × 7 = S2 × 30
- 910 = S2 × 30
- 910/30 = S2
- 30.33 rpms = S2
- In other words, if the drive gear spins at 130 rpms, the driven gear will spin at 30.33 rpms. This makes sense — since the driven gear is much bigger, it will spin much slower.
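The speed formula can be wrapped up the same way. Below is a minimal Python sketch (the function and argument names are only illustrative) that solves S1 × T1 = S2 × T2 for the driven gear's speed:

```python
def driven_speed(drive_rpm, drive_teeth, driven_teeth):
    """Solve S1 * T1 = S2 * T2 for S2, the rotational speed of the driven gear."""
    return drive_rpm * drive_teeth / driven_teeth

# The worked example above: a 7-tooth driver at 130 rpm turning a 30-tooth driven gear.
print(driven_speed(drive_rpm=130, drive_teeth=7, driven_teeth=30))  # about 30.33 rpm
```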
If a 38-tooth gear running at 360 rpm is driving another gear at 144 rpm, what is the number of teeth on the driven gear?
- T1 × S1 = S2 × T2, where T1 = number of teeth on the driver gear, S1 = angular speed of the driver gear, T2 = number of teeth on the driven gear and S2 = angular speed of the driven gear. So 38 teeth × 360 rpm = T2 × 144 rpm, giving T2 = 95 teeth on the driven gear.
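The same equation rearranges just as easily when the unknown is a tooth count; a quick Python check of the answer above (purely illustrative):

```python
# Solve 38 teeth * 360 rpm = T2 * 144 rpm for the unknown tooth count T2.
t2 = 38 * 360 / 144
print(t2)  # 95.0 teeth on the driven gear
```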
- To see the principles of gear ratios in action, take a ride on your bike! Notice that it is easiest to go up hills when you have a small gear in front and a big one in the back. While the leverage from your pedals makes the smaller gear easier to turn, it takes more pedal rotations to turn the rear wheel than with the gear settings you'd use on flat sections, so you go slower.
- The motor must be sized to provide what the load needs once the gear ratio is taken into consideration. In an ideal gear train the transmitted power is unchanged: the gear ratio trades torque for speed, so it is torque, not power, that gets scaled up or down between motor and load. A short sketch of this sizing arithmetic follows these tips.
- A geared-down system (where load RPM is less than motor RPM) suits a motor that delivers its optimal power at higher rotational speeds; a geared-up system (where load RPM is greater than motor RPM) suits a motor that delivers its optimal power at lower rotational speeds.
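For the motor-sizing tips above, here is a minimal Python sketch of the arithmetic, assuming an ideal, lossless gear train (the function name and the example load figures are purely illustrative):

```python
def motor_requirements(load_torque_nm, load_rpm, gear_ratio):
    """Speed and torque the motor must supply through an ideal (lossless) gear train.

    gear_ratio is driven teeth / drive teeth, so for a reduction (ratio > 1) the
    motor spins gear_ratio times faster than the load and needs only
    1 / gear_ratio of the load torque; the transmitted power is unchanged.
    """
    motor_rpm = load_rpm * gear_ratio
    motor_torque_nm = load_torque_nm / gear_ratio
    return motor_torque_nm, motor_rpm

# Illustrative numbers only: a load needing 12 N*m at 100 rpm behind a 4.3:1 reduction.
torque, rpm = motor_requirements(load_torque_nm=12, load_rpm=100, gear_ratio=4.3)
print(torque, rpm)  # about 2.79 N*m at 430 rpm
```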
In this geometry worksheet, 10th graders are given points on a coordinate plane and must identify the x and y values of each. There are 6 graphing questions.
Connecting Algebra and Geometry Through Coordinates
This unit on connecting algebra and geometry covers a number of topics, including worksheets on the distance formula, finding the perimeter and area of polygons, the slope formula, parallel and perpendicular lines, parallelograms,...
9th - 10th Math CCSS: Designed
Circles in the Coordinate Plane
Make the connection between the distance formula and the equation of a circle. The teacher presents a lesson on how to use the distance formula to derive the equation of the circle (the derivation is sketched just after this listing). Pupils transform circles on the coordinate plane and...
9th - 11th Math CCSS: Adaptable
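For reference, the derivation this lesson builds on is short enough to state here (a worked equation added for context, not part of the original listing): a point (x, y) lies on the circle of radius r centered at (h, k) exactly when its distance from the center equals r, so the distance formula gives

\[ \sqrt{(x-h)^2 + (y-k)^2} = r \;\Longrightarrow\; (x-h)^2 + (y-k)^2 = r^2. \]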
Transformations in the Coordinate Plane
Your learners connect the new concepts of transformations in the coordinate plane to their previous knowledge using the solid vocabulary development in this unit. Like a foreign language, mathematics has its own set of vocabulary terms...
9th - 12th Math CCSS: Designed
Measure Angles in Standard Position Using the Coordinate Plane
The mechanics of measuring angles is a skill that is often taken for granted in an upper-level math class. This clear and detailed video presentation bridges the gap between identifying angles in geometry, and using their unit-circle...
5 mins 9th - 12th Math CCSS: Designed
Indigenous peoples, also known in some regions as First peoples, First Nations, Aboriginal peoples, Native peoples, or autochthonous peoples, are ethnic groups who are the original or earliest known inhabitants of an area, in contrast to groups that have settled, occupied or colonized the area more recently. Groups are usually described as indigenous when they maintain traditions or other aspects of an early culture that is associated with a given region. Not all indigenous peoples share this characteristic, as many have adopted substantial elements of a colonizing culture, such as dress, religion or language. Indigenous peoples may be settled in a given region (sedentary) or exhibit a nomadic lifestyle across a large territory, but they are generally historically associated with a specific territory on which they depend. Indigenous societies are found in every inhabited climate zone and continent of the world except Antarctica.
Since indigenous peoples are often faced with threats to their sovereignty, economic well-being and access to the resources on which their cultures depend, political rights have been set forth in international law by international organizations such as the United Nations, the International Labour Organization and the World Bank. In 2007, the United Nations issued the Declaration on the Rights of Indigenous Peoples (UNDRIP) to guide member states' national policies on the collective rights of indigenous peoples, such as culture, identity, language and access to employment, health, education and natural resources. Estimates put the total population of indigenous peoples at between 220 million and 350 million.
International Day of the World's Indigenous Peoples is celebrated on 9 August each year.
The term 'indigenous peoples' refers to culturally distinct groups affected by colonization. The term started being used in the 1970s as a way of linking experiences, issues and struggles of groups of colonized people across international borders. At this time 'Indigenous people(s)' also began to be used to describe a legal category in indigenous law created in international and national legislation. The use of the 's' in 'peoples' recognizes that there are real differences between different indigenous peoples.
James Anaya, former Special Rapporteur on the Rights of Indigenous Peoples, has defined indigenous peoples as "living descendants of pre-invasion inhabitants of lands now dominated by others. They are culturally distinct groups that find themselves engulfed by other settler societies born of forces of empire and conquest".
Indigenous is derived from the Latin word indigena, which is based on the root gen-, "to be born", with the Old Latin prefix indu-, "in". Notably, the origins of the term "indigenous" are not related in any way to the origins of the term "Indian", which until recently was commonly applied to indigenous peoples of the Americas. Any given people, ethnic group or community may be described as "indigenous" in reference to some particular region or location that they see as their traditional indigenous land claim. Other terms for indigenous populations in use are 'First Peoples' or 'Native Peoples', 'First Nations' or 'People of the Land', 'Aboriginals', or 'Fourth World Peoples'. The words original, autochthonous or first (as in Canada's First Nations) are also used.
The Merriam-Webster Dictionary defines a people as "a body of persons that are united by a common culture, tradition, or sense of kinship, which typically have common language, institutions, and beliefs, and often constitute a politically organized group".
Throughout history, different states have used different terms to designate the groups within their boundaries that are recognized as indigenous peoples under international or national legislation. Indigenous peoples also include those who are indigenous based on their descent from populations that inhabited the country when non-indigenous religions and cultures arrived, or at the establishment of present state boundaries, and who retain some or all of their own social, economic, cultural and political institutions, but who may have been displaced from their traditional domains or may have resettled outside their ancestral domains.
The status of indigenous groups in the subjugated relationship can be characterized in most instances as effectively marginalized or isolated in comparison to majority groups or the nation-state as a whole. Their ability to influence and participate in the external policies that may exercise jurisdiction over their traditional lands and practices is very frequently limited. This situation can persist even in the case where the indigenous population outnumbers that of the other inhabitants of the region or state; the defining notion here is one of separation from decision and regulatory processes that have some, at least titular, influence over aspects of their community and land rights.
In a ground-breaking 1997 decision involving the Ainu people of Japan, the Japanese courts recognized their claim in law, stating that "If one minority group lived in an area prior to being ruled over by a majority group and preserved its distinct ethnic culture even after being ruled over by the majority group, while another came to live in an area ruled over by a majority after consenting to the majority rule, it must be recognized that it is only natural that the distinct ethnic culture of the former group requires greater consideration."
In Russia, the definition of "indigenous peoples" is contested, as it refers largely to population size (fewer than 50,000 people) and neglects self-identification, descent from the populations that inhabited the country or region at the time of invasion, colonization or the establishment of state frontiers, and distinctive social, economic and cultural institutions. Thus indigenous peoples of Russia such as the Sakha, Komi, Karelians and others are not considered as such due to the size of their populations (more than 50,000 people), and consequently they "are not the subjects of the specific legal protections."
The presence of external laws, claims and cultural mores either potentially or actually act to variously constrain the practices and observances of an indigenous society. These constraints can be observed even when the indigenous society is regulated largely by its own tradition and custom. They may be purposefully imposed, or arise as unintended consequence of trans-cultural interaction. They may have a measurable effect, even where countered by other external influences and actions deemed beneficial or that promote indigenous rights and interests.
The first meeting of the United Nations Working Group on Indigenous Populations was on 9 August 1982 and this date is now celebrated as the International Day of the World's Indigenous Peoples.
In 1982 the United Nations Working Group on Indigenous Populations (WGIP) accepted as a preliminary definition a formulation put forward by Mr. José R. Martínez-Cobo, Special Rapporteur on Discrimination against Indigenous Populations. This definition has some limitations, because the definition applies mainly to pre-colonial populations, and would likely exclude other isolated or marginal societies.
Indigenous communities, peoples, and nations are those that, having a historical continuity with pre-invasion and pre-colonial societies that developed on their territories, consider themselves distinct from other sectors of the societies now prevailing in those territories, or parts of them. They form at present non-dominant sectors of society and are determined to preserve, develop, and transmit to future generations their ancestral territories, and their ethnic identity, as the basis of their continued existence as peoples, in accordance with their own cultural patterns, social institutions and legal systems.
The United Nations Permanent Forum on Indigenous Issues (UNPFII), a high-level advisory body to the United Nations Economic and Social Council, was established on 28 July 2000 with the mandate to deal with indigenous issues related to economic and social development, culture, the environment, education, health and human rights.
The primary impetus for considering indigenous identity comes from post-colonial movements and from consideration of the historical impact of European imperialism on populations. The first paragraph of the introduction to a report published in 2009 by the Secretariat of the Permanent Forum on Indigenous Issues states:
For centuries, since the time of their colonization, conquest or occupation, indigenous peoples have documented histories of resistance, interface or cooperation with states, thus demonstrating their conviction and determination to survive with their distinct sovereign identities. Indeed, indigenous peoples were often recognized as sovereign peoples by states, as witnessed by the hundreds of treaties concluded between indigenous peoples and the governments of the United States, Canada, New Zealand and others.
In May 2016, the Fifteenth Session of the United Nations Permanent Forum on Indigenous Issues (UNPFII) affirmed that indigenous people (also termed aboriginal people, native people, or autochthonous people) are distinctive groups protected in international or national legislation as having a set of specific rights based on their linguistic and historical ties to a particular territory, prior to later settlement, development, and or occupation of a region. The session affirms that, since indigenous peoples are vulnerable to exploitation, marginalization, oppression, forced assimilation, and genocide by nation states formed from colonizing populations or by different, politically dominant ethnic groups, individuals and communities maintaining ways of life indigenous to their regions are entitled to special protection.
Greek sources of the Classical period acknowledge the prior existence of indigenous people(s), whom they referred to as "Pelasgians". These peoples inhabited lands surrounding the Aegean Sea before the subsequent migrations of the Hellenic ancestors claimed by these authors. The disposition and precise identity of this former group is elusive, and sources such as Homer, Hesiod and Herodotus give varying, partially mythological accounts. However, it is clear that cultures existed whose indigenous characteristics were distinguished by the subsequent Hellenic cultures (and distinct from non-Greek speaking "foreigners", termed "barbarians" by the historical Greeks).
Greco-Roman society flourished between 330 BCE and 640 CE and commanded successive waves of conquest across the Mediterranean world and beyond. But because the existing populations of other parts of Europe in classical antiquity had more in common, culturally speaking, with the Greco-Roman world, expansion across the European frontier was less contentious with respect to indigenous issues.
However, when it came to expansion in other parts of the world, namely Asia, Africa, and the Middle East, then totally new cultural dynamics had entered into the equation, and this expansion became a forerunner of what was to take the Americas, Southeast Asia, and the Pacific by storm in more recent times. Thus, the idea that expansionist societies may encounter peoples who possess cultural customs and racial appearances strikingly different from those of the colonizing power was not new to the Renaissance or the Enlightenment.
European expansion and colonialism
The rapid and extensive spread of the various European powers from the early 15th century onward had a profound impact upon many of the indigenous cultures with whom they came into contact. The exploratory and colonial ventures in the Americas, Africa, Asia and the Pacific often resulted in territorial and cultural conflicts, and the intentional or unintentional displacement and devastation of the indigenous populations.
Encounters between explorers and indigenous populations in the rest of the world often introduced new infectious diseases, which sometimes caused local epidemics of extraordinary virulence. For example, smallpox, measles, malaria, yellow fever, and other diseases were unknown in pre-Columbian America and Australia.
Population and distribution
Indigenous societies range from those who have been significantly exposed to the colonizing or expansionary activities of other societies (such as the Maya peoples of Mexico and Central America) through to those who as yet remain in comparative isolation from any external influence (such as the Sentinelese and Jarawa of the Andaman Islands).
Precise estimates for the total population of the world's indigenous peoples are very difficult to compile, given the difficulties of identification and the variances and inadequacies of available census data. The United Nations estimates that there are over 370 million indigenous people living in more than 70 countries worldwide, equivalent to just under 6% of the total world population and comprising at least 5,000 distinct peoples.
Contemporary distinct indigenous groups survive in populations ranging from only a few dozen to hundreds of thousands and more. Many indigenous populations have undergone a dramatic decline and even extinction, and remain threatened in many parts of the world. Some have also been assimilated by other populations or have undergone many other changes. In other cases, indigenous populations are undergoing a recovery or expansion in numbers.
Certain indigenous societies survive even though they may no longer inhabit their "traditional" lands, owing to migration, relocation, forced resettlement or having been supplanted by other cultural groups. In many other respects, the transformation of culture of indigenous groups is ongoing, and includes permanent loss of language, loss of lands, encroachment on traditional territories, and disruption in traditional ways of life due to contamination and pollution of waters and lands.
Environmental and economic benefits of having indigenous peoples tend land
A WRI report notes that “tenure-secure” indigenous lands generate billions and sometimes trillions of dollars' worth of benefits in the form of carbon sequestration, reduced pollution, clean water and more. It says that tenure-secure indigenous lands have low deforestation rates, help to reduce greenhouse-gas emissions, control erosion and flooding by anchoring soil, and provide a suite of other local, regional and global ecosystem services. However, many of these communities find themselves on the front lines of the deforestation crisis, with their lives and livelihoods threatened.
Indigenous peoples by region
Indigenous populations are distributed in regions throughout the globe. The numbers, condition and experience of indigenous groups may vary widely within a given region. A comprehensive survey is further complicated by sometimes contentious membership and identification.
In the post-colonial period, the concept of specific indigenous peoples within the African continent has gained wider acceptance, although not without controversy. The highly diverse and numerous ethnic groups that comprise most modern, independent African states contain within them various peoples whose situation, cultures and pastoralist or hunter-gatherer lifestyles are generally marginalized and set apart from the dominant political and economic structures of the nation. Since the late 20th century these peoples have increasingly sought recognition of their rights as distinct indigenous peoples, in both national and international contexts.
Though the vast majority of African peoples are indigenous in the sense that they originate from that continent, in practice, identity as an indigenous people per the modern definition is more restrictive, and certainly not every African ethnic group claims identification under these terms. Groups and communities who do claim this recognition are those who, by a variety of historical and environmental circumstances, have been placed outside of the dominant state systems, and whose traditional practices and land claims often come into conflict with the objectives and policies implemented by governments, companies and surrounding dominant societies.
Indigenous peoples of the American continent are broadly recognized as being those groups and their descendants who inhabited the region before the arrival of European colonizers and settlers (i.e., Pre-Columbian). Indigenous peoples who maintain, or seek to maintain, traditional ways of life are found from the high Arctic north to the southern extremities of Tierra del Fuego.
The impact of European colonization of the Americas on the indigenous communities has been in general quite severe, with many authorities estimating ranges of significant population decline primarily due to disease but also violence. The extent of this impact is the subject of much continuing debate. Several peoples shortly thereafter became extinct, or very nearly so.
All nations in North and South America have populations of indigenous peoples within their borders. In some countries (particularly in Latin America), indigenous peoples form a sizable component of the overall national population: in Bolivia they account for an estimated 56–70% of the national population, and for at least half of the population in Guatemala and in the Andean and Amazonian nations of Peru. In English, indigenous peoples are collectively referred to by different names that vary by region and include such ethnonyms as Native Americans, Amerindians, and American Indians. In Spanish- or Portuguese-speaking countries one finds the use of terms such as índios, pueblos indígenas, amerindios, povos nativos, povos indígenas, and, in Peru, Comunidades Nativas (Native Communities), particularly among Amazonian societies like the Urarina and Matsés. In Chile there are indigenous peoples like the Mapuche in the Center-South and the Aymara in the North; the Rapa Nui, indigenous to Easter Island, are a Polynesian people.
In Brazil, the Portuguese term índio is used by most of the population, the media, the indigenous peoples themselves and even the government (FUNAI is an acronym for the Fundação Nacional do Índio), although its Hispanic equivalent indio is widely considered not politically correct and is falling into disuse.
Indigenous peoples in Canada comprise the First Nations, Inuit and Métis. The descriptors "Indian" and "Eskimo" have fallen into disuse in Canada. According to the 2016 Census, there are around 1,670,000 Aboriginal people in Canada. There are currently over 600 recognized First Nations governments or bands spread across Canada, with distinctive Aboriginal cultures, languages, art, and music. National Aboriginal Day recognizes the cultures and contributions of Aboriginal peoples to the history of Canada.
The Inuit have achieved a degree of administrative autonomy through self-governing regions such as Nunavik (in Northern Quebec) and Nunatsiavut (in Northern Labrador), and through the creation in 1999 of the territory of Nunavut, which was until then part of the Northwest Territories.
The autonomous Danish territory of Greenland is also home to a majority population of Inuit (about 85%) who settled the area in the 13th century, displacing the indigenous Dorset people and Greenlandic Norse.
In the United States, the combined populations of Native Americans, Inuit and other indigenous designations totaled 2,786,652 (constituting about 1.5% of 2003 U.S. census figures). Some 563 tribes are recognized at the federal level, and a number of others are recognized at the state level.
In Mexico, approximately 6,000,000 (constituting about 6.7% of 2005 Mexican census figures) identify as Indígenas (Spanish for natives or indigenous peoples). In the southern states of Chiapas, Yucatán and Oaxaca they constitute 26.1%, 33.5% and 35.3%, respectively, of the population. These states have seen several conflicts and episodes of civil strife in which the situation and participation of indigenous societies were notable factors (see for example the EZLN).
Amerindians make up 0.4% of Brazil's population, or about 700,000 people. Indigenous peoples are found in the entire territory of Brazil, although the majority of them live in Indian reservations in the North and Center-Western part of the country. On 18 January 2007, FUNAI reported that it had confirmed the presence of 67 different uncontacted peoples in Brazil, up from 40 in 2005. With this addition Brazil has now overtaken the island of New Guinea as the country having the largest number of uncontacted peoples.
The vast regions of Asia contain the majority of the world's present-day indigenous populations, about 70% according to IWGIA figures.
The Yazidis are indigenous to the Sinjar mountain range in northern Iraq. The Yazidis are ethnically Kurd but are a religious minority of the Kurdish people. The Kurds, as a whole, are one of the indigenous peoples of Mesopotamia (south-eastern Turkey, north-eastern Syria, northern Iraq, north-western Iran and parts of Armenia).
The Assyrians are indigenous to Mesopotamia. They claim descent from the ancient Neo-Assyrian Empire, and lived in what was Assyria, their original homeland, and still speak dialects of Aramaic, the official language of the Assyrian Empire.
The most substantial populations of indigenous people are in India, which constitutionally recognizes a range of "Scheduled Tribes" within its borders. These various peoples number about 200 million, although the terms "indigenous peoples" and "tribal peoples" are not interchangeable.
There are also indigenous people residing in the hills of Northern, North-eastern and Southern India like the Tamils (of Tamil Nadu), Shina, Kalasha, Khowar, Burusho, Balti, Wakhi, Domaki, Nuristani, Kohistani, Bakkarwal, Meenas, Ladakhi, Lepcha, Bhutia (of Sikkim), Naga (of Nagaland), indigenous Assamese communities, Mizo (of Mizoram), Tripuri (Tripura), Adi and Nyishi (Arunachal Pradesh), Kodava (of Kodagu), Toda, Kurumba, Kota (of the Nilgiris), Irulas and others.
India's Andaman and Nicobar Islands in the Indian Ocean are also home to several indigenous groups such as the Andamanese of Strait Island, the Jarawas of Middle Andaman and South Andaman Islands, the Onge of Little Andaman Island and the uncontacted Sentinelese of North Sentinel Island. They are registered and protected by the Indian government.
In Sri Lanka, the indigenous Veddah people constitute a small minority of the population today.
The Russians invaded Siberia and conquered the indigenous people in the 17th–18th centuries.
The Nivkh are an ethnic group indigenous to Sakhalin. Only a few speakers of the Nivkh language remain, and their fishing culture has been endangered by the development of Sakhalin's oil fields since the 1990s.
The Russian government recognizes only 40 ethnic groups as indigenous peoples, even though some 30 other groups could be counted as such. The reason for non-recognition is population size or relatively recent arrival in their current regions: to be recognized, an indigenous people in Russia must number fewer than 50,000.
Ainu people are an ethnic group indigenous to Hokkaidō, the Kuril Islands, and much of Sakhalin. As Japanese settlement expanded, the Ainu were pushed northward and fought against the Japanese in Shakushain's Revolt and Menashi-Kunashir Rebellion, until by the Meiji period they were confined by the government to a small area in Hokkaidō, in a manner similar to the placing of Native Americans on reservations.
The Tibetans are indigenous to Tibet.
The languages of Taiwanese aborigines have significance in historical linguistics, since in all likelihood Taiwan was the place of origin of the entire Austronesian language family, which spread across Oceania.
The Malay Singaporeans are the indigenous people of Singapore, inhabiting it since the Austronesian migration. They had established the Kingdom of Singapura back in the 13th century. The name Singapore itself comes from the Malay word Singapura (Singa=Lion, Pura=City) which means the Lion City.
The Cham are the indigenous people of the former state of Champa which was conquered by Vietnam in the Cham–Vietnamese wars during Nam tiến. The Cham in Vietnam are only recognized as a minority, and not as an indigenous people by the Vietnamese government despite being indigenous to the region.
In Indonesia, there are 50 to 70 million people who classify as indigenous peoples. However, the Indonesian government does not recognize the existence of indigenous peoples, classifying every Native Indonesian ethnic group as "indigenous" despite the clear cultural distinctions of certain groups. This problem is shared by many other countries in the ASEAN region.
In the Philippines, there are 135 ethno-linguistic groups, the majority of which are considered indigenous peoples. The indigenous peoples of the Cordillera Administrative Region and Cagayan Valley are the Igorot. The indigenous peoples of Mindanao are the Lumad peoples and the Moro (Tausug, Maguindanao, Maranao and others), who also live in the Sulu archipelago. There are also other groups of indigenous peoples in Palawan, Mindoro, the Visayas, and the rest of central and southern Luzon. The country has one of the largest indigenous populations in the world.
In Myanmar indigenous peoples include the Shan, the Karen, the Rakhine, the Karenni, the Chin, the Kachin and the Mon. However, there are more ethnic groups that are considered indigenous, for example, the Akha, the Lisu, the Lahu or the Mru, among others.
In Europe, the majority of ethnic groups are indigenous to the region in the sense of having occupied it for several centuries or millennia. Present-day indigenous populations as recognized by the UN definition, however, are relatively few, and mainly confined to its north and far east.
Notable indigenous minority populations in Europe which are recognized by the UN include the Finno-Ugric Nenets, Samoyed, and Komi peoples of northern Russia; Circassians of southern Russia and the North Caucasus; Crimean Tatars of Crimea in Ukraine; and Sámi peoples of northern Norway, Sweden, and Finland and northwestern Russia (in an area also referred to as Sápmi).
In Australia the indigenous populations are the Aboriginal Australian peoples (comprising many different nations and tribes) and the Torres Strait Islander peoples (also with sub-groups). These groups are often spoken of together as Indigenous Australians.
Polynesian, Melanesian and Micronesian peoples originally populated many of the present-day Pacific Island countries in the Oceania region over the course of thousands of years. European, American, Chilean and Japanese colonial expansion in the Pacific brought many of these areas under non-indigenous administration, mainly during the 19th century. During the 20th century several of these former colonies gained independence and nation-states formed under local control. However, various peoples have put forward claims for indigenous recognition where their islands are still under external administration; examples include the Chamorros of Guam and the Northern Marianas, and the Marshallese of the Marshall Islands. Some islands remain under administration from Paris, Washington, London or Wellington.
In most parts of Oceania, indigenous peoples outnumber the descendants of colonists. Exceptions include Australia, New Zealand and Hawaii. According to the 2013 census, New Zealand Māori make up 14.9% of the population of New Zealand, with less than half (46.5%) of all Māori residents identifying solely as Māori. The Māori are indigenous to Polynesia and settled New Zealand relatively recently, with migrations thought to have occurred in the 13th century CE. In New Zealand pre-contact Māori groups did not necessarily see themselves as a single people, thus grouping into tribal (iwi) arrangements has become a more formal arrangement in more recent times. Many Māori national leaders signed a treaty with the British, the Treaty of Waitangi (1840), seen in some circles as forming the modern geo-political entity that is New Zealand.
A majority of the Papua New Guinea (PNG) population is indigenous, with more than 700 different nationalities recognized in a total population of 8 million. The country's constitution and key statutes identify traditional or custom-based practices and land tenure, and explicitly set out to promote the viability of these traditional societies within the modern state. However, conflicts and disputes concerning land use and resource rights continue between indigenous groups, the government, and corporate entities.
Indigenous rights and other issues
Indigenous peoples confront a diverse range of concerns associated with their status and interaction with other cultural groups, as well as changes in their inhabited environment. Some challenges are specific to particular groups; however, other challenges are commonly experienced. These issues include cultural and linguistic preservation, land rights, ownership and exploitation of natural resources, political determination and autonomy, environmental degradation and incursion, poverty, health, and discrimination.
The interaction between indigenous and non-indigenous societies throughout history has been complex, ranging from outright conflict and subjugation to some degree of mutual benefit and cultural transfer. A particular aspect of anthropological study involves investigation into the ramifications of what is termed first contact, the study of what occurs when two cultures first encounter one another. The situation can be further confused when there is a complicated or contested history of migration and population of a given region, which can give rise to disputes about primacy and ownership of the land and resources.
Wherever indigenous cultural identity is asserted, common societal issues and concerns arise from the indigenous status. These concerns are often not unique to indigenous groups. Despite the diversity of indigenous peoples, it may be noted that they share common problems and issues in dealing with the prevailing, or invading, society. They are generally concerned that the cultures of indigenous peoples are being lost and that indigenous peoples suffer both discrimination and pressure to assimilate into their surrounding societies. This is borne out by the fact that the lands and cultures of nearly all of the peoples listed at the end of this article are under threat. Notable exceptions are the Sakha and Komi peoples (two northern indigenous peoples of Russia), who now control their own autonomous republics within the Russian state, and the Canadian Inuit, who form a majority of the population of the territory of Nunavut (created in 1999). Despite this control of their territories, many Sakha people have lost their lands as a result of the Russian Homestead Act, which allows any Russian citizen to own any land in the Far Eastern region of Russia. In Australia, a landmark case, Mabo v Queensland (No 2), saw the High Court of Australia reject the idea of terra nullius. This rejection ended up recognizing that there was a pre-existing system of law practised by the Meriam people.
A 2009 United Nations publication says "Although indigenous peoples are often portrayed as a hindrance to development, their cultures and traditional knowledge are also increasingly seen as assets. It is argued that it is important for the human species as a whole to preserve as wide a range of cultural diversity as possible, and that the protection of indigenous cultures is vital to this enterprise."
Human rights violations
The Bangladesh Government has stated that there are "no indigenous peoples in Bangladesh." This has angered the indigenous peoples of Chittagong Hill Tracts, Bangladesh, collectively known as the Jumma. Experts have protested against this move of the Bangladesh Government and have questioned the Government's definition of the term "indigenous peoples." This move by the Bangladesh Government is seen by the indigenous peoples of Bangladesh as another step by the Government to further erode their already limited rights.
Hindus and Chams have both experienced religious and ethnic persecution and restrictions on their faith under the current Vietnamese government, with the Vietnamese state confiscating Cham property and forbidding Cham from observing their religious beliefs. Hindu temples were turned into tourist sites against the wishes of the Cham Hindus. In 2010 and 2013 several incidents occurred in Thành Tín and Phươc Nhơn villages where Cham were murdered by Vietnamese. In 2012, Vietnamese police in Chau Giang village stormed into a Cham Mosque, stole the electric generator, and also raped Cham girls. Cham in the Mekong Delta have also been economically marginalised, with ethnic Vietnamese settling on land previously owned by Cham people with state support.
The Indonesian government has outright denied the existence of indigenous peoples within the country's borders. In 2012, Indonesia stated that ‘The Government of Indonesia supports the promotion and protection of indigenous people worldwide ... Indonesia, however, does not recognize the application of the indigenous peoples concept ... in the country’. This stance, along with the brutal treatment of the country's Papuan people (a conservative estimate places violent deaths at 100,000 in West New Guinea since the Indonesian occupation began in 1963; see Papua conflict), has led Survival International to condemn Indonesia's treatment of its indigenous peoples as among the worst in the world.
The Vietnamese viewed and dealt with the indigenous Montagnards of the Central Highlands as "savages", which provoked Montagnard uprisings against the Vietnamese. The Vietnamese were originally centered around the Red River Delta but engaged in conquest and seized new lands such as Champa, the Mekong Delta (from Cambodia) and the Central Highlands during Nam tiến. While the Vietnamese were strongly influenced by Chinese culture and civilization and were Sinicized, and the Cambodians and Laotians were Indianized, the Montagnards in the Central Highlands maintained their own indigenous culture without adopting external cultures and were the true indigenous people of the region. To hinder encroachment on the Central Highlands by Vietnamese nationalists, the term Pays Montagnard du Sud-Indochinois (PMSI) emerged for the Central Highlands, along with the indigenous being addressed by the name Montagnard. The enormous influx of Vietnamese Kinh colonists into the Central Highlands has significantly altered the demographics of the region. The anti-ethnic-minority discriminatory policies of the Vietnamese, environmental degradation, deprivation of lands from the indigenous people, and settlement of indigenous lands by an overwhelming number of Vietnamese settlers led to massive protests and demonstrations by the Central Highlands' indigenous ethnic minorities against the Vietnamese in January–February 2001. This event dealt a serious blow to the claim often published by the Vietnamese government that in Vietnam “There has been no ethnic confrontation, no religious war, no ethnic conflict. And no elimination of one culture by another.”
In December 1993, the United Nations General Assembly proclaimed the International Decade of the World's Indigenous People, and requested UN specialized agencies to consider with governments and indigenous people how they can contribute to the success of the Decade of Indigenous People, commencing in December 1994. As a consequence, the World Health Organization, at its Forty-seventh World Health Assembly, established a core advisory group of indigenous representatives with special knowledge of the health needs and resources of their communities, thus beginning a long-term commitment to the issue of the health of indigenous peoples.
The WHO notes that "Statistical data on the health status of indigenous peoples is scarce. This is especially notable for indigenous peoples in Africa, Asia and eastern Europe," but snapshots from various countries (where such statistics are available) show that indigenous people are in worse health than the general population, in advanced and developing countries alike: higher incidence of diabetes in some regions of Australia; higher prevalence of poor sanitation and lack of safe water among Twa households in Rwanda; a greater prevalence of childbirths without prenatal care among ethnic minorities in Vietnam; suicide rates among Inuit youth in Canada are eleven times higher than the national average; infant mortality rates are higher for indigenous peoples everywhere.
The first UN publication on the State of the World's Indigenous Peoples revealed alarming statistics about indigenous peoples' health. Health disparities between indigenous and non-indigenous populations are evident in both developed and developing countries. Native Americans in the United States are 600 times more likely to acquire tuberculosis and 62% more likely to commit suicide than the non-indigenous American population. Tuberculosis, obesity, and type 2 diabetes are major health concerns for the indigenous in developed countries. Globally, health disparities touch upon nearly every health issue, including HIV/AIDS, cancer, malaria, cardiovascular disease, malnutrition, parasitic infections, and respiratory diseases, affecting indigenous peoples at much higher rates. Many causes of indigenous children's mortality could be prevented. Poorer health conditions amongst indigenous peoples result from longstanding complex issues, such as extreme poverty, but also the intentional marginalization and dispossession of indigenous peoples by dominant, non-indigenous populations.
Racism and discrimination
Indigenous peoples have frequently been subjected to various forms of racism and discrimination. They have been labelled primitive, savage or uncivilized. Such terms were common during the heyday of European colonial expansion, and they remain in use in certain societies in modern times.
During the 17th century, Europeans commonly labeled indigenous peoples as "uncivilized". Some philosophers, such as Thomas Hobbes (1588-1679), considered indigenous people to be merely "savages". Others (especially literary figures in the 18th century) popularised the concept of "noble savages". Those who were close to the Hobbesian view tended to believe themselves to have a duty to "civilize" and "modernize" indigenous peoples. Although anthropologists, especially from Europe, once applied these terms to all tribal cultures, the practice has fallen into disfavor as demeaning and is, according to many anthropologists, not only inaccurate but dangerous.
Survival International runs a campaign to stamp out media portrayals of indigenous peoples as "primitive" or "savages". Friends of Peoples Close to Nature not only considers that indigenous cultures should be respected as not being inferior, but also sees indigenous ways of life as offering a lesson in sustainability and as part of the struggle within the "corrupted" western world, from which it sees the threat as stemming.
After World War I (1914-1918) many Europeans came to doubt the morality of the means used to "civilize" peoples. At the same time, the anti-colonial movement, and advocates of indigenous peoples, argued that words such as "civilized" and "savage" were products and tools of colonialism, and argued that colonialism itself was savagely destructive. In the mid-20th century European attitudes began to shift to the view that indigenous and tribal peoples should have the right to decide for themselves what should happen to their ancient cultures and ancestral lands.
The cultures of indigenous peoples can become a rich source of material for New Age advocates seeking ancient traditional truths, spiritualities and practices to appropriate into their worldviews.
At an international level, indigenous peoples have received increased recognition of their environmental rights since 2002, but few countries respect these rights in reality. The UN Declaration on the Rights of Indigenous Peoples, adopted by the General Assembly in 2007, established indigenous peoples' right to self-determination, implying several rights regarding natural resource management. In countries where these rights are recognized, land titling and demarcation procedures are often delayed, or lands are leased out by the state as concessions for extractive industries without consulting indigenous communities.
Many in the United States federal government are in favor of exploiting oil reserves in the Arctic National Wildlife Refuge, where the Gwich'in indigenous people rely on herds of caribou. Oil drilling could destroy thousands of years of culture for the Gwich'in. On the other hand, the Inupiat Eskimo, another indigenous community in the region, favors oil drilling because they could benefit economically.
The introduction of industrial agricultural technologies such as fertilizers, pesticides and large plantation schemes has destroyed ecosystems that indigenous communities formerly depended on, forcing resettlement. Development projects such as dam construction and resource extraction have displaced large numbers of indigenous peoples, often without providing compensation. Governments have forced indigenous peoples off their native lands in the name of ecotourism and national park development. Indigenous women are especially affected by land dispossession because they must walk longer distances for water and fuel wood. These women also become economically dependent on men when they lose their livelihoods. Indigenous groups' assertions of their rights have most often been met with torture, imprisonment, or death.
Most indigenous populations are already subject to the deleterious effects of climate change. Climate change has not only environmental, but also human rights and socioeconomic implications for indigenous communities. The World Bank acknowledges climate change as an obstacle to Millennium Development Goals, notably the fight against poverty, disease, and child mortality, in addition to environmental sustainability.
- Collective rights
- Cultural appropriation
- Ethnic minority
- Genocide of indigenous peoples
- Human rights
- The Image Expedition
- Indigenous Futurisms
- Indigenous intellectual property
- Indigenous Peoples Climate Change Assessment Initiative
- Indigenous Peoples' Day of America
- Indigenous rights
- Intangible cultural heritage
- International Day of the World's Indigenous Peoples
- List of active NGOs of national minorities
- List of ethnic groups
- List of indigenous peoples
- Missing and murdered Indigenous women
- Uncontacted peoples
- United Nations Permanent Forum on Indigenous Issues
- Unrepresented Nations and Peoples Organization
- Virgin soil epidemic
- ILO definition of indigenous people
- Awareness raising film by Rebecca Sommer for the Secretariat of the UNPFII
- "First Peoples" from PBS
- "The Indigenous World' from International Work Group for Indigenous Affairs
- IFAD and indigenous peoples (International Fund for Agricultural Development, IFAD)
- IPS Inter Press Service News on indigenous peoples from around the world |
A light-year is a unit of astronomical distance defined by the distance that light travels in one year. It is a fundamental measure in astronomy, providing a way to express vast distances on the cosmic scale. Understanding the concept of a light-year requires delving into the nature of light, the speed at which it travels, and the immense distances involved in the study of the cosmos.
Firstly, light is a form of electromagnetic radiation that travels in waves. It is composed of particles called photons, which have both wave-like and particle-like properties. Light can travel through the vacuum of space, making it an essential tool for astronomers to observe and study distant celestial objects.
The speed of light in a vacuum is approximately 299,792 kilometers per second (about 186,282 miles per second). This value, often denoted by the symbol “c,” represents the maximum speed at which information or matter can travel through the universe. The speed of light is a fundamental constant in physics and plays a crucial role in our understanding of the cosmos.
The concept of a light-year arises from the desire to express astronomical distances in a more comprehensible manner. Given that light has a finite speed, it takes time for light to travel from one point in space to another. For example, the light emitted by the Sun takes approximately 8 minutes and 20 seconds to reach Earth. This delay is due to the vast distance between the Sun and our planet, and it highlights the finite speed of light.
To define a light-year, we consider the distance light travels in one year. Since there are about 31.56 million seconds in a year (365.25 days), the calculation becomes straightforward:
Distance (1 light-year) = Speed of Light × Time (1 year)
The resulting value is approximately 9.461 trillion kilometers or 5.878 trillion miles. This enormous distance serves as a practical unit for expressing the vastness of interstellar and intergalactic distances.
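To make the arithmetic above concrete, here is a minimal sketch in Python; the constant names are my own, and the figures are the standard values quoted in the text:

```python
# Multiply the speed of light by the number of seconds in a year (365.25 days)
# to obtain the length of one light-year.
SPEED_OF_LIGHT_KM_S = 299_792.458          # kilometres per second
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60   # about 31.56 million seconds

light_year_km = SPEED_OF_LIGHT_KM_S * SECONDS_PER_YEAR
light_year_miles = light_year_km / 1.609344  # kilometres per mile

print(f"1 light-year is roughly {light_year_km:.3e} km")      # ~9.461e+12 km
print(f"1 light-year is roughly {light_year_miles:.3e} miles")  # ~5.879e+12 miles
```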
One of the fundamental uses of light-years is in measuring the distances between stars, galaxies, and other astronomical objects. The nearest star system to Earth, Alpha Centauri, is approximately 4.37 light-years away. This means that the light we see from Alpha Centauri today actually left the star over four years ago. Similarly, the Milky Way, our galaxy, has a diameter of about 100,000 light-years, emphasizing the immense scale of our cosmic home.
The concept of a light-year also has implications for observing distant galaxies. When astronomers observe a galaxy that is, for instance, 1 billion light-years away, they are seeing the galaxy as it existed 1 billion years ago. This is because the light emitted by the galaxy has taken 1 billion years to travel to us. Studying such distant galaxies provides a glimpse into the universe's past, allowing scientists to investigate the conditions and structures that existed in the early cosmos.
Moreover, the expanding nature of the universe, as described by modern Big Bang cosmology and supported by observational evidence, has implications for the interpretation of light-year distances. As the universe expands, the space between galaxies also expands, causing the distances between them to increase over time. This expansion complicates distance measurements on cosmological scales and requires the use of additional concepts such as comoving distance and proper distance to account for the changing nature of cosmic distances.
In addition to its role in expressing astronomical distances, the light-year has become ingrained in popular culture and is often used to describe the vastness of space in a more relatable manner. Science fiction writers, educators, and enthusiasts frequently employ the term to convey the enormity of cosmic distances and the challenges of space exploration.
While the light-year is a crucial unit for expressing cosmic distances, it's worth noting that astronomers also use other distance units based on the astronomical unit (AU) and parsec. The astronomical unit is the average distance between the Earth and the Sun, approximately 149.6 million kilometers (93 million miles). The parsec, derived from the terms “parallax” and “arcsecond,” is approximately 3.26 light-years. Both of these units are valuable in specific contexts, with the astronomical unit often used within our solar system, and the parsec being favored for larger cosmic scales.
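As a sketch of how these units relate, the snippet below uses the standard definition of the parsec, the distance at which one astronomical unit subtends an angle of one arcsecond (a textbook fact not spelled out in the paragraph above), to recover the quoted conversion of roughly 3.26 light-years:

```python
import math

AU_KM = 149_597_870.7                   # one astronomical unit in kilometres
LIGHT_YEAR_KM = 9.4607e12               # one light-year in kilometres
ARCSECOND_RAD = math.radians(1 / 3600)  # one arcsecond in radians

# Distance at which 1 AU subtends an angle of 1 arcsecond.
parsec_km = AU_KM / math.tan(ARCSECOND_RAD)

print(parsec_km / LIGHT_YEAR_KM)  # approximately 3.26 light-years
```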
The concept of a light-year has practical implications for space exploration and communication. When spacecraft are sent to explore distant regions of the solar system or beyond, engineers and scientists must account for the time it takes for signals to travel to and from the spacecraft. For instance, a signal sent from Earth to a spacecraft on the outer edge of the solar system may take hours to reach its destination. This delay, known as one-way light time, requires careful planning and coordination for mission operations.
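As an illustration of one-way light time, the sketch below computes the signal delay for a spacecraft at a given distance; the example distance of roughly 20 AU is made up for illustration and is not a figure from the text:

```python
SPEED_OF_LIGHT_KM_S = 299_792.458
AU_KM = 149_597_870.7  # one astronomical unit in kilometres

def one_way_light_time_hours(distance_km: float) -> float:
    """One-way signal delay, in hours, for a signal travelling at light speed."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 3600

# A hypothetical spacecraft about 20 AU from Earth:
print(one_way_light_time_hours(20 * AU_KM))  # roughly 2.8 hours each way
```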
Similarly, when considering potential communication with extraterrestrial civilizations, the vast distances involved become a significant challenge. Radio signals, which also travel at the speed of light, would take years or even centuries to reach nearby star systems. This realization underscores the difficulties inherent in interstellar communication and highlights the limitations imposed by the finite speed of light.
The study of light-years also intersects with the search for exoplanets—planets orbiting stars outside our solar system. When astronomers identify an exoplanet located, for example, 100 light-years away, they are not only describing its current position but also providing a glimpse into the planet's past. Observing exoplanets at varying distances allows scientists to explore the diversity of planetary systems throughout the Milky Way and other galaxies.
In summary, a light-year is a fundamental unit of distance in astronomy, representing the distance that light travels in one year. Its use allows astronomers and astrophysicists to express the vast distances between celestial objects in a comprehensible manner. The concept of a light-year underlines the finite speed of light and its implications for our perception of the universe. As technology advances and our understanding of the cosmos deepens, the light-year will continue to play a crucial role in astronomical research, space exploration, and our collective appreciation of the vastness and complexity of the cosmos. |
WELCOME all to our lecture on kinematics, a branch of physics which deals with the algebraic calculation of geometry in motion. As such, kinematics is taken as the geometry of motion.
It focuses on calculating the movement of a particular body such as a particle, a part of a machine, or even a galactic body like a star or a planet.
Algebra, the mathematics of functions on Cartesian coordinates, and matrices are applied to calculate motion in its linear, translational and rotational forms.
It covers two-dimensional as well as three-dimensional movements of a travelling body such as a projectile.
As an example of a coordinate system, take a starting point as zero (0), a point seventy units due south as negative seventy (-70), and, at that southern point, a stationary structure such as a tower with its height (30) as the third coordinate point.
With such a picture in mind, and with the fixed coordinate points on the X and Y axes, a particle or a projectile can move with that as its frame of reference.
In the above scenario, quantities such as speed, velocity and acceleration can be calculated.
Speed is the distance covered by a body or particle per unit time.
A change in position over an interval is the displacement, and dividing the displacement by the time taken gives the average velocity.
On a graph of position against time, the tangent to the curve marks the instantaneous velocity as a derivative.
That is, velocity is the rate of change of position, obtained in the limit from the average velocity between an initial and a later position.
Acceleration is obtained in a similar way, one level up from velocity.
Specifically, the change from the initial to the final velocity, divided by the time taken, gives the average acceleration; in the limit, acceleration is the second derivative of position.
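As a rough numerical illustration of these definitions, the short Python sketch below (with made-up position samples) computes average velocities from changes in position and average accelerations from changes in velocity:

```python
# Made-up samples of a body's position at one-second intervals.
times = [0.0, 1.0, 2.0, 3.0]       # seconds
positions = [0.0, 2.0, 8.0, 18.0]  # metres (illustrative data only)

# Average velocity over each interval: change in position / change in time.
velocities = [
    (positions[i + 1] - positions[i]) / (times[i + 1] - times[i])
    for i in range(len(times) - 1)
]

# Average acceleration over each interval: change in velocity / change in time.
accelerations = [
    (velocities[i + 1] - velocities[i]) / (times[i + 1] - times[i])
    for i in range(len(velocities) - 1)
]

print(velocities)     # [2.0, 6.0, 10.0]  metres per second
print(accelerations)  # [4.0, 4.0]        metres per second squared
```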
Vector diagrams allow several contributions to the acceleration, such as the Coriolis effect and centripetal (radial) acceleration, to be taken into account.
In any kinematic calculation, the frame of reference is always very important, without which the movement of the body will not be properly calculated.
Such calculations apply, for example, in a linear, two-dimensional setting such as a mechanical system undergoing a rigid transformation.
A rigid transformation is a Euclidean transformation, one that preserves distances, such as a straight-line translation or a rotation of the body or projectile.
The circular path taken by a travelling particle is handled with rotational geometry and polar coordinates.
It involves the sine and cosine functions to determine the path of the travelling particle or body: a two-dimensional path marked by a point, a line from the centre, and another line closing the angle subtended by that rotation.
For such motion the velocity is tangential to the orbit.
The acceleration, however, is not tangential: its centripetal component points inward, toward the centre of the rotation.
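A minimal sketch of such circular motion, using sine and cosine with illustrative values for the radius and angular speed, shows the tangential velocity and the inward-pointing (centripetal) acceleration:

```python
import math

r = 2.0      # radius of the circle, metres (illustrative value)
omega = 1.5  # constant angular speed, radians per second (illustrative value)
t = 0.7      # time, seconds

theta = omega * t
# Position on the circle, traced by cosine and sine.
x, y = r * math.cos(theta), r * math.sin(theta)
# Velocity: tangential to the circle.
vx, vy = -r * omega * math.sin(theta), r * omega * math.cos(theta)
# Acceleration: directed inward, toward the centre (centripetal).
ax, ay = -r * omega**2 * math.cos(theta), -r * omega**2 * math.sin(theta)

print((x, y), (vx, vy), (ax, ay))
```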
Speed is a scalar quantity because it does not have a direction of travel. It only has the size or magnitude of travel.
The next quantity after speed is velocity, which accounts for the direction of travel as well as the rate.
Both of these two quantities are calculated with reference to time.
Time is factored in as a very important variable because the derivatives are taken with respect to it.
Velocity is the first derivative of displacement with respect to time.
It represents the rate of change of position.
The third quantity is acceleration, the second derivative of the position of the particle or body in motion, and the first derivative of its velocity.
A body can accelerate or decelerate, just like changing speed from a low speed to high speed or vice versa.
Appropriate signs, plus for acceleration and minus for deceleration, can be used, or it can simply be stated in words whether the speed is increasing or decreasing.
There can also be uniform (constant) speed, velocity and acceleration.
An important idea is to pictorially show the motion of the body or projectile on a graph.
It is always advisable to have a title for any graph that is drawn in kinematics.
The X and Y axes of the frame of reference are to be correctly labelled with the correct variables.
On the X axis, it is always the independent variable that is labelled.
An example in the case of kinematics would be time, in any unit.
It can be time in hours, minutes, seconds or even years.
These are variables that are not changed by the motion being studied.
The other coordinate is the Y axis, which is always labelled with the dependent variable(s).
In the case of this subject it will be either distance, displacement, speed, velocity or acceleration.
These are the variables that change in response to time.
Plotting a graph enables one to see the trend of the motion and locate correctly where the travelling body will be at a given time.
It will also help to give the magnitude and the direction of the velocity, since velocity is a vector quantity.
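A short sketch of such a plot, using the matplotlib library and made-up sample data, with time on the X axis and distance on the Y axis:

```python
import matplotlib.pyplot as plt

times = [0, 1, 2, 3, 4]         # seconds (independent variable)
distances = [0, 5, 20, 45, 80]  # metres (dependent variable, illustrative data)

plt.plot(times, distances, marker="o")
plt.title("Distance travelled by the body over time")
plt.xlabel("Time (s)")
plt.ylabel("Distance (m)")
plt.show()
```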
The astronomical calculations of the movement of planetary bodies, or other celestial bodies, are carried out with the use of kinematics.
Kinetics is another field of study that is concerned with motion, but it specifically looks at the forces from which the motion of bodies or particles is derived.
There are algebraic formulas to work out the quantities describing the motion of a body: displacement, velocity and acceleration.
A fourth point is that each of these quantities can be expressed as a function of time.
In rotational and translational calculations done with matrices, the independent variable of time is carried alongside these derivatives for the travelling body under study.
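As one concrete example of such formulas, the standard constant-acceleration equations (s = ut + ½at² and v = u + at) can be written as small functions; the numbers used below are illustrative only:

```python
def displacement(u: float, a: float, t: float) -> float:
    """Displacement after time t, given initial velocity u and constant acceleration a."""
    return u * t + 0.5 * a * t**2

def velocity(u: float, a: float, t: float) -> float:
    """Velocity after time t, given initial velocity u and constant acceleration a."""
    return u + a * t

print(displacement(u=3.0, a=2.0, t=4.0))  # 3*4 + 0.5*2*16 = 28.0 metres
print(velocity(u=3.0, a=2.0, t=4.0))      # 3 + 2*4 = 11.0 metres per second
```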
My prayer for PNG today is “Come to me, with all your hearts, don’t let fear keep us apart. Long have I waited for your coming home to me and living, deeply a new life,” says the Lord.
Next week: Dynamics (force and motion)
- Michael Uglo is the author of the science textbook Science in PNG, Pacific, Asia & Caribbean, and a lecturer in avionics, auto-piloting and aircraft engineering. Please send comments to: [email protected] |
Routing is the process of finding paths through a network that have a minimum distance between two stations/points. The path cost can also be considered as a function of distance, bandwidth, average traffic, communication cost, router processing speed, etc.
Shortest path algorithms have many applications; they are also important for road network, operations, and logistics research. Shortest path algorithms are important for routing data packets on the Internet, an example which involves millions of routers in a complex worldwide network. Any software that helps you choose a route uses some form of a shortest path algorithm. Google Maps is one such real-life instance that uses a shortest path algorithm: the application automatically suggests the shortest path to reach the given destination. Finding an optimum routing on the Internet has a major impact on performance and cost.
To solve the shortest path problem we use graphs. A graph has a few properties. First, the direction of the edges: an edge is the arrow used to connect two vertices, and edges can be unidirectional or bidirectional. A graph with directional edges is a directed graph, and one with bidirectional edges is an undirected graph. Second, the weight of the edges: weights represent the time or cost required to reach the destination or the next-hop vertex.
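As a minimal sketch, such a weighted directed graph can be represented in Python as an adjacency map from each vertex to its neighbours and edge weights; the vertex names and weights below are made up for illustration:

```python
# Each key is a vertex; each value maps a neighbouring vertex to the
# weight (cost) of the directed edge leading to it.
graph = {
    "A": {"B": 4, "C": 1},
    "B": {"D": 1},
    "C": {"B": 2, "D": 5},
    "D": {},
}
```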
There are several algorithms to solve the shortest path problem. One such algorithm is Dijkstra's, whose steps are outlined below; a sketch implementation follows the list.
1. Let the starting node be the initial node.
2. Assign every node a tentative distance value: set it to zero for the initial node and to infinity for all other nodes.
3. Mark the initial node as current, mark all the other nodes as unvisited, and place them in the unvisited set.
4. For the current node, calculate the tentative distances of all its unvisited neighbours (the current node's distance plus the connecting edge weight). For each neighbour, compare the newly calculated tentative distance with its currently assigned value and keep the smaller one.
5. After all the neighbours of the current node have been considered, mark the current node as visited and remove it from the unvisited set. A visited node will never be checked again.
6. If the destination node has been visited, or if there is no connection between the initial node and the remaining unvisited nodes, stop: the algorithm has finished.
7. Otherwise, select the unvisited node with the smallest tentative distance, set it as the new current node, and repeat from step 4.
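The steps above can be sketched in Python using a priority queue; this is only an illustration of the algorithm (it assumes the adjacency-map representation shown earlier), not the NS2 implementation planned for the project:

```python
import heapq

def dijkstra(graph, source):
    """Return the shortest known distance from source to every vertex in graph."""
    distances = {node: float("inf") for node in graph}  # tentative distances
    distances[source] = 0
    visited = set()
    queue = [(0, source)]  # (tentative distance, node)

    while queue:
        dist, node = heapq.heappop(queue)
        if node in visited:
            continue  # a visited node is never checked again
        visited.add(node)
        for neighbour, weight in graph[node].items():
            new_dist = dist + weight
            if new_dist < distances[neighbour]:  # keep the smaller value
                distances[neighbour] = new_dist
                heapq.heappush(queue, (new_dist, neighbour))
    return distances

# With the graph sketched earlier:
# dijkstra(graph, "A") -> {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```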
The tool required to complete the project is NS2. The above algorithm for the computation of the shortest path would be implemented in it. |
Chicano, or Chicana, is a chosen identity of some Mexican Americans in the United States. Chicano or Xicano are sometimes used interchangeably with Mexican-American and both names exist as chosen identities within the Mexican-American community in the United States.
Although Chicano had negative connotations as a term of denigration prior to the Chicano Movement, it was reclaimed in the 1960s and 1970s by Mexican Americans to express self-determination and solidarity in a shared cultural, ethnic, and communal identity while openly rejecting assimilation. Chicano identity hit a low point in the 1980s and 1990s, as assimilation and economic mobility became a goal of many middle-class Mexican Americans who instead adopted the terms Hispanic and Latino.
By the end of the 1990s a shift in Chicano identity, initiated by Xicana feminists and others, supporting the adoption of Xicana/o identity occurred among some members of the community. In the 2010s, there has been a resurgence of Chicana/o/x and Xicana/o/x identity, some even referring to it as a renaissance, centered on ethnic pride, Indigenous consciousness, cultural expression, defense of immigrants, and the rights of women and queer Latinx people.
- 1 Recorded usage
- 2 Etymology
- 3 Usage of terms
- 4 Identity
- 5 Sociological aspects
- 6 Political aspects
- 7 Cultural aspects
- 8 See also
- 9 References
- 10 Further reading
- 11 External links
In 1857, a gunboat, the Chicana, was sold to Jose Maria Carvajal to ship arms on the Rio Grande. The King and Kenedy firm submitted a voucher to the Joint Claims Commission of the United States in 1870 to cover the costs of this gunboat's conversion from a passenger steamer. No explanation for the boat's name is known.
The Chicano poet and writer Tino Villanueva traced the first documented use of the term as an ethnonym to 1911, as referenced in a then-unpublished essay by University of Texas anthropologist José Limón.
Linguists Edward R. Simmen and Richard F. Bauerle report the use of the term in an essay by Mexican-American writer, Mario Suárez, published in the Arizona Quarterly in 1947. There is ample literary evidence to substantiate that Chicano is a long-standing endonym, as a large body of Chicano literature pre-dates the 1950s.
The etymology of the term Chicano is not definitive and has been debated by historians, scholars, and activists. Although there has been controversy over the origins of Chicano, community conscience reportedly remains strong among those who claim the identity.
Chicano is believed by some scholars to be a Spanish language derivative of an older Nahuatl word Mexitli ("Meh-shee-tlee"). Mexitli formed part of the expression Huitzilopochtlil Mexitli, a reference to the historic migration of the Mexica people from their homeland of Aztlán to the Oaxaca Valley. Mexitli is the linguistic progenitor or root of the word "Mexica," referring to the Mexica people, and its singular form "Mexicatl" ("Me-hee-cah-no"). The word "Mexico" actually derives from "Méjicano," a mispronunciation of the term Mexicatl by the Spanish in the early 16th century, with the "x" in Mexicatl being pronounced by the Spanish with an "h" sound and the glottal stop at the end of the Nahuatl word disappearing completely. The word Chicano therefore more directly derives from the loss of the initial syllable of Mexicano (Mexican). According to Villanueva, "given that the velar (x) is a palatal phoneme (S) with the spelling (sh)," in accordance with the Indigenous phonological system of the Mexicas ("Meshicas"), it would become "Meshicano" or "Mechicano." Some Chicanos further replace the ch with the letter x, forming Xicano, as a means of reclaiming and reverting to the Nahuatl use of the letter "x." The first two syllables of Xicano are therefore in Nahuatl while the last syllable is Castilian.
In Mexico's Indigenous regions, mestizos and Westernized natives are referred to as mexicanos, referring to the modern nation, rather than the pueblo (village or tribal) identification of the speaker, be it Mayan, Zapotec, Mixtec, Huasteco, or any of hundreds of other indigenous groups. Thus, a newly emigrated Nahuatl speaker in an urban center might refer to his cultural relatives in this country, different from himself, as mexicanos, shortened to Chicanos.
The New Handbook of Texas (1996) combines the two ideas:
According to one explanation, the pre-Colombian tribes in Mexico called themselves Meshicas, and the Spaniards, employing the letter x (which at that time represented a [ʃ] and [tʃ]), spelled it Mexicas. The Indians later referred to themselves as Meshicanos and even as Shicanos, thus giving birth to the term Chicano.
Usage of terms
Chicano identity was originally reclaimed in the 1960s and 1970s by Mexican Americans as a means of asserting their own ethnic, political, and cultural identity while rejecting and resisting assimilation into whiteness, systematic racism and stereotypes, colonialism, and the American nation-state. Chicano identity was also founded on the need to create alliances with other oppressed ethnic and third world peoples while protesting U.S. imperialism. The notion of Aztlán, a mythical homeland which was claimed to be located in the southwestern United States, was critical in mobilizing many Mexican Americans to take social and political action. Chicano identity was organized around seven objectives: unity, economy, education, institutions, self-defense, culture, and political liberation, in an effort to bridge regional and class divisions among Mexican Americans. Chicanos originally espoused the belief in a unifying mestizo identity and also centered their platform in the masculine body.
In the 1970s, Chicano identity became further defined under a reverence for machismo while also maintaining the values of their original platform, exemplified via the language employed in court cases such as Montez v. Superior Court, 1970, which defined the Chicano community as unified under "a commonality of ideals and costumbres with respect to masculinity (machismo), family roles, child discipline, [and] religious values." Oscar Zeta Acosta defined machismo as the source of Chicano identity, claiming that this "instinctual and mystical source of manhood, honor and pride... alone justifies all behavior." Armando Rendón wrote in Chicano Manifesto (1971) that machismo was "in fact an underlying drive of the gathering identification of Mexican Americans... the essence of machismo, of being macho, is as much a symbolic principle for the Chicano revolt as it is a guideline for family life."
From the beginning of the Chicano Movement, Chicana activists and scholars have "criticized the conflation of revolutionary commitment with manliness or machismo" and questioned "whether machismo is indeed a genuinely Mexican cultural value or a kind of distorted view of masculinity generated by the psychological need to compensate for the indignities suffered by Chicanos in a white supremacist society," as noted by José-Antonio Orosco. Academic Angie Chabram-Dernersesian indicates in her study of literary texts which were formative in the Chicano movement that most of the stories focus on men and boys and none focus on Chicanas. The omission of Chicanas and the masculine-focused foundations of Chicano identity, created a shift in consciousness among some Chicanas/os by the 1990s.
Xicanisma was coined by Chicana Feminist writer Ana Castillo in Massacre of the Dreamers: Essays on Xicanisma (1994) as a recognition of the shift in consciousness since the Chicano Movement. In the 1990s and early 2000s, Xicana/o activists and scholars, including Guillermo Gómez-Peña, were beginning to form a new ideological notion of Xicanisma: "a call for a return to the Amerindian roots of most Latinos as well as a call for a strategic alliance to give agency to Native American groups," reasserting the need to form coalitions with other oppressed ethnic groups, which was foundational in the formation of Chicano identity. Juan Velasco states that "implicit in the 'X' of more recent configurations of 'Xicano' and 'Xicanisma' is a criticism not only of the term 'Hispanic' but of the racial poetics of the 'multiracial' within Mexican and American culture." While still recognizing many of the foundational elements of Chicano identity, some Xicana feminists have preferred to identify as Xicana because of the masculine-focused foundations of Chicano identity and the patriarchal biases inherent in the Spanish language.
Scholar Francesca A. López notes that "Chicanismo has evolved into Xicanismo and even Xicanisma and other variations, but however it is spelled, it is based on the idea that to be Xican@ means to be proud of your Mexican Indigenous roots and committed to the struggle for liberation of all oppressed people." While adopting Chicano identity was a means of rejecting conformity to the dominant system as well as Hispanic identity, Xicano identity was adopted to emphasize a diasporic Indigenous American identity through being ancestrally connected to the land.
Dylan Miner has noted how the emergence of Xicano identity emphasizes an "Indigenous and indigenist turn" which recognizes the Indigenous roots of Xicana/o/x people by explicitly referencing Nahuatl language and using an 'x' to signify a "lost or colonized history." While Chicano identity has been noted by scholars such as Francisco Rios as being limited by its focus on "race and ethnicity with strong male overtones," Xicanismo has been referred to as elastic enough to recognize the "intersecting nature of identities" (race/ethnicity and gender, class and sexual orientation) as well as transnational roots "from Mexico as well as those with roots centered in Central and South America."
Distinction from Hispanic and Latino
Chicanos, like many Mexicans, are Mestizos who have heritage of both indigenous American cultures and European, mainly Spanish, through colonization and immigration. The term Latino refers to a native or inhabitant of Latin America or a person of Latin American origin living in the United States.
Hispanic literally refers to Spain, but, in effect, to those of Spanish-speaking descent; therefore, the two terms are misnomers inasmuch as they apply only by extension to Chicanos, who may identify primarily as Amerindian or simply Mexican, and who may speak Amerindian languages (and English) as well as Spanish. The term was first brought up in the 1970s but it was not until 1980 that the term was used on the U.S. Census. Since then it has widely been used by politicians and the media. For this reason, many Chicanos reject the term Hispanic.
While some Mexican-Americans may embrace the term Chicano, others prefer to identify themselves as:
- Mexican American; American of Mexican descent.
- Hispanic; Hispanic American; Hispano/hispana.
- Latino/a, also mistranslated/pseudo-etymologically anglicized as "Latin".
- American Latino/Latina.
- Latin American (especially if immigrant).
- Mexican; mexicano/mexicana
- Mestizo; [insert racial identity X] mestizo (e.g. blanco mestizo); pardo.
- californiano (or californio) / californiana; nuevomexicano/nuevomexicana; tejano/tejana.
- Part/member of la Raza. (Various definitions exist of what would be such a "universal race".)
- Americans, solely.
Term of derision
Chicano existed as a disparaging term, yet transformed from a class-based label of derision to one of ethnic pride and general usage within Mexican-American communities with the rise of the Chicano Movement. Prior to the 1960s, it was used as a racial slur by non-Mexican Americans to refer to Mexican American people in Spanish-speaking neighborhoods. In his essay "Chicanismo" in The Oxford Encyclopedia of Mesoamerican Cultures (2002), José Cuéllar, a professor of Chicano studies at San Francisco State University, dates the transition from derisive to positive to the late 1950s, with a usage by young Mexican-American high school students.
In Mexico, in a usage that by American standards would be considered class discrimination or racist, chicano is associated with a Mexican-American person of low social class and poor morals (similarly to the Spanish terms Cholo, Chulo and Majo). Chicano is widely known and used in Mexico. The Mexican archeologist and anthropologist Manuel Gamio reported in 1930 that the term chicamo (with an m) was used as a derogatory term by Hispanic Texans for recently arrived Mexican immigrants displaced during the Mexican Revolution at the beginning of the 20th century. At this time, the term "Chicano" began to reference those who resisted total assimilation, while the term "Pochos" referred (often pejoratively) to those who strongly advocated assimilation.
Vicki Ruíz and Virginia Sánchez report that demographic differences in the adoption of the term existed; because of the prior vulgar connotations, it was more likely to be used by males than females, and as well, less likely to be used among those in a higher socioeconomic status. Usage was also generational, with the more assimilated third-generation members (again, more likely male) likely to adopt the usage. This group was also younger, of more radical persuasion, and less connected to a Mexican cultural heritage.
Outside of Mexican-American communities, the term might assume a negative meaning if it is used in a manner that embodies the prejudices and bigotries long directed at Mexican and Mexican-American people in the United States. For example, in one case, Ana Castillo has indicated the following subjective meaning through her creative work: "[a] marginalized, brown woman who is treated as a foreigner and is expected to do menial labor and ask nothing of the society in which she lives." Castillo herself considers chicano to be a positive term of self-determination and political solidarity.
The term's meanings are flexible. Reclaiming its usage as a term of derision, self-described Chicanos view the identity as a positive self-identifying social construction meant to assert certain notions of ethnic, cultural, political, and Indigenous consciousness. Chicano identity usually consists of incorporating some notion of hybridity. For Chicanos, the identity has been defined as being "neither from here, nor from there" in reference to the US and Mexico. As a mixture of cultures from both countries, being Chicano represents the struggle of being institutionally acculturated into the Anglo-dominated society of the United States, while maintaining the cultural sense developed as a Latin-American cultured, U.S.-born Mexican child.
The identity may hold different meanings for different Chicanos. Armando Rendón wrote in the Chicano Manifesto (1971), "I am Chicano. What it means to me may be different than what it means to you." In the 1990s, Chicano writer Benjamin Alire Sáenz wrote "There is no such thing as the Chicano voice: there are only Chicano and Chicana voices." Juan Bruce-Novoa, a professor of Spanish and Portuguese at University of California, Irvine, wrote in 1990: "A Chicano lives in the space between the hyphen in Mexican-American." The identity thus may be understood as somewhat ambiguous (e.g. in the 1991 Culture Clash play A Bowl of Beings, in response to Che Guevara's demand for a definition of "Chicano," an "armchair activist" cries out, "I still don't know!"). However, as substantiated by Chicano activists, artists, writers, and scholars since the inception of the Chicano Movement, many Chicanos gravitate around the following conceptualizations of ethnic, political, cultural, and Indigenous identity:
From a popular perspective, the term Chicano became widely visible outside of Chicano communities during the American civil rights movement. It was commonly used during the mid-1960s by Mexican-American activists such as Rodolfo "Corky" Gonzales, who was one of the first to reclaim the term, in an attempt to assert their civil rights and rid the word of its polarizing negative connotations. Chicano soon became an identity for Mexican Americans to assert their ethnic pride, proudly identifying themselves as Chicanos and also asserting a notion of Brown Pride, drawing on the "Black is Beautiful" movement, inverting phrases of insult into forms of ethnic empowerment.
Following this reclamation, Chicano identity soon became a celebration of non-whiteness, both within and external to the Mexican-American community. Chicano ethnic identity worked against the state-sanctioned census categories of "Whites with Spanish Surnames," originally promulgated on the 1950 U.S. census, and "Mexican-American," which Chicanos felt encouraged assimilation. Chicanos thus asserted their non-Europeaness during a time when Mexican assimilation into whiteness was being promoted by the federal government, which, as noted by Ian Haney López, was done in order "serve Anglo self-interest," who would use it to deny discrimination against Mexicans.
The United States Census Bureau provided no clear way for Mexican Americans or other Latino Americans to officially identify as a racial/ethnic category prior to 1980, when the broader-than-Mexican term "Hispanic" was first available as a self-identification in census forms. While Chicano also appeared on the 1980 census, indicating the success of the Chicano Movement in gaining some federal recognition, it was only permitted to be selected as a subcategory underneath Spanish/Hispanic descent, which erased the visibility of Amerindian and African ancestries among Chicanos and populations throughout Latin America and the Caribbean.
Chicano writers have described how Chicano ethnic identity is born out of colonial encounters between Europe and the Americas. Alfred Arteaga writes how the Chicano arose as a result of the violence of colonialism, emerging as a hybrid ethnicity or race from European colonizers and Amerindian Indigenous peoples. Arteaga acknowledges how this ethnic and racial hybridity among Chicanos is highly complex and extends beyond a previously generalized "Aztec" ancestry, as originally asserted during the formative years of the Chicano Movement, sometimes involves more than Spanish ancestry, and also may include African ancestry, largely as a result of Spanish slavery or runaway slaves from Anglo-Americans. Arteaga therefore concludes that "the physical manifestation of the Chicano, is itself a product of hybridity."
Chicano political identity has been cited as having developed out of a glorification of pachuco/a resistance to assimilation in the 1940s and 1950s. Pachucos were negatively perceived by white European American society. As stated by Luis Valdéz: "Pachuco determination and pride grew through the 1950s and gave impetus to the Chicano movement of the 1960s. [...] By then the political consciousness stirred by the 1943 Zoot Suit Riots had developed into a movement that would soon issue the Chicano Manifesto – a detailed platform of political activism." Pachuco political action has been documented by some as a precursor to the Chicano Movement. By the late 1960s, according to Catherine S. Ramírez, the Pachuco figure "had emerged as an icon of resistance in much Chicano cultural production," despite the absence of similar portrayals of the pachuco in Mexican-American literature and art prior to the Chicano Movement as well as the omission of the same reverence for the pachuca figure, which Ramírez credits with the pachuca's embodiment of "dissident femininity, female masculinity, and, in some instances, lesbian sexuality."
By the 1960s, Chicano identity was consolidating around several key political positions: rejecting assimilation into white American society, resisting systematic racism, colonialism, and the American nation-state, and affirming the need to create alliances with other oppressed ethnic and third world peoples. Political liberation was a founding principle of Chicano identity. Chicano nationalism called for the creation of a Chicano subject whose political identity was separate from the U.S. nation-state which Chicanos recognized had impoverished, oppressed, and destroyed their communities. As stated by scholar Alberto Varon, Chicano nationalism "created enduring social improvement for the lives of Mexican Americans and others" through the political action of Chicanos. At the same time, this brand of Chicano nationalism focused on the masculinist subject in its calls for political resistance, which has since been insightfully and powerfully critiqued by Chicana feminists.
Chicano political activist groups such as the Brown Berets (1967-1972; 1992–Present), originally founded by David Sánchez in East Los Angeles as the Young Chicanos for Community Action, quickly gained support for their political objectives of protesting educational inequalities and demanding an end to police brutality. Paralleling groups such as the Black Panthers and Young Lords, which were founded in 1966 and 1968 respectively, membership in the Brown Berets was estimated to have reached five thousand in over eighty chapters mostly centered in California and Texas. The Brown Berets were critical in organizing the Chicano Blowouts of 1968 and the National Chicano Moratorium, which protested the high number of Chicano casualties in the Vietnam War. Continued police harassment, infiltration by federal agents provocateurs via COINTELPRO, and internal disputes led to the decline and disbandment of the Berets in 1972. Sánchez, then a professor at East Los Angeles College, revived the Brown Berets in 1992 after being prompted by the high number of Chicano homicides in Los Angeles County, seeking to supplant the structure of the gang as family with the Brown Berets.
At certain points in the 1970s, Chicano was the preferred term for reference to Mexican Americans, particularly in scholarly literature. However, even though the term is politicized, its use fell out of favor as a means of referring to the entire population due to ignorance and due to the majority's attempt to impose Latino and Hispanic as misnomers. Because of this, Chicano has tended to refer to participants in Mexican-American activism. Sabine Ulibarrí, an author from Tierra Amarilla, New Mexico, once labeled Chicano as a politically "loaded" term, though later recanted that assessment.
Reies Tijerina (who died on January 19, 2015) was a vocal claimant to the rights of Latin Americans and Mexican Americans, and he remains a major figure of the early Chicano Movement. Of the term, he wrote: "The Anglo press degradized the word 'Chicano'. They use it to divide us. We use it to unify ourselves with our people and with Latin America."
Since the Chicano Movement, Chicano has been reclaimed by Mexican-Americans to denote a hybrid cultural identity that is neither American or Mexican. Chicano cultural identity is commonly defined as embodying the "in-between" nature of hybridity. Rather than existing as a "subculture" of European American culture, Chicano culture has been positioned by Alicia Gasper de Alba as an "alter-Native culture, an Other American culture indigenous to the land base now known as the West and Southwest of the United States." While influenced by settler-imposed systems and structures, Chicano culture is referred to as "not immigrant but native, not foreign but colonized, not alien but different from the overarching hegemony of white America."
At least as early as the 1930s, the precursors to Chicano cultural identity largely developed in Los Angeles and the Southwest. In the early 20th century, former zootsuiter Salvador "El Chava" reflects how "racism and poverty created [Mexican-American] gangs; we had to protect ourselves." Racism forced Mexican Americans to congregate in areas separated from Anglo Americans. Barrios and colonias (rural barrios) were founded throughout Southern California and elsewhere in neglected districts of cities and outlying areas which exacerbated social and cultural issues within Mexican-American communities. Along with alienation from public institutions, some Chicano youth became susceptible to gang channels in a search for self-identity, allured by the rigid hierarchal structure and assigned roles amidst a world of state-sanctioned disorder. "The pull of urban culture, with its rigidly defined hierarchy, prescribed member roles and activities, and symbols of group and cultural identity, can be particularly alluring for vulnerable youth," as noted by academic Kurt C. Organista.
Pachuco/a culture in Los Angeles developed in the 1940s and 1950s and has been credited as a precursor to the consolidation of Chicano cultural identity. Chicano zoot suiters on the West Coast were influenced by Black American zoot suiters and the jazz and swing music scene on the East Coast. In Los Angeles, Chicano zoot suiters developed their own cultural identity, as noted by Charles "Chaz" Bojórquez, "with their hair done in big pompadours, and 'draped' in tailor-made suits, they were swinging to their own styles. They spoke Cálo, their own language, a cool jive of half-English, half-Spanish rhythms. [...] Out of the zootsuiter experience came lowrider cars and culture, clothes, music, tag names, and, again, its own graffiti language."
Many aspects forming Chicano cultural identity, such as lowrider culture, have been stigmatized and policed by white European Americans who perceived all Chicanos as "juvenile delinquents or gang members" for their embrace of nonwhite style and cultures, many of which were influenced by and adjacent to Black American urban culture. These negative perceptions were amplified by media outlets such as the Los Angeles Times. Luis Alvarez remarks how this affected the policing of Black and Brown male bodies in particular: "Popular discourse characterizing nonwhite youth as animal-like, hypersexual, and criminal marked their bodies as 'other' and, when coming from city officials and the press, served to help construct for the public a social meaning of African Americans and Mexican American youth. In these ways, the physical and discursive bodies of nonwhite youth were the sites upon which their dignity was denied."
With mass media, Chicano culture became popular both in the United States and internationally. In Japan, the highlights of Chicano culture include the music, the lowrider community, and the arts. Chicano culture took hold in Japan in the 1980s and continues to grow with contributions from people such as Shin Miyata, Junichi Shimodaira, Miki Style, Night Tha Funksta, and MoNa (Sad Girl). Its introduction has led thousands of individuals in Japan to engage with the culture. There has been debate over whether this should be termed cultural appropriation, with some arguing that it is appreciation rather than appropriation.
The identity has been perceived as a means of reclaiming Indigenous ancestry and forming an identity distinct from a European identity, despite partial European descent. As exemplified through its extensive use within el Plan de Santa Bárbara, one of the primary documents responsible for the genesis of M.E.Ch.A. (Movimiento Estudiantil Chicanx de Aztlán), Chicano was used by many as a reference to their Indigenous ancestry and roots. As Mexican-American journalist Rubén Salazar put it in "Who is a Chicano? And what is it the Chicanos want?", a 1970 Los Angeles Times piece: "A Chicano is a Mexican-American with a non-Anglo image of himself." Leo Limón, an artist and community activist in Los Angeles states, "...a Chicano is ... an indigenous Mexican American."
Scholar Patrisia Gonzales analyzes how Chicanx people are descendants of the Indigenous peoples of Mexico and have been displaced because of colonial violence which positions them among "detribalized Indigenous peoples and communities." Journalist and academic Roberto Cintli Rodríguez describes Chicanos as "de-Indigenized," which he remarks occurred "in part due to religious indoctrination and a violent uprooting from the land," which detached them from maíz-based cultures throughout the greater Mesoamerican region. Rodríguez examines how and why "peoples who are clearly red or brown and undeniably Indigenous to this continent have allowed ourselves, historically, to be framed by bureaucrats and the courts, by politicians, scholars, and the media as alien, illegal, and less than human."
Gloria E. Anzaldúa has addressed detribalization, stating "In the case of Chicanos, being 'Mexican' is not a tribe. So in a sense Chicanos and Mexicans are 'detribalized'. We don't have tribal affiliations but neither do we have to carry ID cards establishing tribal affiliation." Anzaldúa also recognizes that "Chicanos, people of color, and 'whites'," have often chosen "to ignore the struggles of Native people even when it's right in our caras (faces)," expressing disdain for this "willful ignorance." She concludes that "though both 'detribalized urban mixed bloods' and Chicanas/os are recovering and reclaiming, this society is killing off urban mixed bloods through cultural genocide, by not allowing them equal opportunities for better jobs, schooling, and health care."
Some Chicanos have asserted an identification with a generalized Indigenous ancestry and the mythical Aztec homeland of Aztlán, which J. Jorge Klor de Alva has noted was a useful political maneuver for mobilizing support for the Chicano Movement and rejecting assimilation. The appropriation of a pre-contact Aztec culture has since been reexamined by some Chicanos, who recognize a need to affirm the diversity of the Indigenous peoples of Mexico and of the Indigenous ancestry among Chicanos. As a result, some Chicanos have argued that there is a need to reconstruct the place of Aztlán and Indigeneity in relation to Chicano identity. The beginnings of this revision of Chicano Indigenous consciousness may be exemplified in the removal of Aztlán from M.E.Ch.A.'s name in 2019.
Academic Inés Hernández-Ávila has emphasized how Chicanos reconnecting with their roots "respectfully and humbly" while validating "those peoples who still maintain their identity as original peoples of this continent" will serve as a means of creating radical change capable of "transforming our world, our universe, and our lives."
Gender and sexuality
Chicana women frequently confront objectification, being perceived as "exotic," "lascivious," and "hot" at a very young age while also facing denigration as "barefoot," "pregnant," "dark," and "low-class." These perceptions in society engender numerous negative sociological and psychological effects, such as excessive dieting and eating disorders. In addition, numerous studies have found that Chicanas experience elevated levels of stress as a result of sexual expectations by their parents and families. Although many Chicana youth desire open conversation regarding gendered and sexual expectations, as well as mental health, these issues are often not discussed openly in families, which perpetuates unsafe and destructive practices. While young Chicana women are objectified, middle-aged Chicanas often discuss feelings of being invisible. Chicana women at this age report feeling trapped in balancing family obligations to their parents and children while attempting to create a space for their own sexual desires. The cultural expectation that Chicana women should be "protected" by Chicano men also constricts the agency and mobility of Chicana women.
Early in their social development, Chicano men develop their gendered identity as men within the context of marginalization. Some authors argue that "Mexican men and their Chicano brothers suffer from an inferiority complex due to the conquest and genocide inflicted upon their indigenous ancestors," which leaves many Chicano men feeling trapped between identifying with the "superior" European conqueror and the "inferior" Indigenous ancestor. As a result, the psychological pain created by this conflict, together with marginalization, is reported to manifest itself in the form of hypermasculinity, in which there occurs a "quest for power and control over others in order to feel better" about oneself. This can result in abusive behavior, the development of an impenetrable cold persona, alcohol abuse, and other destructive behaviors. Discussion of sexuality between Chicano men and their fathers, other Chicano men, or their mothers is often lacking; Chicano men tend to learn about sex from their peers as well as from older male family members, who perpetuate the idea that as men they have "a right to engage in sexual activity without commitment." The looming threat of being labeled a joto (gay) for not engaging in sexual activity also conditions many Chicano men to "use" women for their own sexual desires.
Chicana/o queer people often seek refuge in their families because it is difficult for them to find spaces where they feel safe in the dominant and hostile white culture which surrounds them, yet they may be excluded because of hypermasculinity and homophobia. Gabriel S. Estrada describes how "the overarching structures of capitalist white (hetero)sexism," including higher levels of criminalization directed towards Chicanos, have proliferated "further homophobia" especially among Chicano boys and men who may adopt "hypermasculine personas that can include sexual violence directed at others." Estrada notes that this constricts "the formation of a balanced Indigenous sexuality for anyone[,] but especially... for those who do identify" as part of the queer community and who may wish to reject the "Judeo-Christian mandates against homosexuality that are not native to their own ways," a rejection grounded in the recognition that many precolonial Indigenous societies in Mexico and elsewhere accepted homosexuality openly.
Chicanos, regardless of their generational status, may seek both Western biomedical healthcare and Indigenous health practices when dealing with trauma or illness. The effects of colonization and conquest have been shown to produce psychological distress among Indigenous communities. Similarly, intergenerational trauma, along with racism and the institutionalized systems of oppression that emerged from colonization, has been shown to adversely impact the mental health of Chicanos and Latinos. Mexican Americans are three times more likely than European Americans to live in poverty. However, Chicanos report lower utilization of mental health services and lower levels of psychiatric distress. Similar studies demonstrate comparatively lower levels of distress in regard to physical health as well. Some scholars have cited strong family connections, lower levels of smoking and drinking, and adherence to traditional values as possible sources of this difference.
Among Mexican immigrants who have lived in the United States for less than thirteen years, lower rates of mental health disorders were found in comparison to Mexican-Americans and Chicanos born in the United States. Scholar Yvette G. Flores concludes that these studies demonstrate that "factors associated with living in the United States are related to an increased risk of mental disorders." Risk factors for negative mental health include historical and contemporary trauma stemming from colonization, marginalization, discrimination, and devaluation. The disconnection of Chicanos from their Indigeneity has been cited as a cause of trauma and negative mental health:
Loss of language, cultural rituals, and spiritual practices creates shame and despair. The loss of culture and language often goes unmourned, because it is silenced and denied by those who occupy, conquer, or dominate. Such losses and their psychological and spiritual impact are passed down across generations, resulting in depression, disconnection, and spiritual distress in subsequent generations, which are manifestations of historical or intergenerational trauma.
Psychological distress may emerge from Chicanos being "othered" in society since childhood and is linked to psychiatric disorders and symptoms which are culturally bound – susto (fright), nervios (nerves), mal de ojo (evil eye), and ataque de nervios (an attack of nerves resembling a panic attack).
Many currents came together to produce the revived Chicano political movement of the 1960s and 1970s. Early struggles were against school segregation, but the Mexican-American cause, or la Causa as it was called, soon came under the banner of the United Farm Workers and César Chávez. However, Corky Gonzales and Reies Tijerina stirred up old tensions about New Mexican land claims with roots going back to before the Mexican–American War. Simultaneous movements like the Young Lords, to empower youth, question patriarchy, democratize the Church, end police brutality, and end the Vietnam War, all intersected with other ethnic nationalist, peace, countercultural, and feminist movements.
Since Chicanismo covers a wide array of political, religious, and ethnic beliefs, and not everybody agrees on exactly what a Chicano is, most new Latino immigrants see it as a lost cause, or as a lost culture, because Chicanos do not identify with Mexico or with wherever their parents migrated from in the way new immigrants do. Chicanoism is an appreciation of a historical movement, but it is also used by many to bring a revived, politicized feeling to voters young and old in defense of Mexican and Mexican-American rights. People descended from Aztlán (both in the contemporary U.S. and in Mexico) use Chicano ideology to create a platform for fighting for immigration reform and equality for all people.
Rejection of borders
For some, Chicano ideals involve a rejection of borders. The 1848 Treaty of Guadalupe Hidalgo transformed the Rio Grande region from a rich cultural center into a rigid border poorly enforced by the United States government. At the end of the Mexican–American War, some 80,000 people of mixed Spanish, Mexican, and Indigenous descent suddenly found themselves living within the United States. As a result, Chicano identification is aligned with the idea of Aztlán, which extends to the Aztec period of Mexico, celebrating a time preceding land division.
Paired with the dissipation of militant political efforts of the Chicano movement in the 1960s was the emergence of the Chicano generation. Like their political predecessors, the Chicano generation rejects the "immigrant/foreigner" categorization status. Chicano identity has expanded from its political origins to incorporate a broader community vision of social integration and nonpartisan political participation.
The shared Spanish language, Catholic faith, close contact with their political homeland (Mexico) to the south, a history of labor segregation, ethnic exclusion and racial discrimination encourage a united Chicano or Mexican folkloric tradition in the United States. Ethnic cohesiveness is a resistance strategy to assimilation and the accompanying cultural dissolution.
Mexican nationalists in Mexico, however, condemn the advocates of Chicanoism for attempting to create a new identity for the Mexican-American population, distinct from that of the Mexican nation. Chicanoism is nonetheless embraced as a personal identity, especially within small rural communities that blend American culture with Mexican heritage as practiced in different parts of Mexico.
The term Chicano is also used to describe the literary, artistic, and musical movements that emerged with the Chicano Movement.
Chicana/o film is rooted in economic, social, and political oppression and has therefore been marginalized since its inception. Scholar Charles Ramírez Berg has suggested that Chicana/o cinema has progressed through three fundamental stages since its establishment in the 1960s. The first wave occurred from 1969 to 1976 and was characterized by the creation of radical documentaries which chronicled "the cinematic expression of a cultural nationalist movement, it was politically contestational and formally oppositional." Some films of this era include El Teatro Campesino's Yo Soy Joaquín (1969) and Luis Valdez's El Corrido (1976). These films were focused on documenting the systematic oppression of Chicanas/os in the United States.
The second wave of Chicana/o film, according to Ramírez Berg, developed out of portraying anger against oppression faced in society, highlighting immigration issues, and re-centering the Chicana/o experience, yet channeling this in more accessible forms which were not as outright separatist as the first wave of films. Docudramas like Esperanza Vasquez's Agueda Martínez (1977), Jesús Salvador Treviño's Raíces de Sangre (1977), and Robert M. Young's ¡Alambrista! (1977) served as transitional works which would inspire full-length narrative films. Early narrative films of the second wave include Valdez's Zoot Suit (1981), Young's The Ballad of Gregorio Cortez (1982), Gregory Nava's My Family/Mi familia (1995) and Selena (1997), and Josefina López's Real Women Have Curves, originally a play which premiered in 1990 and was later released as a film in 2002.
The second wave of Chicana/o film is still ongoing and overlaps with the third wave, the latter of which gained noticeable momentum in the 1990s and does not emphasize oppression, exploitation, or resistance as central themes. According to Ramírez Berg, third wave films "do not accentuate Chicano oppression or resistance; ethnicity in these films exists as one fact of several that shape characters' lives and stamps their personalities."
Chicano literature tends to focus on themes of identity, discrimination, and culture, with an emphasis on validating Mexican-American and Chicano culture in the United States. Rodolfo "Corky" Gonzales's "Yo Soy Joaquin" is one of the first examples of explicitly Chicano poetry, while José Antonio Villarreal's Pocho (1959) is widely recognized as the first major Chicano novel.
The novel Chicano, by Richard Vasquez, was the first novel about Mexican Americans to be released by a major publisher (Doubleday, 1970). It was widely read in high schools and universities during the 1970s and is now recognized as a breakthrough novel. Vasquez's social themes have been compared with those found in the work of Upton Sinclair and John Steinbeck.
Chicana writers have tended to focus on themes of identity, questioning how identity is constructed, who constructs it, and for what purpose in a racist, classist, and patriarchal structure. Characters in books such as Victuum (1976) by Isabella Ríos, The House on Mango Street (1983) by Sandra Cisneros, Loving in the War Years: lo que nunca pasó por sus labios (1983) by Cherríe Moraga, The Last of the Menu Girls (1986) by Denise Chávez, Margins (1992) by Terri de la Peña, and Gulf Dreams (1996) by Emma Pérez have also been read regarding how they intersect with themes of gender and sexuality. Academic Catrióna Rueda Esquibel performs a queer reading of Chicana literature in her work With Her Machete in Her Hand: Reading Chicana Lesbians (2006), demonstrating how some of the intimate relationships between girls and women in these works contribute to a discourse on homoeroticism and nonnormative sexuality in Chicana/o literature.
Chicano writers have tended to gravitate toward themes of cultural, racial, and political tensions in their work, while not explicitly focusing on issues of identity or gender and sexuality, in comparison to the work of Chicana writers. Chicanos who were marked as overtly gay in early Chicana/o literature, from 1959 to 1972, tended to be removed from the Mexican-American barrio and were typically portrayed with negative attributes, as examined by Daniel Enrique Pérez, such as the character of "Joe Pete" in Pocho and the unnamed protagonist of John Rechy's City of Night (1963). However, other characters in the Chicano canon may also be read as queer, such as the unnamed protagonist of Tomás Rivera's ...y no se lo tragó la tierra (1971), and "Antonio Márez" in Rudolfo Anaya's Bless Me, Ultima (1972), since, according to Pérez, "these characters diverge from heteronormative paradigms and their identities are very much linked to the rejection of heteronormativity."
As noted by scholar Juan Bruce-Novoa, Chicano novels allowed for androgynous and complex characters "to emerge and facilitate a dialogue on nonnormative sexuality" and that homosexuality was "far from being ignored during the 1960s and 1970s" in Chicano literature, although homophobia may have curtailed portrayals of openly gay characters during this era. Given this representation in early Chicano literature, Bruce-Novoa concludes, "we can say our community is less sexually repressive than we might expect."
Other major names in Chicana/o literature include Norma Elia Cantú, Gary Soto, Sergio Troncoso, Rigoberto González, Raul Salinas, Daniel Olivas, Benjamin Alire Sáenz, Luís Alberto Urrea, Dagoberto Gilb, Alicia Gaspar de Alba, Luis J. Rodriguez and Pat Mora.
Lalo Guerrero has been lauded as the "father of Chicano music". Beginning in the 1930s, he wrote songs in the big band and swing genres that were popular at the time. He expanded his repertoire to include songs written in traditional genres of Mexican music, and during the farmworkers' rights campaign, wrote music in support of César Chávez and the United Farm Workers.
Other Chicano/Mexican-American singers include Selena, who sang a mixture of Mexican, Tejano, and American popular music, but died in 1995 at the age of 23; Zack de la Rocha, lead vocalist of Rage Against the Machine and social activist; and Los Lonely Boys, a Texas-style country rock band who have not ignored their Mexican-American roots in their music. In recent years, a growing Tex-Mex polka band trend influenced by the conjunto and norteño music of Mexican immigrants has in turn influenced much new Chicano folk music, especially on large-market Spanish-language radio stations and on television music video programs in the U.S. Some of these artists, like the band Quetzal, are known for the political content of their songs.
The Chicano Movement affected not only those in the United States and Mexico; Chicano culture has also spread abroad to countries such as Japan. Influencers such as Shin Miyata have raised awareness of Chicano culture in Japan. Miyata owns a record label, Gold Barrio Records, that re-releases Chicano music in Japan, from Chicano soul to Chicano rap.
In the 1950s, 1960s and 1970s, a wave of Chicano pop music surfaced through innovative musicians Carlos Santana, Johnny Rodriguez, Ritchie Valens and Linda Ronstadt. Joan Baez, who was also of Mexican-American descent, included Hispanic themes in some of her protest folk songs. Chicano rock is rock music performed by Chicano groups or music with themes derived from Chicano culture.
There are two undercurrents in Chicano rock. One is a devotion to the original rhythm and blues roots of rock and roll, including Ritchie Valens, Sunny and the Sunglows, and ? and the Mysterians. Groups inspired by this include Sir Douglas Quintet, Thee Midniters, Los Lobos, War, Tierra, and El Chicano, and, of course, the Chicano Blues Man himself, the late Randy Garribay.
The second theme is the openness to Latin American sounds and influences. Trini Lopez, Santana, Malo, Azteca, Toro, Ozomatli and other Chicano Latin rock groups follow this approach. Chicano rock also crossed paths with other Latin rock genres (rock en español) performed by Cuban and Puerto Rican artists such as Joe Bataan and Ralphi Pagan, and with South American nueva canción. Rock band The Mars Volta combines elements of progressive rock with traditional Mexican folk music and Latin rhythms along with Cedric Bixler-Zavala's Spanglish lyrics.
Chicano punk is a branch of Chicano rock. There were many bands that emerged from the California punk scene, including The Zeros, Bags, Los Illegals, The Brat, The Plugz, Manic Hispanic, and the Cruzados, as well as others from outside of California, including Mydolls from Houston, Texas and Los Crudos from Chicago, Illinois. Some music historians argue that Chicanos in Los Angeles in the late 1970s may have independently co-founded punk rock alongside the already-acknowledged British and European founders, whose music was then being introduced to major U.S. cities. The rock band ? and the Mysterians, which was composed primarily of Mexican-American musicians, was the first band to be described as punk rock. The term was reportedly coined in 1971 by rock critic Dave Marsh in a review of their show for Creem magazine.
Although Latin jazz is most popularly associated with artists from the Caribbean (particularly Cuba) and Brazil, young Mexican Americans have played a role in its development over the years, going back to the 1930s and early 1940s, the era of the zoot suit, when young Mexican-American musicians in Los Angeles and San Jose began to experiment with jazz and with fusing it with Mexican and American popular styles.
Hip hop and rap
Hip hop culture, which is cited as having formed in the 1970s street culture of African American, West Indian (especially Jamaican), and Puerto Rican youth in the New York City borough of the Bronx, and which is characterized by DJing, rap music, graffiti, and breakdancing, was adopted by many Chicano youth by the 1980s as its influence moved westward across the United States. On the West Coast, Chicano artists began to develop their own style of hip hop. Rappers such as Ice-T and Eazy-E shared their music and commercial insights with Chicano rappers in the late 1980s. Chicano rapper Kid Frost, who is often cited as "the godfather of Chicano rap," was highly influenced by Ice-T and has even been cited as his protégé.
Chicano rap is a unique style of hip hop music which started with Kid Frost, who saw some mainstream exposure in the early 1990s. While Mellow Man Ace was the first mainstream rapper to use Spanglish, Frost's song "La Raza" paved the way for its use in American hip hop. Chicano rap tends to discuss themes of importance to young urban Chicanos. Some of today's Chicano artists include A.L.T., Lil Rob, Psycho Realm, Baby Bash, Serio, A Lighter Shade of Brown, Funky Aztecs, Sir Dyno, and Chingo Bling.
Chicano rap has also reached overseas audiences in Japan. MoNa (Sad Girl) is a Chicano-style rapper based in Japan who creates new rap music based on Chicano culture. MoNa is well known in Japan as well as in cities such as San Diego and Los Angeles, where Chicano culture thrives.
Pop and R&B
In the visual arts, works by Chicanos address similar themes as works in literature. The preferred media for Chicano art are murals, graphic arts, and graffiti art. Scholar Guisela Latorre refers to Chicana/o murals as "a unique and effective tool with which to assert agency from the margins." San Diego's Chicano Park, located in Barrio Logan, is home to the largest collection of Chicano murals in the world and was created as an outgrowth of the city's political movement by Chicanos. Rasquache art is a unique style subset of the Chicano Arts movement.
Artists like Charles "Chaz" Bojórquez developed an original style of graffiti art known as West Coast Cholo style, influenced by Mexican murals and placas (tags which indicate territorial boundaries) in the mid-20th century. Bojórquez remarks how paint brushes were used prior to the introduction of spray cans in the early 1950s. Some sources say Mexican-American graffiti culture in Los Angeles was already "in full bloom" in the 1930s, stretching as far back as the early 20th century, when "shoeshine boys marked their names on the walls with their daubers to stake out their spots on the sidewalk."
Chicano art emerged in the mid-60s as a necessary component to the urban and agrarian civil rights movement in the Southwest, known as la causa chicana, la Causa, or the Chicano Renaissance. The artistic spirit, based on historical and traditional cultural evolution, within the movement has continued into the present millennium. There are artists, for example, who have chosen to do work within ancestral/historical references or who have mastered traditional techniques. Some artists and crafters have transcended the motifs, forms, functions, and context of Chicano references in their work but still acknowledge their identity as Chicano. These emerging artists are incorporating new materials to present mixed-media, digital media, and transmedia works.
Chicano performance art blends humor and pathos for tragicomic effect, as shown by Los Angeles' comedy troupe Culture Clash and Mexican-born performance artist Guillermo Gómez-Peña. Nao Bustamante is a Chicana artist known internationally for her conceptual art pieces and as a participant in Work of Art: The Next Great Artist, produced by Sarah Jessica Parker. Lalo Alcaraz often depicts the issues of Chicanos in his cartoon series "La Cucaracha."
One of the most powerful and far-reaching cultural aspects of Chicano culture is the Indigenous current that strongly roots Chicano culture to the American continent. It also unifies Chicanismo within the larger Pan-Indian Movement. Since its arrival in 1974, the movement known in the U.S. as Danza Azteca (and by several names in its homeland in the central states of Mexico: Danza Conchera, De la Conquista, Chichimeca, and so on) has had a deep impact on Chicano muralism, graphic design, tattoo art (flash), poetry, music, and literature. Lowrider cars also figure prominently as functional art in the Chicano community.
Chicano art has also been trending in Japan, especially among the youth, with Osaka serving as the capital of Chicano art in the country. Night Tha Funksta is one of its leading figures and brings his own take to the artwork. Chicano culture is often associated with gangs and cholos, an image of rebellion that appeals to some Japanese youth. Instead of focusing on images of gangs, however, Night focuses his art on the more positive images of Chicano culture and its roots. Chicano art in Japan revolves around themes of family and belonging in a community and avoids gang-related subjects such as drugs and violence.
- History of Mexican Americans
- Caló (Chicano)
- Chicano Movement
- Chicano Moratorium
- Chicano nationalism
- Chicano rap
- Chileans (and Chilean American)
- Cosmic race
- Hispanos (and Spanish American)
- Ethnicity (United States Census)
- Latino punk
- List of Mexican Americans
- Los Siete de la Raza
- Mexican Americans
- Murals, i.e. Chicano Park, San Diego
- Plaza de César Chávez
- Race (U.S. Census)
- Josefa Segovia
- Villanueva, Tino (1985). "Chicanos (selección)" (in Spanish). Mexico: Lecturas Mexicanas, número 889, FCE/SEP. p. 7.
- "From Chicano to Xicanx: A brief history of a political and cultural identity". The Daily Dot. 2017-10-22. Retrieved 2018-03-10.
- Anaya, Rudolfo A. (1998). Conversations with Rudolfo Anaya. University Press of Mississippi. p. 142. ISBN 9781578060771.
- Romero, Dennis (15 July 2018). "A Chicano renaissance? A new Mexican-American generation embraces the term". NBC News. Retrieved 2 August 2019.
- Jacqueline M. Hidalgo (2016). "Competing Land Claims and Conflicting Scriptures". Refractions of the Scriptural: Critical Orientation as Transgression. Routledge. p. 117. ISBN 9781138643666.
- Moraga, Cherríe (2011). A Xicana Codex of Changing Consciousness: Writings, 2000–2010. Duke University Press. pp. xxi. ISBN 9780822349778.
- Rodriguez, Roberto (June 7, 2017). "Rodriguez: The X in LatinX". Diverse: Issues In Higher Education. Cox, Matthews, and Associates. Retrieved August 4, 2019.
- Chance, Joseph (2006). Jose Maria de Jesus Carvajal: The Life and Times of a Mexican Revolutionary. San Antonio, Texas: Trinity University Press. p. 195.
- Félix Rodríguez González, ed. Spanish Loanwords in the English Language. A Tendency towards Hegemony Reversal. Berlin: Mouton de Gruyter, 1996. Villanueva is referring to Limón's essay "The Folk Performance of Chicano and the Cultural Limits of Political Ideology," available via ERIC. Limón refers to use of the word in a 1911 report titled "Hot tamales" in the Spanish-language newspaper La Crónica.
- Edward R. Simmen and Richard F. Bauerle. "Chicano: Origin and Meaning." American Speech 44.3 (Autumn 1969): 225-230.
- Zaragoza, Cosme (2017). Aztlán: Essays on the Chicano Homeland. Revised and Expanded Edition. University of New Mexico Press. p. 137. ISBN 9780826356758.
- Baca, D. (2008). Mestiz@ Scripts, Digital Migrations, and the Territories of Writing. Palgrave Macmillan. p. 54. ISBN 9780230605152.
- Not to be confused with the language Ladino of Spain and Portugal, a Spanish language spoken by Sephardic Jews of Spain, Portugal, Turkey, Israel and the USA.
- The New Handbook of Texas, Volume 2. Texas State Historical Association. 1996. p. 69. ISBN 9780876111512.
- Varon, Alberto (2018). Before Chicano: Citizenship and the Making of Mexican American Manhood, 1848-1959. NYU Press. pp. 207–211. ISBN 9781479831197.
- Gutiérrez-Jones, Carl (1995). Rethinking the Borderlands: Between Chicano Culture and Legal Discourse. University of California Press. p. 134. ISBN 9780520085794.
- Jacobs, Elizabeth (2006). Mexican American Literature: The Politics of Identity. Routledge. p. 87. ISBN 9780415364904.
- Orosco, José-Antonio (2008). Cesar Chavez and the Common Sense of Nonviolence. University of New Mexico Press. pp. 71–72, 85. ISBN 9780826343758.
- Lerate, Jesús; Ángeles Toda Iglesia, María (2007). "Entrevista con Ana Castillo". Critical Essays on Chicano Studies. Peter Lang AG. p. 26. ISBN 9783039112814.
- Velasco, Juan (2002). "Performing Multiple Identities". Latino/a Popular Culture. NYU Press. p. 217. ISBN 9780814736258.
- A. T. Miner, Dylan (2014). Creating Aztlán: Chicano Art, Indigenous Sovereignty, and Lowriding Across Turtle Island. University of Arizona Press. p. 221. ISBN 9780816530038.
- López, Francesca A. (2017). Asset Pedagogies in Latino Youth Identity and Achievement: Nurturing Confianza. Routledge. pp. 177–178. ISBN 9781138911413.
- Rios, Francisco (Spring 2013). "From Chicano/a to Xicana/o: Critical Activist Teaching Revisited". Multicultural Education. 20: 59–61 – via ProQuest.
- C'de Baca, Joseph (June 14, 1995). "Hispanic terms, categories, and definitions". La Voz; Denver (24). Denver: La Voz Publishing Company dba as La Voz – via ProQuest.
- Montoya, Maceo (2016). Chicano Movement For Beginners. For Beginners. pp. 3–5. ISBN 9781939994646.
- "Chicano Art". Archived from the original on 2007-05-16.
Thus, the 'Chicano' term carried an inferior, negative connotation because it was usually used to describe a worker who had to move from job to job to be able to survive. Chicanos were the low class Mexican Americans.
- McConnell, Scott (1997-12-31). "Americans no more? - immigration and assimilation". National Review. Archived from the original on 2007-10-13.
In the late 1960s, a nascent Mexican-American movement adopted for itself the word "Chicano" (which had a connotation of low class) and broke forth with surprising suddenness.
- Alcoff, Linda Martín (2005). "Latino vs. Hispanic: The politics of ethnic names". Philosophy & Social Criticism. SAGE Publications. 31 (4): 395–407. doi:10.1177/0191453705052972.
- Gamio, Manuel (1930). Mexican Immigration to the United States: A Study of Human Migration and Adjustment. Chicago: University of Chicago Press.
- See: Adalberto M. Guerrero, Macario Saldate IV, and Salomon R. Baldenegro. "Chicano: The term and its meanings." Archived October 22, 2007, at the Wayback Machine A paper written for Hispanic Heritage Month, published in the 1999 conference newsletter of the Arizona Association of Chicanos for Higher Education.
- Vicki L. Ruiz & Virginia Sanchez Korrol, editors. Latinas in the United States: A Historical Encyclopedia. Indiana University Press, 2006.
- Maria Herrera-Sobek. Chicano folklore; a handbook. Greenwood Press 2006.
- Ana Castillo (May 25, 2006). How I Became a Genre-jumper (TV broadcast of a lecture). Santa Barbara, California: UCTV Channel 17.
- "VG: Artist Biography: Castillo, Ana". Voices.CLA.UMN.edu. Minneapolis: University of Minnesota. Retrieved October 13, 2008.
- "Anna Castillo". SpeakingOfStories.org. Archived from the original on October 31, 2008. Retrieved October 13, 2008.
- "The Chicana Subject in Ana Castillo's Fiction and the Discursive Zone of Chicana/o Theory". ERIC.Ed.gov. Retrieved October 13, 2008.
- Castillo, Ana. "Bio". AnaCastillo.com. Retrieved October 13, 2008.
- Bruce-Novoa, Juan (1990). Retro/Space: Collected Essays on Chicano Literature: Theory and History. Houston, Texas: Arte Público Press.
- Butterfield, Jeremy. "Chicano - Oxford Reference". Oxford University Press. doi:10.1093/acref/9780199666317.001.0001/acref-9780199666317-e-4513. Retrieved 2016-04-15.
- Stephen, Lynn (2007). Transborder Lives: Indigenous Oaxacans in Mexico, California, and Oregon. Duke University Press Books. pp. 223–225. ISBN 9780822339908.
- Moore, J. W.; Cuéllar, A. B. (1970). Mexican Americans. Ethnic Groups in American Life series. Englewood, Cliffs, New Jersey: Prentice-Hall. p. 149. ISBN 978-0-13-579490-6.
- Haney López, Ian F. (2004). Racism on Trial: The Chicano Fight for Justice. Belknap Press. p. 82. ISBN 9780674016293.
- Arteaga, Alfred (1997). Chicano Poetics: Heterotexts and Hybridities. Cambridge University Press. p. 11. ISBN 9780521574921.
- Mazón, Mauricio (1989). The Zoot-Suit Riots: The Psychology of Symbolic Annihilation. University of Texas Press. p. 118. ISBN 9780292798038.
- López, Miguel R. (2000). Chicano Timespace: The Poetry and Politics of Ricardo Sánchez. Texas A&M University Press. p. 113. ISBN 9780890969625.
- Ramírez, Catherine S. (2009). The Woman in the Zoot Suit: Gender, Nationalism, and the Cultural Politics of Memory. Duke University Press Books. pp. 109–111. ISBN 9780822343035.
- Meier, Matt S.; Gutiérrez, Margo (2003). The Mexican American Experience: An Encyclopedia. Greenwood. pp. 55–56. ISBN 9780313316432.
- Soldatenko, Michael (1996-06-01). "Perspectivist Chicano Studies, 1970-1985". Ethnic Studies Review. 19 (2–3): 181–208. doi:10.1525/esr.1996.19.2-3.181. ISSN 1555-1881.
- Tijerina, Reies; Gutiérrez, José Ángel (2000). They Called Me King Tiger: My Struggle for the Land and Our Rights. Houston, Texas: Arte Público Press. ISBN 978-1-55885-302-7.
- Renteria, Tamis Hoover (1998). Chicano Professionals: Culture, Conflict, and Identity. Routledge. pp. 67–68. ISBN 9780815330936.
- Gaspar de Alba, Alicia (2002). Velvet Barrios: Popular Culture and Chicana/o Sexualities. Palgrave Macmillan. pp. xxi. ISBN 9781403960979.
- Bojórquez, Charles "Chaz" (2019). "Graffiti is Art: Any Drawn Line That Speaks About Identity, Dignity, and Unity... That Line Is Art". Chicano and Chicana Art: A Critical Anthology. Duke University Press Books. ISBN 9781478003007.
- Diego Vigil, James (1988). Barrio Gangs: Street Life and Identity in Southern California. University of Texas Press. pp. 16–17. ISBN 9780292711198.
- Diego Vigil, James (1988). Barrio Gangs: Street Life and Identity in Southern California. University of Texas Press. p. 150. ISBN 9780292711198.
- Organista, Kurt C. (2007). Solving Latino Psychosocial and Health Problems: Theory, Practice, and Populations. Wiley. p. 191. ISBN 9780470126578.
- Kun, Josh; Pulido, Laura (2013). Black and Brown in Los Angeles: Beyond Conflict and Coalition. University of California Press. pp. 180–181. ISBN 9780520275607.
- "Inside Japan's Chicano Culture". YouTube. New York Times. Retrieved 5 May 2019.
- Jones, Dana. "Japanese Chicano Culture Does Not Amount to Appropriation". The Cougar. The Cougar. Retrieved 5 May 2019.
- Ellison, Louis. "Chicano, A Film by Louis Ellison and Jacob Hodgkinson". YouTube. YouTube. Retrieved 5 May 2019.
- "Japanese Chicanas! Culture Appropriation or Culture Appreciation?". Energy 941. Energy 94.1 FM. Retrieved 5 May 2019.
- Salazar, Rubén (February 6, 1970). "Who is a Chicano? And what is it the Chicanos want?". Los Angeles Times.
- "Leo Limón". UCLA Chicano Studies Research Center. 2019-04-23. Retrieved 2019-06-03.
- Gonzales, Patrisia (2012). Red Medicine: Traditional Indigenous Rites of Birthing and Healing. University of Arizona Press. pp. xxv. ISBN 9780816529568.
- Rodríguez, Roberto Cintli (2014). Our Sacred Maíz Is Our Mother : Indigeneity and Belonging in the Americas. University of Arizona Press. p. 202. ISBN 9780816530618.
- Rodríguez, Roberto Cintli (2014). Our Sacred Maíz Is Our Mother: Indigeneity and Belonging in the Americas. University of Arizona Press. pp. 8–9. ISBN 9780816530618.
- Rodríguez, Roberto Cintli (2014). Our Sacred Maíz Is Our Mother: Indigeneity and Belonging in the Americas. University of Arizona Press. pp. xx–xxi. ISBN 9780816530618.
- Anzaldúa, Gloria (2009). The Gloria Anzaldúa Reader. Duke University Press Books. pp. 289–290. ISBN 9780822345640.
- Beltran, Cristina (2010). The Trouble with Unity: Latino Politics and the Creation of Identity. Oxford University Press. pp. 26–27. ISBN 9780195375916.
- "'Chicano' and the fight for identity". San Francisco Examiner. 9 June 2019. Retrieved 1 August 2019.
- "At L.A. Meeting, Mexican American Student Group MEChA Considers Name Change Amid Generational Divisions". KTLA 5. 3 April 2019. Retrieved 1 August 2019.
- Estrada, Gabriel E. (2002). "The 'Macho' Body as Social Malinche". Velvet Barrios: Popular Culture and Chicana/o Sexualities. Palgrave Macmillan. p. 55. ISBN 9781403960979.
- Flores, Yvette G. (2013). Chicana and Chicano Mental Health: Alma, Mente y Corazón. University of Arizona Press. pp. 103–104. ISBN 9780816529742.
- Flores, Yvette G. (2013). Chicana and Chicano Mental Health: Alma, Mente y Corazón. University of Arizona Press. p. 79. ISBN 9780816529742.
- Flores, Yvette G. (2013). Chicana and Chicano Mental Health: Alma, Mente y Corazón. University of Arizona Press. p. 107. ISBN 9780816529742.
- Rodríguez, Richard T. (2012). "Making Queer Familia". The Routledge Queer Studies Reader. Routledge. ISBN 9780415564113.
- Estrada, Gabriel S. (2002). Velvet Barrios: Popular Culture & Chicana/o Sexualities. Palgrave Macmillan. p. 43. ISBN 9781403960979.
- Flores, Yvette G. (2013). Chicana and Chicano Mental Health: Alma, Mente y Corazón. University of Arizona Press. pp. 1–8. ISBN 9780816529742.
- Flores, Yvette G. (2013). Chicana and Chicano Mental Health: Alma, Mente y Corazón. University of Arizona Press. pp. 8–9. ISBN 9780816529742.
- Castro, Rafaela G. (2001). Chicano Folklore. New York: Oxford University Press. ISBN 978-0-19-514639-4.
- Hurtado, Aida; Gurin, Patricia (2003). Chicana/o Identity in a Changing U.S. Society. Tucson: University of Arizona Press. pp. 10–91. ISBN 978-0-8165-2205-7. OCLC 54074051.
- Montejano, David (1999). Chicano Politics and Society in the Late Twentieth Century. Austin: University of Texas Press. ISBN 978-0-292-75214-6.
- "Cinco de Mayo: An open challenge to Chicano Nationalists". Archived from the original on December 3, 2013.
- Enrique Pérez, Daniel (2009). Rethinking Chicana/o and Latina/o Popular Culture. Palgrave Macmillan. pp. 93–95. ISBN 9780230616066.
- Saldivar, Ramon (1990). Chicano Narrative: Dialectics of Difference. University of Wisconsin Press. p. 175. ISBN 9780299124748.
- Enrique Pérez, Daniel (2009). Rethinking Chicana/o and Latina/o Popular Culture. Palgrave Macmillan. pp. 65–66. ISBN 9780230616066.
- Enrique Pérez, Daniel (2009). Rethinking Chicana/o and Latina/o Popular Culture. Palgrave Macmillan. pp. 90–91. ISBN 9780230616066.
- Cordelia Chávez Candelaria, Peter J. Garcâia, Arturo J. Aldama, eds., Encyclopedia of Latino Popular Culture, Vol. 1: A–L; Greenwood Publishing Group, (2004) p. 135.
- "Inside Japan's Chicano Culture". YouTube. New York Times. Retrieved 5 May 2019.
- Roman, Gabriel. "When East Los Meets Tokyo: Chicano Rap and Lowrider Culture in Japan". OC Weekly. OC Weekly. Retrieved 5 May 2019.
- "HARP Magazine". Archived from the original on December 8, 2008. Retrieved October 13, 2008.
- "The revolution that saved rock". CNN.com. November 13, 2003. Retrieved October 13, 2008.
- Tatum, Charles M. (2017). Chicano Popular Culture, Second Edition: Que Hable el Pueblo. University of Arizona Press. pp. 74–75. ISBN 9780816536528.
- Tatum, Charles M. (2011). Lowriders in Chicano Culture: From Low to Slow to Show. Greenwood. p. 128. ISBN 9780313381492.
- Latorre, Guisela (2008). "Indigenism and Chicana/o Muralism: The Radicalization of an Aesthetic". Walls of Empowerment: Chicana/o Indigenist Murals of California. University of Texas Press. ISBN 9780292719064.
- "Japanese Artist "Night The Funksta" talks 80's Chicano Culture's Spread to Japan". Silicon Valley Debug. Silicon Valley Debug. Retrieved 5 May 2019.
- Rodolfo Acuna, Occupied America: A History of Chicanos, Longman, 2006.
- John R. Chavez, "The Chicano Image and the Myth of Aztlan Rediscovered", in Patrick Gerster and Nicholas Cords (eds.), Myth America: A Historical Anthology, Volume II. St. James, New York: Brandywine Press, 1997.
- John R. Chavez, The Lost Land: A Chicano Image of the American Southwest, Las Cruces: New Mexico State University Publications, 1984.
- Ignacio López-Calvo, Latino Los Angeles in Film and Fiction: The Cultural Production of Social Anxiety. University of Arizona Press, 2011.
- Natalia Molina, Fit to Be Citizens?: Public Health and Race in Los Angeles, 1879–1940. Los Angeles: University of California Press, 2006.
- Michael A. Olivas, Colored Men and Hombres Aquí: Hernandez V. Texas and the Emergence of Mexican American Lawyering. Arte Público Press, 2006.
- Randy J. Ontiveros, In the Spirit of a New People: The Cultural Politics of the Chicano Movement. New York University Press, 2014.
- Gregorio Riviera and Tino Villanueva (eds.), MAGINE: Literary Arts Journal. Special Issue on Chicano Art. Vol. 3, Nos. 1 & 2. Boston: Imagine Publishers. 1986.
- F. Arturo Rosales, Chicano! The History of the Mexican American Civil Rights Movement. Houston, Texas: Arte Publico Press, 1996.
The Opium Wars (or the Anglo-Chinese Wars) were two wars fought in the mid-1800s as the climax of a long dispute between China and Britain; in the second, France fought alongside Britain. The dispute centered on the import into China of opium grown in British India. The Qing emperor (Dao Guang) had banned opium in China, citing its harmful effects on health and its deleterious impact on societal productivity. The British Empire, while also banning opium consumption within her own borders, saw no problem in exporting the drug for profit. The Opium Wars and the unequal treaties signed afterwards led in part to the downfall of the Qing empire, as many countries followed Britain and forced unequal terms of trade with China.
For Britain, China was an arena in which what has been described as a ‘new imperial policy’ was pursued: negotiating trade concessions, permanent missions, and a small colonial possession, such as Hong Kong, instead of conquering or acquiring a much larger territory. Places such as China, Persia, and parts of the Ottoman Empire were brought so far within the sphere of imperial influence that the effective power of these countries’ own governments was compromised. The Opium Wars, which aimed to compel China to continue to import opium, were among the most immoral and hypocritical episodes in the history of the British Empire, which saw itself as shouldering a moral burden to educate and uplift the non-white world while in reality it was an exploitative and often brutal enterprise.
The Growth of the Opium Trade (1650–1773)
The Qing Dynasty of China, beset by increasingly aggressive foreign powers that clamoured for two-way trade with China, entered a long decline in the early 1800s. Europeans bought porcelain, silk, spices and tea from China, but were unable to sell goods in return. Instead, they were forced to trade directly in silver, which further strained finances already squeezed by European wars.
Opium itself had been manufactured in China since the fifteenth century for medical purposes. It was mixed with tobacco in a process popularized by the Spanish. Trade in opium was dominated by the Dutch during the eighteenth century. Faced with the health and social problems associated with opium use, the Chinese imperial government prohibited the smoking and trading of opium in 1729.
The British, following the Dutch lead, had been purchasing opium from India ever since the reign of Akbar (1556–1605). After territorial conquest of Bengal in the Battle of Plassey (1757), the British East India Company pursued a monopoly on production and export in India. This effort had serious implications for the peasant cultivators, who were often coerced or offered cash advances to encourage cultivation of the poppy (something that was rarely done for other crops). The product was then sold at auctions in Calcutta, often with a profit of 400 percent.
The British East India Company (1773–1833)
In 1773 the governor-general of Bengal pursued the monopoly on the sale of opium in earnest, and abolished the old opium syndicate at Patna. For the next 50 years, opium would be key to the East India Company's hold on India. Since importation of opium into China was against Chinese law (China already produced a small quantity domestically), the British East India Company would buy tea in Canton on credit, carrying no opium, but would instead sell opium at the auctions in Calcutta leaving it to be smuggled to China. In 1797 the company ended the role of local Bengal purchasing agents and instituted the direct sale of opium to the company by farmers.
British exports of opium to China skyrocketed from an estimated 15 tons in 1730, to 75 tons in 1773, shipped in over two thousand "chests," each containing 140 pounds (67 kilograms) of opium.
In 1799 the Chinese Empire reaffirmed its ban on opium imports, and in 1810 the following decree was issued:
Opium has a very violent effect. When an addict smokes it, it rapidly makes him extremely excited and capable of doing anything he pleases. But before long, it kills him. Opium is a poison, undermining our good customs and morality. Its use is prohibited by law. Now the commoner, Yang, dares to bring it into the Forbidden City. Indeed, he flouts the law!
However, recently the purchasers and eaters of opium have become numerous. Deceitful merchants buy and sell it to gain profit. The customs house at the Ch'ung-wen Gate was originally set up to supervise the collection of imports (it had no responsibility with regard to opium smuggling). If we confine our search for opium to the seaports, we fear the search will not be sufficiently thorough. We should also order the general commandant of the police and police-censors at the five gates to prohibit opium and to search for it at all gates. If they capture any violators, they should immediately punish them and should destroy the opium at once. As to Kwangtung and Fukien, the provinces from which opium comes, we order their viceroys, governors, and superintendents of the maritime customs to conduct a thorough search for opium, and cut off its supply. They should in no ways consider this order a dead letter and allow opium to be smuggled out!
The decree had little effect. The Manchu Chinese government was located in Beijing, in the north – too far away to control the merchants who smuggled opium into China from the south. The lack of governmental action, the addictive properties of the drug, the greed for more profit by the British East India Company and merchants, and the British government's hunger for silver to support the gold standard (each printed bank note was backed by its value in gold and silver) combined to further the opium trade. In the 1820s, the opium trade averaged nine hundred tons per year from Bengal to China.
From the Napier Affair through the First Opium War (1834–1843)
In 1834, to accommodate the revocation of the East India Company's monopoly, the British sent Lord Napier to Macao. He attempted to circumvent the restrictive Canton trade laws, which forbade direct contact with Chinese officials, and was turned away by the governor of Macao, who promptly closed trade starting on September 2 of that year. The British were not yet ready to force the matter, and agreed to resume trade under the old restrictions, even though Lord Napier implored them to force open the port.
Within the Chinese mandarinate, there was a debate on legalizing opium trade itself, but this was rejected in favor of continued restrictions. In 1838 the death penalty was imposed for native drug traffickers; by this time the British were selling 1,400 tons annually to China. In March 1839, a new commissioner, Lin Zexu, was appointed by the emperor to control the opium trade at the port of Canton. He immediately enforced the imperial demand that there be a permanent halt to drug shipments into China. When the British refused to end the trade, Lin Zexu imposed a trade embargo on the British. On March 27, 1839, Charles Elliot, British Superintendent of Trade, demanded that all British subjects turn over opium to him to be confiscated by the commissioner, amounting to nearly a year's supply of the drug.
After the opium was surrendered, trade was restarted on the condition that no more drugs would be smuggled into China. Lin Zexu demanded that British merchants sign a bond promising not to deal in opium, under penalty of death. The British officially opposed the signing of the bond, but some British merchants who did not deal in opium were willing to sign. Lin Zexu then disposed of the opium by dissolving it with water, salt, and lime and flushing it out into the ocean.
To avoid direct conflict, Lin also attempted diplomacy. In 1839 Lin Zexu wrote a letter to Queen Victoria, questioning her royal government's moral reasoning for enforcing strict prohibition of opium trade within England, Ireland and Scotland while reaping profits from such trade in the Far East.
Sidestepping the moral questions, the British government and merchants accused Lin Zexu of destroying their private property—roughly three million pounds of opium. The British responded by sending warships and soldiers, along with a large British Indian army, which arrived in June of 1840.
British military superiority was evident during the armed conflict. British warships attacked coastal towns at will, and their troops, armed with modern muskets and cannons, were able to easily defeat the Qing forces. The British took Canton and then sailed up the Yangtze and took the tax barges, slashing the revenue of the imperial court in Beijing to just a small fraction.
In 1842 the Qing authorities sued for peace, which concluded with the Treaty of Nanking negotiated in August of that year and accepted in 1843. The treaty included ceding to Britain the crown colony of Hong Kong and allowing Britain and other foreign powers to operate in a number of Chinese ports, including Shanghai, with almost no revenue going to the Chinese government. Thus, what were called 'spheres of influence' developed. The treaty also admitted Christian missionaries into China and excepted British men and women living or working in China from Chinese law, meaning all British personnel enjoyed what amounted to diplomatic status and immunity. The international and French concessions in Shanghai enjoyed extraterritoriality and were self-governing as were similar concessions, or "capitulations," in Ottoman territory.
Second Opium War (1856–1860)
The Second Opium War, or Arrow War, broke out following an incident in which Chinese officials boarded a British-registered, Chinese-owned ship, the Arrow. The crew of the Arrow were accused of piracy and smuggling, and were arrested. In response, the British claimed that the ship was flying a British flag, and was protected (as were all British ships) by the Treaty of Nanking.
The war's true outbreak was delayed for a few months by the Taiping Rebellion and the Indian Mutiny; the following year, the British attacked Guangzhou. The British then gained aid from their allies—France, Russia, and the United States—and the war continued.
The Treaty of Tientsin was created in July 1858, but was not ratified by China until two years later; this would prove to be a very important document in China's early modern history, as it was one of the primary unequal treaties.
Hostilities broke out once more in 1859, after China refused the establishment of a British embassy in Beijing, which had been promised by the Treaty of Tientsin. Fighting erupted in Hong Kong and in Beijing, where the British set fire to the Summer Palace and Old Summer Palace after considerable looting took place.
In 1860, at the Convention of Peking, China ratified the Treaty of Tientsin, ending the war, and granting a number of privileges to British (and other Western) subjects within China.
- ↑ Fu Lo-shu. A Documentary Chronicle of Sino-Western relations, Vol. 1, Tucson, AZ: University of Arizona Press, 1966. ISBN 9780816501519. p. 380.
- ↑ Coleman, Anthony (ed.). Millennium: A Thousand Years of History. London: Bantam, 1999. ISBN 9780593044780. pp. 243-244.
- ↑ Modern History Sourcebook: Commissioner Lin: Letter to Queen Victoria, 1839. Paul Hallsall, Fordham University. October 1998. Retrieved February 14, 2007.
- ↑ Spence, Jonathan D. The Search for Modern China, 2nd ed. New York: W. W. Norton & Company, 1990. ISBN 9780393027082. pp. 153-155.
References
- Beeching, Jack. The Chinese Opium Wars. New York: Harcourt Brace Jovanovich, 1975.
- Brook, Timothy and Bob Tadashi Wakabayashi (eds.). Opium Regimes: China, Britain, and Japan, 1839-1952. Berkeley, CA: University of California Press, 2000. ISBN 9780520220096
- Coleman, Anthony (ed.). Millennium: A Thousand Years of History. London: Bantam, 1999. ISBN 9780593044780
- Collis, Maurice. Foreign Mud, An account of the Opium War. New York: A. A. Knopf, 1947; London: Faber and Faber, 1997. ISBN 0571193013
- Fu Lo-shu. A Documentary Chronicle of Sino-Western relations, Vol. 1. Tucson, AZ: University of Arizona Press, 1966. ISBN 9780816501519
- Spence, Jonathan D. The Search for Modern China, 2nd ed. New York: W. W. Norton & Company, 1990. ISBN 9780393027082
- Trocki, Carl A. Opium, Empire and the Global Political Economy: A Study of the Asian Opium Trade, 1750-1950. London: Routledge, 1999. ISBN 9780415199186
- Zheng, Yangwen. The Social Life of Opium in China. Cambridge: Cambridge University Press, 2005. ISBN 9780521846080
Caltech researchers used the Mars Reconnaissance Orbiter to determine that surface water left salt minerals behind as recently as 2 billion years ago.
Mars once rippled with rivers and ponds billions of years ago, providing a potential habitat for microbial life. As the planet’s atmosphere thinned over time, that water evaporated, leaving the frozen desert world that NASA’s Mars Reconnaissance Orbiter (MRO) studies today.
It’s commonly believed that Mars’ water evaporated about 3 billion years ago. But two scientists studying data that MRO has accumulated at Mars over the last 15 years have found evidence that reduces that timeline significantly: Their research reveals signs of liquid water on the Red Planet as recently as 2 billion to 2.5 billion years ago, meaning water flowed there about a billion years longer than previous estimates.
The findings – published in AGU Advances on Dec. 27, 2021 – center on the chloride salt deposits left behind as icy meltwater flowing across the landscape evaporated.
While the shape of certain valley networks hinted that water may have flowed on Mars that recently, the salt deposits provide the first mineral evidence confirming the presence of liquid water. The discovery raises new questions about how long microbial life could have survived on Mars, if it ever formed at all. On Earth, at least, where there is water, there is life.
The study’s lead author, Ellen Leask, performed much of the research as part of her doctoral work at Caltech in Pasadena. She and Caltech professor Bethany Ehlmann used data from the MRO instrument called the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) to map the chloride salts across the clay-rich highlands of Mars’ southern hemisphere – terrain pockmarked by impact craters. These craters were one key to dating the salts: The fewer craters a terrain has, the younger it is. By counting the number of craters on an area of the surface, scientists can estimate its age.
MRO has two cameras that are perfect for this purpose. The Context Camera, with its black-and-white wide-angle lens, helps scientists map the extent of the chlorides. To zoom in, scientists turn to the High-Resolution Imaging Science Experiment (HiRISE) color camera, allowing them to see details as small as a Mars rover from space.
Using both cameras to create digital elevation maps, Leask and Ehlmann found that many of the salts were in depressions – once home to shallow ponds – on gently sloping volcanic plains. The scientists also found winding, dry channels nearby – former streams that once fed surface runoff (from the occasional melting of ice or permafrost) into these ponds. Crater counting and evidence of salts on top of volcanic terrain allowed them to date the deposits.
“What is amazing is that after more than a decade of providing high-resolution image, stereo, and infrared data, MRO has driven new discoveries about the nature and timing of these river-connected ancient salt ponds,” said Ehlmann, CRISM’s deputy principal investigator. Her co-author, Leask, is now a post-doctoral researcher at Johns Hopkins University’s Applied Physics Laboratory, which leads CRISM.
The salt minerals were first discovered 14 years ago by NASA’s Mars Odyssey orbiter, which launched in 2001. MRO, which has higher-resolution instruments than Odyssey, launched in 2005 and has been studying the salts, among many other features of Mars, ever since. Both are managed by NASA’s Jet Propulsion Laboratory in Southern California.
“Part of the value of MRO is that our view of the planet keeps getting more detailed over time,” said Leslie Tamppari, the mission’s deputy project scientist at JPL. “The more of the planet we map with our instruments, the better we can understand its history.”
More About the Mission
JPL, a division of Caltech in Pasadena, California, manages the MRO mission for NASA’s Science Mission Directorate in Washington. The University of Arizona, in Tucson, operates HiRISE, which was built by Ball Aerospace & Technologies Corp., in Boulder, Colorado. MARCI and the Context Camera were both built and are operated by Malin Space Science Systems in San Diego.
A quantum computer (also known as a quantum supercomputer) is a computation device that makes direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from digital computers based on transistors. Whereas digital computers require data to be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses qubits (quantum bits), which can be in superpositions of states. A theoretical model is the quantum Turing machine, also known as the universal quantum computer. Quantum computers share theoretical similarities with non-deterministic and probabilistic computers; one example is the ability to be in more than one state simultaneously. The field of quantum computing was first introduced by Yuri Manin in 1980 and Richard Feynman in 1982. A quantum computer with spins as quantum bits was also formulated for use as a quantum space–time in 1969.
As of 2014, quantum computing is still in its infancy, but experiments have been carried out in which quantum computational operations were executed on a very small number of qubits. Both practical and theoretical research continues, and many national governments and military funding agencies support quantum computing research to develop quantum computers for both civilian and national security purposes, such as cryptanalysis.
Large-scale quantum computers will be able to solve certain problems much more quickly than any classical computer using the best currently known algorithms, like integer factorization using Shor's algorithm or the simulation of quantum many-body systems. There exist quantum algorithms, such as Simon's algorithm, which run faster than any possible probabilistic classical algorithm. Given sufficient computational resources, however, a classical computer could be made to simulate any quantum algorithm; quantum computation does not violate the Church–Turing thesis.
A classical computer has a memory made up of bits, where each bit represents either a one or a zero. A quantum computer maintains a sequence of qubits. A single qubit can represent a one, a zero, or any quantum superposition of these two qubit states; moreover, a pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8. In general, a quantum computer with n qubits can be in an arbitrary superposition of up to 2^n different states simultaneously (this compares to a normal computer that can only be in one of these 2^n states at any one time). A quantum computer operates by setting the qubits in a controlled initial state that represents the problem at hand and by manipulating those qubits with a fixed sequence of quantum logic gates. The sequence of gates to be applied is called a quantum algorithm. The calculation ends with a measurement, collapsing the system of qubits into one of the 2^n pure states, where each qubit is purely zero or one. The outcome can therefore be at most n classical bits of information. Quantum algorithms are often non-deterministic, in that they provide the correct solution only with a certain known probability.
An example of an implementation of qubits for a quantum computer could start with the use of particles with two spin states: "down" and "up" (typically written |↓⟩ and |↑⟩, or |0⟩ and |1⟩). But in fact any system possessing an observable quantity A, which is conserved under time evolution such that A has at least two discrete and sufficiently spaced consecutive eigenvalues, is a suitable candidate for implementing a qubit. This is true because any such system can be mapped onto an effective spin-1/2 system.
Bits vs. qubits
A quantum computer with a given number of qubits is fundamentally different from a classical computer composed of the same number of classical bits. For example, to represent the state of an n-qubit system on a classical computer would require the storage of 2^n complex coefficients. Although this fact may seem to indicate that qubits can hold exponentially more information than their classical counterparts, care must be taken not to overlook the fact that the qubits are only in a probabilistic superposition of all of their states. This means that when the final state of the qubits is measured, they will only be found in one of the possible configurations they were in before measurement. Moreover, it is incorrect to think of the qubits as only being in one particular state before measurement since the fact that they were in a superposition of states before the measurement was made directly affects the possible outcomes of the computation.
For example: Consider first a classical computer that operates on a three-bit register. The state of the computer at any time is a probability distribution over the different three-bit strings 000, 001, 010, 011, 100, 101, 110, 111. If it is a deterministic computer, then it is in exactly one of these states with probability 1. However, if it is a probabilistic computer, then there is a possibility of it being in any one of a number of different states. We can describe this probabilistic state by eight nonnegative numbers A,B,C,D,E,F,G,H (where A = probability computer is in state 000, B = probability computer is in state 001, etc.). There is a restriction that these probabilities sum to 1.
The state of a three-qubit quantum computer is similarly described by an eight-dimensional vector (a,b,c,d,e,f,g,h), called a ket. Here, however, the coefficients can have complex values, and it is the sum of the squares of the coefficients' magnitudes, |a|² + |b|² + ... + |h|², that must equal 1. The coefficients are the probability amplitudes of the corresponding states, and their squared magnitudes give the probabilities of measuring those states. However, because a complex number encodes not just a magnitude but also a direction in the complex plane, the phase difference between any two coefficients (states) represents a meaningful parameter. This is a fundamental difference between quantum computing and probabilistic classical computing.
If you measure the three qubits, you will observe a three-bit string. The probability of measuring a given string is the squared magnitude of that string's coefficient (i.e., the probability of measuring 000 is |a|², the probability of measuring 001 is |b|², etc.). Thus, measuring a quantum state described by complex coefficients (a,b,...,h) gives the classical probability distribution (|a|², |b|², ..., |h|²), and we say that the quantum state "collapses" to a classical state as a result of making the measurement.
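As a concrete illustration of the rule just described, here is a minimal Python sketch; the eight amplitudes are made-up example values, not taken from any source above:

```python
import numpy as np

# Hypothetical three-qubit state (a, b, ..., h); values chosen purely for illustration.
amplitudes = np.array([0.5, 0.5j, 0.0, 0.5, 0.0, 0.0, 0.5, 0.0], dtype=complex)

# A valid state is normalized: the squared magnitudes must sum to 1.
assert np.isclose(np.sum(np.abs(amplitudes) ** 2), 1.0)

# The probability of observing each bit string is the squared magnitude of its amplitude.
labels = [format(i, "03b") for i in range(8)]   # "000", "001", ..., "111"
probabilities = np.abs(amplitudes) ** 2
for label, p in zip(labels, probabilities):
    print(label, p)                              # e.g. 000 -> 0.25
```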
Note that an eight-dimensional vector can be specified in many different ways depending on what basis is chosen for the space. The basis of bit strings (e.g., 000, 001, ..., 111) is known as the computational basis. Other possible bases are sets of unit-length, orthogonal vectors, such as the eigenvectors of the Pauli-x operator. Ket notation is often used to make the choice of basis explicit. For example, the state (a,b,c,d,e,f,g,h) in the computational basis can be written as:
a |000⟩ + b |001⟩ + c |010⟩ + d |011⟩ + e |100⟩ + f |101⟩ + g |110⟩ + h |111⟩,
where, e.g., |010⟩ = (0,0,1,0,0,0,0,0).
The computational basis for a single qubit (two dimensions) is |0⟩ = (1,0) and |1⟩ = (0,1).
Using the eigenvectors of the Pauli-x operator, a single qubit is |+⟩ = (1/√2)(1,1) and |−⟩ = (1/√2)(1,−1).
(An open question in physics: is a universal quantum computer sufficient to efficiently simulate an arbitrary physical system?)
While a classical three-bit state and a quantum three-qubit state are both eight-dimensional vectors, they are manipulated quite differently for classical or quantum computation. For computing in either case, the system must be initialized, for example into the all-zeros string, |000⟩, corresponding to the vector (1,0,0,0,0,0,0,0). In classical randomized computation, the system evolves according to the application of stochastic matrices, which preserve that the probabilities add up to one (i.e., preserve the L1 norm). In quantum computation, on the other hand, allowed operations are unitary matrices, which are effectively rotations (they preserve that the sum of the squares adds up to one, the Euclidean or L2 norm). (Exactly which unitaries can be applied depends on the physics of the quantum device.) Consequently, since rotations can be undone by rotating backward, quantum computations are reversible. (Technically, quantum operations can be probabilistic combinations of unitaries, so quantum computation really does generalize classical computation. See quantum circuit for a more precise formulation.)
Finally, upon termination of the algorithm, the result needs to be read off. In the case of a classical computer, we sample from the probability distribution on the three-bit register to obtain one definite three-bit string, say 000. Quantum mechanically, we measure the three-qubit state, which is equivalent to collapsing the quantum state down to a classical distribution (with the coefficients in the classical state being the squared magnitudes of the coefficients for the quantum state, as described above), followed by sampling from that distribution. Note that this destroys the original quantum state. Many algorithms will only give the correct answer with a certain probability. However, by repeatedly initializing, running and measuring the quantum computer, the probability of getting the correct answer can be increased.
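The whole recipe above (initialize, apply unitaries, measure) can be sketched in a few lines of plain NumPy. This is an illustrative toy, not any particular quantum library's API, and the choice of a Hadamard gate on every qubit is an arbitrary example:

```python
import numpy as np

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                  # initialize to |000> = (1,0,0,0,0,0,0,0)

# Single-qubit Hadamard; the full 8x8 unitary is its threefold Kronecker product.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
U = np.kron(np.kron(H, H), H)

state = U @ state                               # unitary evolution preserves the L2 norm

# Measurement: collapse to the classical distribution of squared magnitudes, then sample.
probabilities = np.abs(state) ** 2
outcome = np.random.choice(2 ** n, p=probabilities)
print(format(outcome, "03b"))                   # each of the 8 strings appears with probability 1/8
```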
For more details on the sequences of operations used for various quantum algorithms, see universal quantum computer, Shor's algorithm, Grover's algorithm, Deutsch-Jozsa algorithm, amplitude amplification, quantum Fourier transform, quantum gate, quantum adiabatic algorithm and quantum error correction.
Integer factorization is believed to be computationally infeasible with an ordinary computer for large integers if they are the product of few prime numbers (e.g., products of two 300-digit primes). By comparison, a quantum computer could efficiently solve this problem using Shor's algorithm to find its factors. This ability would allow a quantum computer to decrypt many of the cryptographic systems in use today, in the sense that there would be a polynomial time (in the number of digits of the integer) algorithm for solving the problem. In particular, most of the popular public key ciphers are based on the difficulty of factoring integers (or the related discrete logarithm problem, which can also be solved by Shor's algorithm), including forms of RSA. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security.
However, other existing cryptographic algorithms do not appear to be broken by these algorithms. Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem based on a problem in coding theory. Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice based cryptosystems, is a well-studied open problem. It has been proven that applying Grover's algorithm to break a symmetric (secret key) algorithm by brute force requires time equal to roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case, meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search (see Key size). Quantum cryptography could potentially fulfill some of the functions of public key cryptography.
Besides factorization and discrete logarithms, quantum algorithms offering a more than polynomial speedup over the best known classical algorithm have been found for several problems, including the simulation of quantum physical processes from chemistry and solid state physics, the approximation of Jones polynomials, and solving Pell's equation. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, although this is considered unlikely. For some problems, quantum computers offer a polynomial speedup. The most well-known example of this is quantum database search, which can be solved by Grover's algorithm using quadratically fewer queries to the database than are required by classical algorithms. In this case the advantage is provable. Several other examples of provable quantum speedups for query problems have subsequently been discovered, such as for finding collisions in two-to-one functions and evaluating NAND trees.
Consider a problem that has these four properties:
- The only way to solve it is to guess answers repeatedly and check them,
- The number of possible answers to check is the same as the number of inputs,
- Every possible answer takes the same amount of time to check, and
- There are no clues about which answers might be better: generating possibilities randomly is just as good as checking them in some special order.
For problems with all four properties, the time for a quantum computer to solve this will be proportional to the square root of the number of inputs. It can be used to attack symmetric ciphers such as Triple DES and AES by attempting to guess the secret key.
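To get a feel for that square-root scaling, a back-of-the-envelope sketch follows. The iteration count of roughly (π/4)√N is the standard Grover figure; the key sizes used are illustrative only:

```python
import math

def grover_iterations(num_possibilities: int) -> int:
    """Approximate number of Grover iterations needed to find one marked item among N."""
    return math.floor(math.pi / 4 * math.sqrt(num_possibilities))

def expected_classical_guesses(num_possibilities: int) -> int:
    """Expected number of random classical guesses (on the order of N)."""
    return num_possibilities // 2

for key_bits in (56, 128):            # e.g. roughly DES- and AES-128-sized key spaces
    n = 2 ** key_bits
    print(key_bits, expected_classical_guesses(n), grover_iterations(n))
```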
Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, many believe quantum simulation will be one of the most important applications of quantum computing.
There are a number of technical challenges in building a large-scale quantum computer, and thus far quantum computers have yet to solve a problem faster than a classical computer. David DiVincenzo, of IBM, listed the following requirements for a practical quantum computer:
- scalable physically to increase the number of qubits;
- qubits can be initialized to arbitrary values;
- quantum gates faster than decoherence time;
- universal gate set;
- qubits can be read easily.
One of the greatest challenges is controlling or removing quantum decoherence. This usually means isolating the system from its environment as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist. Examples include the quantum gates, and the lattice vibrations and background nuclear spin of the physical system used to implement the qubits. Decoherence is irreversible, as it is non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems, in particular the transverse relaxation time T2 (for NMR and MRI technology, also called the dephasing time), typically range between nanoseconds and seconds at low temperature.
These issues are more difficult for optical approaches as the timescales are orders of magnitude shorter and an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time, hence any operation must be completed much more quickly than the decoherence time.
If the error rate is small enough, it is thought to be possible to use quantum error correction, which corrects errors due to decoherence, thereby allowing the total calculation time to be longer than the decoherence time. An often cited figure for the required error rate in each gate is 10⁻⁴. This implies that each gate must be able to perform its task in one 10,000th of the decoherence time of the system.
Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L², where L is the number of bits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10⁴ qubits without error correction. With error correction, the figure would rise to about 10⁷ qubits. Note that computation time is about L² or about 10⁷ steps; at 1 MHz, that is about 10 seconds.
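The order-of-magnitude arithmetic quoted above can be written out explicitly. This is only a sketch of that rough estimate, with factor-of-ten fudge factors chosen to reproduce the figures in the text, not a rigorous resource count:

```python
L = 1000                              # bits in the number to be factored

qubits_no_ec = 10 * L                 # "about 10^4 qubits" without error correction
qubits_with_ec = qubits_no_ec * L     # an extra factor of roughly L -> about 10^7 qubits

steps = 10 * L ** 2                   # "about L^2", here roughly 10^7 steps
gate_rate_hz = 1_000_000              # assumed 1 MHz gate rate
runtime_seconds = steps / gate_rate_hz
print(qubits_no_ec, qubits_with_ec, runtime_seconds)   # 10000 10000000 10.0
```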
A very different approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads and relying on braid theory to form stable logic gates.
There are a number of quantum computing models, distinguished by the basic elements in which the computation is decomposed. The four main models of practical importance are:
- Quantum gate array (computation decomposed into sequence of few-qubit quantum gates)
- One-way quantum computer (computation decomposed into sequence of one-qubit measurements applied to a highly entangled initial state or cluster state)
- Adiabatic quantum computer or computer based on Quantum annealing (computation decomposed into a slow continuous transformation of an initial Hamiltonian into a final Hamiltonian, whose ground states contains the solution)
- Topological quantum computer (computation decomposed into the braiding of anyons in a 2D lattice)
The Quantum Turing machine is theoretically important but direct implementation of this model is not pursued. All four models of computation have been shown to be equivalent to each other in the sense that each can simulate the other with no more than polynomial overhead.
For physically implementing a quantum computer, many different candidates are being pursued, among them (distinguished by the physical system used to realize the qubits):
- Superconductor-based quantum computers (including SQUID-based quantum computers) (qubit implemented by the state of small superconducting circuits (Josephson junctions))
- Trapped ion quantum computer (qubit implemented by the internal state of trapped ions)
- Optical lattices (qubit implemented by internal states of neutral atoms trapped in an optical lattice)
- Electrically defined or self-assembled quantum dots (e.g. the Loss-DiVincenzo quantum computer) (qubit given by the spin states of an electron trapped in the quantum dot)
- Quantum dot charge based semiconductor quantum computer (qubit is the position of an electron inside a double quantum dot)
- Nuclear magnetic resonance on molecules in solution (liquid-state NMR) (qubit provided by nuclear spins within the dissolved molecule)
- Solid-state NMR Kane quantum computers (qubit realized by the nuclear spin state of phosphorus donors in silicon)
- Electrons-on-helium quantum computers (qubit is the electron spin)
- Cavity quantum electrodynamics (CQED) (qubit provided by the internal state of atoms trapped in and coupled to high-finesse cavities)
- Molecular magnet
- Fullerene-based ESR quantum computer (qubit based on the electronic spin of atoms or molecules encased in fullerene structures)
- Linear optical quantum computer (qubits realized by processing appropriate states of different modes of the electromagnetic field through linear optics elements such as mirrors, beam splitters and phase shifters)
- Diamond-based quantum computer (qubit realized by the electronic or nuclear spin of Nitrogen-vacancy centers in diamond)
- Bose–Einstein condensate-based quantum computer
- Transistor-based quantum computer – string quantum computers with entrainment of positive holes using an electrostatic trap
- Rare-earth-metal-ion-doped inorganic crystal based quantum computers (qubit realized by the internal electronic state of dopants in optical fibers)
The large number of candidates demonstrates that the topic, in spite of rapid progress, is still in its infancy. But at the same time, there is also a vast amount of flexibility.
In 2001, researchers were able to demonstrate Shor's algorithm to factor the number 15 using a 7-qubit NMR computer.
In 2005, researchers at the University of Michigan built a semiconductor chip that functioned as an ion trap. Such devices, produced by standard lithography techniques, may point the way to scalable quantum computing tools. An improved version was made in 2006.
In 2009, researchers at Yale University created the first rudimentary solid-state quantum processor. The two-qubit superconducting chip was able to run elementary algorithms. Each of the two artificial atoms (or qubits) was made up of a billion aluminum atoms, but they acted like a single atom that could occupy two different energy states.
Another team, working at the University of Bristol, also created a silicon-based quantum computing chip, based on quantum optics. The team was able to run Shor's algorithm on the chip. Further developments were made in 2010. Springer publishes a journal ("Quantum Information Processing") devoted to the subject.
In April 2011, a team of scientists from Australia and Japan made a breakthrough in quantum teleportation. They successfully transferred a complex set of quantum data with full transmission integrity; the qubits were destroyed in one place and instantaneously recreated in another, without their superpositions being affected.
In 2011, D-Wave Systems announced the first commercial quantum annealer on the market by the name D-Wave One. The company claims this system uses a 128 qubit processor chipset. On May 25, 2011 D-Wave announced that Lockheed Martin Corporation entered into an agreement to purchase a D-Wave One system. Lockheed Martin and the University of Southern California (USC) reached an agreement to house the D-Wave One Adiabatic Quantum Computer at the newly formed USC Lockheed Martin Quantum Computing Center, part of USC's Information Sciences Institute campus in Marina del Rey. D-Wave's engineers use an empirical approach when designing their quantum chips, focusing on whether the chips are able to solve particular problems rather than designing based on a thorough understanding of the quantum principles involved. This approach was liked by investors more than by some academic critics, who said that D-Wave had not yet sufficiently demonstrated that they really had a quantum computer. Such criticism softened once D-Wave published a paper in Nature giving details, which critics said proved that the company's chips did have some of the quantum mechanical properties needed for quantum computing.
During the same year, researchers working at the University of Bristol created an all-bulk optics system able to run an iterative version of Shor's algorithm. They successfully managed to factorize 21.
In November 2011 researchers factorized 143 using 4 qubits.
In February 2012 IBM scientists said that they had made several breakthroughs in quantum computing with superconducting integrated circuits that put them "on the cusp of building systems that will take computing to a whole new level."
In April 2012 a multinational team of researchers from the University of Southern California, Delft University of Technology, the Iowa State University of Science and Technology, and the University of California, Santa Barbara, constructed a two-qubit quantum computer on a crystal of diamond doped with impurities, one that can be scaled up in size and functionality at room temperature. The two logical qubits were encoded in the spin of an electron and of a nitrogen nucleus. A system that shapes microwave pulses of the appropriate duration and form was developed to protect the qubits against decoherence. Using this computer, Grover's algorithm produced the right answer on the first try in 95% of the four-option search trials.
In September 2012, Australian researchers at the University of New South Wales said the world's first quantum computer was just 5 to 10 years away, after announcing a global breakthrough enabling manufacture of its memory building blocks. A research team led by Australian engineers created the first working "quantum bit" based on a single atom in silicon, invoking the same technological platform that forms the building blocks of modern day computers, laptops and phones.
In October 2012, Nobel Prizes were presented to David J. Wineland and Serge Haroche for their basic work on understanding the quantum world - work which may eventually help make quantum computing possible.
In February 2013, a new technique, boson sampling, was reported by two groups using photons in an optical lattice that is not a universal quantum computer but which may be good enough for practical problems (Science, February 15, 2013).
In May 2013, Google Inc announced that it was launching the Quantum Artificial Intelligence Lab, to be hosted by NASA's Ames Research Center. The lab will house a 512-qubit quantum computer from D-Wave Systems, and the USRA (Universities Space Research Association) will invite researchers from around the world to share time on it. The goal is to study how quantum computing might advance machine learning.
In early 2014 it was reported, based on documents provided by former NSA contractor Edward Snowden, that the U.S. National Security Agency (NSA) is running a $79.7 million research program (titled "Penetrating Hard Targets") with the aim of developing a quantum computer capable of breaking encryption vulnerable to quantum computers.
Relation to computational complexity theory
The class of problems that can be efficiently solved by quantum computers is called BQP, for "bounded error, quantum, polynomial time". Quantum computers only run probabilistic algorithms, so BQP on quantum computers is the counterpart of BPP ("bounded error, probabilistic, polynomial time") on classical computers. It is defined as the set of problems solvable with a polynomial-time algorithm, whose probability of error is bounded away from one half. A quantum computer is said to "solve" a problem if, for every instance, its answer will be right with high probability. If that solution runs in polynomial time, then that problem is in BQP.
BQP is suspected to be disjoint from NP-complete and a strict superset of P, but that is not known. Both integer factorization and discrete log are in BQP. Both of these problems are NP problems suspected to be outside BPP, and hence outside P. Both are suspected to not be NP-complete. There is a common misconception that quantum computers can solve NP-complete problems in polynomial time. That is not known to be true, and is generally suspected to be false.
The capacity of a quantum computer to accelerate classical algorithms has rigid limits—upper bounds of quantum computation's complexity. The overwhelming part of classical calculations cannot be accelerated on a quantum computer. A similar fact takes place for particular computational tasks, like the search problem, for which Grover's algorithm is optimal.
Although quantum computers may be faster than classical computers, those described above can't solve any problems that classical computers can't solve, given enough time and memory (however, those amounts might be practically infeasible). A Turing machine can simulate these quantum computers, so such a quantum computer could never solve an undecidable problem like the halting problem. The existence of "standard" quantum computers does not disprove the Church–Turing thesis. It has been speculated that theories of quantum gravity, such as M-theory or loop quantum gravity, may allow even faster computers to be built. Currently, defining computation in such theories is an open problem due to the problem of time, i.e., there currently exists no obvious way to describe what it means for an observer to submit input to a computer and later receive output.
- Chemical computer
- DNA computer
- Electronic quantum holography
- List of emerging technologies
- Natural computing
- Normal mode
- Photonic computing
- Post-quantum cryptography
- Quantum annealing
- Quantum bus
- Quantum cognition
- Quantum gate
- Quantum threshold theorem
- Timeline of quantum computing
- Topological quantum computer
- "Quantum Computing with Molecules" article in Scientific American by Neil Gershenfeld and Isaac L. Chuang
- Manin, Yu. I. (1980). Vychislimoe i nevychislimoe [Computable and Noncomputable] (in Russian). Sov.Radio. pp. 13–15. Retrieved 4 March 2013.
- Feynman, R. P. (1982). "Simulating physics with computers". International Journal of Theoretical Physics 21 (6): 467–488. doi:10.1007/BF02650179.
- Deutsch, David (1992-01-06). "Quantum computation". Physics World.
- Finkelstein, David (1969). "Space-Time Structure in High Energy Interactions". In Gudehus, T.; Kaiser, G. Fundamental Interactions at High Energy. New York: Gordon & Breach.
- New qubit control bodes well for future of quantum computing
- Quantum Information Science and Technology Roadmap for a sense of where the research is heading.
- Simon, D.R. (1994). "On the power of quantum computation". Foundations of Computer Science, 1994 Proceedings., 35th Annual Symposium on: 116–123. doi:10.1109/SFCS.1994.365701. ISBN 0-8186-6580-7.
- Nielsen, Michael A.; Chuang, Isaac L. Quantum Computation and Quantum Information. p. 202.
- Waldner, Jean-Baptiste (2007). Nanocomputers and Swarm Intelligence. London: ISTE. p. 157. ISBN 2-7462-1516-0.
- David P. DiVincenzo (1995). "Quantum Computation". Science 270 (5234): 255–261. Bibcode:1995Sci...270..255D. doi:10.1126/science.270.5234.255. (subscription required)
- Arjen K. Lenstra (2000). "Integer Factoring". Designs, Codes and Cryptography 19 (2/3): 101–128. doi:10.1023/A:1008397921377.
- Daniel J. Bernstein, Introduction to Post-Quantum Cryptography. Introduction to Daniel J. Bernstein, Johannes Buchmann, Erik Dahmen (editors). Post-quantum cryptography. Springer, Berlin, 2009. ISBN 978-3-540-88701-0
- See also pqcrypto.org, a bibliography maintained by Daniel J. Bernstein and Tanja Lange on cryptography not known to be broken by quantum computing.
- Robert J. McEliece. "A public-key cryptosystem based on algebraic coding theory." Jet Propulsion Laboratory DSN Progress Report 42–44, 114–116.
- Kobayashi, H.; Gall, F.L. (2006). "Dihedral Hidden Subgroup Problem: A Survey". Information and Media Technologies 1 (1): 178–185.
- Bennett C.H., Bernstein E., Brassard G., Vazirani U., The strengths and weaknesses of quantum computation. SIAM Journal on Computing 26(5): 1510–1523 (1997).
- Quantum Algorithm Zoo – Stephen Jordan's Homepage
- NSA seeks to build quantum computer that could crack most types of encryption By Steven Rich & Barton Gellman 01.02.2014, Washington Post
- The Father of Quantum Computing by Quinn Norton 02.15.2007, Wired.com
- David P. DiVincenzo, IBM (2000-04-13). "The Physical Implementation of Quantum Computation". arXiv:quant-ph/0002077 [quant-ph].
- M. I. Dyakonov, Université Montpellier (2006-10-14). "Is Fault-Tolerant Quantum Computation Really Possible?". In: Future Trends in Microelectronics. Up the Nano Creek, S. Luryi, J. Xu, and A. Zaslavsky (eds), Wiley, pp.: 4–18. arXiv:quant-ph/0610117.
- Freedman, Michael H.; Kitaev, Alexei; Larsen, Michael J.; Wang, Zhenghan (2003). "Topological quantum computation". Bulletin of the American Mathematical Society 40 (1): 31–38. arXiv:quant-ph/0101025. doi:10.1090/S0273-0979-02-00964-3. MR 1943131.
- Monroe, Don, "Anyons: The breakthrough quantum computing needs?", New Scientist, 1 October 2008
- Das, A.; Chakrabarti, B. K. (2008). "Quantum Annealing and Analog Quantum Computation". Rev. Mod. Phys. 80 (3): 1061–1081. doi:10.1103/RevModPhys.80.1061
- Nayak, Chetan; Simon, Steven; Stern, Ady; Das Sarma, Sankar (2008). "Nonabelian Anyons and Quantum Computation". Rev Mod Phys 80 (3): 1083. arXiv:0707.1889. Bibcode:2008RvMP...80.1083N. doi:10.1103/RevModPhys.80.1083.
- Clarke, John; Wilhelm, Frank (June 19, 2008). "Superconducting quantum bits". Nature 453 (7198): 1031–1042. Bibcode:2008Natur.453.1031C. doi:10.1038/nature07128. PMID 18563154.
- William M Kaminsky (2004). "Scalable Superconducting Architecture for Adiabatic Quantum Computation". arXiv:quant-ph/0403090 [quant-ph].
- Imamoğlu, Atac; Awschalom, D. D.; Burkard, Guido; DiVincenzo, D. P.; Loss, D.; Sherwin, M.; Small, A. (1999). "Quantum information processing using quantum dot spins and cavity-QED". Physical Review Letters 83 (20): 4204. Bibcode:1999PhRvL..83.4204I. doi:10.1103/PhysRevLett.83.4204.
- Fedichkin, Leonid; Yanchenko, Maxim; Valiev, Kamil (2000). "Novel coherent quantum bit using spatial quantization levels in semiconductor quantum dot". Quantum Computers and Computing 1: 58–76. arXiv:quant-ph/0006097. Bibcode:2000quant.ph..6097F.
- Knill, E.; Laflamme, R.; Milburn, G. J. (2001). "A scheme for efficient quantum computation with linear optics". Nature 409 (6816): 46–52. Bibcode:2001Natur.409...46K. doi:10.1038/35051009. PMID 11343107.
- Nizovtsev, A. P. et al. (October 19, 2004). "A quantum computer based on NV centers in diamond: Optically detected nutations of single electron and nuclear spins". Optics and Spectroscopy 99 (2): 248–260. Bibcode:2005OptSp..99..233N. doi:10.1134/1.2034610.
- Wolfgang Gruener, TG Daily (2007-06-01). "Research indicates diamonds could be key to quantum storage". Retrieved 2007-06-04.
- Neumann, P. et al. (June 6, 2008). "Multipartite Entanglement Among Single Spins in Diamond". Science 320 (5881): 1326–1329. Bibcode:2008Sci...320.1326N. doi:10.1126/science.1157233. PMID 18535240.
- Rene Millman, IT PRO (2007-08-03). "Trapped atoms could advance quantum computing". Retrieved 2007-07-26.
- Ohlsson, N.; Mohan, R. K.; Kröll, S. (January 1, 2002). "Quantum computer hardware based on rare-earth-ion-doped inorganic crystals". Opt. Commun. 201 (1–3): 71–77. Bibcode:2002OptCo.201...71O. doi:10.1016/S0030-4018(01)01666-2.
- Longdell, J. J.; Sellars, M. J.; Manson, N. B. (September 23, 2004). "Demonstration of conditional quantum phase shift between ions in a solid". Phys. Rev. Lett. 93 (13): 130503. arXiv:quant-ph/0404083. Bibcode:2004PhRvL..93m0503L. doi:10.1103/PhysRevLett.93.130503. PMID 15524694.
- Vandersypen, Lieven M. K.; Steffen, Matthias; Breyta, Gregory; Yannoni, Costantino S.; Sherwood, Mark H.; Chuang, Isaac L. (2001). "Experimental realization of Shor's quantum factoring algorithm using nuclear magnetic resonance". Nature 414 (6866): 883–7. doi:10.1038/414883a. PMID 11780055.
- Ann Arbor (2005-12-12). "U-M develops scalable and mass-producible quantum computer chip". Retrieved 2006-11-17.
- L. DiCarlo, J. M. Chow, J. M. Gambetta, Lev S. Bishop, B. R. Johnson, D. I. Schuster, J. Majer, A. Blais, L. Frunzio, S. M. Girvin, R. J. Schoelkopf (2009-06-28). "Demonstration of two-qubit algorithms with a superconducting quantum processor". Nature 460 (7252): 240–4. Bibcode:2009Natur.460..240D. doi:10.1038/nature08121. PMID 19561592. Retrieved 2009-07-02.
- "Scientists Create First Electronic Quantum Processor". 2009-07-02. Retrieved 2009-07-02.
- New Scientist (2009-09-04). "Code-breaking quantum algorithm runs on a silicon chip". Retrieved 2009-10-14.
- "New Trends in Quantum Computation".
- Quantum Information Processing. Springer.com. Retrieved on 2011-05-19.
- "University of New South Wales".
- "Engadget, First light wave quantum teleportation achieved, opens door to ultra fast data transmission".
- "Learning to program the D-Wave One". Retrieved 11 May 2011.
- "D-Wave Systems sells its first Quantum Computing System to Lockheed Martin Corporation". 2011-05-25. Retrieved 2011-05-30.
- "Operational Quantum Computing Center Established at USC". 2011-10-29. Retrieved 2011-12-06.
- Quantum annealing with manufactured spins Nature 473, 194–198, 12 May 2011
- The CIA and Jeff Bezos Bet on Quantum Computing Technology Review October 4, 2012 by Tom Simonite
- Enrique Martin Lopez, Anthony Laing, Thomas Lawson, Roberto Alvarez, Xiao-Qi Zhou, Jeremy L. O'Brien (2011). "Implementation of an iterative quantum order finding algorithm". Nature Photonics 6 (11): 773–776. arXiv:1111.4147. doi:10.1038/nphoton.2012.259.
- Quantum computer with Von Neumann architecture
- Quantum Factorization of 143 on a Dipolar-Coupling NMR system
- "IBM Says It's 'On the Cusp' of Building a Quantum Computer"
- Quantum computer built inside diamond
- "Australian engineers write quantum computer 'qubit' in global breakthrough". The Australian. Retrieved 3 October 2012.
- "Breakthrough in bid to create first quantum computer". University of New South Wales. Retrieved 3 October 2012.
- Frank, Adam (October 14, 2012). "Cracking the Quantum Safe". New York Times. Retrieved October 14, 2012.
- Overbye, Dennis (October 9, 2012). "A Nobel for Teasing Out the Secret Life of Atoms". New York Times. Retrieved October 14, 2012.
- The Physics arXiv Blog (November 15, 2012). "First Teleportation from One Macroscopic Object to Another". MIT Technology Review. Retrieved November 17, 2012.
- Bao, Xiao-Hui; Xu, Xiao-Fan; Li, Che-Ming; Yuan, Zhen-Sheng; Lu, Chao-Yang; Pan, Jian-wei (November 13, 2012). "Quantum teleportation between remote atomic-ensemble quantum memories". arXiv. arXiv:1211.2892.
- "Launching the Quantum Artificial Intelligence Lab". Research@Google Blog. Retrieved 16 May 2013.
- Nielsen, p. 42
- Nielsen, p. 41
- Bernstein, Ethan; Vazirani, Umesh (1997). "Quantum Complexity Theory". SIAM Journal on Computing 26 (5): 1411. doi:10.1137/S0097539796300921.
- Ozhigov, Yuri (1999). "Quantum Computers Speed Up Classical with Probability Zero". Chaos Solitons Fractals 10 (10): 1707–1714. arXiv:quant-ph/9803064. Bibcode:1998quant.ph..3064O. doi:10.1016/S0960-0779(98)00226-4.
- Ozhigov, Yuri (1999). "Lower Bounds of Quantum Search for Extreme Point". Proceedings of the London Royal Society A455 (1986): 2165–2172. arXiv:quant-ph/9806001. Bibcode:1999RSPSA.455.2165O. doi:10.1098/rspa.1999.0397.
- Nielsen, p. 126
- Scott Aaronson, NP-complete Problems and Physical Reality, ACM SIGACT News, Vol. 36, No. 1. (March 2005), pp. 30–52, section 7 "Quantum Gravity": "[...] to anyone who wants a test or benchmark for a favorite quantum gravity theory,[author's footnote: That is, one without all the bother of making numerical predictions and comparing them to observation] let me humbly propose the following: can you define Quantum Gravity Polynomial-Time? [...] until we can say what it means for a ‘user’ to specify an ‘input’ and ‘later’ receive an ‘output’—there is no such thing as computation, not even theoretically." (emphasis in original)
- Nielsen, Michael and Chuang, Isaac (2000). Quantum Computation and Quantum Information. Cambridge: Cambridge University Press. ISBN 0-521-63503-9. OCLC 174527496.
- Derek Abbott, Charles R. Doering, Carlton M. Caves, Daniel M. Lidar, Howard E. Brandt, Alexander R. Hamilton, David K. Ferry, Julio Gea-Banacloche, Sergey M. Bezrukov, and Laszlo B. Kish (2003). "Dreams versus Reality: Plenary Debate Session on Quantum Computing". Quantum Information Processing 2 (6): 449–472. arXiv:quant-ph/0310130. doi:10.1023/B:QINP.0000042203.24782.9a. hdl:2027.42/45526.
- David P. DiVincenzo (2000). "The Physical Implementation of Quantum Computation". Experimental Proposals for Quantum Computation. arXiv:quant-ph/0002077
- David P. DiVincenzo (1995). "Quantum Computation". Science 270 (5234): 255–261. Bibcode:1995Sci...270..255D. doi:10.1126/science.270.5234.255. Table 1 lists switching and dephasing times for various systems.
- Richard Feynman (1982). "Simulating physics with computers". International Journal of Theoretical Physics 21 (6–7): 467. Bibcode:1982IJTP...21..467F. doi:10.1007/BF02650179.
- Gregg Jaeger (2006). Quantum Information: An Overview. Berlin: Springer. ISBN 0-387-35725-4. OCLC 255569451.
- Stephanie Frank Singer (2005). Linearity, Symmetry, and Prediction in the Hydrogen Atom. New York: Springer. ISBN 0-387-24637-1. OCLC 253709076.
- Giuliano Benenti (2004). Principles of Quantum Computation and Information Volume 1. New Jersey: World Scientific. ISBN 981-238-830-3. OCLC 179950736.
- Sam Lomonaco Four Lectures on Quantum Computing given at Oxford University in July 2006
- C. Adami, N.J. Cerf. (1998). "Quantum computation with linear optics". arXiv:quant-ph/9806048v1.
- Joachim Stolze; Dieter Suter (2004). Quantum Computing. Wiley-VCH. ISBN 3-527-40438-4.
- Ian Mitchell (1998). "Computing Power into the 21st Century: Moore's Law and Beyond".
- Rolf Landauer (1961). "Irreversibility and heat generation in the computing process".
- Gordon E. Moore (1965). "Cramming more components onto integrated circuits". Electronics Magazine.
- R. W. Keyes (1988). "Miniaturization of electronics and its limits". IBM Journal of Research and Development.
- M. A. Nielsen; E. Knill; R. Laflamme. "Complete Quantum Teleportation by Nuclear Magnetic Resonance".
- Lieven M.K. Vandersypen; Constantino S. Yannoni; Isaac L. Chuang (2000). Liquid State NMR Quantum Computing.
- Imai Hiroshi; Hayashi Masahito (2006). Quantum Computation and Information. Berlin: Springer. ISBN 3-540-33132-8.
- Andre Berthiaume (1997). "Quantum Computation".
- Daniel R. Simon (1994). "On the Power of Quantum Computation". Institute of Electrical and Electronic Engineers Computer Society Press.
- "Seminar Post Quantum Cryptology". Chair for communication security at the Ruhr-University Bochum.
- Laura Sanders, (2009). "First programmable quantum computer created".
- "New trends in quantum computation".
- Stanford Encyclopedia of Philosophy: "Quantum Computing" by Amit Hagar.
- Quantiki – Wiki and portal with free-content related to quantum information science.
- Scott Aaronson's blog, which features informative and critical commentary on developments in the field
- Quantum Annealing and Computation: A Brief Documentary Note, A. Ghosh and S. Mukherjee
- Maryland University Laboratory for Physical Sciences: conducts research for the quantum computer-based project led by the NSA, named 'Penetrating Hard Target'.
- Quantum Mechanics and Quantum Computation — Coursera course by Umesh Vazirani
- Quantum computing for the determined — 22 video lectures by Michael Nielsen
- Video Lectures by David Deutsch
- Lectures at the Institut Henri Poincaré (slides and videos)
- Online lecture on An Introduction to Quantum Computing, Edward Gerjuoy (2008)
- Quantum Computing research by Mikko Möttönen at Aalto University (video) on YouTube
How can you predict outcomes accurately?
Theoretical probability uses math to predict the outcomes: divide the number of favorable outcomes by the number of possible outcomes. Experimental probability is based on observing a trial or experiment, counting the favorable outcomes, and dividing that count by the total number of times the trial was performed.
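A short Python sketch of both definitions, using a fair six-sided die purely as an example scenario:

```python
import random

# Theoretical probability: favorable outcomes / possible outcomes.
favorable = 1                        # e.g. rolling a 4 on a fair six-sided die
possible = 6
theoretical = favorable / possible   # 1/6, about 0.167

# Experimental probability: favorable results observed / trials performed.
trials = 10_000
hits = sum(1 for _ in range(trials) if random.randint(1, 6) == 4)
experimental = hits / trials

print(theoretical, experimental)     # the two values converge as the number of trials grows
```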
How do you predict accurately?
How to Make Accurate Predictions
- Unpack the question into components.
- Distinguish as sharply as you can between the known and unknown. ...
- Adopt the outside view and put the problem into a comparative perspective that downplays its uniqueness and treats it as a special case of a wider class of phenomena.
How do you predict outcomes accurately in probability?
- look for the reason for actions.
- find implied meaning.
- sort out fact from opinion.
- make comparisons – The reader must remember previous information and compare it to the material being read now.
How do you predict the outcome of data?
Predictive analytics uses historical data to predict future events. Typically, historical data is used to build a mathematical model that captures important trends. That predictive model is then used on current data to predict what will happen next, or to suggest actions to take for optimal outcomes.
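As a minimal sketch of that idea, the snippet below fits a least-squares trend line to made-up monthly sales figures and uses it to project the next value; real predictive models are usually far richer than this:

```python
import numpy as np

# Hypothetical historical data: units sold in each of the last six months.
months = np.array([1, 2, 3, 4, 5, 6])
sales = np.array([110, 118, 131, 140, 152, 159])

# Build a simple mathematical model that captures the trend (a straight line here).
slope, intercept = np.polyfit(months, sales, deg=1)

# Apply the model to new input to predict what happens next.
next_month = 7
forecast = slope * next_month + intercept
print(round(forecast))               # projected sales for month 7
```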
Why do we predict outcomes?
Predicting supports the development of critical thinking skills by requiring students to draw upon their prior knowledge and experiences as well as observations to anticipate what might happen. The ability to make logical predictions supports the development of the ability to formulate hypotheses.
What does predicting outcomes mean?
Outcomes are the results of an experiment or trial. Using outcomes, it is possible to make predictions. A prediction is a guess about what will happen.
How can I improve my prediction skills?
So How Can You Improve Your Own Prediction Skills?
- Establish a Base Rate. Compare. ...
- Be Specific. ...
- Consider the Opposite. ...
- Cast a Wide Net. ...
- Measure Everything.
What is the best predictor to achieve successful outcomes?
Past behavior is the best predictor of future behavior, and the same is true of success. People who experience small victories build the confidence – and the momentum – to keep going.
Which method is used for prediction?
Delphi Survey Method
The Delphi Survey technique is a popular method used in prediction.
What is a prediction of the outcome of a study?
The prediction is a statement of the expected results of the experiment based on the hypothesis. The prediction is often an "if/then statement." For example: If increasing fertilizer increases number of beans, then coffee bean plants treated with more fertilizer will have more beans.
What is meant by prediction accuracy?
The predictive accuracy A describes whether the predicted values match the actual values of the target field, within the uncertainty due to statistical fluctuations and noise in the input data values.
What is the best way to predict the future?
"The best way to predict the future is to invent it." PARC researcher Alan Kay is widely credited with this saying.
Which is the most accurate forecasting method and why?
Exponential smoothing can often result in a more accurate forecast. It is an easy method that enables forecasts to react quickly to new trends or changes, and it does not require a large amount of historical data.
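A sketch of simple exponential smoothing follows; the demand figures and the smoothing factor alpha are illustrative choices, not recommendations:

```python
def exponential_smoothing(observations, alpha=0.3):
    """Simple exponential smoothing: each new forecast blends the latest observation
    with the previous forecast, weighted by the smoothing factor alpha."""
    forecast = observations[0]                  # seed with the first observation
    for actual in observations[1:]:
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast

demand = [120, 135, 128, 141, 150, 147]         # made-up recent demand figures
print(round(exponential_smoothing(demand), 1))  # forecast for the next period
```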
What is the most accurate predictor of behavior?
“The best predictor of future behaviour is past behaviour”, has been attributed to everyone from psychologists, such as Albert Ellis, Walter Michel, and B.F. Skinner, to writers such as Mark Twain. One of the people to explore this idea in depth was the American psychologist Paul Meehl. He wrote, “…
What is a more accurate predictor of future success?
Intelligence is a more accurate predictor of future career success than socioeconomic background, a study suggests.
What is the strongest predictor of success in the workplace?
Conscientiousness is top personality predictor of positive career and work-related outcomes, has broad benefits
- Researchers analyzed more than 100 years worth of previous research on conscientiousness at work. ...
- The study found:
How do you explain a prediction?
A prediction is what someone thinks will happen. A prediction is a forecast, but not only about the weather. Pre means “before” and diction has to do with talking. So a prediction is a statement about the future.
What is an example of predicting?
All the local forecasters are predicting rain for this afternoon. She claims that she can predict future events. It's hard to predict how the election will turn out. Many people predicted that the store would fail, but it has done very well.
What is a prediction of the possible outcomes called?
Probability is used to make predictions about how likely it is for an event to happen, given the total number of possible outcomes. There are many events you can't predict with total certainty, but you can predict the chances of an event happening. You express all probability answers with a value from zero to one.
What are three measures of forecasting accuracy?
Measures of forecast accuracy
- Mean Absolute Error (MAE) or Mean Absolute Deviation (MAD)
- Root Mean Square Error (RMSE)
- Mean Absolute Percentage Error (MAPE)
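All three measures compare forecasts with the values actually observed; a small sketch on made-up numbers:

```python
import math

actual = [100, 120, 140, 160]
forecast = [110, 115, 150, 155]
errors = [a - f for a, f in zip(actual, forecast)]

mae = sum(abs(e) for e in errors) / len(errors)               # Mean Absolute Error
rmse = math.sqrt(sum(e ** 2 for e in errors) / len(errors))   # Root Mean Square Error
mape = 100 * sum(abs(e) / a for e, a in zip(errors, actual)) / len(errors)  # Mean Absolute Percentage Error

print(mae, rmse, mape)
```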
What is the main advantage of accurately predicting?
Accurate forecasting aids with reduction of unnecessary spending, proper scheduling of production/staffing, avoiding missing potential opportunities, and managing your overall cash flow.
Why is forecast accuracy important?
Why Forecast? A forecast can play a major role in driving company success or failure. At the base level, an accurate forecast keeps prices low by optimizing a business operation - cash flow, production, staff, and financial management.
What are some other ways to predict the future?
30 Ways to Tell the Future
- Divining the Future. It seems humans have for a very long time been troubled by the opacity of the future. ...
- Aeromancy. Definition : divination from the state of the air or from atmospheric substances. ...
- Aleuromancy. ...
- Anthropomancy. ...
- Astragalomancy. ...
- Axinomancy. ...
- Belomancy. ...
Which is the best definition for accuracy?
What is meant by accuracy? Accuracy refers to the closeness of the measured value to a standard or true value.
How can we determine the accuracy of our prediction in a research study?
Assessment of predictive accuracy: predictive accuracy should be measured based on the difference between the observed values and the predicted values. However, the predicted values can refer to different information, so the resulting predictive accuracy can refer to different concepts.
Grades K to 12
In the Classroom: Learn about the Sun using JHelioviewer. Create mashups of Sun images and learn more about the resource that provides the Earth with energy. Use the resources on this site to learn more about concepts and objects found in space. Use this site to ask questions that can be a springboard for further research and projects either by individual students or groups. Introduce this site on your interactive whiteboard or projector. Then have students explore this site independently or in small groups. Make a shortcut to this site on classroom computers and use it as a center. The text portions are challenging, so you should pair weaker readers with a partner as they research on this site. Have cooperative learning groups create podcasts demonstrating their understanding of one of the concepts. Use a site such as PodOmatic (reviewed here). Have students create online posters on paper or do it together as a class using a tool such as Web Poster Wizard (reviewed here) or PicLits (reviewed here). Use an online poster creator, such as Padlet (reviewed here).
Grades 6 to 12
In the Classroom: Try using this site when discussing how science relates to our current world. For instance, show the ten most dangerous moments for the space shuttle and the station history when studying astronomy. Incorporate the slide show about the Gulf oil spill and reading into a class blog for a biology unit on bacteria. This slide show demonstrates how microbes are used to clean up the oil. The pictures of the organisms are wonderful! Or, incorporate it into an environmental science class dealing with the impact of human behavior on the environment. Have students read and view the slide show as homework, and then discuss what they have learned via your class wiki or in class. Challenge students to create online posters on paper or do it together as a class using a tool such as Web Poster Wizard (reviewed here) or PicLits (reviewed here). Have students further discuss the potential problems associated with introducing new microbes into the oil spill area.
Grades 5 to 12
In the Classroom: There are many different calculators for students to explore as ways to apply math in real world situations. For example, choose weather and then wind chill. Enter the information and wind chill will be calculated. Enter the information, view the calculated answer, and then have students determine how it is actually calculated. This site is a great find for gifted students to use to further investigate specific topics beyond your "regular" classroom content. Share this site on your interactive whiteboard or projector. Then have students work with a partner to explore various "buttons" on this interactive calculator. Have the groups create multimedia presentations to share their findings or demonstrate them on the whiteboard as advertisements or infomercials! Have students create online posters on paper or do it together as a class using a tool such as Web Poster Wizard (reviewed here) or PicLits (reviewed here).
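For the wind chill example, students could check the calculator's output against the standard NWS wind chill formula (temperature in degrees Fahrenheit, wind speed in mph); a short sketch:

```python
def wind_chill_f(temp_f: float, wind_mph: float) -> float:
    """NWS wind chill formula, valid for temperatures at or below 50 F and winds above 3 mph."""
    v = wind_mph ** 0.16
    return 35.74 + 0.6215 * temp_f - 35.75 * v + 0.4275 * temp_f * v

print(round(wind_chill_f(20, 15), 1))   # about 6 F
```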
Grades 6 to 12
In the Classroom: Use this site as a learning center or station during a unit on space exploration. To assess student learning, have students create online posters on paper, or if you are beginning to incorporate technology in your class, make the posters together using a tool such as PicLits, reviewed here. If you and your classes are more advanced in using technology, try Genial.ly, reviewed here. Genial.ly allows you to create interactive posters by adding polls, videos, embeds, web links, PowerPoint, and PDFs.
Grades 6 to 12
tag(s): blogs (84), charts and graphs (198), communities (37), experiments (70), geology (81), literature (272), news (260), search strategies (27), spreadsheets (22), statistics (128), tutorials (49), wikis (21)
In the Classroom: For example, use the lesson It's a Statistical World to bring statistics and the use of spreadsheets into the classroom. Follow project ideas, suggestions, and how-tos to complete the activity. Specific examples, suggestions, and tutorials for using the resources are given throughout. Find unbelievable ideas that are exceptional for many curricular areas. Mark this one in your Favorites to use when you need inspiration or a new approach to curriculum that never seems to "stick" the way you wish it would.
Grades 2 to 9
In the Classroom: View movies that feature testing of the Mars Rover models on similar terrain areas here on Earth. Share the videos on your interactive whiteboard or projector. Learn why we map Mars by visiting the Map room. After viewing the information video, help find terrain changes on Mars or count craters. There is great information on every page of this site. Find your way back using the sitemap. Allow students to explore this site and hold a class discussion of the interesting information and major points learned through the exploration. Research other NASA probes and missions to identify information learned and how we understand the universe and maybe our own planet better.
Grades K to 12
This site includes advertising.
In the Classroom: Share portions of the site, such as how to use a lab notebook or how to do experiments safely, on your interactive whiteboard when beginning science projects. Use the site as a resource for classroom experiments with materials that are readily available. Assign experiments for students to do at home, then have them prepare a presentation for the class describing the science concepts demonstrated and learned. Secondary teachers can assign students a topic from the Science News portion of the site to read and discuss with the class. Challenge students to create a multimedia project to share with the class using one of the many TeachersFirst Edge tools reviewed here.
Grades 5 to 8
In the Classroom: Try out the lesson plans for astronomy and wildlife. There are PowerPoint presentations, activities, and even interactives for students to try. Use one of these lessons as part of a unit on space or pollution. On the Education page there are links for teachers and kids. Put a link on your class website to the link for kids!
Grades 9 to 12
In the Classroom: Find great information, photos, and possible questions for use in the classroom to stimulate thinking and make connections between content and the use of science in everyday life. For example, the debate "Can we sustain our lifestyles and our planet?" uses content from food chains to technology to natural resources. Additionally, the discussion of what every organism needs to survive can bring to light discussions of the characteristics of living things and our responsibility to the planet.
Grades 8 to 12
In the Classroom: Use this as a resource when researching for scientific papers, getting ideas for experiments, or just staying apprised of the latest scientific research on a specific topic.
If your students are doing scientific research, you might want to supply them with links from Science.gov using Diigo-Education, reviewed here.
Would like to see better search features within subject categories. (Kathleen, VT, Grades: 0-12)
Grades 2 to 8
In the Classroom: Begin with the comic strip to introduce a concept (share on your interactive whiteboard or projector). Have students note the physical and chemical properties occurring in each frame and identify the scientific principle being presented. Use it as a class discussion starter and an introduction to the specific principle. Use the suggested experiments and activities for further inquiry and investigation. When discussing other topics in class, encourage students to create their own comic, either traditionally or digitally, to demonstrate their understanding of the concept. Try using an online tool for students to create comics, such as the Comic Creator (explained here).
Grades K to 8
In the Classroom: Check first to be sure the media are not blocked by school web filtering. Choose one item from the site to share on your interactive whiteboard or projector as a class discussion starter on current topics or as a lead-in to a lesson. (Example: show the YouTube video about the order of the planets when beginning an astronomy unit.) Share the site with students and let them explore to find interesting topics for research reports. Ask students to choose one item from the site to share with other students as a way to practice oral presentation skills. Use videos or images as writing prompts or blog prompts. ESL/ELL students can practice their language skills by retelling a favorite video. Challenge your students to create their own informative videos on a topic that your class is exploring. Share the videos using a site such as TeacherTube, reviewed here.
Grades 3 to 12
In the Classroom: Use as a reference to answer questions that students have. Also use this site to apply information learned in the classroom. For example, when discussing light energy and wavelengths, use the explanation of why it is hot in the summer and cold in the winter to apply the information about energy and wavelength. Follow the use of this site with related labs and other activities. Follow up also with more research. For example, after learning about how an hourglass works, research, report on, or create other timepieces used throughout history, focusing on the advantages and disadvantages as well as the limitations and changes in technology over time.
Grades 6 to 12
In the Classroom: Try showing the video (on your interactive whiteboard or projector) at the beginning of a chapter or unit on universes and galaxies. Have students discuss what they think is correct or even incorrect about the video. As you work through your unit, use the teacher activities in addition to your traditional curriculum materials. Revisit the video at least twice throughout the unit to "check in" on your students' understanding and to assess whether their misconceptions are being cleared. Another idea is to show the video as a writing prompt for science. Pose a question such as, "How big are you? Explain in terms of the universe." Then have students view the video and write about their ideas generated by the video.
Grades 7 to 10
In the Classroom: Have students click through the site as the instructional part of the lesson, which would be great for introductory physics or physical science. Students can work through the module, taking notes as they proceed. Then have students create a graphic organizer comparing the differences between the microscopes, and have them use the telescope view function. Have students draw or take screenshots, using a program such as Jing (reviewed here), of the views from the different telescopes. Have students add analysis bubbles to the pictures comparing the views.
Grades 2 to 8
At the time of this review, the link to "Your Log" (at the lab) was not active. You may want students to use "traditional" paper to take notes for their log.
Note that the small black bar across the top of the screen takes you to other NFB sites and may lead students off track. The one notable option is to click "Français" at top right and discover the same activities in French!
In the Classroom: Use these great interactives for individual work or as a group activity. For example, use the "Eyes on the Sky Mission Game" to explore the forces and fuels needed to launch a rocket, identification of various space objects, and other skills. The "Keep Your Cool Mission" activity requires players to fill the compressor with freon and put all the articles back in place.
Many topics related to physical science can be researched and discussed from these activities. Discuss forces and motion, different types of fuels, how various appliances work, etc. Bring in related environmental and societal issues, especially changes throughout the years. Research various types of rocket design throughout the years and the technical advancements that caused these changes.
Have students complete multimedia research projects to share their findings. Challenge cooperative learning groups to create an online book using a tool such as Bookemon, reviewed here.
French teachers will enjoy the various French language versions of the games to give students practice following instructions and applied language in engaging activities. Great practice!
Grades 6 to 10
In the Classroom: Use different activities from the Globe at Night to help students learn about star magnitudes, constellations, and astronomy. Have students record and send in observations of the night sky using the PDF handouts available on the website.
Grades 8 to 12
This site includes advertising.
tag(s): area (73), carbon (23), carbon footprint (11), chemicals (45), coal (14), earthquakes (50), energy (209), engineering (132), fossil fuels (18), fossils (46), glaciers (17), machines (27), matter (61), moon (74), natural resources (57), ozone (9), ph (3), planets (130), prime numbers (31), pythagorean theorem (33), questioning (37), space (226), square roots (22), stars (68), sun (68), volume (53)
In the Classroom: Try using this site's questions on a weekly or daily basis in science or math class to start discussions and provoke student thinking. Allow students to view the question on your interactive whiteboard or projector. Then brainstorm possible answers. Once enough thoughts have been seeded, share the real answers. Or, allow students to work at the answer as the lesson continues for a few days and reveal the correct answer as a finale to the lesson.
This site could also be used as a learning station for the question of the day or the week.
Grades 6 to 11
In the Classroom: Print out instructions and have students work through the experiments when relevant to topics. Also, some experiments could be used as demonstrations. Assign cooperative learning groups specific experiments to try out and create a video to share with the class. Share the videos on a site such as TeacherTube, reviewed here.
Grades 3 to 9
tag(s): questioning (37)
History of the British Raj
The British Raj refers to the period of British rule on the Indian subcontinent between 1858 and 1947. The system of governance was instituted in 1858, when the rule of the East India Company was transferred to the Crown in the person of Queen Victoria.
It lasted until 1947, when the British provinces of India were partitioned into two sovereign dominion states: the Dominion of India and the Dominion of Pakistan, leaving the princely states to choose between them. Most of the princely states decided to join either the Dominion of India or the Dominion of Pakistan, except the state of Jammu and Kashmir, which agreed only at the last moment to sign the "Instrument of Accession" with India. The two new dominions later became the Republic of India and the Islamic Republic of Pakistan (the eastern half of which, still later, became the People's Republic of Bangladesh). The province of Burma in the eastern region of the Indian Empire had been made a separate colony in 1937 and became independent in 1948.
The East India Company was an English, and later British, joint-stock company. It was formed to trade in the Indian Ocean region, initially with Mughal India and the East Indies, and later with Qing China. The company ended up seizing control of large parts of the Indian subcontinent, colonised parts of Southeast Asia, and colonised Hong Kong after a war with Qing China.
Effects on the economy
In the later half of the 19th century, both the direct administration of India by the British Crown and the technological change ushered in by the Industrial Revolution had the effect of closely intertwining the economies of India and Great Britain. In fact, many of the major changes in transport and communications (typically associated with Crown rule of India) had already begun before the Mutiny. Since Dalhousie had embraced the technological change then rampant in Great Britain, India too saw rapid development of all those technologies. Railways, roads, canals, and bridges were rapidly built in India, and telegraph links were established equally rapidly, in order that raw materials, such as cotton, from India's hinterland could be transported more efficiently to ports, such as Bombay, for subsequent export to England. Likewise, finished goods from England were transported back just as efficiently, for sale in the burgeoning Indian markets. However, unlike Britain itself, where the market risks for infrastructure development were borne by private investors, in India it was the taxpayers—primarily farmers and farm-labourers—who endured the risks, which, in the end, amounted to £50 million. In spite of these costs, very little skilled employment was created for Indians. By 1920, after 60 years of railway construction, only ten per cent of the "superior posts" in the railways were held by Indians.
The rush of technology was also changing the agricultural economy in India: by the last decade of the 19th century, a large fraction of some raw materials—not only cotton, but also some food-grains—were being exported to faraway markets. Consequently, many small farmers, dependent on the whims of those markets, lost land, animals, and equipment to money-lenders. More tellingly, the latter half of the 19th century also saw an increase in the number of large-scale famines in India. Although famines were not new to the subcontinent, these were particularly severe, with tens of millions dying, and with many critics, both British and Indian, laying the blame at the doorsteps of the lumbering colonial administrations.
Lord Ripon, the Liberal Viceroy of India, who instituted the Famine Code
The Agra canal (c. 1873), a year away from completion. The canal was closed to navigation in 1904 to increase irrigation and aid in famine-prevention.
In terms of the longer-lasting effects and legacies of the economic impact of the British Raj, the impact predominantly stems from the irregular investment in infrastructure. Simon Carey explains how the investment into Indian society was "narrowly focused" and favoured the growth of transportation of goods and workers. India has therefore since seen an uneven economic development of society. For example, Acemoglu et al. (2001) identify how the inability of certain areas of rural India to cope with disease and famine best explains this uneven development of the nation. Carey also points out that a lasting impact of the British Raj was the transformation of India into an agricultural trading economy. However, since the rise of technology in the latter 20th century, India has been able to become a leading nation in the production of technology, with companies like the IT company Tata Consultancy Services employing 470,000 people spanning over 50 countries, and the Tata Group taking an annual revenue of US$113 billion, making it the largest IT service provider in the world. Therefore, some areas of India, predominantly affluent urban areas, have benefited in the long term from the legacies of the British Raj through the transformation of Indian economic culture into a production-based economy. However, the majority of Indian society, especially in rural and suburban areas, has experienced a negative impact of the British Raj, owing to the focus of investment on transport such as railways and canals rather than on healthcare and primary education.
Beginnings of self-government
The first steps toward self-government in British India were taken in the late 19th century with the appointment of Indian counsellors to advise the British viceroy and the establishment of provincial councils with Indian members; the British subsequently widened participation in legislative councils with the Indian Councils Act 1892. Municipal Corporations and District Boards were created for local administration; they included elected Indian members.
The Indian Councils Act 1909 – also known as the Morley-Minto Reforms (John Morley was the secretary of state for India, and Gilbert Elliot, fourth earl of Minto, was viceroy) – gave Indians limited roles in the central and provincial legislatures, known as legislative councils. Indians had previously been appointed to legislative councils, but after the reforms some were elected to them. At the centre, the majority of council members continued to be government-appointed officials, and the viceroy was in no way responsible to the legislature. At the provincial level, the elected members, together with unofficial appointees, outnumbered the appointed officials, but responsibility of the governor to the legislature was not contemplated. Morley made it clear, in introducing the legislation to the British Parliament, that parliamentary self-government was not the goal of the British government.
The Morley-Minto Reforms were a milestone. Step by step, the elective principle was introduced for membership in Indian legislative councils. The "electorate" was limited, however, to a small group of upper-class Indians. These elected members increasingly became an "opposition" to the "official government". Communal electorates were later extended to other communities and made a political factor of the Indian tendency toward group identification through religion.
World War I and its aftermath
World War I would prove to be a watershed in the imperial relationship between Britain and India. 1.4 million Indian and British soldiers of the British Indian Army would take part in the war, and their participation would have a wider cultural fallout: news of Indian soldiers fighting and dying with British soldiers, as well as soldiers from dominions like Canada, Australia, and New Zealand, would travel to distant corners of the world both in newsprint and by the new medium of the radio. India's international profile would thereby rise and would continue to rise during the 1920s. It was to lead, among other things, to India, under its own name, becoming a founding member of the League of Nations in 1920 and participating, under the name "Les Indes Anglaises" (The British Indies), in the 1920 Summer Olympics in Antwerp. Back in India, especially among the leaders of the Indian National Congress, it would lead to calls for greater self-government for Indians.
In 1916, in the face of new strength demonstrated by the nationalists with the signing of the Lucknow Pact and the founding of the Home Rule leagues, and the realisation, after the disaster in the Mesopotamian campaign, that the war would likely last longer, the new Viceroy, Lord Chelmsford, cautioned that the Government of India needed to be more responsive to Indian opinion. Towards the end of the year, after discussions with the government in London, he suggested that the British demonstrate their good faith – in light of the Indian war role – through a number of public actions, including awards of titles and honours to princes, granting of commissions in the army to Indians, and removal of the much-reviled cotton excise duty, but most importantly, an announcement of Britain's future plans for India and an indication of some concrete steps. After more discussion, in August 1917, the new Liberal Secretary of State for India, Edwin Montagu, announced the British aim of "increasing association of Indians in every branch of the administration, and the gradual development of self-governing institutions, with a view to the progressive realization of responsible government in India as an integral part of the British Empire." This envisioned reposing confidence in the educated Indians, so far disdained as an unrepresentative minority, who were described by Montagu as "intellectually our children". The pace of the reforms was to be determined by Britain, as and when the Indians were seen to have earned it. However, although the plan envisioned limited self-government at first only in the provinces – with India emphatically within the British Empire – it represented the first British proposal for any form of representative government in a non-white colony.
Earlier, at the onset of World War I, the reassignment of most of the British army in India to Europe and Mesopotamia had led the previous Viceroy, Lord Hardinge, to worry about the "risks involved in denuding India of troops." Revolutionary violence had already been a concern in British India; consequently, in 1915, to strengthen its powers during what it saw as a time of increased vulnerability, the Government of India passed the Defence of India Act, which allowed it to intern politically dangerous dissidents without due process and added to the power it already had – under the 1910 Press Act – both to imprison journalists without trial and to censor the press. Now, as constitutional reform began to be discussed in earnest, the British began to consider how new moderate Indians could be brought into the fold of constitutional politics and, simultaneously, how the hand of established constitutionalists could be strengthened. However, since the reform plan was devised during a time when extremist violence had ebbed as a result of increased wartime governmental control, and the government now feared a revival of revolutionary violence, it also began to consider how some of its wartime powers could be extended into peacetime.
Consequently, in 1917, even as Edwin Montagu announced the new constitutional reforms, a sedition committee chaired by a British judge, Mr. S. A. T. Rowlatt, was tasked with investigating wartime revolutionary conspiracies and the German and Bolshevik links to the violence in India, with the unstated goal of extending the government's wartime powers. The Rowlatt committee presented its report in July 1918 and identified three regions of conspiratorial insurgency: Bengal, the Bombay Presidency, and the Punjab. To combat subversive acts in these regions, the committee recommended that the government use emergency powers akin to its wartime authority, which included the ability to try cases of sedition by a panel of three judges and without juries, exaction of securities from suspects, governmental overseeing of residences of suspects, and the power for provincial governments to arrest and detain suspects in short-term detention facilities and without trial.
With the end of World War I, there was also a change in the economic climate. By year's end 1919, 1.5 million Indians had served in the armed services in either combatant or non-combatant roles, and India had provided £146 million in revenue for the war. The increased taxes, coupled with disruptions in both domestic and international trade, had the effect of approximately doubling the index of overall prices in India between 1914 and 1920. Returning war veterans, especially in the Punjab, created a growing unemployment crisis, and post-war inflation led to food riots in Bombay, Madras, and Bengal provinces, a situation that was made only worse by the failure of the 1918–19 monsoon and by profiteering and speculation. The global influenza epidemic and the Bolshevik Revolution of 1917 added to the general jitters; the former among a population already experiencing economic woes, and the latter among government officials, fearing a similar revolution in India.
To combat what it saw as a coming crisis, the government now drafted the Rowlatt committee's recommendations into two Rowlatt Bills. Although the bills were authorised for legislative consideration by Edwin Montagu, they were authorised unwillingly, with the accompanying declaration, "I loathe the suggestion at first sight of preserving the Defence of India Act in peace time to such an extent as Rowlatt and his friends think necessary." In the ensuing discussion and vote in the Imperial Legislative Council, all Indian members voiced opposition to the bills. The Government of India was nevertheless able to use its "official majority" to ensure passage of the bills early in 1919. However, what it passed, in deference to the Indian opposition, was a lesser version of the first bill, which now allowed extrajudicial powers, but for a period of exactly three years and solely for the prosecution of "anarchical and revolutionary movements", dropping entirely the second bill involving modification of the Indian Penal Code. Even so, when it was passed, the new Rowlatt Act aroused widespread indignation throughout India and brought Mohandas Gandhi to the forefront of the nationalist movement.
Montagu–Chelmsford Report 1919
Meanwhile, Montagu and Chelmsford themselves finally presented their report in July 1918, after a long fact-finding trip through India the previous winter. After more discussion by the government and parliament in Britain, and another tour by the Franchise and Functions Committee for the purpose of identifying who among the Indian population could vote in future elections, the Government of India Act 1919 (also known as the Montagu–Chelmsford Reforms) was passed in December 1919. The new Act enlarged the provincial councils and converted the Imperial Legislative Council into an enlarged Central Legislative Assembly. It also repealed the Government of India's recourse to the "official majority" in unfavourable votes. Although departments like defence, foreign affairs, criminal law, communications, and income-tax were retained by the Viceroy and the central government in New Delhi, other departments like public health, education, land-revenue, and local self-government were transferred to the provinces. The provinces themselves were now to be administered under a new dyarchical system, whereby some areas like education, agriculture, infrastructure development, and local self-government became the preserve of Indian ministers and legislatures, and ultimately the Indian electorates, while others like irrigation, land-revenue, police, prisons, and control of media remained within the purview of the British governor and his executive council. The new Act also made it easier for Indians to be admitted into the civil service and the army officer corps.
A greater number of Indians were now enfranchised, although, for voting at the national level, they constituted only 10% of the total adult male population, many of whom were still illiterate. In the provincial legislatures, the British continued to exercise some control by setting aside seats for special interests they considered cooperative or useful. In particular, rural candidates, generally sympathetic to British rule and less confrontational, were assigned more seats than their urban counterparts. Seats were also reserved for non-Brahmins, landowners, businessmen, and college graduates. The principle of "communal representation", an integral part of the Minto–Morley Reforms, and more recently of the Congress-Muslim League Lucknow Pact, was reaffirmed, with seats being reserved for Muslims, Sikhs, Indian Christians, Anglo-Indians, and domiciled Europeans in both provincial and Imperial legislative councils. The Montagu–Chelmsford reforms offered Indians the most significant opportunity yet for exercising legislative power, especially at the provincial level; however, that opportunity was also restricted by the still limited number of eligible voters, by the small budgets available to provincial legislatures, and by the presence of rural and special-interest seats that were seen as instruments of British control.
Round Table Conferences 1930–32
The three Round Table Conferences of 1930–32 were a series of conferences organised by the British Government to discuss constitutional reforms in India. They were conducted according to the recommendation of the Muslim leader Muhammad Ali Jinnah to the Viceroy Lord Irwin and the Prime Minister Ramsay MacDonald, and according to the report submitted by the Simon Commission in May 1930. Demands for swaraj, or self-rule, in India had been growing increasingly strong. By the 1930s, many British politicians believed that India needed to move towards dominion status. However, there were significant disagreements between the Indian and the British leaders that the Conferences could not resolve.
Willingdon imprisons leaders of Congress
In 1932 the Viceroy, Lord Willingdon, after the failure of the three Round Table Conferences in London, confronted Gandhi's Congress in action. The India Office told Willingdon that he should conciliate only those elements of Indian opinion that were willing to work with the Raj. That did not include Gandhi and the Indian National Congress, which launched its Civil Disobedience Movement on 4 January 1932. Therefore, Willingdon took decisive action. He imprisoned Gandhi. He outlawed the Congress; he rounded up all members of the Working Committee and the Provincial Committees and imprisoned them; and he banned Congress youth organisations. In total he imprisoned 80,000 Indian activists. Without most of their leaders, protests were uneven and disorganised, boycotts were ineffective, illegal youth organisations proliferated but were ineffective, more women became involved, and there was terrorism, especially in the North-West Frontier Province. Gandhi remained in prison until 1933. Willingdon relied on his military secretary, Hastings Ismay, for his personal safety.
Communal Award: 1932
MacDonald, trying to resolve the critical issue of how Indians would be represented, on 4 August 1932 granted separate electorates for Muslims, Sikhs, and Europeans in India and increased the number of provinces that offered separate electorates to Anglo-Indians and Indian Christians. Untouchables (now known as Dalits) obtained a separate electorate. That outraged Gandhi, because he firmly believed they had to be treated as Hindus. He and Congress rejected the proposal, but it went into effect anyway.
Government of India Act (1935)
In 1935, after the failure of the Round Table Conferences, the British Parliament approved the Government of India Act 1935, which authorised the establishment of independent legislative assemblies in all provinces of British India, the creation of a central government incorporating both the British provinces and the princely states, and the protection of Muslim minorities. The future Constitution of independent India would owe a great deal to the text of this act. The act also provided for a bicameral national parliament and an executive branch under the purview of the British government. Although the national federation was never realised, nationwide elections for provincial assemblies were held in 1937. Despite initial hesitation, the Congress took part in the elections and won victories in seven of the eleven provinces of British India, and Congress governments, with wide powers, were formed in these provinces. In Great Britain, these victories were later to turn the tide for the idea of Indian independence.
World War II
India played a major role in the Allied war effort against both Japan and Germany. It provided over 2 million soldiers, who fought numerous campaigns in the Middle East and on the India-Burma front, and it also supplied billions of pounds to the British war effort. The Muslim and Sikh populations were strongly supportive of the British war effort, but the Hindu population was divided. Congress opposed the war, and tens of thousands of its leaders were imprisoned in 1942–45. A major famine in eastern India led to hundreds of thousands of deaths by starvation and remains a highly controversial issue regarding Churchill's reluctance to provide emergency food relief.
With the outbreak of World War II in 1939, the viceroy, Lord Linlithgow, declared war on India's behalf without consulting Indian leaders, leading the Congress provincial ministries to resign in protest. The Muslim League, in contrast, supported Britain in the war effort; however, it now took the view that Muslims would be unfairly treated in an independent India dominated by the Congress. Hindus not affiliated with the Congress typically supported the war. The two major Sikh factions, the Unionists and the Akali Dal, supported Britain and successfully urged large numbers of Sikhs to volunteer for the army.
Quit India movement or the Bharat Chhodo Andolan
The British sent a high-level Cripps Mission in 1942 to secure Indian nationalists' co-operation in the war effort in exchange for postwar independence and dominion status. Congress demanded immediate independence, and the mission failed. Gandhi then launched the Quit India Movement in August 1942, demanding the immediate withdrawal of the British from India or face nationwide civil disobedience. Along with thousands of other Congress leaders, Gandhi was immediately imprisoned, and the country erupted in violent local episodes led by students and later by peasant political groups, especially in the Eastern United Provinces, Bihar, and western Bengal. According to John F. Riddick, from 9 August 1942 to 21 September 1942, the Quit India movement:
- attacked 550 post offices and 250 railway stations, damaged many rail lines, destroyed 70 police stations, and burned or damaged 85 other government buildings. There were about 2,500 instances of telegraph wires being cut. ... The Government of India deployed 57 battalions of British troops to restore order.
The police and Army crushed the resistance in a little more than six weeks; nationalist leaders were imprisoned for the duration.
Bose and the Indian National Army (INA)
With Congress leaders in jail, attention also turned to Subhas Chandra Bose, who had been ousted from the Congress in 1939 following differences with the more conservative high command; Bose now turned to Germany and Japan for help with liberating India by force. With Japanese support, he organised the Indian National Army, composed largely of Indian soldiers of the British Indian Army who had been captured at Singapore by the Japanese, including many Sikhs as well as Hindus and Muslims. Japan's secret service had promoted unrest in Southeast Asia to destabilise the British war effort, and came to support a number of puppet and provisional governments in the captured regions, including those in Burma, the Philippines, and Vietnam, as well as the Provisional Government of Azad Hind (Free India), presided over by Bose. Bose's effort, however, was short-lived; after the reverses of 1944, the reinforced British Indian Army in 1945 first halted and then reversed the Japanese U Go offensive, beginning the successful part of the Burma Campaign. Bose's Indian National Army surrendered with the recapture of Singapore, and Bose died in a plane crash soon thereafter. The British demanded trials for INA officers, but public opinion—including Congress and even the Indian Army—saw the INA as fighting for Indian independence and demanded an end to the trials. Yasmin Khan says, "The INA became the real heroes of the war in India." After a wave of unrest and nationalist violence, the trials were stopped.
Britain borrowed everywhere it could and made heavy purchases of munitions and supplies in India during the war. Previously India had owed Britain large sums; now the position was reversed. Britain's sterling balances around the world amounted to £3.4 billion in 1945; India's share was £1.3 billion (equivalent to US$74 billion in 2016 dollars). In this way the Raj treasury accumulated very large sterling reserves that were owed to it by the British treasury. However, Britain treated this as a long-term loan with no interest and no specified repayment date. Just when the money would be made available by London was an issue, for the British treasury was nearly empty by 1945. India's balances totalled Rs. 17.24 billion in March 1946; of that sum, Rs. 15.12 billion [£1.134 billion] was split between India and Pakistan when they became independent in August 1947. The money was eventually released, and India spent all its share by 1957, mostly buying back British-owned assets in India.
Transfer of Power
The All India Azad Muslim Conference gathered in Delhi in April 1940 to voice its support for an independent and united India. Its members included several Islamic organisations in India, as well as 1,400 nationalist Muslim delegates. The pro-separatist All-India Muslim League worked to try to silence those nationalist Muslims who stood against the partition of India, often using "intimidation and coercion". The murder of the All India Azad Muslim Conference leader Allah Bakhsh Soomro also made it easier for the All-India Muslim League to demand the creation of Pakistan.
In January 1946, a number of mutinies broke out in the armed services, starting with that of RAF servicemen frustrated with their slow repatriation to Britain. The mutinies came to a head with the mutiny of the Royal Indian Navy in Bombay in February 1946, followed by others in Calcutta, Madras, and Karachi. Although the mutinies were rapidly suppressed, they found much public support in India and had the effect of spurring the new Labour government in Britain to action, leading to the Cabinet Mission to India led by the Secretary of State for India, Lord Pethick-Lawrence, and including Sir Stafford Cripps, who had visited four years before.
Also in early 1946, new elections were called in India, in which the Congress won electoral victories in eight of the eleven provinces. The negotiations between the Congress and the Muslim League, however, stumbled over the issue of partition. Jinnah proclaimed 16 August 1946 Direct Action Day, with the stated goal of highlighting, peacefully, the demand for a Muslim homeland in British India. The following day, Hindu-Muslim riots broke out in Calcutta and quickly spread throughout India. Although the Government of India and the Congress were both shaken by the course of events, in September a Congress-led interim government was installed, with Jawaharlal Nehru as united India's prime minister.
Later that year, the Labour government in Britain, its exchequer exhausted by the recently concluded World War II, decided to end British rule of India, and in early 1947 Britain announced its intention of transferring power no later than June 1948.
As independence approached, the violence between Hindus and Muslims in the provinces of Punjab and Bengal continued unabated. With the British army unprepared for the potential for increased violence, the new viceroy, Louis Mountbatten, advanced the date for the transfer of power, allowing less than six months for a mutually agreed plan for independence. In June 1947, the nationalist leaders, including Nehru and Abul Kalam Azad on behalf of the Congress, Jinnah representing the pro-separatist Muslim League, B. R. Ambedkar representing the Untouchable community, and Master Tara Singh representing the Sikhs, agreed to a partition of the country along religious lines. The predominantly Hindu and Sikh areas were assigned to the new India and the predominantly Muslim areas to the new nation of Pakistan; the plan included a partition of the Muslim-majority provinces of Punjab and Bengal. In the years leading up to the partition of India, the pro-separatist All-India Muslim League violently drove out Hindus and Sikhs from the western Punjab.
Many millions of Muslim, Sikh, and Hindu refugees trekked across the newly drawn borders. In Punjab, where the new border lines divided the Sikh regions in half, massive bloodshed followed; in Bengal and Bihar, where Gandhi's presence assuaged communal tempers, the violence was more limited. In all, anywhere between 250,000 and 500,000 people on both sides of the new borders died in the violence. On 14 August 1947, the new Dominion of Pakistan came into being, with Muhammad Ali Jinnah sworn in as its first Governor General in Karachi. The following day, 15 August 1947, India, now a smaller Union of India, became an independent country, with official ceremonies taking place in New Delhi, with Jawaharlal Nehru assuming the office of prime minister, and with the viceroy, Louis Mountbatten, staying on as its first Governor General.
- The Dutch East India Company was the first to issue public stock.
- (Stein 2001, p. 259), (Oldenburg 2007)
- (Oldenburg 2007), (Stein 2001, p. 258)
- (Oldenburg 2007)
- (Stein 2001, p. 258)
- (Stein 2001, p. 159)
- (Stein 2001, p. 260)
- (Bose & Jalal 2003, p. 117)
- Carey 2012
- Acemoglu, Johnson & Robinson 2001
- Carey 2012
- Overby 2019
- Brown 1994, pp. 197–198
- Olympic Games Antwerp 1920: Official Report Archived 5 May 2011 at the Wayback Machine, Nombre de nations representees, p. 168. Quote: "31 Nations avaient accepté l'invitation du Comité Olympique Belge: ... la Grèce – la Hollande – Les Indes Anglaises – l'Italie – le Japon ..."
- Brown 1994, pp. 203–204
- Metcalf & Metcalf 2006, p. 166
- Brown 1994, pp. 201–203
- Lovett 1920, pp. 94, 187–191
- Sarkar 1921, p. 137
- Tinker 1968, p. 92
- Spear 1990, p. 190
- Brown 1994, pp. 195–196
- Stein 2001, p. 304
- Ludden 2002, p. 208
- Brown 1994, pp. 205–207
- Wolpert, Stanley (2013). Jinnah of Pakistan (15th ed.). Karachi, Pakistan: Oxford University Press. p. 107. ISBN 978-0-19-577389-7.
- Wolpert, Stanley (2012). Shameful Flight (1st ed.). Karachi, Pakistan: Oxford University Press. p. 5. ISBN 978-0-19-906606-3.
- Hoiberg, Dale (2000). Students' Britannica India. p. 309. ISBN 9780852297605.
- John F. Riddick (2006). The History of British India: A Chronology. Greenwood. p. 110. ISBN 9780313322808.
- Brian Roger Tomlinson, The Indian National Congress and the Raj, 1929–1942: the penultimate phase (Springer, 1976).
- Rosemary Rees, India 1900–47 (Heinemann, 2006) p. 122
- Ismay, Hastings (1960). The Memoirs of General Lord Ismay. New York: Viking Press. p. 66. ISBN 978-0-8371-6280-5.
- Helen M. Nugent, "The communal award: The process of decision-making." South Asia: Journal of South Asian Studies 2#1–2 (1979): 112–129.
- (Low 1993, pp. 40, 156)
- (Low 1993, p. 154)
- Srinath Raghavan, India's War: World War II and the Making of Modern South Asia (2016).
- Yasmin Khan, India At War: The Subcontinent and the Second World War (2015).
- Lawrence James, Raj: the making and remaking of British India (1997) pp. 545–85
- Madhusree Mukerjee, Churchill's Secret War: The British Empire and the Ravaging of India during World War II (2010).
- Robin Jeffrey (2016). What's Happening to India?: Punjab, Ethnic Conflict, and the Test for Federalism. Springer. pp. 68–69. ISBN 9781349234103.
- John F. Riddick, The History of British India: A Chronology (2006) p. 115
- Srinath Raghavan, India's War: World War II and the Making of Modern South Asia (2016) pp. 233–75.
- Nehru 1942, p. 424
- (Low 1993, pp. 31–31)
- Lebra 1977, p. 23
- Lebra 1977, p. 31
- Khan, Raj at War pp 304–5.
- Chaudhuri 1953, p. 349
- Sarkar 1983, p. 411
- Hyam 2007, p. 115
- Dharma Kumar, ed., The Cambridge Economic History of India: Volume 2, c. 1751–c. 1970 (1983) pp. 640–42, 942–44.
- Srinath Raghavan, India's War: World War II and the Making of Modern South Asia (2016) pp. 339–47.
- See "Pounds Sterling to Dollars: Historical Conversion of Currency"
- Marcelo de Paiva Abreu, "India as a creditor: sterling balances, 1940–1953." (Department of Economics, Pontifical Catholic University of Rio de Janeiro, 2015) online
- Uma Kapila (2005). Indian Economy. Academic Foundation. p. 23. ISBN 9788171884292.
- Qasmi, Ali Usman; Robb, Megan Eaton (2017). Muslims against the Muslim League: Critiques of the Idea of Pakistan. Cambridge University Press. p. 2. ISBN 9781108621236.
- Haq, Mushir U. (1970). Muslim politics in modern India, 1857–1947. Meenakshi Prakashan. p. 114.
This was also reflected in one of the resolutions of the Azad Muslim Conference, an organization which attempted to be representative of all the various nationalist Muslim parties and groups in India.
- Ahmed, Ishtiaq (27 May 2016). "The dissenters". The Friday Times.
However, the book is a tribute to the role of one Muslim leader who steadfastly opposed the Partition of India: the Sindhi leader Allah Bakhsh Soomro. Allah Bakhsh belonged to a landed family. He founded the Sindh People's Party in 1934, which later came to be known as 'Ittehad' or 'Unity Party'. ... Allah Bakhsh was totally opposed to the Muslim League's demand for the creation of Pakistan through a division of India on a religious basis. Consequently, he established the Azad Muslim Conference. In its Delhi session, held during April 27–30, 1940, some 1400 delegates took part. They belonged mainly to the lower castes and working class. The famous scholar of Indian Islam, Wilfred Cantwell Smith, feels that the delegates represented a 'majority of India's Muslims'. Among those who attended the conference were representatives of many Islamic theologians, and women also took part in the deliberations ... Shamsul Islam argues that the All-India Muslim League at times used intimidation and coercion to silence any opposition among Muslims to its demand for Partition. He calls such tactics of the Muslim League a 'Reign of Terror'. He gives examples from all over India, including the NWFP, where the Khudai Khidmatgars remained opposed to the Partition of India.
- Ali, Afsar (17 July 2017). "Partition of India and Patriotism of Indian Muslims". The Milli Gazette.
- (Judd 2004, pp. 172–173)
- (Judd 2004, p. 172)
- Abid, Abdul Majeed (29 December 2014). "The forgotten massacre". The Nation.
On the same dates, Muslim League-led mobs fell with determination and full preparations on the helpless Hindus and Sikhs scattered in the villages of Multan, Rawalpindi, Campbellpur, Jhelum and Sargodha. The murderous mobs were well supplied with arms, such as daggers, swords, spears and fire-arms. (A former civil servant mentioned in his autobiography that weapon supplies had been sent from NWFP and money was supplied by Delhi-based politicians.) They had bands of stabbers and their auxiliaries, who covered the assailant, ambushed the victim and, if necessary, disposed of his body. These bands were subsidised monetarily by the Muslim League, and cash payments were made to individual assassins based on the numbers of Hindus and Sikhs killed. There were also regular patrolling parties in jeeps which went about sniping and picking off any stray Hindu or Sikh. ... Thousands of non-combatants, including women and children, were killed or injured by mobs supported by the All India Muslim League.
- (Khosla 2001, p. 299)
Surveys and reference books
- Bandyopadhyay, Sekhar (2004), From Plassey to Partition: A History of Modern India, New Delhi and London: Orient Longmans. Pp. xx, 548. ISBN 81-250-2596-0.
- Bose, Sugata; Jalal, Ayesha (2003), Modern South Asia: History, Culture, Political Economy, London and New York: Routledge, 2nd edition. Pp. xiii, 304. ISBN 0-415-30787-2.
- Brown, Judith M. (1994), Modern India: The Origins of an Asian Democracy, Oxford and New York: Oxford University Press. Pp. xiii, 474. ISBN 0-19-873113-2.
- Buckland, C.E. Dictionary of Indian Biography (1906) 495pp full text
- Copland, Ian (2001), India 1885–1947: The Unmaking of an Empire (Seminar Studies in History Series), Harlow and London: Pearson Longmans. Pp. 160. ISBN 0-582-38173-8.
- Judd, Dennis (2004), The Lion and the Tiger: The Rise and Fall of the British Raj, 1600–1947, Oxford and New York: Oxford University Press. Pp. xiii, 280. ISBN 0-19-280358-1.
- Keay, John (2000), India: A History, New York: Atlantic Monthly Press, ISBN 0-87113-800-X
- Kulke, Hermann; Rothermund, Dietmar (2004), A History of India, 4th edition. Routledge. Pp. xii, 448. ISBN 0-415-32920-5.
- Ludden, David (2002), India And South Asia: A Short History, Oxford: Oneworld Publications. Pp. xii, 306. ISBN 1-85168-237-6, archived from the original on 16 July 2011, retrieved 4 May 2008
- Markovits, Claude, ed. (2005), A History of Modern India 1480–1950 (Anthem South Asian Studies), Anthem Press. Pp. 607. ISBN 1-84331-152-6.
- Metcalf, Barbara; Metcalf, Thomas R. (2006), A Concise History of Modern India (Cambridge Concise Histories), Cambridge and New York: Cambridge University Press. Pp. xxxiii, 372. ISBN 0-521-68225-8.
- Peers, Douglas M. (2006), India under Colonial Rule 1700–1885, Harlow and London: Pearson Longmans. Pp. xvi, 163. ISBN 0-582-31738-X.
- Rees, Rosemary. India 1900–47 (Heinemann, 2006), textbook.
- Riddick, John F. Who Was Who in British India (1998); 5000 entries excerpt
- Robb, Peter (2004), A History of India (Palgrave Essential Histories), Houndmills, Hampshire: Palgrave Macmillan. Pp. xiv, 344. ISBN 0-333-69129-6.
- Sarkar, Sumit (1983), Modern India: 1885–1947, Delhi: Macmillan India Ltd. Pp. xiv, 486. ISBN 0-333-90425-7.
- Spear, Percival (1990), A History of India, Volume 2, New Delhi and London: Penguin Books. Pp. 298. ISBN 0-14-013836-6.
- Stein, Burton (2001), A History of India, New Delhi and Oxford: Oxford University Press. Pp. xiv, 432. ISBN 0-19-565446-3.
- Wolpert, Stanley (2003), A New History of India, Oxford and New York: Oxford University Press. Pp. 544. ISBN 0-19-516678-7.
Monographs and collections
- Bayly, C. A. (1990), Indian Society and the Making of the British Empire (The New Cambridge History of India), Cambridge and London: Cambridge University Press. Pp. 248. ISBN 0-521-38650-0.
- Bayly, C. A. (2000), Empire and Information: Intelligence Gathering and Social Communication in India, 1780–1870 (Cambridge Studies in Indian History and Society), Cambridge and London: Cambridge University Press. Pp. 426. ISBN 0-521-66360-1
- Brown, Judith M.; Louis, Wm. Roger, eds. (2001), Oxford History of the British Empire: The Twentieth Century, Oxford and New York: Oxford University Press. Pp. 800. ISBN 0-19-924679-3
- Chandavarkar, Rajnarayan (1998), Imperial Power and Popular Politics: Class, Resistance and the State in India, 1850–1950 (Cambridge Studies in Indian History & Society). Cambridge and London: Cambridge University Press. Pp. 400. ISBN 0-521-59692-0.
- Copland, Ian (2002), Princes of India in the Endgame of Empire, 1917–1947 (Cambridge Studies in Indian History & Society). Cambridge and London: Cambridge University Press. Pp. 316. ISBN 0-521-89436-0.
- Gilmartin, David. 1988. Empire and Islam: Punjab and the Making of Pakistan. Berkeley: University of California Press. 258 pages. ISBN 0-520-06249-3.
- Gould, William (2004), Hindu Nationalism and the Language of Politics in Late Colonial India (Cambridge Studies in Indian History and Society). Cambridge and London: Cambridge University Press. Pp. 320. ISBN 0-521-83061-3.
- Hyam, Ronald (2007), Britain's Declining Empire: The Road to Decolonisation 1918–1968. Cambridge University Press. ISBN 978-0-521-86649-1.
- Jalal, Ayesha (1993), The Sole Spokesman: Jinnah, the Muslim League and the Demand for Pakistan, Cambridge, UK: Cambridge University Press, 334 pages, ISBN 0-521-45850-1.
- Khan, Yasmin (2007), The Great Partition: The Making of India and Pakistan, New Haven and London: Yale University Press, 250 pages, ISBN 978-0-300-12078-3
- Khosla, G. D. (2001), "Stern Reckoning", in Page, David; Inder Singh, Anita; Moon, Penderel; Khosla, G. D.; Hasan, Mushirul (eds.), The Partition Omnibus: Prelude to Partition / The Origins of the Partition of India 1936–1947 / Divide and Quit / Stern Reckoning, Delhi and Oxford: Oxford University Press, ISBN 0-19-565850-7
- Lebra, Joyce C. (1977), Japanese Trained Armies in South-East Asia, Columbia University Press, ISBN 0-231-03995-6
- Low, D. A. Here's a quare one. (1993), Eclipse of Empire, Cambridge and London: Cambridge University Press. Jesus, Mary and holy Saint Joseph. Pp, would ye believe it? xvi, 366, ISBN 0-521-45754-8.
- Low, D, so it is. A. (2002), Britain and Indian Nationalism: The Imprint of Amibiguity 1929–1942, Cambridge and London: Cambridge University Press. Pp. Sufferin' Jaysus. 374, ISBN 0-521-89261-9.
- Low, D, you know yerself. A., ed. (2004) , Congress & the oul' Raj: Facets of the bleedin' Indian Struggle 1917–47, New Delhi and Oxford: Oxford University Press, game ball! Pp. xviii, 513, ISBN 0-19-568367-6.
- Metcalf, Thomas R. Bejaysus. (1991), The Aftermath of Revolt: India, 1857–1870, Riverdale Co. C'mere til I tell yiz. Pub. Holy blatherin' Joseph, listen to this. Pp. 352, ISBN 81-85054-99-1
- Metcalf, Thomas R. Whisht now and eist liom. (1997), Ideologies of the Raj, Cambridge and London: Cambridge University Press, Pp. Would ye believe this shite?256, ISBN 0-521-58937-1
- Porter, Andrew, ed. Would ye believe this shite?(2001), Oxford History of the oul' British Empire: Nineteenth Century, Oxford and New York: Oxford University Press, you know yourself like. Pp. Bejaysus. 800, ISBN 0-19-924678-5
- Ramusack, Barbara (2004), The Indian Princes and their States (The New Cambridge History of India), Cambridge and London: Cambridge University Press. Pp, so it is. 324, ISBN 0-521-03989-4
- Shaikh, Farzana. 1989. Jasus. Community and Consensus in Islam: Muslim Representation in Colonial India, 1860—1947, like. Cambridge, UK: Cambridge University Press. Whisht now. 272 pages, you know yerself. ISBN 0-521-36328-4.
- Wainwright, A. Martin (1993), Inheritance of Empire: Britain, India, and the oul' Balance of Power in Asia, 1938–55, Praeger Publishers. Be the holy feck, this is a quare wan. Pp. Bejaysus. xvi, 256, ISBN 0-275-94733-5.
- Wolpert, Stanley (2006), Shameful Flight: The Last Years of the oul' British Empire in India, Oxford and New York: Oxford University Press. Pp. 272, ISBN 0-19-515198-4.
Articles in journals or collections
- Acemoglu, Daron; Johnson, Simon; Robinson, James A. Jesus, Mary and Joseph. (December 2001), "The Colonial Origins of Comparative Development: An Empirical Investigation", The American Economic Review, 91 (5): 1369–1401, doi:10.1257/aer.91.5.1369, JSTOR 2677930
- Banthia, Jayant; Dyson, Tim (December 1999), "Smallpox in Nineteenth-Century India", Population and Development Review, Population Council, 25 (4): 649–689, doi:10.1111/j.1728-4457.1999.00649.x, JSTOR 172481, PMID 22053410
- Brown, Judith M, begorrah. (2001), "India", in Brown, Judith M.; Louis, Wm. Bejaysus here's a quare one right here now. Roger (eds.), Oxford History of the British Empire: The Twentieth Century, Oxford and New York: Oxford University Press, pp. 421–446, ISBN 0-19-924679-3
- Carey, Simon (2012), "The Legacy of British Colonialism in India Post 1947", The New Zealand Review of Economics and Finance, 2: 37–47, ISSN 2324-478X
- Chaudhuri, Niradh C. (December 1953), "Subhas Chandra Bose: His Legacy and Legend", Pacific Affairs, 26 (4): 349–357, JSTOR 2752872
- Derbyshire, I, be the hokey! D. Bejaysus. (1987), "Economic Change and the feckin' Railways in North India, 1860–1914", Population Studies, Cambridge University Press, 21 (3): 521–545, doi:10.1017/s0026749x00009197, JSTOR 312641
- Dyson, Tim (March 1991), "On the oul' Demography of South Asian Famines: Part I", Population Studies, Taylor & Francis, 45 (1): 5–25, doi:10.1080/0032472031000145056, JSTOR 2174991, PMID 11622922
- Dyson, Tim (July 1991), "On the Demography of South Asian Famines: Part II", Population Studies, Taylor & Francis, 45 (2): 279–297, doi:10.1080/0032472031000145446, JSTOR 2174784, PMID 11622922
- Gilmartin, David (November 1994), "Scientific Empire and Imperial Science: Colonialism and Irrigation Technology in the Indus Basin", The Journal of Asian Studies, Association for Asian Studies, 53 (4): 1127–1149, doi:10.2307/2059236, JSTOR 2059236
- Goswami, Manu (October 1998), "From Swadeshi to Swaraj: Nation, Economy, Territory in Colonial South Asia, 1870 to 1907", Comparative Studies in Society and History, Cambridge University Press, 40 (4): 609–636, doi:10.1017/s0010417598001674, JSTOR 179304
- Harnetty, Peter (July 1991), "'Deindustrialization' Revisited: The Handloom Weavers of the bleedin' Central Provinces of India, c. Sufferin' Jaysus listen to this. 1800–1947", Modern Asian Studies, Cambridge University Press, 25 (3): 455–510, doi:10.1017/S0026749X00013901, JSTOR 312614
- Klein, Ira (1988), "Plague, Policy and Popular Unrest in British India", Modern Asian Studies, Cambridge University Press, 22 (4): 723–755, doi:10.1017/s0026749x00015729, JSTOR 312523, PMID 11617732
- Klein, Ira (July 2000), "Materialism, Mutiny and Modernization in British India", Modern Asian Studies, Cambridge University Press, 34 (3): 545–580, doi:10.1017/S0026749X00003656, JSTOR 313141, S2CID 143348610
- Moore, Robin J. Jasus. (2001a), "Imperial India, 1858–1914", in Porter, Andrew (ed.), Oxford History of the British Empire: The Nineteenth Century, Oxford and New York: Oxford University Press, pp. 422–446, ISBN 0-19-924678-5
- Moore, Robin J. Jaykers! (2001b), "India in the 1940s", in Winks, Robin (ed.), Oxford History of the British Empire: Historiography, Oxford and New York: Oxford University Press, pp. 231–242, ISBN 0-19-924680-7
- Overby, Stephanie (17 May 2019), "The top 10 IT outsourcin' service providers of the feckin' year", CIO
- Ray, Rajat Kanta (July 1995), "Asian Capital in the oul' Age of European Domination: The Rise of the feckin' Bazaar, 1800–1914", Modern Asian Studies, Cambridge University Press, 29 (3): 449–554, doi:10.1017/S0026749X00013986, JSTOR 312868
- Raychaudhuri, Tapan (2001), "India, 1858 to the 1930s", in Winks, Robin (ed.), Oxford History of the British Empire: Historiography, Oxford and New York: Oxford University Press, pp. 214–230, ISBN 0-19-924680-7
- Robb, Peter (May 1997), "The Colonial State and Constructions of Indian Identity: An Example on the feckin' Northeast Frontier in the bleedin' 1880s", Modern Asian Studies, Cambridge University Press, 31 (2): 245–283, doi:10.1017/s0026749x0001430x, JSTOR 313030
- Roy, Tirthankar (Summer 2002), "Economic History and Modern India: Redefinin' the feckin' Link", The Journal of Economic Perspectives, American Economic Association, 16 (3): 109–130, doi:10.1257/089533002760278749, JSTOR 3216953
- Sarkar, Benoy Kumar (March 1921), "A History of the oul' Indian Nationalist Movement. Whisht now. by Verney Lovett", Political Science Quarterly (Review), 36 (1): 136–138, doi:10.2307/2142669, hdl:2027/coo1.ark:/13960/t3nw01g05, JSTOR 2142669
- Simmons, Colin (1985), "'De-Industrialization', Industrialization and the feckin' Indian Economy, c. Here's another quare one. 1850–1947", Modern Asian Studies, Cambridge University Press, 19 (3): 593–622, doi:10.1017/s0026749x00007745, JSTOR 312453
- Talbot, Ian (2001), "Pakistan's Emergence", in Winks, Robin (ed.), Oxford History of the feckin' British Empire: Historiography, Oxford and New York: Oxford University Press, pp. 253–263, ISBN 0-19-924680-7
- Tinker, Hugh (1968), "India in the bleedin' First World War and after.", Journal of Contemporary History, Sage Publications, 3 (4): 89–107, doi:10.1177/002200946800300407, ISSN 0022-0094, S2CID 150456443.
- Tomlinson, B. R. Sufferin' Jaysus listen to this. (2001), "Economics and Empire: The Periphery and the bleedin' Imperial Economy", in Porter, Andrew (ed.), Oxford History of the oul' British Empire: The Nineteenth Century, Oxford and New York: Oxford University Press, pp. 53–74, ISBN 0-19-924678-5
- Washbrook, D. Me head is hurtin' with all this raidin'. A. (2001), "India, 1818–1860: The Two Faces of Colonialism", in Porter, Andrew (ed.), Oxford History of the feckin' British Empire: The Nineteenth Century, Oxford and New York: Oxford University Press, pp. 395–421, ISBN 0-19-924678-5
- Watts, Sheldon (November 1999), "British Development Policies and Malaria in India 1897-c. 1929", Past & Present, Oxford University Press, 165 (1): 141–181, doi:10.1093/past/165.1.141, JSTOR 651287, PMID 22043526
- Wylie, Diana (2001), "Disease, Diet, and Gender: Late Twentieth Century Perspectives on Empire", in Winks, Robin (ed.), Oxford History of the oul' British Empire: Historiography, Oxford and New York: Oxford University Press, pp. 277–289, ISBN 0-19-924680-7
Classic Histories and Gazetteers
- Imperial Gazetteer of India vol, bedad. IV (1907), The Indian Empire, Administrative, Published under the oul' authority of His Majesty's Secretary of State for India in Council, Oxford at the bleedin' Clarendon Press. Jasus. Pp. Jaysis. xxx, 1 map, 552.
- Lovett, Sir Verney (1920), A History of the feckin' Indian Nationalist Movement, New York, Frederick A. G'wan now. Stokes Company, ISBN 81-7536-249-9
- Majumdar, R. C.; Raychaudhuri, H. C.; Datta, Kalikinkar (1950), An Advanced History of India, London: Macmillan and Company Limited. 2nd edition. Pp. xiii, 1122, 7 maps, 5 coloured maps..
- Smith, Vincent A. (1921), India in the British Period: Bein' Part III of the bleedin' Oxford History of India, Oxford: At the bleedin' Clarendon Press. 2nd edition. C'mere til I tell ya. Pp. xxiv, 316 (469–784).
- Oldenburg, Philip (2007), "India: Movement for Freedom", Encarta Encyclopedia, archived from the original on 31 October 2009.
- Wolpert, Stanley (2007), "India: British Imperial Power 1858–1947 (Indian nationalism and the bleedin' British response, 1885–1920; Prelude to Independence, 1920–1947)", Encyclopædia Britannica. |
Road traffic safety
Road traffic safety refers to the methods and measures used to reduce the risk of people using the road network being killed or seriously injured. Road users include pedestrians, cyclists, motorists, their passengers, and passengers of on-road public transport, mainly buses and trams. Best-practice road safety strategies focus on preventing serious injury and fatal crashes in spite of human fallibility (in contrast with the old road safety paradigm of simply reducing crashes while assuming road users comply with traffic regulations). Safe road design is now about providing a road environment which ensures vehicle speeds will be within the human tolerances for serious injury and death wherever conflict points exist.
The basic strategy of a Safe System approach is to ensure that in the event of a crash, the impact energies remain below the threshold likely to produce either death or serious injury. This threshold will vary from crash scenario to crash scenario, depending upon the level of protection offered to the road users involved. For example, the chances of survival for an unprotected pedestrian hit by a vehicle diminish rapidly at speeds greater than 30 km/h, whereas for a properly restrained motor vehicle occupant the critical impact speed is 50 km/h (for side impact crashes) and 70 km/h (for head-on crashes). (International Transport Forum, Towards Zero: Ambitious Road Safety Targets and the Safe System Approach, Executive Summary, page 19)
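The thresholds quoted above can be read as a simple lookup: given the type of conflict a location allows, compare the expected impact speed against the survivable speed for that conflict. The sketch below is not from the source; the function and conflict-type names are illustrative assumptions, and only the three threshold values come from the quotation.

```python
# Minimal sketch of the Safe System threshold check described above.
# Threshold values are taken from the quoted text; everything else is illustrative.

SURVIVABLE_SPEED_KMH = {
    "pedestrian_or_cyclist_conflict": 30,  # unprotected road user struck by a vehicle
    "side_impact_intersection": 50,        # restrained occupant, side impact
    "head_on_conflict": 70,                # restrained occupant, head-on impact
}

def within_safe_system_threshold(conflict_type: str, expected_impact_speed_kmh: float) -> bool:
    """Return True if the expected impact speed stays below the survivable threshold."""
    return expected_impact_speed_kmh <= SURVIVABLE_SPEED_KMH[conflict_type]

# Example: a 50 km/h street with unprotected pedestrian crossings exceeds the
# 30 km/h tolerance, so the approach would call for lower speeds or separation.
print(within_safe_system_threshold("pedestrian_or_cyclist_conflict", 50))  # False
```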
As sustainable solutions have not been identified for all classes of road, particularly lightly trafficked rural and remote roads, a hierarchy of control should be applied, similar to best-practice occupational safety and health. At the highest level is sustainable prevention of serious injury and fatal crashes, where "sustainable" requires all key result areas to be considered. At the second level is real-time risk reduction, which involves giving users at severe risk a specific warning so that they can take mitigating action. The third level is reducing crash risk, which involves applying road design standards and guidelines (such as those from AASHTO), improving driver behaviour and enforcement.
- 1 Background
- 2 Vehicle safety
- 3 Regulation of road users
- 4 Information campaigns
- 5 Statistics
- 6 Advocacy groups
- 7 Criticisms
- 8 See also
- 9 References
- 10 External links
Road traffic crashes are one of the world’s largest public health and injury prevention problems. The problem is all the more acute because the victims are overwhelmingly healthy before their crashes. According to the World Health Organization (WHO), more than 1 million people are killed on the world’s roads each year. A report published by the WHO in 2004 estimated that some 1.2 million people were killed and 50 million injured in traffic collisions on the world's roads each year, and that road traffic injury was the leading cause of death among children 10–19 years of age. The report also noted that the problem was most severe in developing countries and that simple prevention measures could halve the number of deaths.
The standard measures used in assessing road safety interventions are fatalities and killed-or-seriously-injured (KSI) rates, usually per billion (10⁹) passenger-kilometres. Countries still working within the old road safety paradigm replace KSI rates with crash rates (for example, crashes per million vehicle-miles).
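The two measures differ both in what they count (severe outcomes versus all crashes) and in the exposure they are normalised by. A minimal sketch of the two calculations follows; the input figures are illustrative assumptions, not data from the source.

```python
# Contrast of the two measures described above: KSI per billion passenger-km
# (modern paradigm) versus crashes per million vehicle-miles (old paradigm).

def ksi_rate(killed_or_seriously_injured: int, passenger_km: float) -> float:
    """Killed-or-seriously-injured rate per billion (1e9) passenger-kilometres."""
    return killed_or_seriously_injured / (passenger_km / 1e9)

def crash_rate(crashes: int, vehicle_miles: float) -> float:
    """Crash rate per million (1e6) vehicle-miles, regardless of severity."""
    return crashes / (vehicle_miles / 1e6)

# Illustrative numbers only: a network's crash rate can improve while its KSI
# rate stagnates, which is why the two paradigms can point in different directions.
print(ksi_rate(2_000, 450e9))       # ~4.4 KSI per billion passenger-km
print(crash_rate(150_000, 300e9))   # 0.5 crashes per million vehicle-miles
```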
Keeping vehicle speed within the human tolerances for serious injury and death is a key goal of modern road design because impact speed affects the severity of injury to both occupants and pedestrians. For occupants, Joksch (1993) found that the probability of death for drivers in multi-vehicle accidents increases as the fourth power of the change in velocity at impact (often referred to by the mathematical term Δv, "delta-V"). Injuries are caused by sudden, severe acceleration (or deceleration), which is difficult to measure directly. However, crash reconstruction techniques can estimate vehicle speeds before a crash, so the change in speed is used as a surrogate for acceleration. This enabled the Swedish Road Administration to identify the KSI risk curves using actual crash reconstruction data, which led to the human tolerances for serious injury and death referenced above.
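A hedged restatement of the fourth-power relationship may help make the scaling concrete. Joksch's result is a proportionality; the reference speed v₀ below is an illustrative assumption, not a value taken from this document.

```latex
% Fourth-power relationship between speed change at impact and fatality risk,
% as described in the text. v_0 is an assumed reference constant for illustration.
P(\text{death}) \approx \left( \frac{\Delta v}{v_0} \right)^{4}
% Consequence: doubling the impact speed change multiplies the estimated
% fatality risk by 2^4 = 16.
```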
Interventions are generally much easier to identify in the modern road safety paradigm, whose focus is on the human tolerances for serious injury and death. For example, eliminating head-on KSI crashes may simply require the installation of an appropriate median crash barrier; likewise, roundabouts with speed-reducing approaches produce very few KSI crashes.
The old road safety paradigm of purely crash risk is a far more complex matter. Contributing factors to highway crashes may be related to the driver (such as driver error, illness or fatigue), the vehicle (brake, steering, or throttle failures) or the road itself (lack of sight distance, poor roadside clear zones, etc.). Interventions may seek to reduce or compensate for these factors, or reduce the severity of crashes. A comprehensive outline of interventions areas can be seen in management systems for road safety.
In addition to management systems, which apply predominantly to networks in built-up areas, another class of interventions relates to the design of roadway networks for new districts. Such interventions explore the configurations of a network that will inherently reduce the probability of collisions.
Interventions for the prevention of road traffic injuries are often evaluated; the Cochrane Library has published a wide variety of reviews of interventions for the prevention of road traffic injuries.
For road traffic safety purposes it can be helpful to classify roads into three usages: built-up urban streets with slower speeds and dense, diverse road users; non built-up rural roads with higher speeds; and major highways (motorways/Interstates/freeways/Autobahns, etc.) reserved for motor vehicles and designed to minimize and attenuate crashes. Most casualties occur on urban streets but most fatalities on rural roads, while motorways are the safest in relation to distance traveled. For example, in 2013, German autobahns carried 31% of motorized road traffic (in travel-kilometres) while accounting for 13% of Germany's traffic deaths. The autobahn fatality rate of 1.9 deaths per billion travel-kilometres compared favorably with the 4.7 rate on urban streets and 6.6 rate on rural roads.
(Table: injury crashes, fatalities, injury and fatality rates per billion travel-kilometres, and fatalities per 1,000 injury crashes, by road class; the data rows are not preserved in this extract.)
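The 2013 German figures quoted above can be cross-checked with a line of simple arithmetic. The sketch below is mine, not from the source; it only rearranges the numbers given in the text.

```python
# Cross-check of the 2013 German figures: 31% of travel-km, 13% of deaths;
# 1.9 / 4.7 / 6.6 deaths per billion travel-km on autobahns / urban / rural roads.

share_of_travel = 0.31
share_of_deaths = 0.13

# Per-km fatality risk on autobahns relative to the network-wide average:
relative_risk = share_of_deaths / share_of_travel
print(f"Autobahn per-km fatality risk vs. network average: {relative_risk:.2f}x")  # ~0.42x

# Rate ratios from the per-billion-km figures in the text:
print(f"Rural roads vs. autobahn: {6.6 / 1.9:.1f}x higher per-km risk")    # ~3.5x
print(f"Urban streets vs. autobahn: {4.7 / 1.9:.1f}x higher per-km risk")  # ~2.5x
```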
On neighborhood roads where many vulnerable road users, such as pedestrians and bicyclists, can be found, traffic calming can be a tool for road safety. Though not strictly a traffic calming measure, mini traffic circles installed at ordinary intersections of neighbourhood streets have been shown to reduce collisions at intersections dramatically. Shared space schemes, which rely on human instincts and interactions such as eye contact for their effectiveness, and which are characterised by the removal of traditional traffic signals and signs, and even by the removal of the distinction between carriageway (roadway) and footway (sidewalk), are also becoming increasingly popular. Both approaches can be shown to be effective.
For planned neighbourhoods, studies recommend new network configurations, such as the Fused Grid or 3-Way Offset. These layout models organize a neighbourhood area as a zone of no cut-through traffic by means of loops or dead-end streets. They also ensure that pedestrians and bicycles have a distinct advantage by introducing exclusive shortcuts by path connections through blocks and parks. Such a principle of organization is referred to as "Filtered Permeability" implying a preferential treatment of active modes of transport. These new patterns, which are recommended for laying out neighbourhoods, are based on analyses of collision data of large regional districts and over extended periods. They show that four-way intersections combined with cut-through traffic are the most significant contributors to increased collisions.
Modern safety barriers are designed to absorb impact energy and minimize the risk to the occupants of cars and bystanders. For example, most side rails are now anchored to the ground, so that they cannot skewer a passenger compartment. Most light poles are designed to break at the base rather than violently stop a car that hits them. Some road fixtures such as signs and fire hydrants are designed to collapse on impact. Highway authorities have removed trees in the vicinity of roads; while the idea of "dangerous trees" has attracted a certain amount of skepticism, unforgiving objects such as trees can cause severe damage and injury to errant road users.
Most roads are cambered (crowned), that is, made with rounded surfaces to reduce standing water and ice, primarily to prevent frost damage but also to increase traction in poor weather. Some sections of road are now surfaced with porous bitumen to enhance drainage; this is particularly done on bends. These are just a few elements of highway engineering. In addition, grooves are often cut into the surface of cement highways to channel water away, and rumble strips at the edges of highways rouse inattentive drivers with the loud noise they make when driven over. In some cases, there are raised markers between lanes to reinforce the lane boundaries; these are often reflective. In pedestrian areas, speed bumps are often placed to slow cars, preventing them from going too fast near pedestrians.
Poor road surfaces can lead to safety problems. If too much asphalt or bituminous binder is used in asphalt concrete, the binder can 'bleed' or 'flush' to the surface, leaving a very smooth surface that provides little traction when wet. Certain kinds of stone aggregate become very smooth or polished under the constant wearing action of vehicle tyres, again leading to poor wet-weather traction. Either of these problems can increase wet-weather crashes by increasing braking distances or contributing to loss of control. If the pavement is insufficiently sloped or poorly drained, standing water on the surface can also lead to wet-weather crashes due to hydroplaning.
Lane markers in some countries and states are marked with cat's eyes, Botts' dots or reflective raised pavement markers that do not fade like paint. Botts' dots are not used where it is icy in the winter, because frost and snowplows can break the glue that holds them to the road, although they can be embedded in short, shallow trenches carved in the roadway, as is done in the mountainous regions of California.
Road hazards and intersections in some areas are now usually marked several times, roughly five, twenty, and sixty seconds in advance so that drivers are less likely to attempt violent manoeuvres.
Most road signs and pavement marking materials are retro-reflective, incorporating small glass spheres or prisms to more efficiently reflect light from vehicle headlights back to the driver's eyes.
Turning across traffic
Turning across traffic (i.e., turning left in right-hand drive countries, turning right in left-hand drive countries) poses several risks. The more serious risk is a collision with oncoming traffic. Since this is nearly a head-on collision, injuries are common. It is the most common cause of fatalities in a built-up area. The other risk is involvement in a rear-end collision while waiting for a gap in oncoming traffic.
Countermeasures for this type of collision include:
- Addition of left turn lanes
- Providing protected turn phasing at signalized intersections
- Using indirect turn treatments such as the Michigan left
- Converting conventional intersections to roundabouts
In the absence of these facilities, a driver about to turn across traffic should:
- Keep the wheels straight, so that in the event of a rear-end shunt the car is not pushed into oncoming traffic.
- When the way appears clear, look away towards the road being entered. After watching oncoming traffic for a time, an optical illusion makes an approaching vehicle appear further away and slower than it really is; looking away breaks this illusion.
There is no presumption of negligence arising from the bare fact of a collision at an intersection, and circumstances may dictate that a left turn is safer than a right turn. The American Association of State Highway and Transportation Officials (AASHTO) recommends in its publication Geometric Design of Highways and Streets that left and right turns be provided the same time gap. Some states have recognized this in statute, and a presumption of negligence is raised because of the turn only if the turn was prohibited by a posted sign.
Turns across traffic have been shown to be problematic for older drivers.
Designing for pedestrians and cyclists
Pedestrians and cyclists are among the most vulnerable road users and in some countries constitute over half of all road deaths. Interventions aimed at improving the safety of non-motorised users include:
- Sidewalks of suitable width for pedestrian traffic
- Pedestrian crossings close to the desire line which allow pedestrians to cross roads safely
- Segregated pedestrian routes and cycle lanes away from the main highway
- Overbridges (tend to be unpopular with pedestrians and cyclists due to additional distance and effort)
- Underpasses (these can pose a heightened risk of crime if not designed well; they can work for cyclists in some cases)
- Traffic calming and speed humps
- Low speed limits that are rigorously enforced, possibly by speed cameras
- Shared space schemes giving ownership of the road space and equal priority to all road users, regardless of mode of use
- Pedestrian barriers to prevent pedestrians crossing dangerous locations
Pedestrians' advocates question the equitability of schemes that impose extra time and effort on the pedestrian to remain safe from vehicles, for example overbridges with long slopes or steps up and down, underpasses with steps and an additional possible risk of crime, and at-grade crossings off the desire line. Make Roads Safe was criticised in 2007 for proposing such features. Successful pedestrian schemes tend to avoid overbridges and underpasses and instead use at-grade crossings (such as pedestrian crossings) close to the intended route. Successful cycling schemes, by contrast, avoid frequent stops even if some additional distance is involved, given that the main effort required of cyclists is starting off.
In Costa Rica 57% of road deaths are pedestrians. However, a partnership between AACR, Cosevi, MOPT and iRAP has proposed the construction of 190 km of pedestrian footpaths and 170 pedestrian crossings, which could prevent over 9,000 deaths and serious injuries over 20 years.
By 1947 the Pedestrians' Association was suggesting that many of the safety features being introduced (speed limits, traffic calming, road signs and road markings, traffic lights, Belisha beacons, pedestrian crossings, cycle lanes, etc.) were potentially self-defeating because "every nonrestrictive safety measure, however admirable in itself, is treated by the drivers as an opportunity for more speeding, so that the net amount of danger is increased and the latter state is worse than the first."
During the 1990s a new approach known as 'shared space' was developed, which removes many of these features; its use in some places has attracted the attention of authorities around the world. The approach was developed by Hans Monderman, who believed that "if you treat drivers like idiots, they act as idiots" and proposed that trusting drivers to behave was more successful than forcing them to behave. Professor John Adams, an expert on risk compensation, suggested that traditional traffic engineering measures assumed that motorists were "selfish, stupid, obedient automatons who had to be protected from their own stupidity" and non-motorists were treated as "vulnerable, stupid, obedient automatons who had to be protected from cars – and their own stupidity".
Reported results indicate that the 'shared space' approach leads to significantly reduced traffic speeds, the virtual elimination of road casualties, and a reduction in congestion. Living streets share some similarities with shared spaces. Woonerven (Dutch home zones) also sought to reduce traffic speeds in community and housing zones through lower speed limits enforced by special signage and road markings, the introduction of traffic calming measures, and by giving pedestrians priority over motorists.
Non built-up areas
Safety features include:
- Limited access from properties and local roads
- Grade-separated junctions
- Median dividers between opposite-direction traffic to reduce the likelihood of head-on collisions
- Removal of roadside obstacles
- Prohibition of more vulnerable road users and slower vehicles
- Placement of energy-attenuation devices (e.g. guard rails, wide grassy areas, sand barrels)
- Elimination of road toll booths
The ends of some guard rails on high-speed highways in the United States are protected with impact attenuators, designed to gradually absorb the kinetic energy of a vehicle and slow it more gently before it can strike the end of the guard rail head on, which would be devastating at high speed. Several mechanisms are used to dissipate kinetic energy. Fitch Barriers, a system of sand-filled barrels, uses momentum transfer from the vehicle to the sand. Many other systems tear or deform steel members to absorb energy and gradually stop the vehicle.
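The amount of energy such devices must dissipate rises with the square of speed, which is why high-speed excursions are so much harder to attenuate. A minimal sketch of the arithmetic follows; the vehicle mass and speeds are illustrative assumptions.

```python
# Kinetic energy an impact attenuator or sand-barrel array must dissipate.
# Energy grows with the square of speed: 100 km/h carries four times the
# energy of 50 km/h for the same vehicle.

def kinetic_energy_kj(mass_kg: float, speed_kmh: float) -> float:
    """Kinetic energy in kilojoules for a vehicle of given mass and speed."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return 0.5 * mass_kg * v**2 / 1000.0

# Illustrative 1,500 kg car:
for speed in (50, 70, 100):
    print(f"{speed} km/h: {kinetic_energy_kj(1500, speed):.0f} kJ")
# 50 km/h: ~145 kJ, 70 km/h: ~284 kJ, 100 km/h: ~579 kJ
```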
In some countries major roads have "tone bands" impressed or cut into the edges of the legal roadway, so that drowsing drivers are awakened by a loud hum as they release the steering and drift off the edge of the road. Tone bands are also referred to as "rumble strips", owing to the sound they create. An alternative method is the use of "Raised Rib" markings, which consists of a continuous line marking with ribs across the line at regular intervals. They were first specially authorised for use on motorways as an edge line marking to separate the edge of the hard shoulder from the main carriageway. The objective of the marking is to achieve improved visual delineation of the carriageway edge in wet conditions at night. It also provides an audible/vibratory warning to vehicle drivers, should they stray from the carriageway, and run onto the marking.
Better motorways are banked on curves to reduce the need for tire-traction and increase stability for vehicles with high centers of gravity.
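The banking (superelevation) needed for a given speed and curve radius follows a standard relation; the worked numbers below are illustrative assumptions, not values from the source.

```latex
% Ideal banking angle \theta of a curve of radius R taken at speed v with no
% reliance on tyre friction.
\tan\theta = \frac{v^{2}}{g\,R}
% Example: v = 110 km/h \approx 30.6 m/s, R = 500 m, g = 9.81 m/s^2 gives
% \tan\theta \approx 0.19, i.e. \theta \approx 11 degrees. Real designs use
% smaller angles and let tyre friction supply part of the lateral force.
```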
An example of the importance of roadside clear zones can be found on the Isle of Man TT motorcycle race course. It is much more dangerous than Silverstone because of the lack of run-off. When a rider falls off at Silverstone, he slides along, gradually losing energy, with minimal injuries. When a rider falls off on the Isle of Man course, he is likely to strike trees and walls violently. Similarly, a clear zone alongside a freeway or other high-speed road can prevent off-road excursions from becoming fixed-object crashes.
The US has developed a prototype automated roadway, to reduce driver fatigue and increase the carrying capacity of the roadway. Roadside units participating in future Wireless vehicle safety communications networks have been studied.
Motorways are far more expensive and space-consuming to build than ordinary roads, so they are used only as principal arterial routes. In developed nations, motorways bear a significant portion of motorized travel; for example, the United Kingdom's 3,533 km of motorways represented less than 1.5% of its roadways in 2003 but carried 23% of road traffic.
The proportion of traffic borne by motorways is a significant safety factor. For example, even though the United Kingdom had higher fatality rates than Finland on both motorways and non-motorways, both nations shared the same overall fatality rate in 2003. This result was due to the United Kingdom's higher proportion of motorway travel.
Similarly, the reduction of conflicts with other vehicles on motorways results in smoother traffic flow, reduced collision rates, and reduced fuel consumption compared with stop-and-go traffic on other roadways.
The improved safety and fuel economy of motorways are common justifications for building more motorways. However, the planned capacity of motorways is often exceeded sooner than initially expected, due to underestimation of the extent of suppressed demand for road travel. In developing nations, there is significant public debate on the desirability of continued investment in motorways.
Motorways around the world are subject to a broad range of speed limits. Recent experiments with variable speed limits based on automatic measurements of traffic density have delivered both improvements in traffic flow and reduced collision rates, based on principles of turbulent flow analysis.
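One way to read the variable speed limit idea is as a mapping from measured traffic density to a displayed limit. The sketch below is not any agency's actual algorithm; the density thresholds and limits are illustrative assumptions.

```python
# Illustrative variable-speed-limit logic: step the displayed limit down as
# measured traffic density rises, to smooth flow and reduce shockwave braking.

def advisory_limit_kmh(vehicles_per_km_per_lane: float) -> int:
    """Map a measured traffic density to a displayed speed limit (hypothetical values)."""
    if vehicles_per_km_per_lane < 15:
        return 120   # free flow
    elif vehicles_per_km_per_lane < 25:
        return 100   # approaching capacity
    elif vehicles_per_km_per_lane < 35:
        return 80    # dense but still moving
    else:
        return 60    # congested: prioritise stable, uniform flow

for density in (10, 20, 30, 45):
    print(density, "veh/km/lane ->", advisory_limit_kmh(density), "km/h")
```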
With effect from January 2005, and based primarily on safety grounds, the UK Highways Agency's policy is that all new motorway schemes are to use high-containment concrete step barriers in the central reserve. Existing motorways will have concrete barriers introduced into the central reserve as part of ongoing upgrades, and through replacement as and when existing systems reach the end of their useful life. This change of policy applies only to barriers in the central reserve of high-speed roads and not to verge-side barriers. Other routes will continue to use steel barriers.
More people die on the hard shoulder than on the highway itself. Without other vehicles passing a parked car, following drivers are unaware that the vehicle is parked, despite hazard lights. Truck drivers indicate that they are parked by putting their cab seat behind their truck. In the UK, the AA and police park their vehicles on the hard shoulder at a slight angle so that following drivers can see down the side of their vehicle and are therefore aware that they are stopped.
In countries that have toll collection booths, around 30% of highway crashes occur in their vicinity; these can be reduced by switching to electronic toll collection systems.
Safety can be improved in various ways depending on the form of transport used.
Buses and coaches
Bus and coach safety can be improved in simple ways that reduce the chance of an accident occurring. Avoiding rushing, not standing in unsafe places, and following the rules on the bus or coach itself will greatly increase the safety of a person travelling by bus or coach. Various safety features can also be built into buses and coaches, including safety bars for standing passengers to hold onto.
The main ways to stay safe when travelling by bus or coach are as follows:
- Leave your location early so that you do not have to run to catch the bus or coach.
- At the bus stop, always follow the queue.
- Do not board or alight at a bus stop other than an official one.
- Never board or alight at a red light crossing or unauthorized bus stop.
- Board the bus only after it has come to a halt without rushing in or pushing others.
- Do not sit, stand or travel on the footboard of the bus.
- Do not put any part of your body outside a moving or a stationary bus.
- While in the bus, refrain from shouting or making noise as it can distract the driver.
- Always hold onto the handrail if standing in a moving bus, especially on sharp turns.
- Always adhere to the bus safety rules.
Safety can be improved by reducing the chances of a driver making an error, or by designing vehicles to reduce the severity of crashes that do occur. Most industrialized countries have comprehensive requirements and specifications for safety-related vehicle devices, systems, design, and construction. These may include:
- Passenger restraints such as seat belts — often in conjunction with laws requiring their use — and airbags
- Crash avoidance equipment such as lights and reflectors
- Driver assistance systems such as Electronic Stability Control
- Crash survivability design including fire-retardant interior materials, standards for fuel system integrity, and the use of safety glass
- Sobriety detectors: These interlocks prevent the ignition key from working if the driver breathes into one and it detects significant quantities of alcohol. They have been used by some commercial transport companies, or suggested for use with persistent drunk-driving offenders on a voluntary basis
According to statistics, the percentage of intoxicated riders in fatal crashes is higher for motorcyclists than for other road users. Helmets also play a major role in motorcyclist safety: in 2008, the National Highway Traffic Safety Administration (NHTSA) estimated that helmets are 37 percent effective in saving the lives of motorcyclists involved in crashes.
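A small arithmetic sketch, not from the source, of what the "37 percent effective" figure implies: of riders who would otherwise have died in a crash, roughly 37 in 100 survive because they wore a helmet. The fatality count used below is illustrative.

```python
# Interpreting the NHTSA effectiveness estimate quoted above.
effectiveness = 0.37

def expected_deaths_with_helmets(deaths_if_unhelmeted: int) -> float:
    """Expected fatalities if all of the given riders had worn helmets."""
    return deaths_if_unhelmeted * (1 - effectiveness)

# Illustrative figure only:
print(expected_deaths_with_helmets(1000))  # 630.0, i.e. about 370 lives saved
```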
According to the European Commission's transport department, "it has been estimated that up to 25% of accidents involving trucks can be attributable to inadequate cargo securing". Improperly secured cargo can cause severe accidents, lead to loss of cargo, lives and vehicles, and be a hazard to the environment. One way to stabilize, secure and protect cargo during road transport is to use dunnage bags, which are placed in the voids between cargo items and are designed to prevent the load from moving during transport.
Regulation of road users
Various types of road user regulations are in force or have been tried in most jurisdictions around the world; some of these are discussed by road user type below.
Motor vehicle users
Dependent on jurisdiction, driver age, road type and vehicle type, motor vehicle drivers may be required to pass a driving test (public transport and goods vehicle drivers may need additional training and licensing), conform to restrictions on driving after consuming alcohol or various drugs, comply with restrictions on use of mobile phones, be covered by compulsory insurance, wear seat belts and comply with certain speed limits. Motorcycle riders may additionally be compelled to wear a motorcycle helmet. Drivers of certain vehicle types may be subject to maximum driving hour regulations.
Some jurisdictions, such as the US states of Virginia and Maryland, have implemented specific regulations, such as prohibiting mobile phone use by, and limiting the number of passengers accompanying, young and inexperienced drivers. It has been observed that more serious collisions occur at night, when the car has multiple occupants, and when seat belts are not worn.
Some insurance companies have proposed that the following restrictions be imposed on new drivers: a "curfew" on young drivers to prevent them driving at night; an experienced supervisor to chaperone the less experienced driver; a ban on carrying passengers; zero alcohol tolerance; raising the standards required of driving instructors and improving the driving test; vehicle restrictions (e.g. restricting access to 'high-performance' vehicles); a sign placed on the back of the vehicle (an N- or P-plate) to notify other drivers of a novice driver; and encouraging good behaviour in the post-test period.
Pedal bicycle users
Dependent on jurisdiction, road type and age, pedal cyclists may be required to conform to restrictions on riding after consuming alcohol or various drugs, comply with restrictions on use of mobile phones, be covered by compulsory insurance, wear a bicycle helmet and comply with certain speed limits.
Dependent on jurisdiction, jaywalking may be prohibited.
Collisions with animals are usually fatal to the animals, and occasionally to drivers as well.
Information campaigns can be used to raise awareness of initiatives designed to reduce road casualty levels. Examples include:
- Decade of Action by World Health Organization and Fédération Internationale de l'Automobile (2011-2020)
- traffic awareness campaigns such as the "one false move" campaign documented by Hillman et al.
- Speeding. No one thinks big of you. (New South Wales, Australia, 2007)
- Road Safety is no Accident World Health Organization
- Designated driver campaign, (US, 1970s-present)
- Click It or Ticket, (US, 1993–present)
- Clunk Click Every Trip (UK 1971)
- Green Cross Code (UK 1970–present)
Rating roads for safety
Since 1999 the EuroRAP initiative has been assessing major roads in Europe with a road protection score. This results in a star rating for roads based on how well their design would protect car occupants from being severely injured or killed if a head-on, run-off, or intersection accident occurs, with 4 stars representing a road with the best survivability features. The scheme states that it has highlighted thousands of road sections across Europe where road users are routinely maimed and killed for want of safety features, sometimes for little more than the cost of safety fencing or the paint required to improve road markings.
There are plans to extend the measurements to rate the probability of an accident on a given road. These ratings are being used to inform planning and authorities' targets. For example, two-thirds of all road deaths in Britain happen on rural roads, which score badly when compared to the high-quality motorway network; single carriageways account for 80% of rural deaths and serious injuries, while 40% of rural car occupant casualties are in cars that hit roadside objects, such as trees. Improvements in driver training and safety features for rural roads are hoped to reduce these figures.
The number of designated traffic officers in the UK fell from 15–20% of police force strength in 1966 to seven per cent in 1998, and fell by a further 21% between 1999 and 2004. It is a matter of debate whether the reduction in traffic accidents per 100 million miles driven over this period has been due to automated enforcement.
In the United States, roads are not rated by the government for their actual safety features in a way that is released to the media or the public. However, in 2011, the National Highway Traffic Safety Administration's Traffic Safety Facts found that over 800 people were killed across the USA by "non-fixed objects", which include roadway debris. California had the highest total number of deaths from such crashes; New Mexico had the highest chance of an individual dying in a vehicle-debris crash.
According to the WHO, in 2010 an estimated 1.24 million people were killed worldwide and 50 million more were injured in motor vehicle collisions. Young adults aged between 15 and 44 years account for 59% of global road traffic deaths. Other key facts according to the WHO report are:
- Road traffic injuries are the leading cause of death among young people, aged 15–29 years.
- 91% of the world's fatalities on the roads occur in low-income and middle-income countries, even though these countries have approximately half of the world's vehicles.
- Half of those dying on the world’s roads are "vulnerable road users": pedestrians, cyclists and motorcyclists.
- Without action, road traffic crashes are predicted to result in the deaths of around 1.9 million people annually by 2020.
- Only 28 countries, representing 416 million people (7% of the world’s population), have adequate laws that address all five risk factors (speed, drink-driving, helmets, seat-belts and child restraints).
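Put on a per-capita basis, the 1.24 million deaths cited above correspond to a global rate of roughly 18 road deaths per 100,000 people per year. The world population figure used in the sketch below is an assumption (approximately 6.9 billion in 2010), not a number from the source.

```python
# Rough per-capita conversion of the WHO figure quoted above.
deaths = 1.24e6
population = 6.9e9  # assumed approximate 2010 world population
print(deaths / population * 100_000)  # ~18 road deaths per 100,000 people per year
```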
As the comparatively poor improvement in pedestrian safety has become a concern at OECD level, the Joint Transport Research Centre of the OECD and the International Transport Forum (JTRC) convened an international expert group and in 2012 published a report entitled "Pedestrian Safety, Urban Space and Health".
| Country | Road deaths per 1 million inhabitants | Road deaths per 10 billion vehicle-km | Road deaths per 100,000 registered vehicles | Seatbelt wearing rates front / rear | Speed limits urban / rural / motorways (km/h) |
|---|---|---|---|---|---|
| Argentina | 124 | n.a. | 25 | 38% / 26% | 30-60 / 110 / 130 |
| Australia | 57 | 56 | 7 | 97% / ~96% | 50 / 100 or 110 / 110 |
| Austria | 63 | 69 | 9 | 89% / 76% | 50 / 100 / 130 |
| Belgium | 69 | 77 | 11 | 86% / n.a. | 30 or 50 / 70 or 90 / 120 |
| Cambodia | 134 | n.a. | 91 | 16% / n.a. | 40 / 90 / n.a. |
| Canada | 58 | 59 | 9 | 95% / n.a. | 40-70 / 80-90 / 100-110 |
| Chile | 114 | n.a. | n.a. | n.a. / n.a. | 60 / 100 / 120 |
| Colombia | 127 | n.a. | 65 | n.a. / n.a. | 80(30) / 120 / n.a. |
| Czech Republic | 71 | 157 | 13 | 97% / 66% | 50 / 90 / 130 |
| Denmark | 30 | 34 | 6 | 94% / 81% | 50 / 80 / 130 |
| Finland | 47 | 47 | 7 | 87-95% / 86% | 50 / 80/100 / 120 |
| France | 58 | 65 | 9 | 98.5% / 84% | 50 / 90 / 130 |
| Germany | 44 | 50 | 7 | 97% / 97% | 50 / 100 / no limit or 130 |
| Greece | 91 | n.a. | 12 | 77% / 23% | 50 / 90 / 130 |
| Hungary | 61 | n.a. | 17 | 87% / 68% | 50 / 90 / 130(110) |
| Iceland | 28 | 29 | 3 | 84% / 65% | 50 / 90 / n.a. |
| Ireland | 35 | 34 | 7 | 93% / 89% | 50 / 80 or 100 / 120 |
| Israel | 33 | 52 | 10 | 93% / 89% | 30,50,70 / 80,90,100 / 110 |
| Italy | 60 | n.a. | 7 | 63% / 10% | 50 / 90-110 / 130-150 |
| Jamaica | 114 | n.a. | n.a. | n.a. / n.a. | 40 / 90 / n.a. |
| Japan | 41 | 72 | 6 | 98% / 61% | 40,50,60 / 50,60 / 100 |
| Korea | 108 | 184 | 25 | 88% / 9.4% | 60 / 60-80 / 110(100) |
| Lithuania | 100 | n.a. | 13 | 70% / 71% | 50 / 90(70) / 130/100 |
| Luxembourg | 60 | n.a. | 8 | 80% / n.a. | 50 / 90 / 130 |
| Malaysia | 236 | 134 | 31 | 91% / 11% | 50 / 90 / 110 |
| Netherlands | 39 | 49 | 6 | 97% / 82% | 50 / 80 / 130 |
| New Zealand | 69 | 77 | 10 | 96% / 87% | 50 / 100 / 100 |
| Nigeria | 40 | n.a. | 40 | 80% / <5% | 50 / 80 / 100 |
| Norway | 29 | 33 | 4 | 95% / n.a. | 50 / 80 / 100 |
| Poland | 92 | n.a. | 20 | n.a. / 59% | 50 / 90-120 / 140 |
| Portugal | 68 | n.a. | 12 | n.a. / n.a. | 50 / 90 / 120 |
| Serbia | 97 | n.a. | 33 | 70% / 3% | 50 / 80 / 120 |
| Slovenia | 63 | 78 | 10 | 94% / 66% | 50 / 90 / 130 |
| Spain | 41 | n.a. | 6 | 91% / 81% | 50 / 90 or 100 / 120 |
| Sweden | 30 | 36 | 5 | 98% / 84% | 30,40,50 / 60,70,80,90,100 / 110 or 120 |
| Switzerland | 43 | 56 | 6 | 92% / 72% | 50 / 80 / 120 |
| United Kingdom | 28 | 36 | 5 | 95% / 89% | 48 / 96 / 113 |
| United States | 107 | 71 | 13 | 87% / 74% | set by state / set by state / 88-129 (set by state) |
The Automobile Association was established in 1905 in the United Kingdom to help motorists avoid police speed traps. They became involved in other safety issues and also erected thousands of roadside warning signs.
The International Road Federation has an issue area and working group dedicated to road safety. They work with their membership to advocate measures that improve road safety through infrastructure and cooperation with other international organizations.
Motoring advocacy groups, including the Association of British Drivers (UK), Speed cameras.org (UK) and the National Motorists Association (USA/Canada), argue that the strict enforcement of speed limits does not necessarily result in safer driving and may even have a negative effect on road safety in general. Safe Speed is a UK group set up specifically to campaign against the use of speed cameras. The Association of British Drivers also argues that speed humps result in increased air pollution, increased noise pollution, and even unnecessary vehicle damage.
In 1965, Ralph Nader put pressure on car manufacturers with his book Unsafe at Any Speed, detailing resistance by car manufacturers to the introduction of safety features, such as seat belts, and their general reluctance to spend money on improving safety. GM president James Roche was later forced to appear before a United States Senate subcommittee and to apologize to Nader for the company's campaign of harassment and intimidation. Nader later successfully sued GM for excessive invasion of privacy.
RoadPeace was formed in 1991 in the United Kingdom to advocate for better road safety and founded World Day of Remembrance for Road Traffic Victims in 1993 which received support from the United Nations General Assembly in 2005.
There is some controversy over the way that motoring advocacy groups have been seen to dominate the road safety agenda. Some road safety activists use the term "road safety" (in quotes) to describe measures such as the removal of "dangerous" trees and the forced segregation of vulnerable users to the advantage of motorized traffic. Orthodox "road safety" opinion fails to address what Adams describes as the top half of the risk thermostat: the perceptions and attitudes of the road user community.
Some road-safety groups argue that the problem of road safety is largely being stated in the wrong terms, because most road safety measures are designed to increase the safety of drivers, yet many road traffic casualties are not drivers (in the UK only 40% of casualties are drivers), and measures which increase driver safety may, perversely, increase the risk to these others through risk compensation.
The core elements of the thesis are:
- that vulnerable road users are marginalised by the "road safety" establishment
- that "road safety" interventions are often centred around reducing the severity of results from dangerous behaviours, rather than reducing the dangerous behaviours themselves
- that improved "road safety" has often been achieved by making the roads so hostile that those most likely to be injured cannot use them at all
- that the increasing "safety" of cars and roads is often counteracted wholly or in part by driver responses (risk compensation).
RoadPeace and other groups have been strongly critical of what they see as moves to solve the problem of danger, posed to vulnerable road users by motor traffic, through increasing restrictions on vulnerable road users, an approach which they believe both blames the victim and fails to address the problem at source. This is discussed in detail by Dr Robert Davis in the book Death on the Streets: Cars and the mythology of road safety, and the core problem is also addressed in books by Professor John Adams, Mayer Hillman and others.
For example, the UK publishes Road Casualties Great Britain each year, detailing reported road fatalities and injuries, and claims to have among the best pedestrian safety records in Europe, with falling injury rates as measured in pedestrian KSI per head of population. A study published in the British Medical Journal in 2006 suggested instead that the reduction in injury levels was due to lower levels of reporting rather than an actual fall in injuries. Considerable under-reporting was confirmed by a second report prepared for the UK Department for Transport, and the UK government now acknowledges the issue of under-reporting but is not convinced that the reductions in reported injury levels do not reflect an actual decline. Another independent report investigated whether the roads were actually sufficiently dangerous as to deter pedestrians from using them at all.
- AAA Foundation for Traffic Safety (in the US)
- Assured Clear Distance Ahead
- Asia Injury Prevention Foundation
- Fatality Analysis Reporting System
- Geometric design of roads
- Handicap International
- Highway Safety Manual
- ISO 39001
- List of countries by traffic-related death rate
- National Highway Traffic Safety Administration
- National Traffic and Motor Vehicle Safety Act
- Road Casualties Great Britain
- Rules of the road
- Speed limit
- Traffic psychology
- Traffic sign
- Road surface marking
- Road marking machine
- Transportation safety in the United States
- Turning Point (documentary)
- United Nations Road Safety Collaboration
- Work-related road safety in the United States
- International Transport Forum (2008). "Towards Zero, Ambitious Road Safety Targets and the Safe System Approach". OECD. Retrieved 26 January 2012.
It recognises that prevention efforts notwithstanding, road users will remain fallible and crashes will occur.
- Towards Zero Framework
- Statistical Annex, World report on road traffic injury prevention
- "World report on road traffic injury prevention". World Health Organisation. Retrieved 14 April 2010.
- "UN raises child accidents alarm". BBC News. 10 December 2008. Retrieved 22 May 2010.
- KSI league tables
- Lovegrove G., Sayed T. (2006). "Macro-level collision models for evaluating neighbourhood traffic safety". Canadian Journal of Civil Engineering 33: 609–621. doi:10.1139/l06-013.
- "Speed Cameras". ROSPA.
The Cochrane Collaboration published a second systematic review in 2006, which was updated in 2010. These studies only included before-and-after trials with comparison areas and interrupted time series studies.
- "Reduce Injuries Associated with Motor Vehicle Crashes". Cochrane Database of Systematic Reviews (Centers for Disease Control and Prevention) (3): CD004168. doi:10.1002/14651858.CD004168.pub2.
Alcohol ignition interlock programmes for reducing drink driving recidivism.
- Bundesanstalt für Straßenwesen (Federal Highway Research Institute) (October 2014). "Traffic and Accident Data: Summary Statistics – Germany" (PDF). http://www.bast.de. Retrieved 2014-12-14.
- Neighborhood Traffic Calming: Seattle's Traffic Circle Program http://www.usroads.com/journals/rmej/9801/rm980102.htm
- Sun, J. & Lovegrove, G. (2008). Research Study on Evaluating the Level of Safety of the Fused Grid Road Pattern, External Research Project for CMHC, Ottawa, Ontario
- Eric Dumbaugh and Robert Rae. Safe Urban Form: Revisiting the Relationship Between Community Design and Traffic Safety. Journal of the American Planning Association, Vol. 75, No. 3, Summer 2009
- Vicky Feng Wei, BASc and Gord Lovegrove PhD (2011), Sustainable Road Safety: A New Neighbourhood Road Pattern that saves VRU Lives, University of British Columbia
- "Reflective Glass Beads". Retrieved 13 May 2014.
- Neuman, Timothy R. et al. (2003). NCHRP Report 500 Volume 5: A Guide for Addressing Unsignalized Intersection Collisions (PDF). Washington, D.C.: Transportation Research Board.
- Antonucci, Nicholas D. et al. NCHRP Report 500 Volume 12: A Guide for Reducing Collisions at Signalized Intersections (PDF). Washington, D.C.: Transportation Research Board.
- "Cordova v. Ford, 46 Cal. App. 2d 180". 2 46. Official California Appellate Reports. 7 November 1966. p. 180. Retrieved 27 July 2013.
All courts are agreed that the mere fact of a collision of two automobiles gives rise to no inference of negligence against either driver in an action brought by the other. ...When a vehicle operated by A collides with a vehicle operated by B, there are four possibilities. A alone was negligent; B alone was negligent; both were negligent; or neither. Of these four only the first will result in liability of A to B. The bare fact of a collision affords no basis on which to conclude that it is the preponderant probability. The odds are against it. See Official Reports Opinions Online
- A Policy on Geometric Design of Highways and Streets. Washington D.C.: American Association of State Highway and Transportation Officials. 2004.
- Geometric Design of Highways and Streets. American Association of State Highway Transportation Officials. 7 November 2010. Retrieved 27 July 2013.
Exhibit 9-54. Time Gap for Case B1-Left turn from Stop
- "Cal. Veh C. § 22101. Regulation of Turns at Intersection". State of California. 1 January 1975. Retrieved 27 July 2013.
When right- or left-hand turns are prohibited at an intersection notice of such prohibition shall be given by erection of a sign. See opinions on C.V.C. § 22101: Official Reports Opinions Online
- Staplin, L. et al. (2001). Highway Design Handbook for Older Drivers and Pedestrians. Washington D.C.: Federal Highway Administration.
- "Vehicle Pedestrian Crashes". International Road Assessment Programme. Retrieved 26 September 2008.
- "Vaccines for Roads; The new iRAP tools and their pilot application" (PDF). International Road Assessment Programme. Retrieved 26 September 2008.[dead link]
- J. S. Dean. Murder Most Foul.
- Matthias Schulz (16 November 2006). "European Cities Do Away with Traffic Signs". Spiegel Online. Retrieved 27 February 2008.
- Ted White (September 2007). "Signing Off: Visionary traffic planners". Urbanite Baltimore. Retrieved 27 February 2008.
- Gray, Sadie (11 January 2008). "Obituaries: Hans Monderman". The Times (London: Times Newspapers Ltd). Retrieved 27 February 2008.
- Andrew Gilligan (7 February 2008). "It's hell on the roads, and I know who's to blame". The Evening Standard (Associated Newspapers Limited). Retrieved 27 February 2008.
- Professor John Adams (2 September 2007). "Shared Space – would it work in Los Angeles?" (PDF). John Adams. Retrieved 27 February 2008.
- "Bringing U.S. Roads into the 21st Century".
- "Primary and secondary prevention of drink driving by the use of alcolock device and program: Swedish experiences" 37 (6). Accident Analysis & Prevention. November 2005. doi:10.1016/j.aap.2005.06.020. Retrieved 6 January 2008.
- Driving Safety. http://www.nhtsa.gov/Safety/Motorcycles. NHTSA. Retrieved 3 January 2014.
- Motorcycles: Traffic Safety Facts-2008 Data. http://www-nrd.nhtsa.dot.gov/pubs/811159.pdf. National Highway Traffic Safety Administration. Retrieved 3 January 2014.
- Williamson, Elizabeth (1 February 2005). "Brain Immaturity Could Explain Teen Crash Rate". Washington Post.
- "The Good, the Bad and the Talented: Young Drivers' Perspectives on Good Driving and Learning to Drive" (PDF). Road Safety Research Report No. 74. Transport Research Laboratory. January 2007. Retrieved 4 January 2008.
- "Statistics database for transports". http://epp.eurostat.ec.europa.eu (statistical database). Eurostat, European Commission. 20 April 2014. Retrieved 12 May 2014.
- Vojtech Eksler, ed. (5 May 2013). "Intermediate report on the development of railway safety in the European Union 2013" (PDF). http://www.era.europa.eu (report). Safety Unit, European Railway Agency & European Union. p. 1. Retrieved 12 May 2014.
- "Star rating roads for safety: UK trials 2006-07". EuroRAP. 3 December 2007. (Note: see country maps here )
- Dawson, John. "Chairman's Message".
- "Star rating roads for safety, UK trials 2006-07" (PDF). TRL, EuroRAP & ADAC. December 2007.
- "Section 21, traffic officer numbers reduction in the UK" (PDF). Retrieved 9 April 2012.
- "page 147 Transport statistics 2009 edition" (PDF). Dft.gov.uk. 31 March 2012. Retrieved 9 April 2012.
- "Global status report on road safety 2013: Supporting a Decade of Action" (PDF) (report). Geneva, Switzerland: World Health Organisation WHO. 2013. ISBN 978-92-4-156456-4. Retrieved 2014-10-11.
data from 2010
- "Visualizing Major Causes of Death in the 20th Century" (news article). Visual News. 19 March 2013. Retrieved 2014-10-11.
- "Pedestrian Safety, Urban Space and Health: Summary Report" (PDF) (research report). Paris, France: International Transport Forum, OECD. 2011. Retrieved 2014-10-11.
- "Road Safety Annual Report 2014" (PDF) (report). Paris, France: International Traffic Safety Data and Analysis Group irtad, International Transport Forum, OECD. 2014. Retrieved 2014-10-11.
data from 2012
- "About us". The AA. Retrieved 26 February 2010.
- IRF Road Safety.
- "Speed cameras.org". Speed cameras.org. Retrieved 9 April 2012.
- Nader v. General Motors Corp. Court of Appeals of New York, 1970
- "about". World Day of Remembrance. Retrieved 26 February 2010.
- United Nations General Assembly Session 60 Resolution 5. Improving global road safety A/RES/60/5 page 3. 26 October 2005. Retrieved 9 July 2008.
- "Reported Road Casualties Great Britain: 2008 - Annual Report". UK Department for Transport. Retrieved 13 January 2010.
- Mike Gill, Michael J Goldacre, David G R Yeates (23 June 2006). "Changes in safety on England’s roads: analysis of hospital statistics" (PDF). BMJ.
- Heather Ward, Ronan Lyons, Roselle Thoreau (June 2006). "Road Safety Research Report No. 69: Under-reporting of Road Casualties – Phase 1" (PDF). UK Department for Transport.
- "Reported Road Casualties Great Britain: 2008 Annual Report" (PDF). Department for Transport. p. 62. Retrieved 10 January 2010.
It has long been known that a considerable proportion of non-fatal casualties are not known to the police and hospital; survey and compensation claims data all indicate a higher number of casualties than are reported... Police data on road accidents (STATS19), whilst not perfect, remains the most detailed, complete and reliable single source of information on road casualties covering the whole of Great Britain, in particular for monitoring trends over time
- One False Move. ISBN 0-85374-494-7. Retrieved 13 January 2010.
- World Health Organization (2013). "Global status report on road safety 2013". Retrieved 15 March 2013.
- Department for Transport (2008). "Reported Road Casualties Great Britain: 2008 Annual Report" (PDF). Road Casualties Great Britain. Retrieved 9 January 2010.
- Mayer Hillman, John Adams, John Whitelegg (2000) . One False Move: a study of children's independent mobility. Policy Studies Institute. ISBN 0-85374-494-7.
- Robert Davis (1993). Death on the Streets: Cars and the mythology of road safety. Leading Edge Press. ISBN 0-948135-46-8.
- John Adams (1995). Risk. UCL Press. ISBN 1-85728-068-7.
- Leonard Evans (2004). Traffic Safety. Science Serving Society. ISBN 0-9754871-0-8.
|Wikimedia Commons has media related to Road transport safety.|
- WHO road traffic injuries
- iRAP - International Road Assessment Programme
- International Transport Statistics Database
- Road Safety Toolkit
- ERSO - European Road Safety Observatory
- ETSC - European Transport Safety Council
- Journal of Safety Research
- The Cochrane Injuries Group
- Mortality from Road Crashes in 193 Countries: A Comparison with Other Leading Causes of Death, University of Michigan Transportation Research Institute, February 2014
- - Making Road Safer |
Although premise, hypothesis, and supposition are three entirely different words, they are frequently used together, which makes them easy to confuse. How does a hypothesis relate to a premise? What does supposition mean? These are common questions among students.
In this article, you will learn about the difference between premise, hypothesis and supposition.
A premise is an assumption or condition that one believes to be true. Typically, it is a logical assumption or statement that is supported by some logic. A conclusion is therefore typically developed from a premise. It’s all part of logical reasoning.
A hypothesis is a statement or concept that needs to be tested in order to be proven. Typically, it is based on known facts but has not yet been verified; an idea or explanation must be tested before it can be considered true.
A hypothesis is an essential part of the scientific process, in which an idea is typically proved through some sort of experiment or research.
Can a hypothesis be a premise?
Yes. An idea may start out as a premise, that is, an assumption or condition that one holds to be true. When one sets out to prove it, it becomes a hypothesis.
A supposition is an idea that is assumed to be true, although it is uncertain whether, or to what extent, it actually is. Supposition is the act of supposing, imagining, or taking something as true or existing even though it is unproven.
Example of supposition
The product was introduced, for instance, under the supposition that there was a demand for it. There may indeed be demand in this situation, but it may not be as strong as anticipated. The supposition that there was demand is therefore not wrong, but the actual extent of that demand is unknown.
Example: Premise vs Hypothesis vs Supposition
Premise: The ball is round.
The premise depends on what we can see and understand.
Conclusion based on the premise: the ball will roll.
The conclusion is built on the premise: because the ball is round, it will roll.
Hypothesis: The ball will roll 17 feet.
The hypothesis is an assertion that demands evidence. It’s possible that the ball will roll for 17 feet. In order to determine how far it will roll, it must be tested.
Supposition: This is the best ball for rolling.
The supposition is a belief. Just because the ball rolls well, you might assume that it is the best at doing so. There may well be balls that roll better than this one, though, and all the balls in the world cannot be tested to prove the claim. So, it might or might not be true.
This example should make it clearer how each of these words is used and how they relate to one another.
Difference between Premise, Hypothesis and Supposition
| Premise | Hypothesis | Supposition |
|---|---|---|
| An idea or theory that serves as a foundation for a statement or action. | An idea or explanation for something based on known facts but not yet supported by evidence. | Something one believes to be true without any evidence. |
| An assumption or a condition that supports a logical argument. | A plausible conjecture or assumption that may be proved or disproved through experimentation. | A belief or assumption that might be correct or true, but might not; it may turn out to be false or inaccurate. |
| Based on fact | Needs to be proved or disproved | May be partially true |
| Has some basis in logical understanding; partially proved | Needs to be proved | Without proof, or regardless of proof |
For example:
- Premise: The Earth revolves around the Sun.
- Hypothesis: Because the Earth revolves around the Sun and we still have days and nights, the Earth must spin on its axis.
- Supposition: Billions of years from now, a dead Sun might swallow the Earth. |
In statistics, the standard score is the number of standard deviations by which the value of a raw score (i.e., an observed value or data point) is above or below the mean value of what is being observed or measured. Raw scores above the mean have positive standard scores, while those below the mean have negative standard scores.
It is calculated by subtracting the population mean from an individual raw score and then dividing the difference by the population standard deviation. This process of converting a raw score into a standard score is called standardizing or normalizing (however, "normalizing" can refer to many types of ratios; see normalization for more).
Standard scores are most commonly called z-scores; the two terms may be used interchangeably, as they are in this article. Other terms include z-values, normal scores, standardized variables and pull in High Energy Physics.
Computing a z-score requires knowing the mean and standard deviation of the complete population to which a data point belongs; if one only has a sample of observations from the population, then the analogous computation with sample mean and sample standard deviation yields the t-statistic.
If the population mean and population standard deviation are known, a raw score x is converted into a standard score by

$$z = \frac{x - \mu}{\sigma}$$

where $\mu$ is the mean of the population and $\sigma$ is the standard deviation of the population.
The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above.
Calculating z using this formula requires the population mean and the population standard deviation, not the sample mean or sample deviation. But knowing the true mean and standard deviation of a population is often unrealistic except in cases such as standardized testing, where the entire population is measured.
When the population mean and the population standard deviation are unknown, the standard score may be calculated using the sample mean and sample standard deviation as estimates of the population values.
In these cases, the z-score is

$$z = \frac{x - \bar{x}}{S}$$

where $\bar{x}$ is the sample mean and $S$ is the sample standard deviation.
In either case, since the numerator and denominator of the equation must both be expressed in the same units of measure, and since the units cancel out through division, z is left as a dimensionless quantity.
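The two formulas above translate directly into code. Below is a minimal sketch in Python with NumPy; the data values and function names are illustrative, not taken from the source.

```python
import numpy as np

def z_score_population(x, mu, sigma):
    """Standard score when the population mean and standard deviation are known."""
    return (x - mu) / sigma

def z_score_sample(x, sample):
    """Approximate standard score using the sample mean and sample standard deviation
    (ddof=1 gives the usual sample estimate) in place of the population values."""
    sample = np.asarray(sample, dtype=float)
    return (x - sample.mean()) / sample.std(ddof=1)

# Example: a raw score of 1.9 m in a population with mean 1.75 m and sd 0.10 m.
print(z_score_population(1.9, mu=1.75, sigma=0.10))   # 1.5 sd above the mean

# Same raw score, but mean and sd estimated from a sample of observations.
heights = [1.62, 1.70, 1.75, 1.78, 1.81, 1.84]
print(z_score_sample(1.9, heights))
```

Note that both versions return a pure number, which matches the observation that z is dimensionless.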
The z-score is often used in the z-test in standardized testing – the analog of the Student's t-test for a population whose parameters are known, rather than estimated. As it is very unusual to know the entire population, the t-test is much more widely used.
The standard score can be used in the calculation of prediction intervals. A prediction interval [L, U], consisting of a lower endpoint designated L and an upper endpoint designated U, is an interval such that a future observation X will lie in the interval with high probability $\gamma$, i.e.

$$P(L < X < U) = \gamma.$$

For the standard score Z of X it gives:

$$P\left(\frac{L - \mu}{\sigma} < Z < \frac{U - \mu}{\sigma}\right) = \gamma.$$

By determining the quantile z such that

$$P(-z < Z < z) = \gamma,$$

it follows that $L = \mu - z\sigma$ and $U = \mu + z\sigma$.
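A short sketch of this prediction-interval calculation for a normally distributed population, assuming SciPy is available; the population parameters and the 95% coverage level are illustrative values, not from the source.

```python
from scipy.stats import norm

mu, sigma = 100.0, 15.0      # assumed population mean and standard deviation (illustrative)
gamma = 0.95                 # desired coverage probability

# Quantile z such that P(-z < Z < z) = gamma for a standard normal Z.
z = norm.ppf((1 + gamma) / 2)

L = mu - z * sigma           # lower endpoint of the prediction interval
U = mu + z * sigma           # upper endpoint of the prediction interval
print(f"A future observation falls in [{L:.1f}, {U:.1f}] with probability {gamma}")
```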
In process control applications, the Z value provides an assessment of how off-target a process is operating.
Comparison of scores measured on different scales: ACT and SAT
When scores are measured on different scales, they may be converted to z-scores to aid comparison. Diez et al. give the following example comparing student scores on the (old) SAT and ACT high school tests. The table shows the mean and standard deviation for total score on the SAT and ACT. Suppose that student A scored 1800 on the SAT, and student B scored 24 on the ACT. Which student performed better relative to other test-takers?
The z-score for student A is $z_A = \dfrac{1800 - \mu_{\text{SAT}}}{\sigma_{\text{SAT}}}$.
The z-score for student B is $z_B = \dfrac{24 - \mu_{\text{ACT}}}{\sigma_{\text{ACT}}}$.
Because student A has a higher z-score than student B, student A performed better compared to other test-takers than did student B.
Percentage of observations below a z-score
Continuing the example of ACT and SAT scores, if it can be further assumed that both ACT and SAT scores are normally distributed (which is approximately correct), then the z-scores may be used to calculate the percentage of test-takers who received lower scores than students A and B.
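A sketch of the comparison and of the percentile calculation follows. Because the table of means and standard deviations is not reproduced above, the scale parameters used below (SAT mean 1500, SD 300; ACT mean 21, SD 5) are assumptions based on the commonly quoted version of this textbook example; substitute the actual table values if they differ.

```python
from scipy.stats import norm

# Assumed scale parameters for the (old) SAT and the ACT -- illustrative values,
# not taken from the table referenced in the text.
sat_mean, sat_sd = 1500, 300
act_mean, act_sd = 21, 5

z_a = (1800 - sat_mean) / sat_sd   # student A's SAT score as a z-score
z_b = (24 - act_mean) / act_sd     # student B's ACT score as a z-score
print(z_a, z_b)                    # 1.0 and 0.6 -> student A did relatively better

# Assuming approximately normal score distributions, the share of test-takers
# scoring below each student is the standard normal CDF evaluated at the z-score.
print(norm.cdf(z_a), norm.cdf(z_b))  # roughly 0.84 and 0.73
```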
Cluster analysis and multidimensional scaling
"For some multivariate techniques such as multidimensional scaling and cluster analysis, the concept of distance between the units in the data is often of considerable interest and importance … When the variables in a multivariate data set are on different scales, it makes more sense to calculate the distances after some form of standardization."
Principal components analysis
In principal components analysis, "Variables measured on different scales or on a common scale with widely differing ranges are often standardized."
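A minimal sketch of column-wise standardization before a distance-based analysis, using NumPy and SciPy; the toy data are illustrative.

```python
import numpy as np
from scipy.spatial.distance import pdist

# Toy data: rows are units, columns are variables measured on very different scales.
X = np.array([[170.0, 65000.0],
              [182.0, 48000.0],
              [165.0, 91000.0],
              [178.0, 52000.0]])

# Column-wise z-scores: subtract each column's mean and divide by its standard deviation.
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# Distances computed on Z are no longer dominated by the large-scale column;
# similarly, a PCA of Z corresponds to a PCA based on the correlation matrix.
print(pdist(Z))   # pairwise Euclidean distances between standardized rows
```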
Relative importance of variables in multiple regression: Standardized regression coefficients
"The standardized regression slope is the slope in the regression equation if X and Y are standardized… Standardization of X and Y is done by subtracting the respective means from each set of observations and dividing by the respective standard deviations… In multiple regression, where several X variables are used, the standardized regression coefficients quantify the relative contribution of each X variable."
However, Kutner et al. (p 278) give the following caveat: "… one must be cautious about interpreting any regression coefficients, whether standardized or not. The reason is that when the predictor variables are correlated among themselves, … the regression coefficients are affected by the other predictor variables in the model … The magnitudes of the standardized regression coefficients are affected not only by the presence of correlations among the predictor variables but also by the spacings of the observations on each of these variables. Sometimes these spacings may be quite arbitrary. Hence, it is ordinarily not wise to interpret the magnitudes of standardized regression coefficients as reflecting the comparative importance of the predictor variables."
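The sketch below shows one way standardized regression coefficients can be obtained, by standardizing X and Y before fitting least squares. The synthetic data and coefficient values are illustrative and are not taken from the cited texts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two predictors on very different scales plus noise (illustrative).
n = 200
x1 = rng.normal(0, 1, n)            # small-scale predictor
x2 = rng.normal(0, 100, n)          # large-scale predictor
y = 2.0 * x1 + 0.02 * x2 + rng.normal(0, 1, n)

def standardize(v):
    return (v - v.mean()) / v.std(ddof=1)

# Ordinary least squares on the standardized variables; because every variable now
# has mean 0, no intercept is needed, and the fitted slopes are the standardized
# regression coefficients.
Xs = np.column_stack([standardize(x1), standardize(x2)])
ys = standardize(y)
beta_std, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

# The raw slope on x2 (0.02) looks tiny only because x2 is on a much larger scale;
# the standardized coefficients are comparable in magnitude.
print(beta_std)
```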
Standardizing in mathematical statistics
If the random variable under consideration is the sample mean $\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$ of a random sample $X_1, \dots, X_n$ of X,
then the standardized version is

$$Z = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}}.$$
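As a quick numerical check of this standardization, the sketch below simulates many sample means and confirms that Z behaves like a standard normal variable; the population parameters and sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 10.0, 2.0, 25     # illustrative population parameters and sample size

# Draw many samples of size n, take each sample's mean, and standardize it.
means = rng.normal(mu, sigma, size=(100_000, n)).mean(axis=1)
Z = (means - mu) / (sigma / np.sqrt(n))

print(Z.mean(), Z.std())         # close to 0 and 1, as expected for a standard normal
```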
In bone density measurements, the T-score is the standard score of the measurement compared to the population of healthy 30-year-old adults.
- https://e-publishing.cern.ch/index.php/CYRSP/article/download/303/405/2022.
- E. Kreyszig (1979). Advanced Engineering Mathematics (Fourth ed.). Wiley. p. 880, eq. 5. ISBN 0-471-02140-7.
- Spiegel, Murray R.; Stephens, Larry J (2008), Schaum's Outlines Statistics (Fourth ed.), McGraw Hill, ISBN 978-0-07-148584-5
- Mendenhall, William; Sincich, Terry (2007), Statistics for Engineering and the Sciences (Fifth ed.), Pearson / Prentice Hall, ISBN 978-0131877061
- Aho, Ken A. (2014), Foundational and Applied Statistics for Biologists (First ed.), Chapman & Hall / CRC Press, ISBN 978-1439873380
- E. Kreyszig (1979). Advanced Engineering Mathematics (Fourth ed.). Wiley. p. 880, eq. 6. ISBN 0-471-02140-7.
- Diez, David; Barr, Christopher; Çetinkaya-Rundel, Mine (2012), OpenIntro Statistics (Second ed.), openintro.org
- Everitt, Brian; Hothorn, Torsten J (2011), An Introduction to Applied Multivariate Analysis with R, Springer, ISBN 978-1441996497
- Johnson, Richard; Wichern, Dean (2007), Applied Multivariate Statistical Analysis, Pearson / Prentice Hall
- Afifi, Abdelmonem; May, Susanne K.; Clark, Virginia A. (2012), Practical Multivariate Analysis (Fifth ed.), Chapman & Hall/CRC, ISBN 978-1439816806
- Kutner, Michael; Nachtsheim, Christopher; Neter, John (2004), Applied Linear Regression Models (Fourth ed.), McGraw Hill, ISBN 978-0073014661
- John Salvia; James Ysseldyke; Sara Witmer (29 January 2009). Assessment: In Special and Inclusive Education. Cengage Learning. pp. 43–. ISBN 0-547-13437-1.
- Edward S. Neukrug; R. Charles Fawcett (1 January 2014). Essentials of Testing and Assessment: A Practical Guide for Counselors, Social Workers, and Psychologists. Cengage Learning. pp. 133–. ISBN 978-1-305-16183-2.
- Randy W. Kamphaus (16 August 2005). Clinical Assessment of Child and Adolescent Intelligence. Springer. pp. 123–. ISBN 978-0-387-26299-4.
- "Bone Mass Measurement: What the Numbers Mean". NIH Osteoporosis and Related Bone Diseases National Resource Center. National Institute of Health. Retrieved 5 August 2017.
- Carroll, Susan Rovezzi; Carroll, David J. (2002). Statistics Made Simple for School Leaders (illustrated ed.). Rowman & Littlefield. ISBN 978-0-8108-4322-6. Retrieved 7 June 2009.
- Larsen, Richard J.; Marx, Morris L. (2000). An Introduction to Mathematical Statistics and Its Applications (Third ed.). p. 282. ISBN 0-13-922303-7. |
By the end of this section, you will be able to:
- Explain the purpose of an electric field diagram
- Describe the relationship between a vector diagram and a field line diagram
- Explain the rules for creating a field diagram and why these rules make physical sense
- Sketch the field of an arbitrary source charge
Now that we have some experience calculating electric fields, let’s try to gain some insight into the geometry of electric fields. As mentioned earlier, our model is that the charge on an object (the source charge) alters space in the region around it in such a way that when another charged object (the test charge) is placed in that region of space, that test charge experiences an electric force. The concept of electric field lines, and of electric field line diagrams, enables us to visualize the way in which the space is altered, allowing us to visualize the field. The purpose of this section is to enable you to create sketches of this geometry, so we will list the specific steps and rules involved in creating an accurate and useful sketch of an electric field.
It is important to remember that electric fields are three-dimensional. Although in this book we include some pseudo-three-dimensional images, several of the diagrams that you’ll see (both here, and in subsequent chapters) will be two-dimensional projections, or cross-sections. Always keep in mind that in fact, you’re looking at a three-dimensional phenomenon.
Our starting point is the physical fact that the electric field of the source charge causes a test charge in that field to experience a force. By definition, electric field vectors point in the same direction as the electric force that a (hypothetical) positive test charge would experience if placed in the field (Figure 5.27).
We’ve plotted many field vectors in the figure, which are distributed uniformly around the source charge. Since the electric field is a vector, the arrows that we draw correspond at every point in space to both the magnitude and the direction of the field at that point. As always, the length of the arrow that we draw corresponds to the magnitude of the field vector at that point. For a point source charge, the length decreases by the square of the distance from the source charge. In addition, the direction of the field vector is radially away from the source charge, because the direction of the electric field is defined by the direction of the force that a positive test charge would experience in that field. (Again, keep in mind that the actual field is three-dimensional; there are also field lines pointing out of and into the page.)
This diagram is correct, but it becomes less useful as the source charge distribution becomes more complicated. For example, consider the vector field diagram of a dipole (Figure 5.28).
There is a more useful way to present the same information. Rather than drawing a large number of increasingly smaller vector arrows, we instead connect all of them together, forming continuous lines and curves, as shown in Figure 5.29.
Although it may not be obvious at first glance, these field diagrams convey the same information about the electric field as do the vector diagrams. First, the direction of the field at every point is simply the direction of the field vector at that same point. In other words, at any point in space, the field vector at each point is tangent to the field line at that same point. The arrowhead placed on a field line indicates its direction.
As for the magnitude of the field, that is indicated by the field line density—that is, the number of field lines per unit area passing through a small cross-sectional area perpendicular to the electric field. This field line density is drawn to be proportional to the magnitude of the field at that cross-section. As a result, if the field lines are close together (that is, the field line density is greater), this indicates that the magnitude of the field is large at that point. If the field lines are far apart at the cross-section, this indicates the magnitude of the field is small. Figure 5.30 shows the idea.
In Figure 5.30, the same number of field lines passes through both surfaces (S and S′), but the surface S is larger than surface S′. Therefore, the density of field lines (number of lines per unit area) is larger at the location of S′, indicating that the electric field is stronger at the location of S′ than at S. The rules for creating an electric field diagram are as follows.
Drawing Electric Field Lines
- Electric field lines either originate on positive charges or come in from infinity, and either terminate on negative charges or extend out to infinity.
- The number of field lines originating or terminating at a charge is proportional to the magnitude of that charge. A charge of 2q will have twice as many lines as a charge of q.
- At every point in space, the field vector at that point is tangent to the field line at that same point.
- The field line density at any point in space is proportional to (and therefore is representative of) the magnitude of the field at that point in space.
- Field lines can never cross. Since a field line represents the direction of the field at a given point, if two field lines crossed at some point, that would imply that the electric field was pointing in two different directions at a single point. This in turn would suggest that the (net) force on a test charge placed at that point would point in two different directions. Since this is obviously impossible, it follows that field lines must never cross.
Always keep in mind that field lines serve only as a convenient way to visualize the electric field; they are not physical entities. Although the direction and relative intensity of the electric field can be deduced from a set of field lines, the lines can also be misleading. For example, the field lines drawn to represent the electric field in a region must, by necessity, be discrete. However, the actual electric field in that region exists at every point in space.
Field lines for three groups of discrete charges are shown in Figure 5.31. Since the charges in parts (a) and (b) have the same magnitude, the same number of field lines are shown starting from or terminating on each charge. In (c), however, we draw three times as many field lines leaving the larger positive charge as entering the smaller negative charge. The field lines that do not terminate at the negative charge emanate outward from the charge configuration, to infinity.
The ability to construct an accurate electric field diagram is an important, useful skill; it makes it much easier to estimate, predict, and therefore calculate the electric field of a source charge. The best way to develop this skill is with software that allows you to place source charges and then will draw the net field upon request. We strongly urge you to search the Internet for a program. Once you’ve found one you like, run several simulations to get the essential ideas of field diagram construction. Then practice drawing field diagrams, and checking your predictions with the computer-drawn diagrams.
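If you want to experiment before reaching for a dedicated program, the short sketch below (matplotlib and NumPy; the charge positions and magnitudes are illustrative, and the Coulomb constant is set to 1) draws streamlines of a dipole field. Streamlines are everywhere tangent to the field vectors, so they trace exactly the kind of field-line diagram described above, for a two-dimensional cross-section of the three-dimensional field.

```python
import numpy as np
import matplotlib.pyplot as plt

# Two point charges forming a dipole: (charge, (x, y)) pairs, illustrative values.
charges = [(+1.0, (-1.0, 0.0)), (-1.0, (+1.0, 0.0))]

x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
Ex, Ey = np.zeros_like(x), np.zeros_like(y)
for q, (x0, y0) in charges:
    dx, dy = x - x0, y - y0
    r3 = (dx**2 + dy**2) ** 1.5
    Ex += q * dx / r3          # field points radially away from positive charges
    Ey += q * dy / r3          # and radially toward negative charges

fig, ax = plt.subplots(figsize=(6, 6))
# Streamlines are tangent to (Ex, Ey) at every point, just like field lines.
ax.streamplot(x, y, Ex, Ey, density=1.5, color="gray")
for q, (x0, y0) in charges:
    ax.plot(x0, y0, "ro" if q > 0 else "bo")   # red = positive, blue = negative
ax.set_aspect("equal")
plt.show()
```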
One example of a field-line drawing program is from the PhET “Charges and Fields” simulation. |
That gold on your ring finger is stellar — and not just in a complimentary way.
In a finding that may overthrow our understanding of where Earth’s heavy elements such as gold and platinum come from, new research by a University of Guelph physicist suggests that most of them were spewed from a largely overlooked kind of star explosion far away in space and time from our planet.
Some 80 per cent of the heavy elements in the universe likely formed in collapsars, a rare but heavy element-rich form of supernova explosion from the gravitational collapse of old, massive stars typically 30 times as weighty as our sun, said physics professor Daniel Siegel.
That finding overturns the widely held belief that these elements mostly come from collisions between neutron stars or between a neutron star and a black hole, said Siegel.
His paper co-authored with Columbia University colleagues appears today in the journal Nature.
Using supercomputers, the trio simulated the dynamics of collapsars, or old stars whose gravity causes them to implode and form black holes.
Under their model, massive, rapidly spinning collapsars eject heavy elements whose amounts and distribution are “astonishingly similar to what we observe in our solar system,” said Siegel. He joined U of G this month and is also appointed to the Perimeter Institute for Theoretical Physics, in Waterloo, Ont.
Most of the elements found in nature were created in nuclear reactions in stars and ultimately expelled in huge stellar explosions.
Heavy elements found on Earth and elsewhere in the universe from long-ago explosions range from gold and platinum, to uranium and plutonium used in nuclear reactors, to more exotic chemical elements such as neodymium found in consumer items such as electronics.
Until now, scientists thought that these elements were cooked up mostly in stellar smashups involving neutron stars or black holes, as in a collision of two neutron stars observed by Earth-bound detectors that made headlines in 2017.
Ironically, said Siegel, his team began working to understand the physics of that merger before their simulations pointed toward collapsars as a heavy element birth chamber. “Our research on neutron star mergers has led us to believe that the birth of black holes in a very different type of stellar explosion might produce even more gold than neutron star mergers.”
What collapsars lack in frequency, they make up for in generation of heavy elements, said Siegel. Collapsars also produce intense flashes of gamma rays.
“Eighty per cent of these heavy elements we see should come from collapsars. Collapsars are fairly rare in occurrences of supernovae, even more rare than neutron star mergers — but the amount of material that they eject into space is much higher than that from neutron star mergers.”
The team now hopes to see its theoretical model validated by observations. Siegel said infrared instruments such as those on the James Webb Space Telescope, set for launch in 2021, should be able to detect telltale radiation pointing to heavy elements from a collapsar in a far-distant galaxy.
“That would be a clear signature,” he said, adding that astronomers might also detect evidence of collapsars by looking at the amounts and distribution of heavy elements in other stars across our Milky Way galaxy.
Siegel said this research may yield clues about how our galaxy began.
“Trying to nail down where heavy elements come from may help us understand how the galaxy was chemically assembled and how the galaxy formed. This may actually help solve some big questions in cosmology as heavy elements are a nice tracer.”
This year marks the 150th anniversary of Dmitri Mendeleev’s creation of the periodic table of the chemical elements. Since then, scientists have added many more elements to the periodic table, a staple of science textbooks and classrooms worldwide.
Referring to the Russian chemist, Siegel said, “We know many more elements that he didn’t. What’s fascinating and surprising is that, after 150 years of studying the fundamental building blocks of nature, we still don’t quite understand how the universe creates a big fraction of the elements in the periodic table.”
- Daniel M. Siegel, Jennifer Barnes, Brian D. Metzger. Collapsars as a major source of r-process elements. Nature, 2019; 569 (7755): 241 DOI: 10.1038/s41586-019-1136-0 |
National Register of Historic Places
The National Register of Historic Places is the United States federal government's official list of districts, buildings and objects deemed worthy of preservation for their historical significance. A property listed in the National Register, or located within a National Register Historic District, may qualify for tax incentives derived from the total value of expenses incurred preserving the property; the passage of the National Historic Preservation Act in 1966 established the National Register and the process for adding properties to it. Of the more than one million properties on the National Register, 80,000 are listed individually; the remainder are contributing resources within historic districts. For most of its history the National Register has been administered by the National Park Service, an agency within the United States Department of the Interior, its goals are to help property owners and interest groups, such as the National Trust for Historic Preservation, coordinate and protect historic sites in the United States.
While National Register listings are largely symbolic, their recognition of significance provides some financial incentive to owners of listed properties; protection of the property itself is not guaranteed. During the nomination process, the property is evaluated in terms of the four criteria for inclusion on the National Register of Historic Places; the application of those criteria has been the subject of criticism by academics of history and preservation, as well as by the public and politicians. Historic sites outside the country proper, but associated with the United States, are also listed. Properties can be nominated in a variety of forms, including individual properties, historic districts and multiple property submissions. The Register categorizes general listings into one of five types of properties: district, site, building, structure, or object. National Register Historic Districts are defined geographical areas consisting of contributing and non-contributing properties; some properties are added automatically to the National Register when they come under the administration of the National Park Service.
These include National Historic Landmarks, National Historic Sites, National Historical Parks, National Military Parks, National Memorials, and some National Monuments. On October 15, 1966, the National Historic Preservation Act created the National Register of Historic Places and the corresponding State Historic Preservation Offices (SHPOs); the National Register initially consisted of the National Historic Landmarks designated before the Register's creation, as well as any other historic sites in the National Park system. Approval of the act, which was amended in 1980 and 1992, represented the first time the United States had a broad-based historic preservation policy. The 1966 act required federal agencies to work in conjunction with the SHPOs and an independent federal agency, the Advisory Council on Historic Preservation, to confront adverse effects of federal activities on historic preservation. To administer the newly created National Register of Historic Places, the National Park Service of the U. S. Department of the Interior, with director George B.
Hartzog Jr. established an administrative division named the Office of Archeology and Historic Preservation. Hartzog charged OAHP with creating the National Register program mandated by the 1966 law. Ernest Connally was the Office's first director. Within OAHP new divisions were created to deal with the National Register; the division administered several existing programs, including the Historic Sites Survey and the Historic American Buildings Survey, as well as the new National Register and Historic Preservation Fund. The first official Keeper of the Register was an architectural historian. During the Register's earliest years in the late 1960s and early 1970s, organization was lax and SHPOs were small and underfunded. However, funds were still being supplied for the Historic Preservation Fund to provide matching grants-in-aid to listed property owners, first for house museums and institutional buildings, but for commercial structures as well. A few years in 1979, the NPS history programs affiliated with both the U.
S. National Parks system and the National Register were categorized formally into two "Assistant Directorates." Established were the Assistant Directorate for Archeology and Historic Preservation and the Assistant Directorate for Park Historic Preservation. From 1978 until 1981, the main agency for the National Register was the Heritage Conservation and Recreation Service of the United States Department of the Interior. In February 1983, the two assistant directorates were merged to promote efficiency and recognize the interdependency of their programs. Jerry L. Rogers was selected to direct this newly merged associate directorate, he was described as a skilled administrator, sensitive to the need for the NPS to work with SHPOs, local governments. Although not described in detail in the 1966 act, SHPOs became integral to the process of listing properties on the National Register; the 1980 amendments of the 1966 law further defined the responsibilities of SHPOs concerning the National Register.
Several 1992 amendments of the NHPA added a category to the National Register, known as Traditional Cultural Properties: those properties associated with Native American or Hawaiian groups
1890 United States Census
The Eleventh United States Census was taken beginning June 2, 1890. It determined the resident population of the United States to be 62,979,766 – an increase of 25.5 percent over the 50,189,209 persons enumerated during the 1880 census. The data were tabulated by machine for the first time, and they showed that the distribution of the population meant the disappearance of the American frontier. Most of the 1890 census materials were destroyed in a 1921 fire, and fragments of the US census population schedule exist only for the states of Alabama, Illinois, New Jersey, New York, North Carolina, South Dakota and Texas, and the District of Columbia. This was the first census in which a majority of states recorded populations of over one million, as well as the first in which multiple cities – New York as of 1880, and Chicago and Philadelphia as of 1890 – recorded populations of over one million. The census saw Chicago rank as the nation's second-most populous city, a position it would hold until 1990, when Los Angeles supplanted it.
The 1890 census collected the following information: The 1890 census was the first to be compiled using methods invented by Herman Hollerith and was overseen by Superintendents Robert P. Porter and Carroll D. Wright. Data were entered on a machine-readable medium, punched cards, and tabulated by machine; the net effect of the many changes from the 1880 census – the larger population, the number of data items to be collected, the Census Bureau headcount, the volume of scheduled publications, and the use of Hollerith's electromechanical tabulators – was to reduce the time required to process the census from eight years for the 1880 census to six years for the 1890 census. The total population of 62,947,714, the family, or rough, count, was announced after only six weeks of processing; the public reaction to this tabulation was disbelief, as it was believed that the "right answer" was at least 75,000,000. The United States census of 1890 showed a total of 248,253 Native Americans living in the United States, down from 400,764 Native Americans identified in the census of 1850.
The 1890 census announced that the frontier region of the United States no longer existed, that the Census Bureau would no longer track the westward migration of the U. S. population. Up to and including the 1880 census, the country had a frontier of settlement. By 1890, isolated bodies of settlement had broken into the unsettled area to the extent that there was hardly a frontier line; this prompted Frederick Jackson Turner to develop his Frontier Thesis. The original data for the 1890 Census is no longer available. All the population schedules were damaged in a fire in the basement of the Commerce Building in Washington, D. C. in 1921. Some 25 % of the materials were presumed another 50 % damaged by smoke and water; the damage to the records led to an outcry for a permanent National Archives. In December 1932, following standard federal record-keeping procedures, the Chief Clerk of the Bureau of the Census sent the Librarian of Congress a list of papers to be destroyed, including the original 1890 census schedules.
The Librarian was asked by the Bureau to identify any records which should be retained for historical purposes, but the Librarian did not accept the census records. Congress authorized destruction of that list of records on February 21, 1933, the surviving original 1890 census records were destroyed by government order by 1934 or 1935; the other censuses for which some information has been lost are the 1810 enumerations. Few sets of microdata from the 1890 census survive, but aggregate data for small areas, together with compatible cartographic boundary files, can be downloaded from the National Historical Geographic Information System. Mayo-Smith, Richmond, "The Eleventh Census of the United States". In: The Economic Journal, Vol. 1, p. 43 - 58 1891 U. S Census Report Contains 1890 Census results Historical US Census data from the U. S. Census Bureau website Hollerith 1890 Census Tabulator by Columbia University "The Fate of the 1890 Population Census" from the National Archives website
United States Census Bureau
The United States Census Bureau is a principal agency of the U. S. Federal Statistical System, responsible for producing data about the American people and economy; the Census Bureau is part of the U. S. Department of Commerce and its director is appointed by the President of the United States; the Census Bureau's primary mission is conducting the U. S. Census every ten years, which allocates the seats of the U. S. House of Representatives to the states based on their population; the Bureau's various censuses and surveys help allocate over $400 billion in federal funds every year and it helps states, local communities, businesses make informed decisions. The information provided by the census informs decisions on where to build and maintain schools, transportation infrastructure, police and fire departments. In addition to the decennial census, the Census Bureau continually conducts dozens of other censuses and surveys, including the American Community Survey, the U. S. Economic Census, the Current Population Survey.
Furthermore and foreign trade indicators released by the federal government contain data produced by the Census Bureau. Article One of the United States Constitution directs the population be enumerated at least once every ten years and the resulting counts used to set the number of members from each state in the House of Representatives and, by extension, in the Electoral College; the Census Bureau now conducts a full population count every 10 years in years ending with a zero and uses the term "decennial" to describe the operation. Between censuses, the Census Bureau makes population projections. In addition, Census data directly affects how more than $400 billion per year in federal and state funding is allocated to communities for neighborhood improvements, public health, education and more; the Census Bureau is mandated with fulfilling these obligations: the collecting of statistics about the nation, its people, economy. The Census Bureau's legal authority is codified in Title 13 of the United States Code.
The Census Bureau conducts surveys on behalf of various federal government and local government agencies on topics such as employment, health, consumer expenditures, housing. Within the bureau, these are known as "demographic surveys" and are conducted perpetually between and during decennial population counts; the Census Bureau conducts economic surveys of manufacturing, retail and other establishments and of domestic governments. Between 1790 and 1840, the census was taken by marshals of the judicial districts; the Census Act of 1840 established a central office. Several acts followed that revised and authorized new censuses at the 10-year intervals. In 1902, the temporary Census Office was moved under the Department of Interior, in 1903 it was renamed the Census Bureau under the new Department of Commerce and Labor; the department was intended to consolidate overlapping statistical agencies, but Census Bureau officials were hindered by their subordinate role in the department. An act in 1920 changed the date and authorized manufacturing censuses every two years and agriculture censuses every 10 years.
In 1929, a bill was passed mandating the House of Representatives be reapportioned based on the results of the 1930 Census. In 1954, various acts were codified into Title 13 of the US Code. By law, the Census Bureau must count everyone and submit state population totals to the U. S. President by December 31 of any year ending in a zero. States within the Union receive the results in the spring of the following year; the United States Census Bureau defines four statistical regions, with nine divisions. The Census Bureau regions are "widely used...for data collection and analysis". The Census Bureau definition is pervasive. Regional divisions used by the United States Census Bureau: Region 1: Northeast Division 1: New England Division 2: Mid-Atlantic Region 2: Midwest Division 3: East North Central Division 4: West North Central Region 3: South Division 5: South Atlantic Division 6: East South Central Division 7: West South Central Region 4: West Division 8: Mountain Division 9: Pacific Many federal, state and tribal governments use census data to: Decide the location of new housing and public facilities, Examine the demographic characteristics of communities and the US, Plan transportation systems and roadways, Determine quotas and creation of police and fire precincts, Create localized areas for elections, utilities, etc.
Gathers population information every 10 years The United States Census Bureau is committed to confidentiality, guarantees non-disclosure of any addresses or personal information related to individuals or establishments. Title 13 of the U. S. Code establishes penalties for the disclosure of this information. All Census employees must sign an affidavit of non-disclosure prior to employment; the Bureau cannot share responses, addresses or personal information with anyone including United States or foreign government
The United States of America known as the United States or America, is a country composed of 50 states, a federal district, five major self-governing territories, various possessions. At 3.8 million square miles, the United States is the world's third or fourth largest country by total area and is smaller than the entire continent of Europe's 3.9 million square miles. With a population of over 327 million people, the U. S. is the third most populous country. The capital is Washington, D. C. and the largest city by population is New York City. Forty-eight states and the capital's federal district are contiguous in North America between Canada and Mexico; the State of Alaska is in the northwest corner of North America, bordered by Canada to the east and across the Bering Strait from Russia to the west. The State of Hawaii is an archipelago in the mid-Pacific Ocean; the U. S. territories are scattered about the Pacific Ocean and the Caribbean Sea, stretching across nine official time zones. The diverse geography and wildlife of the United States make it one of the world's 17 megadiverse countries.
Paleo-Indians migrated from Siberia to the North American mainland at least 12,000 years ago. European colonization began in the 16th century; the United States emerged from the thirteen British colonies established along the East Coast. Numerous disputes between Great Britain and the colonies following the French and Indian War led to the American Revolution, which began in 1775, the subsequent Declaration of Independence in 1776; the war ended in 1783 with the United States becoming the first country to gain independence from a European power. The current constitution was adopted in 1788, with the first ten amendments, collectively named the Bill of Rights, being ratified in 1791 to guarantee many fundamental civil liberties; the United States embarked on a vigorous expansion across North America throughout the 19th century, acquiring new territories, displacing Native American tribes, admitting new states until it spanned the continent by 1848. During the second half of the 19th century, the Civil War led to the abolition of slavery.
By the end of the century, the United States had extended into the Pacific Ocean, its economy, driven in large part by the Industrial Revolution, began to soar. The Spanish–American War and World War I confirmed the country's status as a global military power; the United States emerged from World War II as a global superpower, the first country to develop nuclear weapons, the only country to use them in warfare, a permanent member of the United Nations Security Council. Sweeping civil rights legislation, notably the Civil Rights Act of 1964, the Voting Rights Act of 1965 and the Fair Housing Act of 1968, outlawed discrimination based on race or color. During the Cold War, the United States and the Soviet Union competed in the Space Race, culminating with the 1969 U. S. Moon landing; the end of the Cold War and the collapse of the Soviet Union in 1991 left the United States as the world's sole superpower. The United States is the world's oldest surviving federation, it is a representative democracy.
The United States is a founding member of the United Nations, World Bank, International Monetary Fund, Organization of American States, other international organizations. The United States is a developed country, with the world's largest economy by nominal GDP and second-largest economy by PPP, accounting for a quarter of global GDP; the U. S. economy is post-industrial, characterized by the dominance of services and knowledge-based activities, although the manufacturing sector remains the second-largest in the world. The United States is the world's largest importer and the second largest exporter of goods, by value. Although its population is only 4.3% of the world total, the U. S. holds 31% of the total wealth in the world, the largest share of global wealth concentrated in a single country. Despite wide income and wealth disparities, the United States continues to rank high in measures of socioeconomic performance, including average wage, human development, per capita GDP, worker productivity.
The United States is the foremost military power in the world, making up a third of global military spending, is a leading political and scientific force internationally. In 1507, the German cartographer Martin Waldseemüller produced a world map on which he named the lands of the Western Hemisphere America in honor of the Italian explorer and cartographer Amerigo Vespucci; the first documentary evidence of the phrase "United States of America" is from a letter dated January 2, 1776, written by Stephen Moylan, Esq. to George Washington's aide-de-camp and Muster-Master General of the Continental Army, Lt. Col. Joseph Reed. Moylan expressed his wish to go "with full and ample powers from the United States of America to Spain" to seek assistance in the revolutionary war effort; the first known publication of the phrase "United States of America" was in an anonymous essay in The Virginia Gazette newspaper in Williamsburg, Virginia, on April 6, 1776. The second draft of the Articles of Confederation, prepared by John Dickinson and completed by June 17, 1776, at the latest, declared "The name of this Confederation shall be the'United States of America'".
The final version of the Articles sent to the states for ratification in late 1777 contains the sentence "The Stile of this Confederacy shall be'The United States of America'". In June 1776, Thomas Jefferson wrote the phrase "UNITED STATES OF AMERICA" in all capitalized letters in the headline of his "original Rough draught" of the Declaration of Independence; this draft of the document did not surface unti
The Ringling brothers were seven American siblings of German and French descent who transformed their small touring company of performers into one of America's largest circuses in the late 19th and early 20th centuries. Four brothers were born in McGregor, Iowa: Alf T. Charles and Henry, the family lived in McGregor for twelve years, from 1860 until 1872; the Ringling family moved to Prairie du Chien and settled in Baraboo, Wisconsin, in 1875. They were the children of harness maker Heinrich Friedrich August Ringling of Hanover and Marie Salome Juliar of Ostheim, in Alsace, they merged their Ringling Brothers Circus with America's other leading circus troupes creating the Ringling Bros. and Barnum & Bailey Circus. Albert Carl "Al" Ringling. Albert died of Bright's disease at the age of 63 in Wisconsin. Augustus "Gus" Ringling. A founder of the circus, Augustus was self-educated, he died at age 55 from complications of various diseases at a sanatorium in New Orleans, where he had arrived two weeks earlier hoping the warmer climate would help his condition.
Otto Ringling. Otto died April 1911, at the home of his brother John, who lived on Fifth Avenue in Manhattan, he was in New York to see a show at Madison Square Garden. Alfred Theodore "Alf" Ringling. Alfred was a juggler, he had a son, Richard Ringling, a daughter, Marjorie Joan Ringling, married to future United States Senator Jacob K. Javits from 1933 to 1936, his granddaughter, Mabel Ringling, married an elephant trainer. In 1916, Alfred took up residence in Petersburg, New Jersey, now known as Oak Ridge, where he was responsible for the creation of Lake Swannanoa, the body of water that would become the center point of the Lake Swannanoa lake community; the property was used as the winter quarters for his son Richard's circus, the R. T. Richards Circus. Alfred died in his 28-room New Jersey manor, three years after its completion, on October 21, 1919. Charles Edward Ringling. John Nicholas Ringling. John was a professional clown. Henry William George Ringling. Henry was the youngest of the brothers, died October 10, 1918, of a heart disorder and other internal organ disorders.
Ida Loraina Wilhelmina Ringling. Ida married Harry Whitestone North in 1902, their sons were Henry Ringling North. Apps, Jerry. "Ringlingville USA: The Stupendous Story of Seven Siblings and Their Stunning Circus Success". Wisconsin Magazine of History, vol. 88, no. 4: 12-17. Schlicher, J. J. "On the Trail of the Ringlings". Wisconsin Magazine of History, vol. 26, no. 1: 8-22. Ringling Brothers and Barnum & Bailey Circus – Official website Ringling Brothers Poster from the Wisconsin Historical Society
Race and ethnicity in the United States Census
Race and ethnicity in the United States Census, defined by the federal Office of Management and Budget and the United States Census Bureau, are self-identification data items in which residents choose the race or races with which they most identify, indicate whether or not they are of Hispanic or Latino origin. The racial categories represent a social-political construct for the race or races that respondents consider themselves to be and, "generally reflect a social definition of race recognized in this country." OMB defines the concept of race as outlined for the US Census as not "scientific or anthropological" and takes into account "social and cultural characteristics as well as ancestry", using "appropriate scientific methodologies" that are not "primarily biological or genetic in reference." The race categories include both national-origin groups. Race and ethnicity are considered separate and distinct identities, with Hispanic or Latino origin asked as a separate question. Thus, in addition to their race or races, all respondents are categorized by membership in one of two ethnic categories, which are "Hispanic or Latino" and "Not Hispanic or Latino".
However, the practice of separating "race" and "ethnicity" as different categories has been criticized both by the American Anthropological Association and members of US Commission on Civil Rights. In 1997, OMB issued a Federal Register notice regarding revisions to the standards for the classification of federal data on race and ethnicity. OMB developed race and ethnic standards in order to provide "consistent data on race and ethnicity throughout the Federal Government; the development of the data standards stem in large measure from new responsibilities to enforce civil rights laws." Among the changes, OMB issued the instruction to "mark one or more races" after noting evidence of increasing numbers of interracial children and wanting to capture the diversity in a measurable way and having received requests by people who wanted to be able to acknowledge their or their children's full ancestry rather than identifying with only one group. Prior to this decision, the Census and other government data collections asked people to report only one race.
The OMB states, "many federal programs are put into effect based on the race data obtained from the decennial census. Race data are critical for the basic research behind many policy decisions. States require these data to meet legislative redistricting requirements; the data are needed to monitor compliance with the Voting Rights Act by local jurisdictions". "Data on ethnic groups are important for putting into effect a number of federal statutes. Data on Ethnic Groups are needed by local governments to run programs and meet legislative requirements." The 1790 United States Census was the first census in the history of the United States. The population of the United States was recorded as 3,929,214 as of Census Day, August 2, 1790, as mandated by Article I, Section 2 of the United States Constitution and applicable laws."The law required that every household be visited, that completed census schedules be posted in'two of the most public places within, there to remain for the inspection of all concerned...' and that'the aggregate amount of each description of persons' for every district be transmitted to the president."
This law, along with U.S. marshals, was responsible for governing the census. One third of the original census data has been lost or destroyed since its documentation; the data were lost in the 1790–1830 period and included data from: Connecticut, Maryland, New Hampshire, New York, North Carolina, Rhode Island, South Carolina, Delaware, New Jersey, Virginia. Census data included the name of the head of the family and categorized inhabitants as follows: free white males at least 16 years of age, free white males under 16 years of age, free white females, all other free persons, and slaves. Thomas Jefferson, the Secretary of State, directed marshals to collect data from all thirteen states and from the Southwest Territory; the census was not conducted in Vermont until 1791, after that state's admission to the Union as the 14th state on March 4 of that year. There was some doubt surrounding the numbers: President George Washington and Thomas Jefferson maintained the population was undercounted. The potential reasons Washington and Jefferson may have thought this include refusal to participate, poor public transportation and roads, a spread-out population, and the restraints of the technology of the day.
No microdata from the 1790 population census is available, but aggregate data for small areas and their compatible cartographic boundary files, can be downloaded from the National Historical Geographic Information System. In 1800 and 1810, the age question regarding free white males was more detailed; the 1820
An indentured servant or indentured laborer is an employee within a system of unfree labor, bound by a signed or forced contract to work for a particular employer for a fixed time. The contract lets the employer sell the labor of an indenturee to a third party. Indenturees enter into an indenture for a specific payment or other benefit, or to meet a legal obligation, such as debt bondage. On completion of the contract, indentured servants were given their freedom, plots of land. In many countries, systems of indentured labor have now been outlawed, are banned by the Universal Declaration of Human Rights as a form of slavery; until the late 18th century, indentured servitude was common in British North America. It was a way for poor Europeans to immigrate to the American colonies: they signed an indenture in return for a costly passage. After their indenture expired, the immigrants were free to work for another employer, it has been argued by at least one economist that indentured servitude occurred as "an institutional response to a capital market imperfection".
In some cases, the indenture was made with a ship's master, who sold on the indenture to an employer in the colonies. Most indentured servants worked as farm laborers or domestic servants, although some were apprenticed to craftsmen; the terms of an indenture were not always enforced by American courts, although runaways were sought out and returned to their employer. Between one-half and two-thirds of white immigrants to the American colonies between the 1630s and American Revolution had come under indentures. However, while half the European immigrants to the Thirteen Colonies were indentured servants, at any one time they were outnumbered by workers who had never been indentured, or whose indenture had expired, thus free wage labor was the more prevalent for Europeans in the colonies. Indentured people were numerically important in the region from Virginia north to New Jersey. Other colonies saw far fewer of them; the total number of European immigrants to all 13 colonies before 1775 was about 500,000.
Of the 450,000 or so European arrivals who came voluntarily, Tomlins estimates that 48% were indentured. About 75% of these were under the age of 25; the age of adulthood for men was 24 years. Regarding the children who came, Gary Nash reports that "many of the servants were nephews, nieces and children of friends of emigrating Englishmen, who paid their passage in return for their labor once in America." Several instances of kidnapping for transportation to the Americas are recorded, such as that of Peter Williamson. As historian Richard Hofstadter pointed out, "Although efforts were made to regulate or check their activities, they diminished in importance in the eighteenth century, it remains true that a certain small part of the white colonial population of America was brought by force, a much larger portion came in response to deceit and misrepresentation on the part of the spirits." One "spirit" named William Thiene was known to have spirited away 840 people from Britain to the colonies in a single year.
Historian Lerone Bennett, Jr. notes that "Masters given to flogging did not care whether their victims were black or white." Indentured servitude was used by various English and British governments as a punishment for defeated foes in rebellions and civil wars. Oliver Cromwell sent thousands of prisoners captured in the 1648 Battle of Preston and the 1651 Battle of Worcester into enforced indentured service. King James II acted similarly after the Monmouth Rebellion in 1685, and the use of such measures continued into the 18th century. Indentured servants could not marry without the permission of their master, were sometimes subject to physical punishment, and did not receive legal favor from the courts. To ensure that the indenture contract was satisfied within the allotted amount of time, the term of indenture was lengthened for female servants if they became pregnant. Upon finishing their term, servants were set free. The American Revolution limited immigration to the United States, but economic historians dispute its long-term impact.
Sharon Salinger argues that the economic crisis that followed the war made long-term labor contracts unattractive. Her analysis of Philadelphia's population shows how the percentage of bound citizens fell from 17% to 6.4% over the course of the war. William Miller posits a more moderate theory, stating that "the Revolution wrought disturbances upon white servitude, but these were temporary rather than lasting". David Galenson supports this theory by proposing that the numbers of British indentured servants never recovered and that Europeans of other nationalities replaced them. The American and British governments passed several laws that helped foster the decline of indentures. The UK Parliament's Passenger Vessels Act 1803 regulated travel conditions aboard ships to make transportation more expensive, so as to hinder landlords' tenants from seeking a better life abroad. An American law passed in 1833 abolished the imprisonment of debtors, which made prosecuting runaway servants more difficult and increased the risk of purchasing indenture contracts.
The 13th Amendment, passed in the wake of the American Civil War, made indentured servitude illegal in the United States. The details regarding indentured labor varied across import and export regions, and most overseas contracts were made before the voyage, with the understanding that prospective migrants were competent to make such contracts on their own account.
Nucleic acids are the biopolymers, or large biomolecules, essential to all known forms of life. The term nucleic acid is the overall name for DNA and RNA. They are composed of nucleotides, which are monomers made of three components: a 5-carbon sugar, a phosphate group, and a nitrogenous base. If the sugar is ribose, the polymer is RNA (ribonucleic acid); if the sugar is deoxyribose, a derivative of ribose, the polymer is DNA (deoxyribonucleic acid).
Nucleic acids are among the most important of all biomolecules. They are found in abundance in all living things, where they encode and store the information of every living cell of every organism on Earth. In turn, they transmit and express that information inside and outside the cell nucleus—to the interior operations of the cell and ultimately to the next generation of each living organism. The encoded information is contained and conveyed via the nucleic acid sequence, which provides the 'ladder-step' ordering of nucleotides within the molecules of RNA and DNA.
Strings of nucleotides are bonded to form helical backbones—typically one strand for RNA and two for DNA—and are assembled into chains of base pairs selected from the five primary, or canonical, nucleobases: adenine, cytosine, guanine, thymine, and uracil. Thymine occurs only in DNA and uracil only in RNA. The specific sequence of these nucleobase pairs in DNA stores and transmits coded instructions as genes, which are expressed using amino acids through the process known as protein synthesis. In RNA, the base sequence directs the manufacture of new proteins, which in turn determine the structures, components, and most of the chemical processes of all life forms.
- Nuclein was discovered by Friedrich Miescher in 1869.
- In the early 1880s Albrecht Kossel further purified the substance and discovered its highly acidic properties. He later also identified the nucleobases.
- In 1889 Richard Altmann coined the term nucleic acid.
- In 1938 Astbury and Bell published the first X-ray diffraction pattern of DNA.
- In 1953 Watson and Crick determined the structure of DNA.
Experimental studies of nucleic acids constitute a major part of modern biological and medical research, and form a foundation for genome and forensic science, and the biotechnology and pharmaceutical industries.
Occurrence and nomenclature
The term nucleic acid is the overall name for DNA and RNA, members of a family of biopolymers, and is synonymous with polynucleotide. Nucleic acids were named for their initial discovery within the nucleus, and for the presence of phosphate groups (related to phosphoric acid). Although first discovered within the nucleus of eukaryotic cells, nucleic acids are now known to be found in all life forms including within bacteria, archaea, mitochondria, chloroplasts, and viruses (There is debate as to whether viruses are living or non-living). All living cells contain both DNA and RNA (except some cells such as mature red blood cells), while viruses contain either DNA or RNA, but usually not both. The basic component of biological nucleic acids is the nucleotide, each of which contains a pentose sugar (ribose or deoxyribose), a phosphate group, and a nucleobase. Nucleic acids are also generated within the laboratory, through the use of enzymes (DNA and RNA polymerases) and by solid-phase chemical synthesis. The chemical methods also enable the generation of altered nucleic acids that are not found in nature, for example peptide nucleic acids.
Molecular composition and size
Nucleic acids are generally very large molecules. Indeed, DNA molecules are probably the largest individual molecules known. Well-studied biological nucleic acid molecules range in size from 21 nucleotides (small interfering RNA) to large chromosomes (human chromosome 1 is a single molecule that contains 247 million base pairs).
In most cases, naturally occurring DNA molecules are double-stranded and RNA molecules are single-stranded. There are numerous exceptions, however—some viruses have genomes made of double-stranded RNA and other viruses have single-stranded DNA genomes, and, in some circumstances, nucleic acid structures with three or four strands can form.
Nucleic acids are linear polymers (chains) of nucleotides. Each nucleotide consists of three components: a purine or pyrimidine nucleobase (sometimes termed nitrogenous base or simply base), a pentose sugar, and a phosphate group. The substructure consisting of a nucleobase plus sugar is termed a nucleoside. Nucleic acid types differ in the structure of the sugar in their nucleotides–DNA contains 2'-deoxyribose while RNA contains ribose (where the only difference is the presence of a hydroxyl group). Also, the nucleobases found in the two nucleic acid types are different: adenine, cytosine, and guanine are found in both RNA and DNA, while thymine occurs in DNA and uracil occurs in RNA.
The sugars and phosphates in nucleic acids are connected to each other in an alternating chain (sugar-phosphate backbone) through phosphodiester linkages. In conventional nomenclature, the carbons to which the phosphate groups attach are the 3'-end and the 5'-end carbons of the sugar. This gives nucleic acids directionality, and the ends of nucleic acid molecules are referred to as 5'-end and 3'-end. The nucleobases are joined to the sugars via an N-glycosidic linkage involving a nucleobase ring nitrogen (N-1 for pyrimidines and N-9 for purines) and the 1' carbon of the pentose sugar ring.
Non-standard nucleosides are also found in both RNA and DNA and usually arise from modification of the standard nucleosides within the DNA molecule or the primary (initial) RNA transcript. Transfer RNA (tRNA) molecules contain a particularly large number of modified nucleosides.
Double-stranded nucleic acids are made up of complementary sequences, in which extensive Watson-Crick base pairing results in a highly repeated and quite uniform double-helical three-dimensional structure. In contrast, single-stranded RNA and DNA molecules are not constrained to a regular double helix, and can adopt highly complex three-dimensional structures that are based on short stretches of intramolecular base-paired sequences including both Watson-Crick and noncanonical base pairs, and a wide range of complex tertiary interactions.
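As an illustrative sketch of this complementarity, the short Python function below builds the reverse complement of a DNA strand; the example sequence and the function name are arbitrary choices made for the illustration.

```python
# Watson-Crick complements: A pairs with T, G pairs with C.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(sequence: str) -> str:
    """Return the complementary strand, read 5' to 3'.

    Because the two strands of a duplex are antiparallel, the complement of
    each base is taken and the whole string is reversed.
    """
    return "".join(COMPLEMENT[base] for base in reversed(sequence.upper()))

print(reverse_complement("ATGCGTTA"))  # -> TAACGCAT
```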
Nucleic acid molecules are usually unbranched and may occur as linear or circular molecules. For example, bacterial chromosomes, plasmids, mitochondrial DNA, and chloroplast DNA are usually circular double-stranded DNA molecules, while the chromosomes of the eukaryotic nucleus are usually linear double-stranded DNA molecules. Most RNA molecules are linear, single-stranded molecules, but both circular and branched molecules can result from RNA splicing reactions. In double-stranded DNA, the total amount of pyrimidines equals the total amount of purines, and the diameter of the double helix is about 20 Å.
One DNA or RNA molecule differs from another primarily in the sequence of nucleotides. Nucleotide sequences are of great importance in biology since they carry the ultimate instructions that encode all biological molecules, molecular assemblies, subcellular and cellular structures, organs, and organisms, and directly enable cognition, memory, and behavior (see Genetics). Enormous efforts have gone into the development of experimental methods to determine the nucleotide sequence of biological DNA and RNA molecules, and today hundreds of millions of nucleotides are sequenced daily at genome centers and smaller laboratories worldwide. In addition to maintaining the GenBank nucleic acid sequence database, the National Center for Biotechnology Information (NCBI, https://www.ncbi.nlm.nih.gov) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI web site.
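As a minimal sketch of the simplest kind of analysis applied to determined sequences, the Python snippet below counts base composition and GC content; the example sequence is arbitrary and not drawn from GenBank or any other database.

```python
from collections import Counter

def base_composition(sequence: str) -> dict:
    """Count occurrences of each canonical DNA base in a sequence."""
    counts = Counter(sequence.upper())
    return {base: counts.get(base, 0) for base in "ACGT"}

def gc_content(sequence: str) -> float:
    """Fraction of bases that are G or C."""
    counts = base_composition(sequence)
    return (counts["G"] + counts["C"]) / max(len(sequence), 1)

example = "ATGCGTTAGCCATG"  # arbitrary example sequence
print(base_composition(example))                  # {'A': 3, 'C': 3, 'G': 4, 'T': 4}
print(f"GC content: {gc_content(example):.2%}")   # GC content: 50.00%
```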
Deoxyribonucleic acid (DNA) is a nucleic acid containing the genetic instructions used in the development and functioning of all known living organisms. The DNA segments carrying this genetic information are called genes. Likewise, other DNA sequences have structural purposes or are involved in regulating the use of this genetic information. Along with RNA and proteins, DNA is one of the three major macromolecules that are essential for all known forms of life. DNA consists of two long polymers of simple units called nucleotides, with backbones made of sugars and phosphate groups joined by ester bonds. These two strands run in opposite directions to each other and are, therefore, anti-parallel. Attached to each sugar is one of four types of molecules called nucleobases (informally, bases). It is the sequence of these four nucleobases along the backbone that encodes information. This information is read using the genetic code, which specifies the sequence of the amino acids within proteins. The code is read by copying stretches of DNA into the related nucleic acid RNA in a process called transcription. Within cells, DNA is organized into long structures called chromosomes. During cell division these chromosomes are duplicated in the process of DNA replication, providing each cell its own complete set of chromosomes. Eukaryotic organisms (animals, plants, fungi, and protists) store most of their DNA inside the cell nucleus and some of their DNA in organelles, such as mitochondria or chloroplasts. In contrast, prokaryotes (bacteria and archaea) store their DNA only in the cytoplasm. Within the chromosomes, chromatin proteins such as histones compact and organize DNA. These compact structures guide the interactions between DNA and other proteins, helping control which parts of the DNA are transcribed.
Ribonucleic acid (RNA) functions in converting genetic information from genes into the amino acid sequences of proteins. The three universal types of RNA include transfer RNA (tRNA), messenger RNA (mRNA), and ribosomal RNA (rRNA). Messenger RNA acts to carry genetic sequence information between DNA and ribosomes, directing protein synthesis. Ribosomal RNA is a major component of the ribosome, and catalyzes peptide bond formation. Transfer RNA serves as the carrier molecule for amino acids to be used in protein synthesis, and is responsible for decoding the mRNA. In addition, many other classes of RNA are now known.
Artificial nucleic acid
Artificial nucleic acid analogues have been designed and synthesized by chemists, and include peptide nucleic acid, morpholino- and locked nucleic acid, glycol nucleic acid, and threose nucleic acid. Each of these is distinguished from naturally occurring DNA or RNA by changes to the backbone of the molecules.
See also
- Comparison of nucleic acid simulation software
- History of biochemistry
- History of molecular biology
- History of RNA biology
- Molecular biology
- Nucleic acid methods
- Nucleic acid metabolism
- Nucleic acid structure
- Nucleic acid thermodynamics
- Oligonucleotide synthesis
- Quantification of nucleic acids
By the end of this section, you will be able to:
- Distinguish between systolic pressure, diastolic pressure, pulse pressure, and mean arterial pressure
- Describe the clinical measurement of pulse and blood pressure
- Identify and discuss five variables affecting arterial blood flow and blood pressure
- Discuss several factors affecting blood flow in the venous system
Blood flow refers to the movement of blood through a vessel, tissue, or organ, and is usually expressed in terms of volume of blood per unit of time. It is initiated by the contraction of the ventricles of the heart. If we consider the entire cardiovascular system, blood flow equals cardiac output. Ventricular contraction ejects blood into the major arteries, resulting in flow from regions of higher pressure to regions of lower pressure. This section discusses a number of critical variables that contribute to blood flow throughout the body. It also discusses resistance which is due to factors that impede or slow blood flow.
As noted earlier, hydrostatic pressure is the force exerted by a fluid due to gravitational pull, usually against the wall of the container in which it is located. One form of hydrostatic pressure is blood pressure, the force exerted by blood upon the walls of the blood vessels or the chambers of the heart. Blood pressure may be measured in both the systemic and pulmonary circulation; however, the term blood pressure without any specific descriptors typically refers to systemic arterial blood pressure—that is, the pressure of blood flowing in the arteries of the systemic circulation. In clinical practice, this pressure is measured in mm Hg and is usually obtained using the brachial artery of the arm.
Arterial Blood Pressure
Arterial blood pressure in the larger vessels varies between systolic and diastolic pressures. Pulse pressure and mean arterial pressure are calculated values based upon the systolic and diastolic pressures (Figure 20.2.1).
Systolic and Diastolic Pressures
When systemic arterial blood pressure is measured, it is recorded as a ratio of two numbers (e.g., 120/80 is a normal adult blood pressure), expressed as systolic pressure over diastolic pressure. The systolic pressure is the higher value (typically around 120 mm Hg) and reflects the arterial pressure resulting from the ejection of blood during ventricular contraction, or systole. The diastolic pressure is the lower value (usually about 80 mm Hg) and represents the arterial pressure of blood during ventricular relaxation, or diastole.
As shown in Figure 20.2.1, the difference between the systolic pressure and the diastolic pressure is the pulse pressure. For example, an individual with a systolic pressure of 120 mm Hg and a diastolic pressure of 80 mm Hg would have a pulse pressure of 40 mmHg.
Generally, a pulse pressure should be at least 25 percent of the systolic pressure. A pulse pressure below this level is described as low or narrow. This may occur, for example, in patients with a low stroke volume, which may be seen in congestive heart failure, stenosis of the aortic valve, or significant blood loss following trauma. In contrast, a high or wide pulse pressure is common in healthy people following strenuous exercise, when their resting pulse pressure of 30–40 mm Hg may increase temporarily to 100 mm Hg as stroke volume increases. A persistently high pulse pressure at or above 100 mm Hg may indicate excessive resistance in the arteries and can be caused by a variety of disorders such as atherosclerosis. Chronic high resting pulse pressures can degrade the heart, brain, and kidneys, and warrant medical treatment.
Mean Arterial Pressure
Mean arterial pressure (MAP) represents the “average” pressure of blood in the arteries, that is, the average force driving blood into vessels that serve the tissues. Mean is a statistical concept and is calculated by taking the sum of the values divided by the number of values. Although complicated to measure directly and to calculate, MAP can be approximated by adding the diastolic pressure to one-third of the pulse pressure (the systolic pressure minus the diastolic pressure):
MAP = diastolic BP + ((systolic-diastolic BP) / 3)
In Figure 20.2.1, this value is approximately 80 + (120 − 80) / 3, or 93.33. Normally, the MAP falls within the range of 70–110 mm Hg. If the value falls below 60 mm Hg for an extended time, blood pressure will not be high enough to ensure circulation to and through the tissues, which results in ischemia, or insufficient blood flow. A condition called hypoxia, inadequate oxygenation of tissues, commonly accompanies ischemia. The term hypoxemia refers to low levels of oxygen in systemic arterial blood. Neurons are especially sensitive to hypoxia and may die or be damaged if blood flow and oxygen supplies are not quickly restored.
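A minimal sketch of these two calculations, using the textbook 120/80 reading; any systolic and diastolic values in mm Hg could be substituted.

```python
def pulse_pressure(systolic: float, diastolic: float) -> float:
    """Pulse pressure = systolic pressure - diastolic pressure."""
    return systolic - diastolic

def mean_arterial_pressure(systolic: float, diastolic: float) -> float:
    """MAP is approximately diastolic pressure plus one-third of the pulse pressure."""
    return diastolic + pulse_pressure(systolic, diastolic) / 3

print(pulse_pressure(120, 80))                    # 40 mm Hg
print(round(mean_arterial_pressure(120, 80), 2))  # 93.33 mm Hg
```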
After blood is ejected from the heart, elastic fibers in the arteries help maintain a high-pressure gradient as they expand to accommodate the blood, then recoil to keep pressure on the blood. This expansion and recoiling effect, known as the pulse, can be palpated manually or measured electronically. Although the effect diminishes over distance from the heart, elements of the systolic and diastolic components of the pulse are still evident down to the level of the arterioles.
Because pulse indicates heart rate, it is measured clinically to provide clues to a patient’s state of health. It is recorded as beats per minute. Both the rate and the strength of the pulse are important clinically. A high or irregular pulse rate can be caused by physical activity or other temporary factors, but it may also indicate a heart condition. The pulse strength indicates the strength of ventricular contraction and cardiac output. If the pulse is strong, then systolic pressure is high. If it is weak, systolic pressure has fallen, and medical intervention may be warranted.
Pulse can be palpated manually by placing the tips of the fingers across an artery that runs close to the body surface and pressing lightly. While this procedure is normally performed using the radial artery in the wrist or the common carotid artery in the neck, any superficial artery that can be palpated may be used (Figure 20.2.2). Common sites to find a pulse include temporal and facial arteries in the head, brachial arteries in the upper arm, femoral arteries in the thigh, popliteal arteries behind the knees, posterior tibial arteries near the medial tarsal regions, and dorsalis pedis arteries in the feet. A variety of commercial electronic devices are also available to measure pulse.
Measurement of Blood Pressure
Blood pressure is one of the critical parameters measured on virtually every patient in every healthcare setting. The technique used today was developed more than 100 years ago by a pioneering Russian physician, Dr. Nikolai Korotkoff. Turbulent blood flow through the vessels can be heard as a soft ticking while measuring blood pressure; these sounds are known as Korotkoff sounds. The technique of measuring blood pressure requires the use of a sphygmomanometer (a blood pressure cuff attached to a measuring device) and a stethoscope. The technique is as follows:
- The clinician wraps an inflatable cuff tightly around the patient’s arm at about the level of the heart.
- The clinician squeezes a rubber pump to inject air into the cuff, raising pressure around the artery and temporarily cutting off blood flow into the patient’s arm.
- The clinician places the stethoscope on the patient’s antecubital region and, while gradually allowing air within the cuff to escape, listens for the Korotkoff sounds.
Although there are five recognized Korotkoff sounds, only two are normally recorded. Initially, no sounds are heard since there is no blood flow through the vessels, but as air pressure drops, the cuff relaxes, and blood flow returns to the arm. As shown in Figure 20.2.3, the first sound heard through the stethoscope—the first Korotkoff sound—indicates systolic pressure. As more air is released from the cuff, blood is able to flow freely through the brachial artery and all sounds disappear. The point at which the last sound is heard is recorded as the patient’s diastolic pressure.
The majority of hospitals and clinics have automated equipment for measuring blood pressure that work on the same principles. An even more recent innovation is a small instrument that wraps around a patient’s wrist. The patient then holds the wrist over the heart while the device measures blood flow and records pressure.
Variables Affecting Blood Flow and Blood Pressure
Five variables influence blood flow and blood pressure:
- Cardiac output
- Compliance
- Volume of the blood
- Viscosity of the blood
- Blood vessel length and diameter
Recall that blood moves from higher pressure to lower pressure. It is pumped from the heart into the arteries at high pressure. Since pressure in the veins is normally relatively low, for blood to flow back into the heart, the pressure in the atria during atrial diastole must be even lower. It normally approaches zero, except when the atria contract (see Figure 20.2.1).
Cardiac output is the measurement of blood flow from the heart through the ventricles, and is usually measured in liters per minute. Any factor that causes cardiac output to increase, by elevating heart rate or stroke volume or both, will elevate blood pressure and promote blood flow. These factors include sympathetic stimulation, the catecholamines epinephrine and norepinephrine, thyroid hormones, and increased calcium ion levels. Conversely, any factor that decreases cardiac output, by decreasing heart rate or stroke volume or both, will decrease arterial pressure and blood flow. These factors include parasympathetic stimulation, elevated or decreased potassium ion levels, decreased calcium levels, anoxia, and acidosis.
Compliance is the ability of any compartment to expand to accommodate increased content. A metal pipe, for example, is not compliant, whereas a balloon is. The greater the compliance of an artery, the more effectively it is able to expand to accommodate surges in blood flow without increased resistance or blood pressure. Veins are more compliant than arteries and can expand to hold more blood. When vascular disease causes stiffening of arteries, compliance is reduced and resistance to blood flow is increased. The result is more turbulence, higher pressure within the vessel, and reduced blood flow. This increases the work of the heart.
The relationship between blood volume, blood pressure, and blood flow is intuitively obvious. Water may merely trickle along a creek bed in a dry season, but rush quickly and under great pressure after a heavy rain. Similarly, as blood volume decreases, pressure and flow decrease. As blood volume increases, pressure and flow increase.
Under normal circumstances, blood volume varies little. Low blood volume, called hypovolemia, may be caused by bleeding, dehydration, vomiting, severe burns, or some medications used to treat hypertension. It is important to recognize that other regulatory mechanisms in the body are so effective at maintaining blood pressure that an individual may be asymptomatic until 10–20 percent of the blood volume has been lost. Treatment typically includes intravenous fluid replacement.
Hypervolemia, excessive fluid volume, may be caused by retention of water and sodium, as seen in patients with heart failure, liver cirrhosis, some forms of kidney disease, hyperaldosteronism, and some glucocorticoid steroid treatments. Restoring homeostasis in these patients depends upon reversing the condition that triggered the hypervolemia.
The three most important factors affecting resistance are blood viscosity, vessel length, and vessel diameter; each is considered below.
Blood viscosity is the thickness of fluids that affects their ability to flow. Clean water, for example, is less viscous than mud. The viscosity of blood is directly proportional to resistance and inversely proportional to flow; therefore, any condition that causes viscosity to increase will also increase resistance and decrease flow. For example, imagine sipping milk, then a milkshake, through the same size straw. You experience more resistance and therefore less flow from the milkshake. Conversely, any condition that causes viscosity to decrease (such as when the milkshake melts) will decrease resistance and increase flow.
Normally the viscosity of blood does not change over short periods of time. The two primary determinants of blood viscosity are the formed elements and plasma proteins. Since the vast majority of formed elements are erythrocytes, any condition affecting erythropoiesis, such as polycythemia or anemia, can alter viscosity. Since most plasma proteins are produced by the liver, any condition affecting liver function can also change the viscosity slightly and therefore decrease blood flow. Liver abnormalities include hepatitis, cirrhosis, alcohol damage, and drug toxicities. While leukocytes and platelets are normally a small component of the formed elements, there are some rare conditions in which severe overproduction can impact viscosity as well.
The length of our blood vessels increases throughout childhood as we grow, of course, but is unchanging in adults under normal physiological circumstances. Further, the distribution of vessels is not the same in all tissues. Adipose tissue does not have an extensive vascular supply. One pound of adipose tissue contains approximately 200 miles of vessels, whereas skeletal muscle contains more than twice that. Overall, vessels decrease in length only during loss of mass or amputation. An individual weighing 150 pounds has approximately 60,000 miles of vessels in the body. Gaining about 10 pounds adds from 2000 to 4000 miles of vessels, depending upon the nature of the gained tissue. One of the great benefits of weight reduction is the reduced stress to the heart, which does not have to overcome the resistance of as many miles of vessels.
In contrast to length, the blood vessel diameter changes throughout the body, according to the type of vessel, as we discussed earlier. The diameter of any given vessel may also change frequently throughout the day in response to neural and chemical signals that trigger vasodilation and vasoconstriction. The vascular tone of the vessel is the contractile state of the smooth muscle and the primary determinant of diameter, and thus of resistance and flow. The effect of vessel diameter on resistance is inverse: Given the same volume of blood, an increased diameter means there is less blood contacting the vessel wall, thus lower friction and lower resistance, subsequently increasing flow. A decreased diameter means more of the blood contacts the vessel wall, and resistance increases, subsequently decreasing flow.
The influence of lumen diameter on resistance is dramatic: A slight increase or decrease in diameter causes a huge decrease or increase in resistance. This is because resistance is inversely proportional to the radius of the blood vessel (one-half of the vessel’s diameter) raised to the fourth power (R ∝ 1/r⁴). This means, for example, that if an artery or arteriole constricts to one-half of its original radius, the resistance to flow will increase 16 times. And if an artery or arteriole dilates to twice its initial radius, then resistance in the vessel will decrease to 1/16 of its original value and flow will increase 16 times.
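A quick numerical sketch of this fourth-power relationship, using arbitrary radius units, reproduces the 16-fold changes described above.

```python
def relative_resistance(radius: float, baseline_radius: float = 1.0) -> float:
    """Resistance relative to a baseline vessel, holding all other factors constant.

    Resistance varies inversely with the fourth power of the radius.
    """
    return (baseline_radius / radius) ** 4

print(relative_resistance(1.0))   # 1.0    (baseline)
print(relative_resistance(0.5))   # 16.0   (radius halved -> resistance x 16)
print(relative_resistance(2.0))   # 0.0625 (radius doubled -> resistance 1/16)
```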
A Mathematical Approach to Factors Affecting Blood Flow
Jean Louis Marie Poiseuille was a French physician and physiologist who devised a mathematical equation describing blood flow and its relationship to known parameters. The same equation also applies to engineering studies of the flow of fluids. Although understanding the math behind the relationships among the factors affecting blood flow is not necessary to understand blood flow, it can help solidify your understanding of those relationships. Even if the equation looks intimidating, breaking it down into its components and following the relationships will make them clearer. Focus on the three critical variables: radius (r), vessel length (λ), and viscosity (η). Poiseuille’s equation for blood flow is:

Blood flow = (π ΔP r⁴) / (8ηλ)
- π is the Greek letter pi, used to represent the mathematical constant that is the ratio of a circle’s circumference to its diameter. It is commonly approximated as 3.14, although its decimal expansion continues without end.
- ΔP represents the difference in pressure.
- r4 is the radius (one-half of the diameter) of the vessel to the fourth power.
- η is the Greek letter eta and represents the viscosity of the blood.
- λ is the Greek letter lambda and represents the length of a blood vessel.
One of several things this equation allows us to do is calculate the resistance in the vascular system. Normally this value is extremely difficult to measure, but it can be calculated from this known relationship:

Blood flow = ΔP / Resistance

If we rearrange this slightly,

Resistance = ΔP / Blood flow

Then, by substituting Poiseuille’s equation for blood flow, the pressure difference cancels and we are left with:

Resistance = 8ηλ / (πr⁴)
By examining this equation, you can see that there are only three variables: viscosity, vessel length, and radius, since 8 and π are both constants. The important thing to remember is this: Two of these variables, viscosity and vessel length, will change slowly in the body. Only one of these factors, the radius, can be changed rapidly by vasoconstriction and vasodilation, thus dramatically impacting resistance and flow. Further, small changes in the radius will greatly affect flow, since it is raised to the fourth power in the equation.
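A short numerical sketch, using arbitrary (non-physiological) unit values, shows why the radius dominates: comparable 20 percent changes in viscosity, vessel length, and radius have very different effects on the calculated resistance.

```python
from math import pi

def vascular_resistance(viscosity: float, length: float, radius: float) -> float:
    """Resistance = 8 * eta * lambda / (pi * r^4), from Poiseuille's equation."""
    return (8 * viscosity * length) / (pi * radius ** 4)

baseline = vascular_resistance(viscosity=1.0, length=1.0, radius=1.0)

print(vascular_resistance(1.2, 1.0, 1.0) / baseline)  # 1.2   -> +20% viscosity raises resistance by 20%
print(vascular_resistance(1.0, 1.2, 1.0) / baseline)  # 1.2   -> +20% length raises resistance by 20%
print(vascular_resistance(1.0, 1.0, 0.8) / baseline)  # ~2.44 -> a 20% narrower radius more than doubles it
```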
We have briefly considered how cardiac output and blood volume impact blood flow and pressure; the next step is to see how the other variables (contraction, vessel length, and viscosity) articulate with Poiseuille’s equation and what they can teach us about the impact on blood flow.
The Roles of Vessel Diameter and Total Area in Blood Flow and Blood Pressure
Recall that we classified arterioles as resistance vessels, because given their small lumen, they dramatically slow the flow of blood from arteries. In fact, arterioles are the site of greatest resistance in the entire vascular network. This may seem surprising, given that capillaries have a smaller size. How can this phenomenon be explained?
Figure 20.2.4 compares vessel diameter, total cross-sectional area, average blood pressure, and blood velocity through the systemic vessels. Notice in parts (a) and (b) that the total cross-sectional area of the body’s capillary beds is far greater than any other type of vessel. Although the diameter of an individual capillary is significantly smaller than the diameter of an arteriole, there are vastly more capillaries in the body than there are other types of blood vessels. Part (c) shows that blood pressure drops unevenly as blood travels from arteries to arterioles, capillaries, venules, and veins, and encounters greater resistance. However, the site of the most precipitous drop, and the site of greatest resistance, is the arterioles. This explains why vasodilation and vasoconstriction of arterioles play more significant roles in regulating blood pressure than do the vasodilation and vasoconstriction of other vessels.
Part (d) shows that the velocity (speed) of blood flow decreases dramatically as the blood moves from arteries to arterioles to capillaries. This slow flow rate allows more time for exchange processes to occur. As blood flows through the veins, the rate of velocity increases, as blood is returned to the heart.
Arteriosclerosis begins with injury to the endothelium of an artery, which may be caused by irritation from high blood glucose, infection, tobacco use, excessive blood lipids, and other factors. Artery walls that are constantly stressed by blood flowing at high pressure are also more likely to be injured—which means that hypertension can promote arteriosclerosis, as well as result from it.
Recall that tissue injury causes inflammation. As inflammation spreads into the artery wall, it weakens and scars it, leaving it stiff (sclerotic). As a result, compliance is reduced. Moreover, circulating triglycerides and cholesterol can seep between the damaged lining cells and become trapped within the artery wall, where they are frequently joined by leukocytes, calcium, and cellular debris. Eventually, this buildup, called plaque, can narrow arteries enough to impair blood flow. The term for this condition, atherosclerosis (athero- = “porridge”) describes the mealy deposits (Figure 20.2.5).
Sometimes a plaque can rupture, causing microscopic tears in the artery wall that allow blood to leak into the tissue on the other side. When this happens, platelets rush to the site to clot the blood. This clot can further obstruct the artery and—if it occurs in a coronary or cerebral artery—cause a sudden heart attack or stroke. Alternatively, plaque can break off and travel through the bloodstream as an embolus until it blocks a more distant, smaller artery.
Even without total blockage, vessel narrowing leads to ischemia—reduced blood flow—to the tissue region “downstream” of the narrowed vessel. Ischemia in turn leads to hypoxia—decreased supply of oxygen to the tissues. Hypoxia involving cardiac muscle or brain tissue can lead to cell death and severe impairment of brain or heart function.
A major risk factor for both arteriosclerosis and atherosclerosis is advanced age, as the conditions tend to progress over time. Arteriosclerosis is normally defined as the more generalized loss of compliance, “hardening of the arteries,” whereas atherosclerosis is a more specific term for the build-up of plaque in the walls of the vessel and is a specific type of arteriosclerosis. There is also a distinct genetic component, and pre-existing hypertension and/or diabetes also greatly increase the risk. However, obesity, poor nutrition, lack of physical activity, and tobacco use all are major risk factors.
Treatment includes lifestyle changes, such as weight loss, smoking cessation, regular exercise, and adoption of a diet low in sodium and saturated fats. Medications to reduce cholesterol and blood pressure may be prescribed. For blocked coronary arteries, surgery is warranted. In angioplasty, a catheter is inserted into the vessel at the point of narrowing, and a second catheter with a balloon-like tip is inflated to widen the opening. To prevent subsequent collapse of the vessel, a small mesh tube called a stent is often inserted. In an endarterectomy, plaque is surgically removed from the walls of a vessel. This operation is typically performed on the carotid arteries of the neck, which are a prime source of oxygenated blood for the brain. In a coronary bypass procedure, a non-vital superficial vessel from another part of the body (often the great saphenous vein) or a synthetic vessel is inserted to create a path around the blocked area of a coronary artery.
The pumping action of the heart propels the blood into the arteries, from an area of higher pressure toward an area of lower pressure. If blood is to flow from the veins back into the heart, the pressure in the veins must be greater than the pressure in the atria of the heart. Two factors help maintain this pressure gradient between the veins and the heart. First, the pressure in the atria during diastole is very low, often approaching zero when the atria are relaxed (atrial diastole). Second, two physiologic “pumps” increase pressure in the venous system. The use of the term “pump” implies a physical device that speeds flow. These physiological pumps are less obvious.
Skeletal Muscle Pump
In many body regions, the pressure within the veins can be increased by the contraction of the surrounding skeletal muscle. This mechanism, known as the skeletal muscle pump (Figure 20.2.6), helps the lower-pressure veins counteract the force of gravity, increasing pressure to move blood back to the heart. As leg muscles contract, for example during walking or running, they exert pressure on nearby veins with their numerous one-way valves. This increased pressure causes blood to flow upward, opening valves superior to the contracting muscles so blood flows through. Simultaneously, valves inferior to the contracting muscles close; thus, blood should not seep back downward toward the feet. Military recruits are trained to flex their legs slightly while standing at attention for prolonged periods. Failure to do so may allow blood to pool in the lower limbs rather than returning to the heart. Consequently, the brain will not receive enough oxygenated blood, and the individual may lose consciousness.
The respiratory pump aids blood flow through the veins of the thorax and abdomen. During inhalation, the volume of the thorax increases, largely through the contraction of the diaphragm, which moves downward and compresses the abdominal cavity. The elevation of the chest caused by the contraction of the external intercostal muscles also contributes to the increased volume of the thorax. The volume increase causes air pressure within the thorax to decrease, allowing us to inhale. Additionally, as air pressure within the thorax drops, blood pressure in the thoracic veins also decreases, falling below the pressure in the abdominal veins. This causes blood to flow along its pressure gradient from veins outside the thorax, where pressure is higher, into the thoracic region, where pressure is now lower. This in turn promotes the return of blood from the thoracic veins to the atria. During exhalation, when air pressure increases within the thoracic cavity, pressure in the thoracic veins increases, speeding blood flow into the heart while valves in the veins prevent blood from flowing backward from the thoracic and abdominal veins.
Pressure Relationships in the Venous System
Although vessel diameter increases from the smaller venules to the larger veins and eventually to the venae cavae (singular = vena cava), the total cross-sectional area actually decreases (see Figure 20.2.4, parts a and b). The individual veins are larger in diameter than the venules, but their total number is much lower, so their total cross-sectional area is also lower.
Also notice that, as blood moves from venules to veins, the average blood pressure drops (see Figure 20.2.4c), but the blood velocity actually increases (see Figure 20.2.4d). This pressure gradient drives blood back toward the heart. Again, the presence of one-way valves and the skeletal muscle and respiratory pumps contribute to this increased flow. Since approximately 64 percent of the total blood volume resides in systemic veins, any action that increases the flow of blood through the veins will increase venous return to the heart. Maintaining vascular tone within the veins prevents the veins from merely distending, dampening the flow of blood, and as you will see, vasoconstriction actually enhances the flow.
The Role of Venoconstriction in Resistance, Blood Pressure, and Flow
As previously discussed, vasoconstriction of an artery or arteriole decreases the radius, increasing resistance and pressure, but decreasing flow. Venoconstriction, on the other hand, has a very different outcome. The walls of veins are thin but irregular; thus, when the smooth muscle in those walls constricts, the lumen becomes more rounded. The more rounded the lumen, the less surface area the blood encounters, and the less resistance the vessel offers. Vasoconstriction increases pressure within a vein as it does in an artery, but in veins, the increased pressure increases flow. Recall that the pressure in the atria, into which the venous blood will flow, is very low, approaching zero for at least part of the relaxation phase of the cardiac cycle. Thus, venoconstriction increases the return of blood to the heart. Another way of stating this is that venoconstriction increases the preload or stretch of the cardiac muscle and increases contraction.
Blood flow is the movement of blood through a vessel, tissue, or organ. The slowing or blocking of blood flow is called resistance. Blood pressure is the force that blood exerts upon the walls of the blood vessels or chambers of the heart. The components of blood pressure include systolic pressure, which results from ventricular contraction, and diastolic pressure, which results from ventricular relaxation. Pulse pressure is the difference between systolic and diastolic measures, and mean arterial pressure is the “average” pressure of blood in the arterial system, driving blood into the tissues. Pulse, the expansion and recoiling of an artery, reflects the heartbeat. The variables affecting blood flow and blood pressure in the systemic circulation are cardiac output, compliance, blood volume, blood viscosity, and the length and diameter of the blood vessels. In the arterial system, vasodilation and vasoconstriction of the arterioles is a significant factor in systemic blood pressure: Slight vasodilation greatly decreases resistance and increases flow, whereas slight vasoconstriction greatly increases resistance and decreases flow. In the arterial system, as resistance increases, blood pressure increases and flow decreases. In the venous system, constriction increases blood pressure as it does in arteries; the increasing pressure helps to return blood to the heart. In addition, constriction causes the vessel lumen to become more rounded, decreasing resistance and increasing blood flow. Venoconstriction, while less important than arterial vasoconstriction, works with the skeletal muscle pump, the respiratory pump, and their valves to promote venous return to the heart.
Critical Thinking Questions
1. You measure a patient’s blood pressure at 130/85. Calculate the patient’s pulse pressure and mean arterial pressure. Determine whether each pressure is low, normal, or high.
2. An obese patient comes to the clinic complaining of swollen feet and ankles, fatigue, shortness of breath, and often feeling “spaced out.” She is a cashier in a grocery store, a job that requires her to stand all day. Outside of work, she engages in no physical activity. She confesses that, because of her weight, she finds even walking uncomfortable. Explain how the skeletal muscle pump might play a role in this patient’s signs and symptoms.
- blood flow
- movement of blood through a vessel, tissue, or organ that is usually expressed in terms of volume per unit of time
- blood pressure
- force exerted by the blood against the wall of a vessel or heart chamber; can be described with the more generic term hydrostatic pressure
- compliance
- degree to which a blood vessel can stretch as opposed to being rigid
- diastolic pressure
- lower number recorded when measuring arterial blood pressure; represents the minimal value corresponding to the pressure that remains during ventricular relaxation
- hypervolemia
- abnormally high levels of fluid and blood within the body
- hypovolemia
- abnormally low levels of fluid and blood within the body
- hypoxia
- lack of oxygen supply to the tissues
- ischemia
- insufficient blood flow to the tissues
- Korotkoff sounds
- noises created by turbulent blood flow through the vessels
- mean arterial pressure (MAP)
- average driving force of blood to the tissues; approximated by taking diastolic pressure and adding 1/3 of pulse pressure
- pulse
- alternating expansion and recoil of an artery as blood moves through the vessel; an indicator of heart rate
- pulse pressure
- difference between the systolic and diastolic pressures
- resistance
- any condition or parameter that slows or counteracts the flow of blood
- respiratory pump
- increase in the volume of the thorax during inhalation that decreases air pressure, enabling venous blood to flow into the thoracic region, then exhalation increases pressure, moving blood into the atria
- skeletal muscle pump
- effect on increasing blood pressure within veins by compression of the vessel caused by the contraction of nearby skeletal muscle
- sphygmomanometer
- blood pressure cuff attached to a device that measures blood pressure
- systolic pressure
- larger number recorded when measuring arterial blood pressure; represents the maximum value following ventricular contraction
- vascular tone
- contractile state of smooth muscle in a blood vessel
Answers for Critical Thinking Questions
- The patient’s pulse pressure is 130 – 85 = 45 mm Hg. Generally, a pulse pressure should be at least 25 percent of the systolic pressure, but not more than 100 mm Hg. Since 25 percent of 130 = 32.5, the patient’s pulse pressure of 45 is normal. The patient’s mean arterial pressure is 85 + 1/3 (45) = 85 + 15 = 100. Normally, the mean arterial blood pressure falls within the range of 70 – 110 mmHg, so 100 is normal.
- People who stand upright all day and are inactive overall have very little skeletal muscle activity in the legs. Pooling of blood in the legs and feet is common. Venous return to the heart is reduced, a condition that in turn reduces cardiac output and therefore oxygenation of tissues throughout the body. This could at least partially account for the patient’s fatigue and shortness of breath, as well as her “spaced out” feeling, which commonly reflects reduced oxygen to the brain. |
Welcome to the first lesson in the Straight Line series!
Over the next few (actually many) lessons, I’ll be covering the various forms of equations to a straight line, and a lot of related concepts, formulas and applications.
Without further ado, let’s get started!
To find the equation to a straight line, we’ll take a general point P(x, y) on the line, and find the relation between the coordinates, which will always hold true. (This in fact is the definition of an equation to any curve, as explained here).
We’ll start with the simple ones. Lines which are parallel to the axes.
1. Equation of a line parallel to the X axis
A line which is parallel to the X axis will always remain at a fixed distance from it. That is, the y-coordinate (i.e. the distance from the X axis) of any point P(x, y) will always remain the same (and will be equal to that distance).
Therefore the equation of a line parallel to the X axis will be of the form y = d, where d is the (signed) distance of the line from the X axis.
2. Equation of a line parallel to the Y axis
Same here. The x-coordinate of a point on the line (i.e. its distance from the Y axis) will always remain the same. Therefore the equation will be of the form x = d, where d is the (signed) distance of the line from the Y axis.
3. Equation of the X axis and the Y axis
Well, what about the equations of the axes themselves? They are lines too, aren’t they? Of course they are!
Following the same method as above, the distance of the X axis from the X axis is… zero!
Therefore its equation is y = 0. Similarly, the equation of the Y axis is x = 0.
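If you’d like to check these equations numerically, here is a tiny sketch (the value of d is arbitrary): every sampled point on a line parallel to the X axis shares the same y-coordinate, and every point on the X axis has y = 0.

```python
d = 3  # arbitrary signed distance from the X axis

# Sample some points on the line y = d and on the X axis (y = 0).
points_on_line = [(x, d) for x in range(-5, 6)]
points_on_x_axis = [(x, 0) for x in range(-5, 6)]

assert all(y == d for _, y in points_on_line)    # every point satisfies y = d
assert all(y == 0 for _, y in points_on_x_axis)  # every point satisfies y = 0
print("All sampled points satisfy their line's equation.")
```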
That’s it for now.
Note that, to uniquely describe a line, two conditions were required – the line being parallel to one of the axes, and being at a certain distance from that axis. I’ll come back to this in the subsequent lessons.
- The equation of a line parallel to the X axis will be of the form y = d, where d is the signed distance of the line from the X axis.
- The equation of a line parallel to the Y axis will be of the form x = c, where c is the signed distance of the line from the Y axis.
- The equation of the X axis is y = 0
- The equation of the Y axis is x = 0
The next lesson will deal with the equations of lines which are not parallel to any of the axes. |
The devastating environmental impacts of the Exxon Valdez spill in 1989, and its media notoriety, made it a frequent point of comparison with the BP Deepwater Horizon spill in the popular press in 2010, even though the nature of the two spills and the environments impacted were vastly different. Fortunately, unlike higher organisms that are adversely affected by oil spills, many microorganisms are able to consume petroleum hydrocarbons. These oil-degrading indigenous microorganisms played a significant role in reducing the overall environmental impact of both the Exxon Valdez and BP Deepwater Horizon oil spills.
Introduction to Biodegradation of Petroleum Hydrocarbons
Petroleum hydrocarbons in crude oils, such as those released into marine ecosystems by the Exxon Valdez and BP Deepwater Horizon spills, are natural products derived from aquatic algae laid down between 180 and 85 million years ago. Crude oils, composed mostly of diverse aliphatic and aromatic hydrocarbons, regularly escape into the environment from underground reservoirs. Because petroleum hydrocarbons occur naturally in all marine environments, there has been time for numerous diverse microorganisms to evolve the capability of utilizing hydrocarbons as sources of carbon and energy for growth. Oil-degrading microorganisms are ubiquitous, but may only be a small proportion of the prespill microbial community. There are hundreds of species of bacteria, archaea, and fungi that can degrade petroleum.
Most petroleum hydrocarbons are biodegradable under aerobic conditions, though a few compounds found in crude oils, for example, resins, hopanes, polar molecules, and asphaltenes, have practically imperceptible biodegradation rates. Lighter crudes, such as the oil released from the BP Deepwater Horizon spill, contain a higher proportion of simpler, lower molecular weight hydrocarbons that are more readily biodegraded than heavy crudes, such as the oil released from the Exxon Valdez. The polycyclic aromatic hydrocarbons (PAHs) are a minor constituent of crude oils; however, they are among the most toxic to plants and animals. Bacteria can convert PAHs completely to biomass, CO2, and H2O, but they usually require the initial insertion of O2 via dioxygenase enzymes. Anaerobic degradation of petroleum hydrocarbons can also occur, albeit at much slower rates. Petroleum hydrocarbons can be biodegraded at temperatures ranging from below 0 °C to more than 80 °C. Microorganisms require elements other than carbon for growth. The concentrations of these elements in marine environments—primarily nitrates (NO3–), phosphates (PO43-), and iron (Fe)—can limit rates of oil biodegradation. Having an adequate supply of these rate-limiting nutrients when large quantities of hydrocarbons are released into the marine environment is critical for controlling the rates of biodegradation and hence the persistence of potentially harmful environmental impacts. Bioremediation, which was used extensively in the Exxon Valdez spill, involved adding fertilizers containing nitrogen (N) nutrients to speed up the rates of oil biodegradation.
Most petroleum hydrocarbons are highly insoluble in water. Hydrocarbon biodegradation takes place at the hydrocarbon–water interface. Thus the surface area to volume ratio of the oil can significantly impact the biodegradation rate. Dispersants, such as Corexit 9500, which was used during the BP Deepwater Horizon spill, increase the available surface area and, thus, potentially increase the rates of biodegradation.
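To make the surface-area point concrete, here is a small illustrative Python sketch (not from the original article); it assumes idealized spherical droplets and an arbitrary 1 m3 of oil, and simply shows how breaking the same volume into smaller droplets multiplies the oil-water interface available to microbes.

```python
import math

def total_interfacial_area(total_volume_m3, droplet_diameter_m):
    """Total oil-water interfacial area when a volume of oil is split into
    equal spherical droplets of the given diameter (idealized geometry)."""
    radius = droplet_diameter_m / 2
    droplet_volume = (4 / 3) * math.pi * radius**3
    n_droplets = total_volume_m3 / droplet_volume
    return n_droplets * 4 * math.pi * radius**2

oil_volume = 1.0  # 1 cubic meter of oil, chosen arbitrarily for illustration
for diameter in (1e-2, 1e-3, 60e-6, 10e-6):  # 1 cm, 1 mm, 60 um, 10 um droplets
    area = total_interfacial_area(oil_volume, diameter)
    print(f"droplet diameter {diameter * 1e6:>8.0f} um -> {area:>12,.0f} m^2 of interface")
```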
Overarching Differences Between the Two Spills
Once the BP Deepwater Horizon oil leak started, the public and the popular media began to compare it to the Exxon Valdez spill, which had been, up until that time, the largest marine spill in the United States. The public notoriety of the Exxon Valdez spill was dramatic due to its impact on Alaska wildlife and the long litigation process, which is still seeing court action. However, the Exxon Valdez and BP Deepwater Horizon oil spills were vastly different in terms of the volume of oil, the nature of the oil, and the environments impacted (Table 1). The BP Deepwater Horizon oil spill was more than an order of magnitude greater in total volume of oil than the Exxon Valdez spill; the BP spill also released considerable amounts of natural gas (methane (CH4)). The Exxon Valdez spill occurred near shore as a surface slick, while the BP Deepwater Horizon spill was a leak from a well 5000 ft (1500 m) below the ocean surface, more than 50 mi (80 km) from the nearest shore, producing both a deep-sea “cloud” or “plume” and a surface water slick. The BP Deepwater Horizon spill was a light crude that was initially more inherently biodegradable than the Exxon Valdez heavy crude from the North Slope of Alaska. The environments impacted were also very different in terms of climate, weather, and ecosystems, with the Exxon Valdez spill occurring in a sub-Arctic region and the BP Deepwater Horizon spill occurring in a subtropical region, although the deepwater region directly impacted by the BP spill was cold (<5 °C). The Gulf of Mexico has numerous natural oil seeps, and there have been other spills from drilling rigs, such as the IXTOC well blowout of 1979. This is in contrast to the relatively pristine conditions of Prince William Sound, which is much more enclosed and shallower than the more open ocean environment of the Gulf of Mexico where the BP Deepwater Horizon spill occurred. Indeed, the treatments used were also quite different.
Comparison of BP Deepwater Horizon and Exxon Valdez Spills
Since a storm with 50 mi/h (80 km/h) winds hit Prince William Sound within 2 days of the initial spill, no dispersant was used. Much of the oil washed onto the shorelines of islands in the path of the oil, making shoreline cleanup the primary focus. During the Exxon Valdez spill, water washing and bioremediation (biostimulation using fertilizers containing N nutrients) were the major strategies. In the case of the BP Deepwater Horizon spill, millions of gallons (1 U.S. gal = 3.79 L) of dispersant were used both on the surface and at the leaking wellhead in the Gulf of Mexico. A major focus was to protect shorelines from oil contamination. This was also the first time dispersant had been applied to a deepwater leaking well, primarily for safety reasons, to prevent the highly flammable oil from reaching the surface immediately above the wellhead where many ships were involved in leak operations. The BP Deepwater Horizon spill prompted the largest emergency response to a marine oil spill that the world has seen to date. In addition to dispersant, controlled burns, skimming, siphoning from the wellhead, containment booms, shoreline scavenging/berms, and beach sand mixing were used extensively to mitigate the spill’s impact.
Although numerous physical means were used to remove or disperse the oil, ultimately it was the microbes that played the major role in mitigating the environmental impacts of these two worst oil spills in U.S. history.
The Exxon Valdez Spill in Prince William Sound
On March 24, 1989 the oil tanker Exxon Valdez ran aground on Bligh Reef in Prince William Sound, AK, spilling an estimated 11 million gallons (42 million liters) of crude oil that spread as a surface slick(1) (Figure 1). At the time, this was the worst U.S. oil spill disaster. Dispersant tests were quickly conducted, but due to weather conditions and the nature of the oil, which was a relatively heavy North Slope crude (API gravity = 29), as well as State of Alaska concerns about the use of dispersants, the decision was made not to try to disperse the oil. Despite efforts to contain the spill, tidal currents and winds caused a significant portion of the oil to come ashore. Approximately 486 mi (778 km) of the 3000 mi (4800 km) of shoreline in Prince William Sound, and 818 mi (1309 km) of the 6000 mi (9600 km) of shoreline in the Gulf of Alaska, or about 15% of the total shoreline, became oiled to some degree.(2) Much of this oiling, especially in the Gulf of Alaska, was patchy and scattered in a light covering, for example, as tar balls. Oiling was heaviest on the shorelines of islands in Prince William Sound that were directly in the path of the slick.
Graphic depiction of Exxon Valdez spill and cleanup.
Assessing the Efficacy and Safety of Bioremediation
Because of the difficulty of achieving sufficient oil removal by physical washing and collection, especially for oil that had moved into the subsurface, bioremediation became a prime candidate for continuing treatment of the shoreline. Bioremediation had been independently identified as a potential emerging technology within weeks of the spill. Both the EPA and Exxon quickly began laboratory tests, which were soon followed by field trials to determine whether fertilizer addition would enhance the rates of oil biodegradation.3,4 The focus of these tests was on the changes in oil composition due to microbial degradation, that is, the emphasis was placed on changes in oil chemistry rather than on the microbes themselves.
Field tests showed that fertilizer addition enhanced rates of biodegradation by the indigenous hydrocarbon-degrading microorganisms. Rates of biodegradation in bioremediation studies resulted in total petroleum-hydrocarbon losses as high as 1.2% per day. The rate of biodegradation slowed down once the more readily degradable components were depleted even when fertilizer was reapplied. The rate of oil degradation was a function of the ratio of N/biodegradable oil and time. Both polynuclear aromatic and aliphatic compounds in the oil were extensively biodegraded. Bioremediation increased the rate of polycyclic-aromatic-hydrocarbon (PAH) degradation in relatively undegraded oil by a factor of 2, and of alkanes by 5 relative to the controls. O2 dissolved in water was not rate-limiting—there was up to a 30% decline in O2 concentration in pore water following fertilizer application, but hypoxia was not detected.
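The reported enhancement factors can be put into rough perspective with a simple first-order model. The sketch below is illustrative only: the baseline rate constants are hypothetical values chosen for the example (the text gives enhancement factors and percent-per-day losses, not absolute constants), and real rates slowed as the readily degradable fraction was depleted.

```python
import math

def days_to_remove(fraction_removed, k_per_day):
    """Days to remove a given fraction of oil under first-order decay dC/dt = -k*C."""
    return -math.log(1 - fraction_removed) / k_per_day

# Hypothetical baseline rate constants (per day), chosen only for illustration.
baselines = {"PAHs": 0.005, "alkanes": 0.01}
enhancement = {"PAHs": 2, "alkanes": 5}  # enhancement factors reported for fertilized plots

for compound, k_baseline in baselines.items():
    k_fertilized = k_baseline * enhancement[compound]
    print(f"{compound}: 90% removal in {days_to_remove(0.9, k_baseline):.0f} days untreated "
          f"vs {days_to_remove(0.9, k_fertilized):.0f} days with fertilizer")
```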
Full-Scale Use of Bioremediation
Based upon the laboratory and field demonstration test results, the federal on-scene-coordinator approved the use of bioremediation employing fertilizer application for use on oiled shorelines of Prince William Sound.3−5 Several fertilizer formulations were considered; key considerations were retention in the oiled shorelines long enough to support biodegradation, availability in quantities needed to treat these shorelines, and lack of toxicity. Two fertilizers were selected for full scale bioremediation: the oleophilic fertilizer Inipol EAP22, manufactured by Elf Aquitaine of France; and the slow release fertilizer Customblen 28–8–0, manufactured by Sierra Chemicals of California. Customblen was spread at a rate of 27.8 g/m2. Inipol was then applied at a rate of 300 g/m2. These rates ensured a safe margin below concentrations of ammonium (NH4+) or NO3– ions considered toxic by EPA water quality standards.
Results for sediment samples collected and analyzed in 1989 indicated that about 25–30% of the total hydrocarbon in the oil originally stranded on Prince William Sound shorelines had been lost within the first days to weeks after the spill. The natural background rates of oil biodegradation initially were estimated at 1.3 g oil/kg sediment/yr for surface oil and 0.8 g oil/kg sediment/yr for subsurface oil.(3) Concentrations of naturally occurring oil-degrading bacteria during this period were (1–5) × 10^3 cells/mL of seawater, or about 1–10% of the total heterotrophic bacterial population. In late 1989, oil-degrading bacterial populations had greatly increased to about 1 × 10^5 cells/mL and made up about 40% of the heterotrophic population in oiled shoreline pore waters.
Large-scale applications of fertilizer during summer 1990 included over 1400 individual site treatments at 378 shoreline segments. Measurements in September of 1990 showed that the proportion of oil degrading bacteria had returned to background levels of under 1% of the total bacterial populations in pore waters. In 1991 about 220 individual site treatments were applied. By 1992 the length of shoreline still containing any significant amount of oil was 6.4 mi (10.2 km) or 1.3% of the shoreline oiled in 1989.(5)
In all, 107 000 lbs (48 600 kg) of N in the fertilizers were applied from 1989 to 1991, involving 2237 separate shoreline applications of fertilizer. This represents the largest use of bioremediation ever undertaken. A survey in May-June 1992 found that most of the oil had been removed from shorelines and on June 12, 1992 the U.S. Coast Guard and the State of Alaska officially declared the cleanup concluded. At that time some oil still remained but it was felt that further cleanup activities would not provide a net environmental benefit. The oil residue remaining in the shorelines was left to naturally biodegrade further although based upon previous oil spills it was clear that some residual oil would remain for an extended time period.
Should Bioremediation Be Reapplied Today to Treat Residual Subsurface Oil?
In 2001 and 2003 the National Oceanic and Atmospheric Administration (NOAA) conducted random sampling of 4982 pits dug at 114 sites in Prince William Sound to determine how much residual oil remained;6,7 these studies found that 97.8% of the pits had no oil or light oil residues even though these sites had been heavily to moderately oiled in 1989. Based upon the amount of oil remaining as of 2001, it was estimated that there had been a 22% per year decline from 1991 to 2001 in the amount of oil remaining on the shore.(6) After 2001 the rate of decrease of subsurface oil by natural processes slowed to about 4% per year as the remaining oil became more weathered and more sequestered.(8) Additional grid surveys were conducted by ExxonMobil (Exxon having merged with Mobil in 1999) in 2002 and 2007.9−11 The 2007 survey at 22 sites reported to be heavily oiled in NOAA’s 2001–2003 surveys found no oil in 71% of the pits; 21.8% had light levels or only traces of subsurface oil residue, 4.6% had moderate oiling, and 2.6% had heavy oil levels;(9) 87% of the samples were completely depleted of alkanes and 82% also had lost more than 70% of the original PAHs.12−14
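For a rough sense of what those decline rates imply, the following back-of-the-envelope Python sketch projects the lingering-oil fraction forward; the starting amount is normalized to 1.0 and the simple compounding assumption is an illustration, not a result from the surveys.

```python
def fraction_remaining(years_fast, years_slow, fast_decline=0.22, slow_decline=0.04):
    """Fraction of the 1991 residue left after `years_fast` years at the 22%/yr
    decline and `years_slow` further years at the ~4%/yr decline quoted above."""
    return (1 - fast_decline) ** years_fast * (1 - slow_decline) ** years_slow

print(f"1991 -> 2001: {fraction_remaining(10, 0):.3f} of the residue left")   # ~0.08
print(f"1991 -> 2011: {fraction_remaining(10, 10):.3f} of the residue left")  # ~0.06
```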
The residual oil occurs as localized patches. Persistent buried oil has been found in other spills, for example, the Florida spill in a saltmarsh in Falmouth, MA. In Prince William Sound the remaining oil residue is buried in boulder/cobble armored beaches in thin (typically about 10 cm thick) lenses containing fine-grained sediments. It is sequestered and the low water flow means that O2 and nutrients found in the surrounding pore waters are not flowing through the oil layer, limiting biodegradation rates,15,16 even though there are sufficient concentrations of nutrients and oxygen in the adjacent pore waters to support biodegradation of the residual oil components.(14) Most of the remaining subsurface oil residue is located in the mid-upper intertidal zone away from biota.9,14 Concerns, however, have been raised that the lingering oil residue could have adverse impacts.7,17,18 Given that the residual subsurface oil is sequestered the risks of mobilizing the oil through any treatment would seem to outweigh the potential benefits, that is, the best approach would seem to simply allow the residual oil to slowly undergo further natural biodegradation. Nevertheless there have been proposals to bioremediate the remaining subsurface oil residues19,20 even though direct exposure of biota has been demonstrated to be extremely unlikely.21−23
Venosa et al.(20) showed in laboratory experiments that if sediments were displaced, so that the oil was no longer sequestered, rapid biodegradation of the residual oil would occur. They concluded that O2 is the main limiting factor. They also postulated that if NO3– was added there could be anaerobic biodegradation of associated organic matter so that the porosity of the sediments would increase and oxygenated water could reach the oil. Given the patchy distribution of oil, the fact that most of the oil is already highly weathered so that the residual compounds are highly insoluble, and that sequestered oil is not reaching sensitive biota, Atlas and Bragg(13) have contended that the value of any such treatment will likely be very limited. Additional bioremediation field trials, though, are planned for 2011. The debate, thus, continues about whether bioremediation can still be effective more than 21 years after the spill.
The BP Deepwater Horizon Oil Leak in the Gulf of Mexico
On April 20, 2010, high-pressure oil and gas escaped from BP’s Deepwater Horizon exploratory well in Mississippi Canyon Block 252, which was located 77 km offshore. In the subsequent fire and explosions, 11 men tragically lost their lives. The Deepwater Horizon drilling rig burned and ultimately sank in 1500 m of water 2 days later. The blowout preventer (BOP) at the wellhead and all the emergency shut-off equipment failed.
Upon sinking, the 21 in. (53 cm) riser pipe, from the wellhead to the drilling platform, collapsed onto the sea floor. Oil leaked from multiple locations along the riser pipe and the top of the BOP (Figure 2). In all, it took 84 days to stop the flow of oil from the Deepwater Horizon well. The oil from this well (Macondo oil) is typical of light Louisiana crude from petroleum reservoirs more than 5000 m deep; it has an API gravity = 35.2.(24)
Graphic depiction of Deepwater Horizon spill and cleanup.
The actual volume of oil and gas released from the Deepwater Horizon well is very difficult to determine. The National Incident Command’s Flow Rate Technical Group (FRTG) estimated the oil release at 4.9 million barrels (205.8 million gallons; 780 million liters), of which 0.8 million barrels were captured before release into the water column.(24) Previously, the IXTOC-I well blowout in the Bay of Campeche, estimated at 147 million gallons (556 million liters), was the largest oil spill in the Gulf of Mexico and the second largest in the world (the largest spill was in the Persian Gulf in 1991 as a result of the intentional release of oil by Iraq). The FRTG’s Oil Budget Calculator estimated that, of the oil released from the Deepwater Horizon well, 3% was skimmed, 5% was burned, 8% was chemically dispersed, 16% was naturally dispersed, 17% was captured, 25% was evaporated or dissolved, and 26% was remaining.
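As a quick arithmetic check on the oil-budget figures above, the sketch below (a simple tabulation, not part of the FRTG report) confirms that the listed categories sum to 100% and converts each percentage into barrels.

```python
total_release_barrels = 4.9e6  # FRTG estimate quoted above

oil_budget = {
    "skimmed": 0.03,
    "burned": 0.05,
    "chemically dispersed": 0.08,
    "naturally dispersed": 0.16,
    "captured": 0.17,
    "evaporated or dissolved": 0.25,
    "remaining": 0.26,
}

print(f"categories account for {sum(oil_budget.values()):.0%} of the release")
for category, fraction in oil_budget.items():
    barrels = fraction * total_release_barrels
    print(f"{category:>24}: {barrels / 1e6:.2f} million barrels")
```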
Dispersion of Oil
One of the strategies employed to mitigate the environmental and safety impacts of the oil from the Deepwater Horizon was to inject the dispersant COREXIT 9500 directly at the wellhead or end of the riser pipe at a water depth of 1500 m. The goal was to disperse the oil at depth, thereby preventing large slicks from forming directly at the surface above the wellhead, where many ships were gathered to stop the leak, and preventing the oil from impacting the shoreline. Once it was demonstrated in early May that, within 4 h of injecting COREXIT 9500 at the wellhead, less oil was coming to the surface immediately above the wellhead (making it safer for leak operations), the EPA established a rigorous daily water sampling program.
Additionally, there was physical dispersion because the oil was injected into the deep sea at high pressure and temperature. While large oil droplets moved to the surface, droplets between 10 and 60 μm were neutrally buoyant and were picked up by the current between 900 and 1300 m.(25) The deep-water dispersed oil, present at a concentration of less than 10 ppm total petroleum hydrocarbons, has been likened to a “cloud”. This “cloud” of dispersed oil could be detected by fluorescence moving away from the wellhead, generally in a southwesterly direction.(25) Drops in O2 concentration, which did not result in anoxic conditions, were also often detected in association with the “cloud” of dispersed oil in the deep sea.(26)
Microbiology of the Deep-Sea “Cloud” of Dispersed Oil
The deep-sea “cloud” of dispersed oil was found to have lower PO43- and dissolved O2 concentrations, slightly higher NH4+ concentrations, and significantly lower NO3– concentrations,(26) suggesting bacterial activity in the “cloud” of dispersed oil. The total bacterial density was significantly higher inside the “cloud” (up to 10^5 cells/mL) than outside it (approximately 10^3 cells/mL).
Using a 16S rRNA microarray, 951 subfamilies of bacteria were detected from 62 phyla; however, only 16 subfamilies of the γ-proteobacteria were significantly enriched in the cloud, with 3 families in the order Oceanospirillales dominating.(26) Clone libraries, qPCR, phospholipids, and functional gene arrays further supported the finding of enrichment of oil degraders. The “cloud community” was also cold-loving (psychrophilic), since the temperature below 700 m in the Gulf of Mexico is always 2–5 °C.
The average half-life of alkanes from two different cloud analyses and two different lab microcosm assays ranged from 1.2 to 6.1 days,(26) which is similar to values reported for other cold-water studies(27) (Table 2). During the release (April–July), concentrations of polynuclear aromatic hydrocarbons also decreased rapidly with distance from the release point (the wellhead) and were seen to reach <1.0 ppb within 15–20 mi (24–32 km) in all directions other than to the southwest, where a small number of samples exceeded 1 ppb out to 40 mi (64 km).(28) Much of the decline in PAHs is attributable to microbial degradation.(29)
Oil Biodegradation Half-Life Comparisons
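Half-lives like those quoted above can be translated into first-order rate constants with k = ln(2)/t_half. The short Python sketch below does this for the 1.2–6.1 day range; the 30-day projection is an extrapolation that assumes simple first-order decay throughout, which is a simplification.

```python
import math

# k = ln(2) / t_half for first-order decay; half-lives from the range quoted above.
for t_half_days in (1.2, 6.1):
    k = math.log(2) / t_half_days
    remaining_30d = math.exp(-k * 30)
    print(f"t_half = {t_half_days:>3} d -> k = {k:.2f} per day, "
          f"fraction of alkanes left after 30 days: {remaining_30d:.1e}")
```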
Gaseous compounds also were biodegraded in the water column. Valentine et al.(30) reported that early in the spill propane (C3H8) and ethane (C2H6) were the primary drivers of microbial respiration, accounting for up to 70% of the observed oxygen dips in fresh “plumes”. Based on CH4 and O2 distributions, Kessler et al.(31) reported that within ∼120 days from the onset of release, a vigorous deepwater bacterial bloom of methanotrophs had respired nearly all the released methane. Molecular analyses for methanotrophs in September 2010 showed relative abundances of 5–36% of the gene sequences detected, whereas in June 2010, before the leak was stopped, no methanotrophs were detected. Clearly, as the spill events progressed, the microbial populations changed in response to the available hydrocarbons.
Oil Biodegradation in Surface Waters and Sediments
There have been reports of sediment contamination based upon visual observations.(32) Sediment collected from more than 120 sites showed qualitative evidence for oil in up to 29% of the cores. However, detailed chemical analyses indicate that only 6% of these cores were contaminated with Macondo oil, all of which were within 2.7 km of the wellhead.(33) Thus, the evidence so far indicates that sediment contamination was limited primarily to near the wellhead.
With regard to surface oil and shorelines, up to 40% of the oil was lost in the water column between the wellhead and the surface, largely due to dissolution and mixing as the oil moved to the surface and to evaporation as soon as it reached the surface, which lowered the hydrocarbon concentrations and changed the composition of the oil.34,35 Analyses of surface oil samples from the source toward the shore showed that volatile organic compounds were either dissolved or evaporated from the Macondo oil near the source, and oil that approached the near-shore environment no longer had BTEX compounds present.34−36 Photooxidation may also have been important for oil on the surface as it moved shoreward. In samples that were analyzed for BTEX, these compounds were never detected in Macondo oil that reached the shore, nor were BTEX compounds detected in near-shore sediments.(33) Dissolution and evaporation appear to have been more important than biodegradation in the weathering of the surface slick.34,36 Evaporation resulted in the loss of alkanes with chain lengths up to C20. Clearly, physical dispersion and evaporation competed with biodegradation, so the overall weathering of the oil that did reach the shore was the result of multiple processes. Certainly the oil that has sunk into the shoreline and marsh sediments will degrade much more slowly as it becomes nutrient depleted and potentially anaerobic due to O2 diffusion limitations. It is too soon to tell what the impact of the Macondo oil will be on the delicate marsh environments and beach communities in Louisiana, Mississippi, Alabama, and Florida; many further studies will be needed.
The Exxon Valdez and BP Deepwater Horizon oil spills provide a number of lessons regarding the role of microbial biodegradation in determining the fate of spilled oil. Biodegradation and other natural weathering processes will remove most of the contaminating hydrocarbons, but this can take months to years in areas of high oil concentrations. Such was the case for oil on shorelines impacted by the Exxon Valdez oil spill. The major focus of biodegradation studies for the Exxon Valdez was the shorelines; the oil moved on the surface, and while there were studies of decreasing oil concentrations in the water column, no specific biodegradation studies were conducted there as they were for the BP spill, with its unique deep-water cloud of dispersed oil. Also, the advanced molecular techniques for characterizing microbial communities were not available at the time of the Exxon Valdez spill; given the advances in molecular biology over the past two decades, it is not surprising that extensive molecular analyses of microbial communities have been performed in the Gulf of Mexico following the BP Deepwater Horizon spill.
When oil is highly dispersed in the water column and where microbial populations are well adapted to hydrocarbon exposure, such as in Gulf of Mexico waters, biodegradation of oil proceeds very rapidly. Bioremediation through fertilizer addition can be an effective means of speeding up rates of oil biodegradation in some situations. One should, however, not expect 100% removal of oil by biodegradation—patches of highly weathered oil likely will remain in some environments. Decisions as to whether or not to rely upon microbial oil biodegradation, including whether to apply bioremediation, should be driven by risk and not just the presence of detectable hydrocarbons. In the case of the BP Deepwater Horizon spill, the leak was capped on July 15; by the first week of August, no surface oil slick was observed and concentrations of detectable oil in the water column were greatly diminished.(33) The natural rapid attenuation of oil in the BP Deepwater Horizon spill is due to a number of factors, for example, the type of crude, the offshore location, the jetting of the oil into the deep sea, rapid dissolution, and microbial adaptation. The Gulf of Mexico has more natural seeps of oil than any marine area in North America, contributing more than 400,000 barrels of oil a year to the Gulf of Mexico.(37) In the Gulf of Mexico the microbiota are likely to be better adapted to oil, because of natural seeps and offshore drilling, than almost anywhere else in the world. Thus, it is not surprising that bacteria in the Gulf of Mexico responded rapidly to the influx of oil.
In conclusion, the fate of any oil spill will depend upon a unique set of circumstances that govern its risk and impacts, including the volume of oil spilled, the chemical nature of the oil, and the ecosystems, with their specific environmental conditions, impacted by the spilled oil. However, one common denominator is the cosmopolitan nature of oil-degrading microbes. Natural and enhanced biodegradation greatly reduced the concentrations of oil following both the Exxon Valdez and BP Deepwater Horizon oil spills. It was the unseen microbes that were largely responsible for the disappearance of the spilled oil that had spread into the environment. Responders to future spills would do well to mobilize, as rapidly as possible, a scientific understanding of the unique conditions of the spill, that is, to assess the potential for both natural and enhanced biodegradation and to determine the best possible approach to minimize the risk and impact of the spill on the environment.
Terry Hazen who was primarily responsible for the discussion of the BP Deepwater Horizon spill was funded by a subcontract from the University of California at Berkeley, Energy Biosciences Institute (EBI), to Lawrence Berkeley National Laboratory. EBI receives funds from BP. Ronald Atlas who was primarily responsible for the discussion of the Exxon Valdez spill serves as a consultant to Exxon-Mobil on bioremediation; he also is a consultant to BP on oil biodegradation.
Ronald Atlas is Professor of Biology at the University of Louisville. He has over 40 years of experience studying the role of microorganisms in oil biodegradation and helped pioneer the field of bioremediation. He has worked extensively on the bioremediation of the Exxon Valdez spill. Terry Hazen is a DOE BER distinguished scientist in the Earth Sciences Division at Lawrence Berkeley National Laboratory. He has studied oil, chlorinated solvent, and metal and radionuclide bioremediation for more than 30 years. He has been extensively studying the microbial degradation of oil from the BP Deepwater Horizon spill in the Gulf of Mexico.
- Wolfe D. A.; Hameedi M. J.; Galt J. A.; Watabayashi D.; Short J.; O’Clair C.; Rice S.; Michel J.; Payne J. R.; Braddock J.; Hanna S.; Sale D. Fate of the oil spilled from the T/V Exxon Valdez in Prince William Sound, Alaska. Environ. Sci. Technol. 1994, 28, 561A–568A.
- Owens E. H.. Shoreline conditions following the Exxon Valdez oil spill as of Fall 1990. In Proceedings of the 14th Arctic and Marine Oilspill Program (AMOP) Technical Seminar; Environment Canada, Vancouver: British Columbia, Canada, 1991; pp 579–606.
- Bragg J. R.; Prince R. C.; Wilkinson J. B.; Atlas R.Bioremediation for Shoreline Cleanup Following the 1989 Alaskan oil Spill; Exxon: Houston, TX, 1992.
- Pritchard P. H.; Costa C. F. EPA’s Alaska oil spill bioremediation project. Environ. Sci. Technol. 1991, 25, 372–379.
- Bragg J. R.; Prince R. C.; Harner E. J.; Atlas R. M. Effectiveness of bioremediation for the Exxon Valdez oil spill. Nature 1994, 368, 413–418.
- Short J. W.; Lindeberg M. R.; Harris P. M.; Maselko J. M.; Pellak J. J.; Rice S. D. Estimate of oil persisting on the beaches of Prince William Sound 12 years after the Exxon Valdez oil spill. Environ. Sci. Technol. 2004, 38, 19–25. [PubMed]
- Short J. W.; Maselko J. M.; Lindeberg M. R.; Harris P. M.; Rice S. D. Vertical distribution and probability of encountering intertidal Exxon Valdez oil on shorelines of three embayments within Prince William Sound. Environ. Sci. Technol. 2006, 40, 3723–3729. [PubMed]
- Short J. W.; Irvine G. V.; Mann D. H.; Maselko J. M.; Pella J. J.; Lindeberg M. R.; Payne J. M.; Driskell W. B.; Rice S. D. Slightly weathered Exxon Valdez oil persists in Gulf of Alaska beach sediments after 16 Years. Environ. Sci. Technol. 2007, 41, 1245–1250. [PubMed]
- Boehm P. D.; Page D. S.; Brown J. S.; Neff J. M.; Bragg J. R.; Atlas R. M. Distribution and weathering of crude oil residues on shorelines 18 years after the Exxon Valdez spill. Environ. Sci. Technol. 2008, 42, 9210–9216. [PubMed]
- Owens E. H.; Taylor E.; Humphrey B. The persistence and character of stranded oil on coarse-sediment beaches. Mar. Pollut. Bull. 2008, 56, 14–26. [PubMed]
- Taylor E.; Reimer D. Oil persistence on beaches in Prince William Sound—A review of SCAT surveys conducted from 1989 to 2002. Mar. Pollut. Bull. 2008, 43, 458–474. [PubMed]
- Atlas R. M.; Bragg J.Assessing the long-term weathering of petroleum on shorelines: uses of conserved components for calibrating loss and bioremediation potential. In Proceedings of the 29th Arctic and Marine Oil Spill Program (AMOP), June 5–7, Edmonton, Alberta, Canada. 2007, pp. 263–290.
- Atlas R.; Bragg J. R. Bioremediation of marine oil spills: when and when not - the Exxon Valdez experience. Microbial Biotechnol. 2009, 2, 213–221. [PMC free article][PubMed]
- Atlas R.; Bragg J. R.Evaluation of PAH depletion of subsurface Exxon Valdez oil residues remaining in Prince William Sound in 2007–2008 and their likely bioremediation potential. In Proceedings of the 32nd Arctic and Marine Oil Spill Program (AMOP), Technical Seminar; Environment Canada: Ottawa, ON, 2009; Vol. 2, pp 723-747.
- Boufadel M. C.; Sharifi Y.; VanAken B.; Wrenn B.; Lee K. Nutrient and oxygen concentrations within the sediments of an Alaskan beach polluted with the Exxon Valdez Oil Spill. Environ. Sci. Technol. 2010, 44, 7418–7424. [PubMed]
- Li H.; Boufadel M. C. Long-term persistence of oil from the Exxon Valdez spill in two-layer beaches. Nat. Geosci. 2010, 3, 96–99.
- Bodkin J. L.; Ballachey B. E.; Dean T. A.; Fukuyama A. K.; Jewett S. C.; McDonald L.; Monson D. H.; O’Clair C. E.; VanBlaricom G. R. Sea otter population status and the process of recovery from the 1989 Exxon Valdez oil spill. Mar. Ecol.: Prog. Ser. 2002, 241, 237–253.
- Esler D.; Bowman T. D.; Trust K. A.; Ballachey B. E.; Dean T. A.; Jewett S. C.; O’Clair C. E. Harlequin duck population recovery following the Exxon Valdez oil spill: Progress, process and constraints. Mar. Ecol.: Prog. Ser. 2002, 241, 271–286.
- Michel J.; Nixon Z.; Cotsapas L.Evaluation of oil remediation technologies for lingering oil from the Exxon Valdez oil spill in Prince William Sound, Alaska. The Exxon Valdez Oil Spill Trustee Council Restoration Project No. 050778 Final Report. 2006.
- Venosa A. D.; Campo P.; Suidan M. T. Biodegradability of lingering crude oil 19 years after the Exxon Valdez oil spill. Environ. Sci. Technol. 2010, 44, 7613–7621. [PubMed]
- Boehm P. D.; Neff J. M.; Page. D. S. Assessment of polycyclic aromatic hydrocarbon exposure in the waters of Prince William Sound after the Exxon Valdez oil spill: 1989–2005. Mar. Pollut. Bull. 2007, 54, 339–356. [PubMed]
- Boehm P. D.; Page D. S.; Neff J. M.; Johnson C. B. Potential for sea otter exposure to remnants of buried oil from the Exxon Valdez oil spill. Environ. Sci. Technol. 2007, 41, 6860–6867. [PubMed]
- Neff J. M.; Bence A. E.; Parker K. R.; Page D. S.; Brown J. S.; Boehm P. D. Bioavailability of polycyclic aromatic hydrocarbons from buried shoreline oil residues 13 years after the Exxon Valdez oil spill: A multispecies assessment. Environ. Toxicol. Chem. 2006, 25, 947–961. [PubMed]
- The Federal Interagency Solutions Group: Oil Budget Calculator Science and Engineering Team. 2010. Oil Budget Calculator Technical Documentation. http://www.restorethegulf.gov/sites/default/files/documents/pdf/OilBudgetCalc_Full_HQ-Print_111110.pdf (accessed July 6, 2011).
- Camilli R.; Reddy C. M.; Yoerger D. R.; Van Mooy B. A. S.; Jakuba M. V.; Kinsey J. C.; McIntyre C. P.; Sylva S. P.; Maloney J. V. Tracking hydrocarbon plume transport and biodegradation at Deepwater Horizon. Science 2010, 330, 201–204. [PubMed]
- Hazen T. C.; Dubinsky E. A.; DeSantis T. Z.; Andersen G. L.; Piceno Y. M.; Singh N.; Jansson J. K.; Probst A.; Borglin S. E.; Fortney J. L.; Stringfellow W. T.; Bill M.; Conrad M. E.; Tom L. M.; Chavarria K. L.; Alusi T. R.; Lamendella R.; Joyner D. C.; Spier C.; Baelum J.; Auer M.; Zemla M. L.; Chakraborty R.; Sonnenthal E. L.; D’Haeseleer P.; Holman H. Y. N.; Osman S.; Lu Z. M.; Van Nostrand. J.; Deng Y.; Zhou J. Z.; Mason O. U. Deep-sea oil plume enriches indigenous oil-degrading bacteria. Science 2010, 330, 204–208. [PubMed]
- Venosa A. D.; Holder E. L. Biodegradability of dispersed crude oil at two different temperatures. Mar. Pollut. Bull. 2007, 54, 545–553. [PubMed]
- Boehm P. D.; Cook L.; Murray K. J.Aromatic hydrocarbon concentration in seawater: Deepwater Horizon oil spill. In Proceedings 2011 International Oil Spill Conference; American Petroleum Institute: Washington DC, 2011.
- Boehm P. D.; Cook L. L.; Barrick R.; Atlas R. M. Preliminary water column PAH exposure assessment: weathering of oil in the water column, and evidence for rapid biodegradation. Paper presented at Gulf Oil Spill Focused Topic SETAC Meeting, Pensacola, Florida, April 26, 2011.
- Valentine D. L.; Kessler J. D.; Redmond M. C.; Mendes S. D.; Heintz M. B.; Farwell C.; Hu L.; Kinnaman F. S.; Yvon-Lewis S.; Du M. R.; Chan E. W.; Tigreros F. G.; Villanueva C. J. Propane respiration jump-starts microbial response to a deep oil spill. Science 2010, 330, 208–211. [PubMed]
- Kessler J. D.; Valentine D. L.; Redmond M. C.; Du M. R.; Chan E. W.; Mendes S. D.; Quiroz E. W.; Villanueva C. J.; Shusta S. S.; Werra L. M.; Yvon-Lewis S. A.; Weber T. C. A persistent oxygen anomaly reveals the fate of spilled methane in the deep Gulf of Mexico. Science 2011, 331, 312–315. [PubMed]
- Joye S.. Offshore ocean aspects of the Gulf oil well blowout. Paper Presented at AAAS Annual Meeting, Washington DC. Feb. 19, 2011.
- Operational Science Advisory Team. 2010. Summary Report for Sub-Sea and Sub-Surface Oil and Dispersant Detection: Sampling and Monitoring. New Orleans: Unified Area Command. http://www.restorethegulf.gov/sites/default/files/documents/pdf/OSAT_Report_FINAL_17DEC.pdf (accessed July 6, 2011).
- Gong C.; Milkov A. V.; Grass D.; Sullivan M.; Searcy T.; Dzou L.; Depret P.The significant impact of weathering on MC252 oil chemistry and its fingerprinting of samples collected from May to September 2010, Paper presented at SETAC North America 31st Annual Meeting, Portland, OR, November 7–11, 2010.
- Operational Science Advisory Team. 2011. Summary Report for Fate and Effects of Remnant Oil in the Beach Environment. New Orleans: Gulf Coast Incident Management Team. http://www.restorethegulf.gov/sites/default/files/u316/OSAT-2%20Report%20no%20ltr.pdf (accessed July 6, 2011).
- Brown J. S. ; Beckmann D.; Bruce L.; Mudge S.PAH depletion ratios document rapid weathering and attenuation of PAHs in oil samples collected after the Deepwater Horizon Incident. Paper presented at SETAC North America 31st Annual Meeting, Portland, OR, November 7–11, 2010.
- Oil in the Sea III: Inputs. Fates, and Effects; National Academy of Sciences: The National Academies Press: Washington, DC, 2003.
CASE STUDIES OF BIOREMEDIATION
- CASE STUDY 1: Audio presentation - a common application of cleaning up oil contaminated water. This is the Exxon Valdez case study, 1989 (looked at within lecture 11).
- CASE STUDY 2: Audio presentation - Bioremediation of Gulf Oil Spill, 2010
- CASE STUDY 3: Audio presentation - Bioremediation of Petroleum Hydrocarbons and Solvents. You will see how bio-stimulation of existing microorganisms was successfully used to remediate almost all of the contamination.
- CASE STUDY 4: Audio presentation - Bioremediation of multiple hazardous materials by phyto-remediation and constructed wetlands.
- CASE STUDY 5: Audio presentation - Bioremediation of a former tar distillery, Merseyside, UK. Here, organic pollutants, tar residuals & heavy metals were bioremediated by windrow composting and biopiles. This case study is a paper from a project in the UK and will greatly broaden your understanding of bioremediation as a whole process, e.g. from initial feasibility studies, considerations and results achieved.
- CASE STUDY 6: Love Canal - health consequences of poorly managed landfill, covering how bioremediation was implemented to treat the contaminated land.
1) Bioremediation of Exxon Valdez Oil Spill, Alaska: 1989
2) Bioremediation of Gulf Oil Spill
3) Bioremediation of Petroleum Hydrocarbons and Solvents:
Bio-stimulation of Native Microorganisms
4) Audio Presentation: In-situ & Ex-situ Bioremediation of Multiple Hazardous Materials:
Phytoremediation and Constructed Wetlands
5) Bioremediation of a former industrial site using Windrow Composting and Non-aerated Bio-piles:
The Lanstar Tar Distillery, Merseyside, UK
It describes the soil sampling, monitoring, bioremediation rates and results achieved by both windrow composting and non-aerated bio-piles.
HEALTH CONSEQUENCES OF HAZARDOUS WASTE & THE NEED FOR BIOREMEDIATION!
Love Canal Landfill (Part 1) 1978
Love Canal Landfill (Part 2)
Links to Further Case Studies.
Other Information Sources
“A Citizen’s Guide to Bioremediation,” EPA 542-F-01-001, Office of Solid Waste and Emergency Response, U.S. Environmental Protection Agency, April 2001...
"Bioremediation". In: Encyclopedia of Earth. Eds. Cutler J. Cleveland (Washington, D.C.: Environmental Information Coalition, National Council for Science and the Environment). [First published in the Encyclopedia of Earth January 12, 2011; Last revised Date September 23, 2011; Retrieved March 7, 2012 <http://www.eoearth.org/article/Bioremediation?topic=49587> |
Straight Angle – Definition With Examples
Understanding the basics of geometry is like learning a new language. We’re here at Brighterly to help simplify this language of shapes and sizes for our young learners. Today, we’re looking at a simple yet vital concept in geometry, the Straight Angle.
What Is an Angle?
Before we dive into straight angles, let’s first understand what an angle is. An angle is a fundamental concept in geometry. Formed by two rays (or lines) that share a common endpoint, it measures the amount of rotation between the two rays. This common endpoint is known as the vertex. Angles are usually measured in degrees. They are a foundational element in the study of geometry and play a crucial role in various applications, from architecture to game design.
What Is a Straight Angle?
Now, let’s focus on the Straight Angle. A straight angle, as the name suggests, is an angle that measures exactly 180 degrees. It looks like a straight line and is thus named a “straight” angle. Every line segment can be considered to include a straight angle.
Properties of a Straight Angle
Some interesting properties set straight angles apart:
- They measure exactly 180 degrees.
- They look like a straight line.
- Any line segment includes a straight angle.
- It is exactly halfway between a zero angle (0 degrees) and a full angle (360 degrees).
Straight Angle Degree
A Straight Angle Degree is the measurement of a straight angle, which is always 180 degrees. It signifies half a revolution. If you mark a point on a line, the two rays that extend from that point in opposite directions along the line form a straight angle.
Drawing a Straight Angle Using a Protractor
Drawing a straight angle can be a fun activity using a simple tool called a protractor. Start by drawing a line segment on a piece of paper. Place the protractor on the line such that the midpoint of the protractor lies on the line segment. Mark the 180-degree point, and draw a line from the midpoint to this point. Congratulations, you have drawn a straight angle!
Straight Angle Pair
A Straight Angle Pair refers to two straight angles that combine to form a full angle of 360 degrees. A simple example of this is when two straight lines intersect.
Straight Angles in Real Life
Straight angles are everywhere around us. From the hands of a clock at 6 o’clock to the corners of a rectangular or square table, you’ll find countless straight angles in your everyday life.
Six Types of Angles
There are several types of angles that we commonly study in geometry, namely:
- Acute Angle (less than 90 degrees)
- Right Angle (90 degrees)
- Obtuse Angle (between 90 and 180 degrees)
- Straight Angle (180 degrees)
- Reflex Angle (greater than 180 degrees but less than 360)
- Full Angle (360 degrees)
Can we consider a Triangle Made from a Straight Angle?
Technically, a triangle cannot be made from a straight angle. A triangle, by definition, is a closed shape with three sides and three angles, and the sum of these angles is always 180 degrees, the measure of a straight angle. If one angle of a triangle were a straight angle, the other two angles would have to measure 0 degrees, which is impossible for a closed three-sided shape.
How to construct a Straight Angle?
Constructing a straight angle is quite simple. All you need is a ruler. Draw a straight line with the ruler, and there you have it – a straight angle!
Solved Examples on Straight Angles
Let’s try out some problems involving straight angles:
If a straight line passes through a point, what is the measure of the angle formed along the line at that point? Answer: 180 degrees, because the angle formed along a straight line is a straight angle.
If one angle of a linear pair (a pair of adjacent angles where their non-common sides form a straight line) is 75 degrees, what is the measure of the other angle? Answer: 105 degrees, because the sum of a linear pair is always 180 degrees, i.e., a straight angle.
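The linear-pair rule from the second example translates directly into a tiny helper function. This is a sketch for practice purposes; the function name is made up for this example.

```python
def linear_pair_partner(angle_degrees):
    """Given one angle of a linear pair, return the other one; the two angles
    must add up to 180 degrees because together they form a straight angle."""
    if not 0 < angle_degrees < 180:
        raise ValueError("A linear-pair angle must be strictly between 0 and 180 degrees.")
    return 180 - angle_degrees

print(linear_pair_partner(75))   # 105, matching the worked example above
print(linear_pair_partner(120))  # 60
```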
Practice Problems on Straight Angles
Now, here are some practice problems for you:
- Two angles form a linear pair. The measure of one angle is twice that of the other. What are the measures of the angles?
- Can you draw a straight angle using only a ruler? If yes, how?
- If the sum of two angles is 180 degrees, what can you say about the angles?
Geometry, with its fascinating world of shapes and angles, can initially appear complex and daunting. However, at Brighterly, we believe that by breaking down these concepts, we can make them accessible and even exciting for young learners. In this article, we’ve explored the concept of angles, with a special focus on straight angles.
Straight angles, as we’ve discovered, are angles that measure exactly 180 degrees, resembling a straight line. They possess unique properties, such as being the intermediate between a zero angle and a full angle, and can be found in various real-life scenarios. From the hands of a clock pointing at 6 o’clock to the corners of everyday objects like tables, straight angles are all around us, connecting geometry to our everyday experiences.
Frequently Asked Questions on Straight Angles
Can a straight angle be complementary to another angle?
No, a straight angle cannot be complementary to another angle. Complementary angles are two angles whose measures add up to 90 degrees, while a straight angle measures exactly 180 degrees.
How do straight angles contribute to everyday life?
Straight angles can be found in numerous aspects of our daily lives. From the shape of a clock’s hands at 6 o’clock to the corners of geometric shapes like tables, windows, and buildings, straight angles are fundamental elements that define the structure and design of objects around us.
Are straight angles the largest possible angle?
No, straight angles are not the largest possible angles. A straight angle measures 180 degrees, while a full angle measures 360 degrees, representing a complete revolution or a circle.
Can a triangle contain a straight angle?
No, a triangle cannot contain a straight angle. By definition, a triangle is a polygon with three angles, and the sum of its interior angles is always 180 degrees. Therefore, the angles in a triangle are always less than 180 degrees.
Wikipedia: Wikipedia offers a comprehensive overview of angles, including their definitions, properties, and various types. It serves as a valuable starting point for understanding geometric concepts.
Britannica: Britannica’s mathematics section offers in-depth articles on angles, providing a scholarly perspective on the topic. It is a reliable resource for gaining a deeper understanding of geometric concepts.
National Council of Teachers of Mathematics (NCTM): The NCTM is a professional organization that provides resources and guidance for teaching mathematics. Their publications and research papers offer valuable insights into teaching angles and geometry to children.
Formation and evolution of the Solar System
The formation and evolution of the Solar System is the name for ideas of how the Solar System began, and how it will go on changing. The accepted idea is that 4.6 billion years ago, there was a very big cloud of gas in our area of space, known as a nebula. All things with mass come together, or gravitate towards one another. This pulled all the gas towards the center. Eventually the pressure at the center raised the temperature so that hydrogen atoms fused together to make helium. The process by which solar systems are created is called the nebular theory.
The spin of the planets around the Sun, and of each around its own axis, was first caused by the original gas cloud having different densities in different places. As the cloud contracted under gravity, conservation of angular momentum made the rotation speed up, and the Solar System's shape flattened. This rapid rotation largely prevented the gas from directly accreting (falling) onto the central core. Instead, the gas was forced to spread outwards near its equatorial plane, forming a disk, which in turn accreted onto the core.
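A rough numerical illustration of that spin-up (a sketch, not from the article): treating the cloud as a uniform sphere with moment of inertia I = (2/5) M R^2, conserving L = I * omega means the rotation rate grows as 1/R^2 when the radius shrinks.

```python
def spin_up_factor(initial_radius, final_radius):
    """How many times faster a uniform sphere rotates after contracting,
    if angular momentum L = (2/5) * M * R**2 * omega is conserved."""
    return (initial_radius / final_radius) ** 2

print(spin_up_factor(1000, 1))  # shrinking to 1/1000 of the radius -> 1,000,000x faster
print(spin_up_factor(100, 1))   # shrinking to 1/100 of the radius  -> 10,000x faster
```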
Gravity pulled the atoms at the centre of the cloud closer and closer together, heating them up; this energy eventually made our star, the Sun. The leftover gas mostly went to the gas giants—also known as Jovian planets. The rock and dust went off to make the terrestrial planets, their moons, asteroids and all other objects in the Solar System.
Because of the Sun's huge mass (99.86% of the whole mass of the Solar System), it has very strong gravity. The centrifugal force of the planets going round the Sun balances the gravitational pull of the Sun. The huge density at its core causes a fusion reaction which turns hydrogen into helium, with the radiation of heat, light and other forms of electromagnetic radiation.
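The balance described above can be checked with a one-line calculation: for a roughly circular orbit, gravity supplies the centripetal force, giving an orbital speed v = sqrt(G * M_Sun / r). The sketch below uses standard constants and should give roughly 30 km/s for the Earth.

```python
import math

G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30          # mass of the Sun, kg
R_EARTH_ORBIT = 1.496e11  # average Earth-Sun distance, m (1 astronomical unit)

orbital_speed = math.sqrt(G * M_SUN / R_EARTH_ORBIT)
print(f"Earth's orbital speed is about {orbital_speed / 1000:.1f} km/s")  # ~29.8 km/s
```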
The next issue is: if the Sun turns hydrogen into helium, where do all the other elements come from? There is only one possible answer: these higher elements came from earlier generations of stars. Huge supernovas which exploded billions of years ago in the neighbourhood of the young Solar System produced the higher elements. Huge stars run through their life cycle much faster than smaller stars. That is caused by the even higher pressures and temperatures inside them as compared with an average main sequence star like the Sun.
History of the idea
The nebular hypothesis, as it was called, was first worked out in the 18th century. Three men worked on it: Emanuel Swedenborg, Immanuel Kant, and Pierre-Simon Laplace.
Swedenborg first had the idea, and Kant worked it up into a proper theory. In 1755 Kant published his Universal natural history and theory of the heavens (in German, of course). He argued that gaseous clouds, nebulae, slowly rotate, gradually collapse and flatten due to gravity. They eventually form stars and planets.
Meanwhile, a similar model was developed independently and proposed in 1796 by Laplace in his Exposition du système du monde. He thought that the Sun originally had an extended hot atmosphere throughout the volume of the Solar System. His theory had a contracting and cooling protosolar nebula. As this cooled and contracted, it flattened and spun more rapidly, throwing off (or shedding) a series of gaseous rings of material; and according to him, the planets condensed from this material. His model was similar to Kant's, except more detailed and on a smaller scale. Unfortunately, there was a problem with Laplace's version. The main problem was the angular momentum distribution between the Sun and the planets: the planets have 99% of the angular momentum, and this fact could not be explained by the nebular model. It was quite a long time before this was understood.
The birth of the modern widely accepted theory of planetary formation – the solar nebular disk model (SNDM) – is due to the Soviet astronomer Victor Safronov. His book Evolution of the protoplanetary cloud and formation of the Earth and the planets, translated to English in 1972, had a big effect. In this book almost all major problems of the planetary formation process were formulated and some of them solved. Safronov's ideas were further developed. There are still quite a few aspects of the Solar System which need to be explained.
Although it originally applied only to our own Solar System, the SNDM is now thought to be the usual way of star formation throughout the universe. As of August 2017, over 3000 extrasolar planets have been discovered in our galaxy.
- Nakamoto, Taishi; Nakagawa, Yushitsugu (1994). "Formation, early evolution, and gravitational stability of protoplanetary disks". The Astrophysical Journal. 421: 640–650. Bibcode:1994ApJ...421..640N. doi:10.1086/173678.
- Yorke, Harold W.; Bodenheimer, Peter (1999). "The formation of protostellar disks. III. The influence of gravitationally induced angular momentum transport on disk structure and appearance". The Astrophysical Journal. 525 (1): 330–342. Bibcode:1999ApJ...525..330Y. doi:10.1086/307867. [arXiv:1008.2973v1 [astro-ph.EP] ]
- Lineweaver, Charles H. 2001. "An estimate of the age distribution of terrestrial planets in the Universe: quantifying metallicity as a selection effect". Icarus. 151 (2): 307–313.
- Williams J. 2010. The astrophysical environment of the solar birthplace.. Contemporary Physics. 51 (5): 381–396.
- Swedenborg, Emanuel (1734). (Principia) Latin: Opera Philosophica et Mineralia (English: Philosophical and Mineralogical Works). I.
- Woolfson, M.M. (1993). "Solar System – its origin and evolution". Q.J.R. Astr. Soc. 34: 1–20. Bibcode:1993QJRAS..34....1W. For details of Kant's position, see Stephen Palmquist, "Kant's cosmogony re-evaluated", Studies in History and Philosophy of Science 18:3 (September 1987), pp.255-269.
- Henbest, Nigel (1991). "Birth of the planets: the Earth and its fellow planets may be survivors from a time when planets ricocheted around the Sun like ball bearings on a pinball table". New Scientist. Retrieved 2008-04-18.
- Safronov, Viktor Sergeevich (1972). Evolution of the protoplanetary cloud and formation of the Earth and the planets. Israel Program for Scientific Translations. ISBN 0-7065-1225-1.
- Wetherill, George W. (1989). "Leonard Medal citation for Victor Sergeevich Safronov". Meteoritics. 24: 347. Bibcode:1989Metic..24..347W. doi:10.1111/j.1945-5100.1989.tb00700.x.(paywall)
- "The Extrasolar Planet Encyclopaedia — Catalog Listing". exoplanet.eu. Retrieved 2017-09-03. |
Let's think for a minute about the mad scientist in some movie who's trying to take over the world. Have you ever noticed that his plans, his dastardly plans, always fail for some reason? Well often, they show some chemicals bubbling in a beaker somewhere, so I think his dastardly plans fail because he doesn't understand what chemists and chemical engineers understand, which is, he doesn't understand differential equations.
Remember that a differential equation relates a variable with its rate of change. So let's think about a chemical reaction. Now an engineer and chemist, they understand that you can write the amount of chemical that you have, let's call it A, how it changes as a function of time. So you've got dA/dt. dA/dt = -kA. So in this case, what you're doing is you're throwing chemical A into a pot, and you're letting it react. And it's going to react with a rate k times the concentration of A, where k can be any number you want. People who understand differential equations can take this differential equation, then, and determine from it the concentration of A as a function of time. So can you write A as a function of time?
Well let's go back, and let's think about equations that we're more familiar with first. Let's say that we're looking at Super C, the human cannonball. We know that Super C starts out at a height of 0 at time t=0. When you shoot him out of the cannon, he's shot out at 13 meters per second, straight up. This means that his velocity straight up is 13 meters per second. He's always pulled down by gravity, though, which is 9.8 meters per second, squared, downwards. So given all of this information, can we determine his height as a function of time? Well first he's pulled down by gravity, which is acceleration at -9.8 meters per second squared. Acceleration is nothing more than the change in velocity over time. So I could write his acceleration at all points along his flight as being dv/dt=-9.8. It's minus because it's always being pulled back toward Earth. Now if I want to use this equation to find his actual velocity as a function of time, I'm going to multiply both sides by dt and integrate. Well, I know what the integral of dv is; it's just v. And I know what the integral of -9.8dt is; it's -9.8t. I have to add a constant of integration here, because I'm not integrating over some set time, I'm just taking an indefinite integral. So I end up with velocity equals -9.8t, plus my constant, C.
Now if I actually want to know where he is at any given point in time, I need to determine what C is. So what do I know? I know that his velocity when he was shot out of the cannon, so his velocity at time = 0, was equal to 13. So if I plug in 13 for v and 0 for t, I find that this constant, C, must be equal to 13. Okay, so we've got his velocity as a function of time. Velocity = -9.8t + 13. Can I find his position from this? Well, his velocity is dx/dt - it's how fast his position is changing with respect to time. So if I set this equal to dx/dt and multiply both sides of the equation by dt, I can integrate and find the integral of dx equals the integral of (-9.8t + 13)dt. Well, x=-4.9t^2 + 13t, plus my constant of integration. Once again, if I want to know where he is at any given point in time, I can't actually leave this constant of integration here. I need to solve for C somehow. Well, I know that at time t=0, he was at 0 height - he was just about to be shot out of the cannon, or just starting to be shot out of the cannon - so at t=0, x=0, so C has to be 0. I know that his position, his height, as a function of time, equals -4.9t^2 + 13t. So what, exactly, did I do here, other than find out how high he was at a point in time?
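Before answering that, a quick numeric check of the formula just derived can't hurt. This is a minimal sketch (the helper names are not part of the lesson) that evaluates x(t) = -4.9t^2 + 13t and v(t) = -9.8t + 13.

```python
def height(t):
    """Super C's height in meters, from x(t) = -4.9 t^2 + 13 t."""
    return -4.9 * t**2 + 13 * t

def velocity(t):
    """Super C's velocity in m/s, from v(t) = -9.8 t + 13."""
    return -9.8 * t + 13

t_peak = 13 / 9.8  # the velocity hits zero at the top of the flight
print(f"launch check: v(0) = {velocity(0)} m/s")                       # 13.0 m/s
print(f"peak at t = {t_peak:.2f} s, height = {height(t_peak):.2f} m")  # ~8.62 m
print(f"back at the ground at t = {2 * t_peak:.2f} s: x = {height(2 * t_peak):.2f} m")
```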
Well, I solved a differential equation. In particular, I solved the equation, the second derivative of x with respect to time equals -9.8, and I solved that for x as a function of time subject to what I'm going to call the initial conditions x=0 and dx/dt=13. That is, x=0, and the velocity equals 13 at time equals 0; hence, it's the initial conditions. So what does all of this have to do with determining the concentration of chemical A to avoid the dastardly fate of our dastardly mad scientist? Well, here's my differential equation: dA/dt=-kA. To solve this, to find A as a function of time, I need to integrate. But in order to integrate, I need to have t on one side of the equation and A on the other. If I just multiply both sides by dt, I still have an A on this side of the equation.
So this point is known as the separation of variables. Separation of variables means that we're going to rewrite a differential equation, like dx/dt, so that x is only on one side of the equation, and t is only on the other. This is kind of like making an explicit equation. Not all differential equations can be separated. So not all equations can be solved explicitly; some equations are implicit. It's the same thing with differential equations. But when you can write them explicitly, with x on one side and t on the other, you can use this separation of variables concept.
We actually did that with Super C. We had dv/dt=-9.8. We got t on one side of the equation by multiplying both sides by dt. When we had our velocity, we again got t limited to this side of the equation by multiplying it by dt. So we ended up with x on one side and t on the other. Can we do this for our chemical equation dA/dt=-kA? Well, if I divide both sides by A and multiply both sides by dt, my equation becomes dA/A=-kdt. So here, A is limited to the left side, and t is limited to the right side. I can integrate this just as I did for Super C, so I've got the integral of 1/A dA equals the integral of -kdt. Well, the integral of 1/A is the natural log of A, and the integral of -k is -kt. Here's my constant of integration that I have to include, because I'm not taking any limits on these integrals. So I end up with ln A = -kt + C. That's almost there, but I really want A as a function of t, not the natural log of A as a function of t, so I'm going to take both sides of this equation and take e to the power of the left side and e to the power of the right side, so that this left side becomes A, and this right side becomes e^(-kt + C). Now, if I knew what A was for some given C, I could solve this and get rid of that C.
So let's say we have an initial value again. Let's say that at time t=0, the concentration of A was just equal to 1. So I can plug in 0 for t, and I get e^(0 + C) = 1, because the concentration is 1 at t=0. This means that e^C is equal to 1. Well, I could take the natural log of that, but let's instead use what I know about exponentials, and let's write e^(-kt+C) = e^(C) * e^(-kt). So I've just split up this exponential. Well e^C, we've just determined, is 1. But it's important to note that e^C is still just a constant number to a constant power. So e^C I could write as just a different constant variable. Let's call it C sub 2. So I know even before I use my initial value that the concentration A = C sub 2 * e^(-kt). Now if I use the fact that the concentration of A at time equals 0 is 1, then C sub 2 = 1, and I get A=e^(-kt).
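As a quick numerical check of this answer, here is a minimal Python sketch that compares the separated-variables solution A(t) = e^(-kt) with a small Euler integration of dA/dt = -kA. The rate constant k = 0.5 is an arbitrary illustrative choice (the lesson keeps k symbolic); A(0) = 1 matches the lesson's initial value.

```python
import math

# Minimal sketch: check that A(t) = exp(-k t) really solves dA/dt = -k A
# with A(0) = 1.  The value k = 0.5 is arbitrary, chosen only for illustration.
k = 0.5

def analytic(t):
    return math.exp(-k * t)     # result of separating variables

# Forward-Euler steps on the original differential equation dA/dt = -k A.
dt, t, A = 1e-4, 0.0, 1.0
while t < 3.0:
    A += -k * A * dt
    t += dt

print(analytic(3.0), A)  # the two values should agree to several decimal places
```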
So let's review. Differential equations are absolutely everywhere in physics. They relate a variable with its rate of change, such as the position of Super C with his velocity and the concentration of our chemical with the rate that it's being depleted. Often, we can solve these differential equations using a separation of variables. In separation of variables, we split the independent and dependent variables to different sides of the equation. In the case of Super C, we split everything that depended on t to the right-hand side and the velocity, or his position, to the left-hand side, so all the x's were on the left and all the t's were on the right. In the case of concentration, we put all of the A's, that is, all of the concentration variables, on the left, and all the time variables, all the t's, on the right.
Let's talk about unit rates. A unit rate is just a ratio of two different quantities each with different units. So maybe the most common unit rate you're going to see every time you drive down the road, and that's the speed limit. So for example 60 miles per hour or per 1 hour is a unit rate.
We've got a ratio, in this case a fraction comparing two different quantities. And they each have different units. So 60 miles per hour is a unit rate.
Another example might be maybe I save $200 every 1 month. So $200 per month is another example of a unit rate. I'm comparing two different quantities, and each of these quantities have a different unit.
So let's review how to use a conversion factor to convert between different units. So let's say I want to know 26.2 miles, what that is equivalent to in feet. 26.2 miles is the distance of a marathon. So we're going to start with 26.2 miles. And I want to multiply that by a conversion factor, which is a fraction that's equal to 1. And in this case, we want to use one that has miles and feet.
So since the conversion factor is a fraction, I want to write 26.2 miles also as a fraction. So I can just write it as 26.2 miles over 1. And then for my conversion factor, I want to have miles in the denominator of the fraction so that when I multiply they will cancel out.
I know that 1 mile is equal to 5,280 feet. And so now when I multiply these two fractions together, my miles in the numerator and miles in the denominator of each fraction will cancel out. And I'll be left with the units of feet, which is what I'm looking for. So I know my units will be what I'm looking for. And so now I just need to multiply my fractions in the numerators and the denominators.
So 26.2 times 5,280 is 138,336. And then my denominator, I have 1 times 1, which is just 1. Again, my units are feet. So this is just equal to 138,336 feet, which again, is 26.2 miles, or the distance of a marathon.
All right, let's do an example converting between unit rates. So I want to convert 40 miles per hour to be something in inches per second. And when we are converting rates, we need to convert multiple units. So I need to both convert between miles to inches and from hours to seconds. So I'm going to use conversion factors to do that.
So let's start with 40 miles per hour or 40 miles in 1 hour. And I'm going to start by focusing on my distance, so converting from miles to inches. So my conversion factor, if I want to cancel at the miles, I want to use a conversion factor with miles in the denominator so it will cancel with this miles in the numerator.
So I know that 1 mile is equivalent to 5,280 feet. So now I see that my miles here and here cancel, and now my units are in terms of feet. However, I need to go one step farther because I want to get to inches. So now I'm going to multiply by a conversion factor that has feet in the denominator so that the feet here will cancel. I know that 1 foot is equal to 12 inches. So now here my feet cancel, and my units for my distance are in inches. So I'm halfway there.
Now I want to focus on converting from hours to seconds. So I'm going to multiply by a conversion factor that has hours in the numerator so that it will cancel with hours here in the denominator. So I know that 1 hour is equal to 60 minutes. So my hours here and here will cancel.
But I'm not quite at seconds yet, so I need to use one more conversion factor. So I want one that has minutes in the numerator so that it will cancel with minutes here in the denominator. And I know that 1 minute is equal to 60 seconds. So here my minutes will cancel, and I'll be left with seconds.
So I can check that my distance unit is in inches, which is what I want. And my time unit is in seconds, which is also what I wanted. So now I'm done. I just need to simplify the numbers in my fraction so I can figure out what the value in inches per second is.
So simplifying my numerators by multiplying, I'm going to do 40 times 5,280 times 12 times 1 times 1. And that gives me 2,534,400 inches. And in my denominator I've got 1 times 1 times 1 times 60 times 60, which is 3,600. And again this is inches and seconds.
And now dividing these two values, I'm going to get 704 inches per second. So traveling at 40 miles per hour is the same as traveling at 704 inches per second.
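For readers who want to script this, here is a minimal Python sketch of the same conversion-factor chain; the constant names and the function are illustrative, not part of the lesson.

```python
# Minimal sketch of the conversion-factor chain from the lesson:
# 40 miles/hour -> inches/second, multiplying by fractions that equal 1.
FEET_PER_MILE = 5280
INCHES_PER_FOOT = 12
MINUTES_PER_HOUR = 60
SECONDS_PER_MINUTE = 60

def mph_to_inches_per_second(mph):
    inches = mph * FEET_PER_MILE * INCHES_PER_FOOT          # distance units
    seconds = 1 * MINUTES_PER_HOUR * SECONDS_PER_MINUTE     # time units
    return inches / seconds

print(mph_to_inches_per_second(40))   # 704.0, matching the worked example
print(26.2 * FEET_PER_MILE)           # 138336 feet, the marathon example
```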
So let's go over our key points from today. Make sure you get these in your notes if you don't have them already so you can refer to them later. So unit rates are ratios comparing two quantities with different units.
And conversion factors, which can be used to convert between different units, can also then be used to convert between different rates. So I hope that these key points and the examples that we did helped you understand a little bit more about converting unit rates. Keep on practicing, and keep using your notes and soon you'll be a pro. Thanks for watching.
Drag Force Calculator
The drag force calculator calculates the force of drag of an object as it moves through a fluid environment such as water or air.
The drag force is the force which opposes the movement of the object. The drag force vector points opposite the velocity vector. The larger the drag force, the more resistance there is against the object, and the more force the object must exert to overcome that resistance and move through the medium. The smaller the drag force, the less resistance there is, and the less energy the object has to expend to overcome drag. The drag coefficient is a dimensionless quantity (it has no units) that is used to quantify the drag or resistance of an object in a fluid environment. The larger the drag coefficient of an item, the more drag or resistance the fluid exerts on it; the smaller the coefficient, the less resistance the fluid exerts on the object.
The drag force is determined by 4 variables: the density of the fluid the object is passing through, ρ; the velocity at which the object travels through this medium, v; the drag coefficient, Cd; and the reference area, A, of the object.
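The page describes the four inputs but never writes the formula itself; a calculator like this presumably evaluates the standard drag equation,

$$F_D = \tfrac{1}{2}\, C_d\, \rho\, v^2 A,$$

which gives the drag force in newtons when the inputs are in the SI units listed below.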
The fluid density, ρ, is the mass density of the fluid which the object (car, plane) is travelling through. Water has a greater fluid density than air, so an object travelling through water experiences a greater drag force than when travelling through air. The denser a fluid is, the more difficult it is for an object to travel through it; the less dense the fluid, the easier the passage. Thus, the fluid density has a direct relationship with the drag force: as the fluid density increases, the drag force increases, and as the fluid density decreases, the drag force decreases. The unit of fluid density is kilograms per meter cubed (kg/m3).
The velocity, v, is the velocity of the object moving through the fluid, that is, the speed of the object. The faster an object goes, the more drag it receives: the moving object has to push fluid out of its way, and by Newton's 3rd law the fluid pushes back on the object with an equal and opposite force. So the velocity of the vehicle has a direct relationship with the drag force, and because the velocity enters the drag equation squared, the dependence is strong: for every doubling of the velocity, the drag force quadruples. Conversely, as velocity decreases, drag force decreases; if the speed is halved, the drag force drops to one quarter of its value. The unit of velocity is meters per second (m/s).
The drag coefficient, Cd, is a dimensionless quantity (it has no units) that is used to quantify the drag or resistance of an object in a fluid environment. The larger the drag coefficient of an item, the more drag or resistance the fluid exerts on it; the smaller the coefficient, the less resistance the fluid exerts on the object. There is a direct relationship between the drag coefficient and the drag force.
The reference area, A, of an object is the area of the object which will be exposed to the air drag. Usually for automobiles and airplanes, this is the frontal area of the vehicle. The car's bumper and windshield will be the main area that drag affects. The larger the frontal surface area, the greater the drag. Therefore, there is a direct relationship between the surface area and the drag force. As surface area increases, drag force increases. Conversely, as it decreases, so does the drag force. The unit of surface area is meters squared (m2).
To use this calculator, a user fills in the 4 parameters (the fluid density, ρ; the velocity, v; the drag coefficient, Cd; and the area, A) and clicks the 'Calculate' button. The resultant value will be automatically computed and shown. The resultant value, the drag force, is in units of newtons (N).
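For reference, here is a minimal Python sketch of what a calculator like this presumably computes, using the standard drag equation; the function and the example values are illustrative, not taken from the page.

```python
# Minimal sketch of a drag force calculation: F_D = 0.5 * Cd * rho * v**2 * A.
def drag_force(rho, v, cd, area):
    """Drag force in newtons.

    rho  -- fluid density in kg/m^3
    v    -- speed of the object through the fluid in m/s
    cd   -- dimensionless drag coefficient
    area -- reference (frontal) area in m^2
    """
    return 0.5 * cd * rho * v ** 2 * area

# Illustrative values: a car-like object moving through air at ~27 m/s.
print(drag_force(rho=1.225, v=27.0, cd=0.30, area=2.2))  # roughly 295 N
```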
The great Danish physicist Niels Bohr (1885–1962) made immediate use of Rutherford’s planetary model of the atom. (Figure 30.14). Bohr became convinced of its validity and spent part of 1912 at Rutherford’s laboratory. In 1913, after returning to Copenhagen, he began publishing his theory of the simplest atom, hydrogen, based on the planetary model of the atom. For decades, many questions had been asked about atomic characteristics. From their sizes to their spectra, much was known about atoms, but little had been explained in terms of the laws of physics. Bohr’s theory explained the atomic spectrum of hydrogen and established new and broadly applicable principles in quantum mechanics.
Mysteries of Atomic Spectra
As noted in Quantization of Energy, the energies of some small systems are quantized. Atomic and molecular emission and absorption spectra have been known for over a century to be discrete (or quantized). (See Figure 30.15.) Maxwell and others had realized that there must be a connection between the spectrum of an atom and its structure, something like the resonant frequencies of musical instruments. But, in spite of years of efforts by many great minds, no one had a workable theory. (It was a running joke that any theory of atomic and molecular spectra could be destroyed by throwing a book of data at it, so complex were the spectra.) Following Einstein’s proposal of photons with quantized energies directly proportional to their frequencies, it became even more evident that electrons in atoms can exist only in discrete orbits.
In some cases, it had been possible to devise formulas that described the emission spectra. As you might expect, the simplest atom—hydrogen, with its single electron—has a relatively simple spectrum. The hydrogen spectrum had been observed in the infrared (IR), visible, and ultraviolet (UV), and several series of spectral lines had been observed. (See Figure 30.16.) These series are named after early researchers who studied them in particular depth.
The observed hydrogen-spectrum wavelengths can be calculated using the following formula:

$$\frac{1}{\lambda} = R\left(\frac{1}{n_f^2} - \frac{1}{n_i^2}\right),$$

where $\lambda$ is the wavelength of the emitted EM radiation and $R$ is the Rydberg constant, determined by experiment to be

$$R = 1.097 \times 10^7 \ \mathrm{m}^{-1}.$$

The constant $n_f$ is a positive integer associated with a specific series. For the Lyman series, $n_f = 1$; for the Balmer series, $n_f = 2$; for the Paschen series, $n_f = 3$; and so on. The Lyman series is entirely in the UV, while part of the Balmer series is visible with the remainder UV. The Paschen series and all the rest are entirely IR. There are apparently an unlimited number of series, although they lie progressively farther into the infrared and become difficult to observe as $n_f$ increases. The constant $n_i$ is a positive integer, but it must be greater than $n_f$. Thus, for the Balmer series, $n_f = 2$ and $n_i = 3, 4, 5, 6, \ldots$. Note that $n_i$ can approach infinity. While the formula in the wavelengths equation was just a recipe designed to fit data and was not based on physical principles, it did imply a deeper meaning. Balmer first devised the formula for his series alone, and it was later found to describe all the other series by using different values of $n_f$. Bohr was the first to comprehend the deeper meaning. Again, we see the interplay between experiment and theory in physics. Experimentally, the spectra were well established, an equation was found to fit the experimental data, but the theoretical foundation was missing.
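As a quick numerical check, here is a minimal Python sketch of the formula above; the function name and the printed comparisons are illustrative, using the value of the Rydberg constant quoted in the text.

```python
# Minimal sketch of the Rydberg recipe: 1/lambda = R (1/n_f^2 - 1/n_i^2).
R = 1.097e7  # Rydberg constant, 1/m

def wavelength_nm(n_i, n_f):
    inv_lam = R * (1 / n_f**2 - 1 / n_i**2)
    return 1e9 / inv_lam   # convert metres to nanometres

print(wavelength_nm(2, 1))  # ~122 nm, first Lyman line (UV)
print(wavelength_nm(3, 2))  # ~656 nm, first Balmer line (visible red)
print(wavelength_nm(4, 3))  # ~1875 nm, first Paschen line (IR)
```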
What is the distance between the slits of a grating that produces a first-order maximum for the second Balmer line at an angle of ?
Strategy and Concept
For an Integrated Concept problem, we must first identify the physical principles involved. In this example, we need to know (a) the wavelength of light as well as (b) conditions for an interference maximum for the pattern from a double slit. Part (a) deals with a topic of the present chapter, while part (b) considers the wave interference material of Wave Optics.
Solution for (a)
Hydrogen spectrum wavelength. The Balmer series requires that $n_f = 2$. The first line in the series is taken to be for $n_i = 3$, and so the second would have $n_i = 4$.
The calculation is a straightforward application of the wavelength equation. Entering the determined values for $n_f$ and $n_i$ yields

$$\frac{1}{\lambda} = R\left(\frac{1}{n_f^2} - \frac{1}{n_i^2}\right) = \left(1.097 \times 10^7 \ \mathrm{m}^{-1}\right)\left(\frac{1}{2^2} - \frac{1}{4^2}\right) = 2.057 \times 10^6 \ \mathrm{m}^{-1}.$$

Inverting to find $\lambda$ gives

$$\lambda = \frac{1}{2.057 \times 10^6 \ \mathrm{m}^{-1}} = 486 \times 10^{-9} \ \mathrm{m} = 486 \ \mathrm{nm}.$$
Discussion for (a)
This is indeed the experimentally observed wavelength, corresponding to the second (blue-green) line in the Balmer series. More impressive is the fact that the same simple recipe predicts all of the hydrogen spectrum lines, including new ones observed in subsequent experiments. What is nature telling us?
Solution for (b)
Double-slit interference (Wave Optics). To obtain constructive interference for a double slit, the path length difference from two slits must be an integral multiple of the wavelength. This condition was expressed by the equation

$$d \sin\theta = m\lambda,$$

where $d$ is the distance between slits and $\theta$ is the angle from the original direction of the beam. The number $m$ is the order of the interference; $m = 1$ in this example. Solving for $d$ and entering known values yields
Discussion for (b)
This number is similar to those used in the interference examples of Introduction to Quantum Physics (and is close to the spacing between slits in commonly used diffraction glasses).
Bohr’s Solution for Hydrogen
Bohr was able to derive the formula for the hydrogen spectrum using basic physics, the planetary model of the atom, and some very important new proposals. His first proposal is that only certain orbits are allowed: we say that the orbits of electrons in atoms are quantized. Each orbit has a different energy, and electrons can move to a higher orbit by absorbing energy and drop to a lower orbit by emitting energy. If the orbits are quantized, the amount of energy absorbed or emitted is also quantized, producing discrete spectra. Photon absorption and emission are among the primary methods of transferring energy into and out of atoms. The energies of the photons are quantized, and their energy is explained as being equal to the change in energy of the electron when it moves from one orbit to another. In equation form, this is

$$\Delta E = hf = E_i - E_f.$$

Here, $\Delta E$ is the change in energy between the initial and final orbits, and $hf$ is the energy of the absorbed or emitted photon. It is quite logical (that is, expected from our everyday experience) that energy is involved in changing orbits. A blast of energy is required for the space shuttle, for example, to climb to a higher orbit. What is not expected is that atomic orbits should be quantized. This is not observed for satellites or planets, which can have any orbit given the proper energy. (See Figure 30.17.)
Figure 30.18 shows an energy-level diagram, a convenient way to display energy states. In the present discussion, we take these to be the allowed energy levels of the electron. Energy is plotted vertically with the lowest or ground state at the bottom and with excited states above. Given the energies of the lines in an atomic spectrum, it is possible (although sometimes very difficult) to determine the energy levels of an atom. Energy-level diagrams are used for many systems, including molecules and nuclei. A theory of the atom or any other system must predict its energies based on the physics of the system.
Bohr was clever enough to find a way to calculate the electron orbital energies in hydrogen. This was an important first step that has been improved upon, but it is well worth repeating here, because it does correctly describe many characteristics of hydrogen. Assuming circular orbits, Bohr proposed that the angular momentum $L$ of an electron in its orbit is quantized, that is, it has only specific, discrete values. The value for $L$ is given by the formula

$$L = m_e v r_n = n \frac{h}{2\pi} \quad (n = 1, 2, 3, \ldots),$$

where $L$ is the angular momentum, $m_e$ is the electron’s mass, $r_n$ is the radius of the $n$th orbit, and $h$ is Planck’s constant. Note that angular momentum is $L = I\omega$. For a small object at a radius $r$, $I = mr^2$ and $\omega = v/r$, so that $L = (mr^2)(v/r) = mvr$. Quantization says that this value of $mvr$ can only be equal to $h/2\pi$, $2h/2\pi$, $3h/2\pi$, etc. At the time, Bohr himself did not know why angular momentum should be quantized, but using this assumption he was able to calculate the energies in the hydrogen spectrum, something no one else had done at the time.
From Bohr’s assumptions, we will now derive a number of important properties of the hydrogen atom from the classical physics we have covered in the text. We start by noting the centripetal force causing the electron to follow a circular path is supplied by the Coulomb force. To be more general, we note that this analysis is valid for any single-electron atom. So, if a nucleus has $Z$ protons ($Z = 1$ for hydrogen, 2 for helium, etc.) and only one electron, that atom is called a hydrogen-like atom. The spectra of hydrogen-like ions are similar to hydrogen, but shifted to higher energy by the greater attractive force between the electron and nucleus. The magnitude of the centripetal force is $m_e v^2 / r_n$, while the Coulomb force is $k Z q_e^2 / r_n^2$. The tacit assumption here is that the nucleus is more massive than the stationary electron, and the electron orbits about it. This is consistent with the planetary model of the atom. Equating these,

$$\frac{m_e v^2}{r_n} = \frac{k Z q_e^2}{r_n^2}.$$
Angular momentum quantization is stated in an earlier equation. We solve that equation for $v$, substitute it into the above, and rearrange the expression to obtain the radius of the orbit. This yields:

$$r_n = \frac{n^2}{Z} a_B \quad (n = 1, 2, 3, \ldots),$$

where $a_B$ is defined to be the Bohr radius, since for the lowest orbit ($n = 1$) and for hydrogen ($Z = 1$), $r_1 = a_B$. It is left for this chapter’s Problems and Exercises to show that the Bohr radius is

$$a_B = \frac{h^2}{4\pi^2 m_e k q_e^2} = 0.529 \times 10^{-10} \ \mathrm{m}.$$
These last two equations can be used to calculate the radii of the allowed (quantized) electron orbits in any hydrogen-like atom. It is impressive that the formula gives the correct size of hydrogen, which is measured experimentally to be very close to the Bohr radius. The earlier equation also tells us that the orbital radius is proportional to $n^2$, as illustrated in Figure 30.19.
To get the electron orbital energies, we start by noting that the electron energy is the sum of its kinetic and potential energy:

$$E_n = KE + PE.$$

Kinetic energy is the familiar $KE = \frac{1}{2} m_e v^2$, assuming the electron is not moving at relativistic speeds. Potential energy for the electron is electrical, or $PE = q_e V$, where $V$ is the potential due to the nucleus, which looks like a point charge. The nucleus has a positive charge $Z q_e$; thus, $V = k Z q_e / r_n$, recalling an earlier equation for the potential due to a point charge. Since the electron’s charge is negative, we see that $PE = -k Z q_e^2 / r_n$. Entering the expressions for $KE$ and $PE$, we find

$$E_n = \frac{1}{2} m_e v^2 - \frac{k Z q_e^2}{r_n}.$$
Now we substitute $r_n$ and $v$ from earlier equations into the above expression for energy. Algebraic manipulation yields

$$E_n = -\frac{Z^2}{n^2} E_0 \quad (n = 1, 2, 3, \ldots)$$

for the orbital energies of hydrogen-like atoms. Here, $E_0$ is the ground-state energy ($n = 1$) for hydrogen ($Z = 1$) and is given by

$$E_0 = \frac{2\pi^2 q_e^4 m_e k^2}{h^2} = 13.6 \ \mathrm{eV}.$$

Thus, for hydrogen,

$$E_n = -\frac{13.6 \ \mathrm{eV}}{n^2} \quad (n = 1, 2, 3, \ldots).$$
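A minimal Python sketch of these results, assuming the constants quoted above for hydrogen-like atoms; the function names are illustrative.

```python
# Minimal sketch of the hydrogen-like results just derived:
# r_n = (n^2 / Z) * a_B  and  E_n = -(Z^2 / n^2) * 13.6 eV.
A_B = 0.529e-10   # Bohr radius in metres
E0 = 13.6         # hydrogen ground-state energy magnitude in eV

def orbit_radius(n, Z=1):
    return (n**2 / Z) * A_B

def orbit_energy(n, Z=1):
    return -(Z**2 / n**2) * E0   # in eV

print(orbit_radius(1))   # ~5.29e-11 m, the size of hydrogen
print(orbit_energy(1))   # -13.6 eV; 13.6 eV is needed to ionize hydrogen
print(orbit_energy(2))   # -3.4 eV, the first excited state
```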
Figure 30.20 shows an energy-level diagram for hydrogen that also illustrates how the various spectral series for hydrogen are related to transitions between energy levels.
Electron total energies are negative, since the electron is bound to the nucleus, analogous to being in a hole without enough kinetic energy to escape. As $n$ approaches infinity, the total energy becomes zero. This corresponds to a free electron with no kinetic energy, since $r_n$ gets very large for large $n$, and the electric potential energy thus becomes zero. Thus, 13.6 eV is needed to ionize hydrogen (to go from –13.6 eV to 0, or unbound), an experimentally verified number. Given more energy, the electron becomes unbound with some kinetic energy. For example, giving 15.0 eV to an electron in the ground state of hydrogen strips it from the atom and leaves it with 1.4 eV of kinetic energy.
Finally, let us consider the energy of a photon emitted in a downward transition, given by the equation to be

$$\Delta E = hf = E_i - E_f.$$

Substituting $E_n = -(13.6 \ \mathrm{eV})/n^2$, we see that

$$hf = (13.6 \ \mathrm{eV})\left(\frac{1}{n_f^2} - \frac{1}{n_i^2}\right).$$

Dividing both sides of this equation by $hc$ gives an expression for $1/\lambda$:

$$\frac{1}{\lambda} = \frac{13.6 \ \mathrm{eV}}{hc}\left(\frac{1}{n_f^2} - \frac{1}{n_i^2}\right).$$

It can be shown that

$$\frac{13.6 \ \mathrm{eV}}{hc} = \frac{(13.6 \ \mathrm{eV})(1.602 \times 10^{-19} \ \mathrm{J/eV})}{(6.626 \times 10^{-34} \ \mathrm{J \cdot s})(2.998 \times 10^8 \ \mathrm{m/s})} = 1.097 \times 10^7 \ \mathrm{m}^{-1} = R$$

is the Rydberg constant. Thus, we have used Bohr’s assumptions to derive the formula first proposed by Balmer years earlier as a recipe to fit experimental data.
We see that Bohr’s theory of the hydrogen atom answers the question as to why this previously known formula describes the hydrogen spectrum. It is because the energy levels are proportional to $1/n^2$, where $n$ is a positive integer. A downward transition releases energy, and so $n_i$ must be greater than $n_f$. The various series are those where the transitions end on a certain level. For the Lyman series, $n_f = 1$; that is, all the transitions end in the ground state (see also Figure 30.20). For the Balmer series, $n_f = 2$, or all the transitions end in the first excited state; and so on. What was once a recipe is now based in physics, and something new is emerging: angular momentum is quantized.
Triumphs and Limits of the Bohr Theory
Bohr did what no one had been able to do before. Not only did he explain the spectrum of hydrogen, he correctly calculated the size of the atom from basic physics. Some of his ideas are broadly applicable. Electron orbital energies are quantized in all atoms and molecules. Angular momentum is quantized. The electrons do not spiral into the nucleus, as expected classically (accelerated charges radiate, so that the electron orbits classically would decay quickly, and the electrons would sit on the nucleus—matter would collapse). These are major triumphs.
But there are limits to Bohr’s theory. It cannot be applied to multielectron atoms, even one as simple as a two-electron helium atom. Bohr’s model is what we call semiclassical. The orbits are quantized (nonclassical) but are assumed to be simple circular paths (classical). As quantum mechanics was developed, it became clear that there are no well-defined orbits; rather, there are clouds of probability. Bohr’s theory also did not explain that some spectral lines are doublets (split into two) when examined closely. We shall examine many of these aspects of quantum mechanics in more detail, but it should be kept in mind that Bohr did not fail. Rather, he made very important steps along the path to greater knowledge and laid the foundation for all of atomic physics that has since evolved.
How did scientists figure out the structure of atoms without looking at them? Try out different models by shooting light at the atom. Check how the prediction of the model matches the experimental results. |
Section 4.1 Sample Spaces and Probability

Basic Probability Vocabulary:
Probability experiment – a chance process that leads to well-defined results called outcomes.
Outcome – a result of a single trial of a probability experiment.
Sample space – the set of all possible outcomes of a probability experiment.
Tree diagram – a device consisting of line segments emanating from a starting point and also from the outcome points. It is used to determine all possible outcomes of a probability experiment.
Event – consists of a set of outcomes of a probability experiment.
Simple event – only one outcome.
Compound event – two or more simple events.
Complement of an event E – the set of outcomes in the sample space that are not included in the outcomes of event E. The complement of E is denoted by Ē (E bar).

Classical probability – uses sample spaces to determine the numerical probability that an event will happen (theoretical). We assume that the events are equally likely. The probability of any event E is:
P(E) = (number of outcomes in E) / (total number of outcomes in the sample space) = n(E) / n(S)

Empirical probability – relies on actual experience to determine the likelihood of outcomes (experimental); it uses frequencies to determine the probability of an outcome and is based on observations. The probability of an event being in a given class is:
P(E) = (frequency for the class) / (total frequencies in the distribution) = f / n

Law of Large Numbers – If the empirical probability is based on a small number of trials, it is usually not exactly the same value as the classical probability. However, as the number of trials increases, the empirical probability will approach the theoretical probability.

Subjective probability – uses a probability value based on an educated guess or estimate, employing opinions and inexact information. A person or group makes an educated guess at the chance that an event will occur. This guess is based on the person’s experience and evaluation of a solution.

Examples of Sample Spaces:
Toss one coin: head, tail
Roll a die: 1, 2, 3, 4, 5, 6
Answer a true/false question: true, false
Toss two coins: head-head, tail-tail, head-tail, tail-head
Draw one card from an ordinary deck of cards:
Hearts – A, 2, 3, 4, 5, 6, 7, 8, 9, 10, J, Q, K
Diamonds – A, 2, 3, 4, 5, 6, 7, 8, 9, 10, J, Q, K
Spades – A, 2, 3, 4, 5, 6, 7, 8, 9, 10, J, Q, K
Clubs – A, 2, 3, 4, 5, 6, 7, 8, 9, 10, J, Q, K

Probability Rules:
1. The probability of any event E is a number (either a fraction or a decimal) between and including 0 and 1. This is denoted by 0 ≤ P(E) ≤ 1.
2. If an event E cannot occur (i.e., the event contains no members in the sample space), its probability is 0. (Example: When a single die is rolled, find the probability of getting a 9.)
3. If an event E is certain, then the probability of E is 1. (Example: When a single die is rolled, what is the probability of getting a number less than 7?)
4. The sum of the probabilities of all the outcomes in the sample space is 1.

Rule for Complementary Events: P(Ē) = 1 − P(E), or P(E) = 1 − P(Ē), or P(E) + P(Ē) = 1. Venn diagrams can be used to illustrate these relationships.

Objective 1: Sample Spaces, Tree Diagrams, and Outcomes
Use a tree diagram to find the sample space for each of the following.
Example: Find the sample space for rolling two dice.
Example: Find the sample space for the gender of the children if a family has three children. Use B for boy and G for girl.

Examples: State the outcomes for the events that are listed.
a. Rolling two dice: a sum greater than 8
b. Drawing one card from a deck: a red face card
c. Three children: at least 1 boy
d. Rolling two dice: a sum of 12
e. Drawing one card from a deck: an ace of spades
f. Three children: all boys

Objective 2: Classical Probability
Example: Find the probability of getting a black 10 when drawing a card from a deck.
Example: If a family has three children, find the probability that two of the three children are girls.
Example: Find the probability of rolling a sum of 7 when rolling two dice.
Example: A card is drawn from an ordinary deck. Find these probabilities.
a. Of getting a jack
b. Of getting the 6 of clubs (i.e., a 6 and a club)
c. Of getting a 3 or a diamond
d. Of getting a 3 or a 6

Objective 3: Complements
Example: Find the complement of each event.
a. Rolling a die and getting a 4
b. Selecting a letter of the alphabet and getting a vowel
c. Selecting a month and getting a month that begins with a J
d. Selecting a day of the week and getting a weekday
Example: If the probability that a person lives in an industrialized country of the world is 1/5, find the probability that a person does not live in an industrialized country.

Objective 4: Empirical Probabilities
Example: AAA asked 50 people who plan to travel over the Thanksgiving holiday how they will get to their destination.
Drive: 41
Fly: 6
Train or bus: 3
P(driving) = 41/50
P(airplane) =
Example: In a sample of 50 people, 21 had type O blood, 22 had type A blood, 5 had type B blood, and 2 had type AB blood. Set up a frequency distribution and find the following probabilities.
a. A person has type O blood.
b. A person has type A or type B blood.
c. A person has neither type A nor type O blood.
d. A person does not have type AB blood.
Example: Hospital records indicated that knee replacement patients stayed in the hospital for the number of days shown in the distribution.
3 days: 15
4 days: 32
5 days: 56
6 days: 19
7 days: 5
Find these probabilities:
a. A patient stayed exactly 5 days
b. A patient stayed less than 6 days
c. A patient stayed at most 4 days
d. A patient stayed at least 5 days

Name: ________________________________
Exit 4.1 Sample Spaces and Probability
Assume you are at a carnival and decide to play one of the games. You spot a table where a person is flipping a coin, and since you have an understanding of basic probability, you believe that the odds of winning are in your favor. When you get to the table, you find out that all you have to do is to guess which side of the coin will be facing up after it is tossed. You are assured that the coin is fair, meaning that each of the two sides has an equally likely chance of occurring. You think back about what you learned in your statistics class about probability before you decide what to bet on.
Answer the following questions about the coin-tossing game.
1) What is the sample space?
2) What are the possible outcomes?
3) What does the classical approach to probability say about computing probabilities for this type of problem?
You decide to bet on heads, believing that it has a 50% chance of coming up. A friend of yours who had been playing the game for a while before you got there tells you that heads has come up the last 9 times in a row. You remember the law of large numbers.
4) What is the law of large numbers, and does it change your thoughts about what will occur on the next toss?
5) What does the empirical approach to probability say about this problem, and could you use it to solve this problem?
6) Can subjective probabilities be used to help solve this problem? Explain.
7) Assume you could win $1 million if you could guess what the results of the next toss will be. What would you bet on? Why?

Name: _____________________________________
Statistics Homework 4.1 Sample Space and Probability

1. If a die is rolled one time, find these probabilities:
a. Getting a 2
b. Getting a number greater than 6
c. Getting an odd number
d. Getting a 4 or an odd number
e. Getting a number greater than or equal to 3
f. Getting a number greater than 2 or an even number

2. If two dice are rolled one time, find the probability of getting these results.
a. A sum of 9
b. A sum of 7 or 11
c. Doubles
d. A sum less than 9
e. A sum greater than or equal to 10

3. If one card is drawn from a deck, find the probability of getting these results.
a. A queen
b. A club
c. A queen of clubs
d. A 3 or an 8
e. A 6 or a spade
f. A 6 and a spade
g. A black king
h. A red card and a 7
i. A diamond or a heart
j. A black card

4. A shopping mall has set up a promotion as follows. With any mall purchase of $50 or more, the customer gets to spin the wheel shown here. If a number 1 comes up, the customer wins $10. If the number 2 comes up, the customer wins $5; and if the number 3 or 4 comes up, the customer wins a discount coupon. Find the following probabilities.
a. The customer wins $10
b. The customer wins money
c. The customer wins a coupon

5. Human blood is grouped into four types. The percentages of Americans with each type are listed below.
O: 43%   A: 40%   B: 12%   AB: 5%
Choose one American at random. Find the probability that this person
a. Has type O blood
b. Has type A or B
c. Does not have type O or A

6. In 2004, 57.2% of all enrolled college students were female. Choose one enrolled student at random. What is the probability that the student was male?

7. A couple has three children. Find each probability.
a. All boys
b. All girls or all boys
c. Exactly two boys or two girls
d. At least one child of each gender

8. Elementary and secondary schools were classified by the number of computers they had.
Computers: 1-10 | 11-20 | 21-50 | 51-100 | 100+
Schools: 3170 | 4590 | 16,741 | 23,753 | 34,803
Choose one school at random. Find the probability that it has
a. 50 or fewer computers
b. More than 100 computers
c. No more than 20 computers

9. There are 1,765,000 five thousand dollar bills in circulation and 3,460,000 ten thousand dollar bills in circulation. Choose one bill at random (wouldn’t that be nice!). What is the probability that it is a ten thousand dollar bill?

10. The source of federal government revenue for a specific year is
50% from individual income taxes
10% from corporate income taxes
32% from social insurance payroll taxes
3% from excise taxes
5% other
If a revenue source is selected at random, what is the probability that it comes from individual or corporate income taxes?

11. A box contains a $1 bill, a $5 bill, a $10 bill, and a $20 bill. A bill is selected at random, and it is not replaced; then a second bill is selected at random. Draw a tree diagram and determine the sample space.

12. A coin is tossed; if it falls heads up, it is tossed again. If it falls tails up, a die is rolled. Draw a tree diagram and determine the outcomes.
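Here is a minimal Python sketch of the law of large numbers at work on the carnival coin game from the exit ticket; the seed and the toss counts are arbitrary choices for illustration.

```python
import random

# Minimal sketch of the law of large numbers for the coin game: as the number
# of tosses grows, the empirical probability of heads approaches the classical
# probability of 1/2 -- a streak of 9 heads says nothing about the next
# (independent) toss.
random.seed(1)   # arbitrary seed, just to make the sketch reproducible

def empirical_p_heads(n_tosses):
    heads = sum(random.choice(("H", "T")) == "H" for _ in range(n_tosses))
    return heads / n_tosses

for n in (10, 100, 1_000, 100_000):
    print(n, empirical_p_heads(n))   # the values settle near 0.5 as n increases
```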
Name: ___________________________________
Practice 4.1 Sample Space and Probability
Now put it all together. With your groups answer the following questions. Hand in when complete.

1. (Classical) The prime numbers less than 100 are listed below.
2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97
Choose one of these numbers at random. Find the probability that
a. The number is even
b. The sum of the number’s digits is even
c. The number is greater than 50

2. (Empirical) Rural speed limits for all 50 states are indicated below.
60 mph: 1 (HI)   65 mph: 18   70 mph: 18   75 mph: 13
Choose one state at random. Find the probability that its speed limit is
a. 60 or 70 mph
b. Greater than 65 miles per hour
c. 70 miles per hour or less

3. (Empirical) The following information shows the amount of debt students who graduated from college incur.
$1 to $5000: 27%   $5001 to $20,000: 40%   $20,001 to $50,000: 19%   $50,000+: 14%
If a person who graduates has some debt, find the probability that
a. It is less than $50,001
b. It is more than $20,000
c. It is between $1 and $20,000
d. It is more than $50,000

4. (Classical and Complements) A breakdown of the sources of energy used in the United States is shown below.
Oil: 39%   Natural gas: 24%   Coal: 23%   Nuclear: 8%   Hydropower: 3%   Other: 3%
Choose one energy source at random. Find the probability that it is
a. Not oil
b. Natural gas or oil
c. Nuclear

5. (Classical and sample space) Roll two dice and multiply the numbers.
a. Write out the sample space.
b. What is the probability that the product is a multiple of 6?
c. What is the probability that the product is less than 10?

6. (Tree diagrams and sample space) Four balls numbered 1 through 4 are placed in a box. A ball is selected at random, and its number is noted; then it is replaced. A second ball is selected at random, and its number is noted. Draw a tree diagram and determine the sample space.

7. (Tree diagrams and sample space) First-year students at a particular college must take one English class, one class in mathematics, a first-year seminar, and an elective. There are 2 English classes to choose from, 3 mathematics classes, 4 electives, and everyone takes the same first-year seminar. Represent the possible schedules, using a tree diagram.

8. (Empirical) The distribution of ages of CEOs is as follows:
Age: 21-30 | 31-40 | 41-50 | 51-60 | 61-70 | 71-up
Frequency: 1 | 8 | 27 | 29 | 24 | 11
If a CEO is selected at random, find the probability that his or her age is
a. Between 31 and 40
b. Under 31
c. Over 30 and under 51
d. Under 31 or over 60

9. (Classical) The wheel spinner shown here is spun twice. Determine the probability of the following events.
a. An odd number on the first spin and an even number on the second spin (0 is considered even)
b. A sum greater than 4
c. Even number on both spins
d. A sum that is odd
e. The same number on both spins

10. (Tree diagrams) A family special at a neighborhood restaurant offers dinner for four for $39.99. There are 3 appetizers available, 4 entrees, and 3 desserts from which to choose. The special includes one of each. Represent the possible dinner combinations with a tree diagram.
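To tie the classical definition back to these exercises, here is a minimal Python sketch that enumerates the two-dice sample space used in several of the problems above; the helper names and the specific events checked are illustrative.

```python
from itertools import product
from fractions import Fraction

# Minimal sketch of classical probability, P(E) = n(E) / n(S), using the
# two-dice sample space of 36 equally likely outcomes.
sample_space = list(product(range(1, 7), repeat=2))

def prob(event):
    favorable = [outcome for outcome in sample_space if event(outcome)]
    return Fraction(len(favorable), len(sample_space))

print(prob(lambda roll: sum(roll) == 7))      # 1/6
print(prob(lambda roll: sum(roll) > 8))       # 5/18
print(prob(lambda roll: roll[0] == roll[1]))  # doubles: 1/6
```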
Supreme Court of the United States
Established: March 4, 1789
Composition method: Presidential nomination with Senate confirmation
Authorized by: Constitution of the United States
Judge term length: Life tenure
Number of positions: 9 (by statute)
Chief Justice of the United States: in office since September 29, 2005
The Supreme Court of the United States (SCOTUS) is the highest court in the federal judiciary of the United States of America. It has ultimate (and largely discretionary) appellate jurisdiction over all federal and state court cases that involve a point of federal law, and original jurisdiction over a narrow range of cases, specifically "all Cases affecting Ambassadors, other public Ministers and Consuls, and those in which a State shall be Party". The Court holds the power of judicial review, the ability to invalidate a statute for violating a provision of the Constitution. It is also able to strike down presidential directives for violating either the Constitution or statutory law. However, it may act only within the context of a case in an area of law over which it has jurisdiction. The Court may decide cases having political overtones, but it has ruled that it does not have power to decide non-justiciable political questions.
Established by Article Three of the United States Constitution, the composition and procedures of the Supreme Court were initially established by the 1st Congress through the Judiciary Act of 1789. As later set by the Judiciary Act of 1869, the Court consists of the chief justice of the United States and eight associate justices. Each justice has lifetime tenure, meaning they remain on the Court until they resign, retire, die, or are removed from office. When a vacancy occurs, the president, with the advice and consent of the Senate, appoints a new justice. Each justice has a single vote in deciding the cases argued before it. When in majority, the chief justice decides who writes the opinion of the court; otherwise, the most senior justice in the majority assigns the task of writing the opinion.
It was while debating the separation of powers between the legislative and executive departments that delegates to the 1787 Constitutional Convention established the parameters for the national judiciary. Creating a "third branch" of government was a novel idea; in the English tradition, judicial matters had been treated as an aspect of royal (executive) authority. Early on, the delegates who were opposed to having a strong central government argued that national laws could be enforced by state courts, while others, including James Madison, advocated for a national judicial authority consisting of various tribunals chosen by the national legislature. It was also proposed that the judiciary should have a role in checking the executive's power to veto or revise laws. In the end, the framers compromised by sketching only a general outline of the judiciary, vesting federal judicial power in "one supreme Court, and in such inferior Courts as the Congress may from time to time ordain and establish". They delineated neither the exact powers and prerogatives of the Supreme Court nor the organization of the judicial branch as a whole.
The 1st United States Congress provided the detailed organization of a federal judiciary through the Judiciary Act of 1789. The Supreme Court, the country's highest judicial tribunal, was to sit in the nation's Capital and would initially be composed of a chief justice and five associate justices. The act also divided the country into judicial districts, which were in turn organized into circuits. Justices were required to "ride circuit" and hold circuit court twice a year in their assigned judicial district.
Immediately after signing the act into law, President George Washington nominated the following people to serve on the court: John Jay for chief justice and John Rutledge, William Cushing, Robert H. Harrison, James Wilson, and John Blair Jr. as associate justices. All six were confirmed by the Senate on September 26, 1789. Harrison, however, declined to serve. In his place, Washington later nominated James Iredell.
The Supreme Court held its inaugural session from February 2 through February 10, 1790, at the Royal Exchange in New York City, then the U.S. capital. A second session was held there in August 1790. The earliest sessions of the court were devoted to organizational proceedings, as the first cases did not reach it until 1791. When the nation's capital was moved to Philadelphia in 1790, the Supreme Court did so as well. After initially meeting at Independence Hall, the Court established its chambers at City Hall.
Earliest beginnings through Marshall
Under Chief Justices Jay, Rutledge, and Ellsworth (1789–1801), the Court heard few cases; its first decision was West v. Barnes (1791), a case involving procedure. As the Court initially had only six members, every decision that it made by a majority was also made by two-thirds (voting four to two). However, Congress has always allowed less than the court's full membership to make decisions, starting with a quorum of four justices in 1789. The court lacked a home of its own and had little prestige, a situation not helped by the era's highest-profile case, Chisholm v. Georgia (1793), which was reversed within two years by the adoption of the Eleventh Amendment.
The court's power and prestige grew substantially during the Marshall Court (1801–1835). Under Marshall, the court established the power of judicial review over acts of Congress, including specifying itself as the supreme expositor of the Constitution (Marbury v. Madison) and making several important constitutional rulings that gave shape and substance to the balance of power between the federal government and states (notably, Martin v. Hunter's Lessee, McCulloch v. Maryland and Gibbons v. Ogden).
The Marshall Court also ended the practice of each justice issuing his opinion seriatim, a remnant of British tradition, and instead issuing a single majority opinion. Also during Marshall's tenure, although beyond the Court's control, the impeachment and acquittal of Justice Samuel Chase in 1804–05 helped cement the principle of judicial independence.
From Taney to Taft
The Taney Court (1836–1864) made several important rulings, such as Sheldon v. Sill, which held that while Congress may not limit the subjects the Supreme Court may hear, it may limit the jurisdiction of the lower federal courts to prevent them from hearing cases dealing with certain subjects. Nevertheless, it is primarily remembered for its ruling in Dred Scott v. Sandford, which helped precipitate the Civil War. In the Reconstruction era, the Chase, Waite, and Fuller Courts (1864–1910) interpreted the new Civil War amendments to the Constitution and developed the doctrine of substantive due process (Lochner v. New York; Adair v. United States).
Under the White and Taft Courts (1910–1930), the Court held that the Fourteenth Amendment had incorporated some guarantees of the Bill of Rights against the states (Gitlow v. New York), grappled with the new antitrust statutes (Standard Oil Co. of New Jersey v. United States), upheld the constitutionality of military conscription (Selective Draft Law Cases) and brought the substantive due process doctrine to its first apogee (Adkins v. Children's Hospital).
New Deal era
During the Hughes, Stone, and Vinson Courts (1930–1953), the Court gained its own accommodation in 1935 and changed its interpretation of the Constitution, giving a broader reading to the powers of the federal government to facilitate President Franklin Roosevelt's New Deal (most prominently West Coast Hotel Co. v. Parrish, Wickard v. Filburn, United States v. Darby and United States v. Butler). During World War II, the Court continued to favor government power, upholding the internment of Japanese Americans (Korematsu v. United States) and the mandatory pledge of allegiance (Minersville School District v. Gobitis). Nevertheless, Gobitis was soon repudiated (West Virginia State Board of Education v. Barnette), and the Steel Seizure Case restricted the pro-government trend.
Warren and Burger
The Warren Court (1953–1969) dramatically expanded the force of Constitutional civil liberties. It held that segregation in public schools violates the equal protection clause of the fourteenth amendment (Brown v. Board of Education, Bolling v. Sharpe and Green v. County School Bd.) and that legislative districts must be roughly equal in population (Reynolds v. Sims). It created a general right to privacy (Griswold v. Connecticut), limited the role of religion in public school (most prominently Engel v. Vitale and Abington School District v. Schempp), incorporated most guarantees of the Bill of Rights against the States—prominently Mapp v. Ohio (the exclusionary rule) and Gideon v. Wainwright (right to appointed counsel),—and required that criminal suspects be apprised of all these rights by police (Miranda v. Arizona). At the same time, however, the Court limited defamation suits by public figures (New York Times v. Sullivan) and supplied the government with an unbroken run of antitrust victories.
The Burger Court (1969–1986) marked a conservative shift. It also expanded Griswold's right to privacy to strike down abortion laws (Roe v. Wade), but divided deeply on affirmative action (Regents of the University of California v. Bakke) and campaign finance regulation (Buckley v. Valeo). It also wavered on the death penalty, ruling first that most applications were defective (Furman v. Georgia), but later, that the death penalty itself was not unconstitutional (Gregg v. Georgia).
Rehnquist and Roberts
The Rehnquist Court (1986–2005) was noted for its revival of judicial enforcement of federalism, emphasizing the limits of the Constitution's affirmative grants of power (United States v. Lopez) and the force of its restrictions on those powers (Seminole Tribe v. Florida, City of Boerne v. Flores). It struck down single-sex state schools as a violation of equal protection (United States v. Virginia), laws against sodomy as violations of substantive due process (Lawrence v. Texas), and the line item veto (Clinton v. New York), but upheld school vouchers (Zelman v. Simmons-Harris) and reaffirmed Roe's restrictions on abortion laws (Planned Parenthood v. Casey). The Court's decision in Bush v. Gore, which ended the electoral recount during the presidential election of 2000, was especially controversial.
The Roberts Court (2005–present) is regarded as more conservative than the Rehnquist Court. Some of its major rulings have concerned federal preemption (Wyeth v. Levine), civil procedure (Twombly-Iqbal), abortion (Gonzales v. Carhart), climate change (Massachusetts v. EPA), same-sex marriage (United States v. Windsor and Obergefell v. Hodges) and the Bill of Rights, notably in Citizens United v. Federal Election Commission (First Amendment), Heller-McDonald (Second Amendment) and Baze v. Rees (Eighth Amendment).
Size of the court
Article III of the Constitution sets neither the size of the Supreme Court nor any specific positions on it (though the existence of the office of the chief justice is tacitly acknowledged in Article I, Section 3, Clause 6). Instead, these powers are entrusted to Congress, which initially established a six-member Supreme Court composed of a chief justice and five associate justices through the Judiciary Act of 1789. The size of the Court was first altered by an 1801 act which would have reduced the size of the court to five members upon its next vacancy, but an 1802 act promptly negated the 1801 act, legally restoring the court's size to six members before any such vacancy occurred. As the nation's boundaries grew, Congress added justices to correspond with the growing number of judicial circuits: seven in 1807, nine in 1837, and ten in 1863.
In 1866, at the behest of Chief Justice Chase and in an attempt to limit the power of Andrew Johnson, Congress passed an act providing that the next three justices to retire would not be replaced, which would thin the bench to seven justices by attrition. Consequently, one seat was removed in 1866 and a second in 1867. In 1869, however, the Circuit Judges Act returned the number of justices to nine, where it has since remained.
President Franklin D. Roosevelt attempted to expand the Court in 1937. His proposal envisioned the appointment of one additional justice for each incumbent justice who reached the age of 70 years 6 months and refused retirement, up to a maximum bench of 15 justices. The proposal was ostensibly to ease the burden of the docket on elderly judges, but the actual purpose was widely understood as an effort to "pack" the Court with justices who would support Roosevelt's New Deal. The plan, usually called the "court-packing plan", failed in Congress. Nevertheless, the Court's balance began to shift within months when Justice Willis Van Devanter retired and was replaced by Senator Hugo Black. By the end of 1941, Roosevelt had appointed seven associate justices and elevated Harlan F. Stone to Chief Justice.
Nomination, confirmation, and appointment
Article II, Section 2, Clause 2 of the United States Constitution, known as the Appointments Clause, empowers the president to nominate and, with the confirmation (advice and consent) of the United States Senate, to appoint public officials, including justices of the Supreme Court. This clause is one example of the system of checks and balances inherent in the Constitution. The president has the plenary power to nominate, while the Senate possesses the plenary power to reject or confirm the nominee. The Constitution sets no qualifications for service as a justice, thus a president may nominate anyone to serve, and the Senate may not set any qualifications or otherwise limit who the president can choose.
In modern times, the confirmation process has attracted considerable attention from the press and advocacy groups, which lobby senators to confirm or to reject a nominee depending on whether their track record aligns with the group's views. The Senate Judiciary Committee conducts hearings and votes on whether the nomination should go to the full Senate with a positive, negative or neutral report. The committee's practice of personally interviewing nominees is relatively recent. The first nominee to appear before the committee was Harlan Fiske Stone in 1925, who sought to quell concerns about his links to Wall Street, and the modern practice of questioning began with John Marshall Harlan II in 1955. Once the committee reports out the nomination, the full Senate considers it. Rejections are relatively uncommon; the Senate has explicitly rejected twelve Supreme Court nominees, most recently Robert Bork, nominated by President Ronald Reagan in 1987.
Although Senate rules do not necessarily allow a negative vote in committee to block a nomination, prior to 2017 a nomination could be blocked by filibuster once debate had begun in the full Senate. President Lyndon B. Johnson's nomination of sitting Associate Justice Abe Fortas to succeed Earl Warren as Chief Justice in 1968 was the first successful filibuster of a Supreme Court nominee. It included both Republican and Democratic senators concerned with Fortas's ethics. President Donald Trump's nomination of Neil Gorsuch to the seat left vacant by Antonin Scalia's death was the second. Unlike the Fortas filibuster, however, only Democratic Senators voted against cloture on the Gorsuch nomination, citing his perceived conservative judicial philosophy, and the Republican majority's prior refusal to take up President Barack Obama's nomination of Merrick Garland to fill the vacancy. This led the Republican majority to change the rules and eliminate the filibuster for Supreme Court nominations.
Not every Supreme Court nominee has received a floor vote in the Senate. A president may withdraw a nomination before an actual confirmation vote occurs, typically because it is clear that the Senate will reject the nominee; this occurred most recently with President George W. Bush's nomination of Harriet Miers in 2005. The Senate may also fail to act on a nomination, which expires at the end of the session. For example, President Dwight Eisenhower's first nomination of John Marshall Harlan II in November 1954 was not acted on by the Senate; Eisenhower re-nominated Harlan in January 1955, and Harlan was confirmed two months later. Most recently, as previously noted, the Senate failed to act on the March 2016 nomination of Merrick Garland; the nomination expired in January 2017, and the vacancy was filled by Neil Gorsuch, an appointee of President Trump.
Once the Senate confirms a nomination, the president must prepare and sign a commission, to which the Seal of the Department of Justice must be affixed, before the new justice can take office. The seniority of an associate justice is based on the commissioning date, not the confirmation or swearing-in date. The importance of commissioning is underscored by the case of Edwin M. Stanton. Although appointed to the court on December 19, 1869, by President Ulysses S. Grant and confirmed by the Senate a few days later, Stanton died on December 24, prior to receiving his commission. He is not, therefore, considered to have been an actual member of the court.
Before 1981, the approval process of justices was usually rapid. From the Truman through Nixon administrations, justices were typically approved within one month. From the Reagan administration to the present, however, the process has taken much longer. Some believe this is because Congress sees justices as playing a more political role than in the past. According to the Congressional Research Service, the average number of days from nomination to final Senate vote since 1975 is 67 days (2.2 months), while the median is 71 days (or 2.3 months).
When the Senate is in recess, a president may make temporary appointments to fill vacancies. Recess appointees hold office only until the end of the next Senate session (less than two years). The Senate must confirm the nominee for them to continue serving; of the two chief justices and eleven associate justices who have received recess appointments, only Chief Justice John Rutledge was not subsequently confirmed.
No president since Dwight D. Eisenhower has made a recess appointment to the Court, and the practice has become rare and controversial even in lower federal courts. In 1960, after Eisenhower had made three such appointments, the Senate passed a "sense of the Senate" resolution that recess appointments to the Court should only be made in "unusual circumstances". Such resolutions are not legally binding but are an expression of Congress's views in the hope of guiding executive action.
The Supreme Court's 2014 decision in National Labor Relations Board v. Noel Canning limited the ability of the President to make recess appointments (including appointments to the Supreme Court); the Court ruled that the Senate decides when the Senate is in session (or in recess). Writing for the Court, Justice Breyer stated, "We hold that, for purposes of the Recess Appointments Clause, the Senate is in session when it says it is, provided that, under its own rules, it retains the capacity to transact Senate business." This ruling allows the Senate to prevent recess appointments through the use of pro-forma sessions.
The Constitution provides that justices "shall hold their offices during good behavior" (unless appointed during a Senate recess). The term "good behavior" is understood to mean justices may serve for the remainder of their lives, unless they are impeached and convicted by Congress, resign, or retire. Only one justice has been impeached by the House of Representatives (Samuel Chase, March 1804), but he was acquitted in the Senate (March 1805). Moves to impeach sitting justices have occurred more recently (for example, William O. Douglas was the subject of hearings twice, in 1953 and again in 1970; and Abe Fortas resigned while hearings were being organized in 1969), but they did not reach a vote in the House. No mechanism exists for removing a justice who is permanently incapacitated by illness or injury, but unable (or unwilling) to resign.
Because justices have indefinite tenure, timing of vacancies can be unpredictable. Sometimes vacancies arise in quick succession, as in the early 1970s when Lewis F. Powell Jr. and William Rehnquist were nominated to replace Hugo Black and John Marshall Harlan II, who retired within a week of each other. Sometimes a great length of time passes between nominations, such as the eleven years between Stephen Breyer's nomination in 1994 to succeed Harry Blackmun and the nomination of John Roberts in 2005 to fill the seat of Sandra Day O'Connor (though Roberts' nomination was withdrawn and resubmitted for the role of chief justice after Rehnquist died).
Despite the variability, all but four presidents have been able to appoint at least one justice. William Henry Harrison died a month after taking office, though his successor (John Tyler) made an appointment during that presidential term. Likewise, Zachary Taylor died 16 months after taking office, but his successor (Millard Fillmore) also made a Supreme Court nomination before the end of that term. Andrew Johnson, who became president after the assassination of Abraham Lincoln, was denied the opportunity to appoint a justice by a reduction in the size of the court. Jimmy Carter is the only person elected president to have left office after at least one full term without having the opportunity to appoint a justice. Presidents James Monroe, Franklin D. Roosevelt, and George W. Bush each served a full term without an opportunity to appoint a justice, but made appointments during their subsequent terms in office. No president who has served more than one full term has gone without at least one opportunity to make an appointment.
There are currently nine justices on the Supreme Court: Chief Justice John Roberts and eight associate justices. Among the current members of the Court, Clarence Thomas is the longest-serving justice, with a tenure of 10,671 days (29 years, 78 days) as of January 9, 2021; the most recent justice to join the court is Amy Coney Barrett, whose tenure began on October 27, 2020.
Length of tenure
This graphical timeline depicts the length of each current Supreme Court justice's tenure (not seniority) on the Court.
The Court currently has six male and three female justices. Among the nine justices, there is one African-American justice (Justice Thomas) and one Hispanic justice (Justice Sotomayor). One of the justices was born to at least one immigrant parent: Justice Alito's father was born in Italy.
At least six justices are Roman Catholics and two are Jewish. It is unclear whether Neil Gorsuch considers himself a Catholic or an Episcopalian. Historically, most justices have been Protestants, including 36 Episcopalians, 19 Presbyterians, 10 Unitarians, 5 Methodists, and 3 Baptists. The first Catholic justice was Roger Taney in 1836, and 1916 saw the appointment of the first Jewish justice, Louis Brandeis. In recent years the historical situation has reversed, as most recent justices have been either Catholic or Jewish.
All current justices except for Amy Coney Barrett have Ivy League backgrounds as either undergraduates or law students. Barrett received her bachelor's degree at Rhodes College and her law degree at the University of Notre Dame. Three justices are from the state of New York, and one each is from California, New Jersey, Georgia, Colorado, Louisiana and Washington, D.C. In the 19th century, every justice was a man of Northwestern European descent, and almost always Protestant. Diversity concerns focused on geography, to represent all regions of the country, rather than religious, ethnic, or gender diversity.
Racial, ethnic, and gender diversity in the Court increased in the late 20th century. Thurgood Marshall became the first African-American justice in 1967. Sandra Day O'Connor became the first female justice in 1981. In 1986, Antonin Scalia became the first Italian-American justice. Marshall was succeeded by African-American Clarence Thomas in 1991. O'Connor was joined by Ruth Bader Ginsburg in 1993. After O'Connor's retirement Ginsburg was joined in 2009 by Sonia Sotomayor, the first Hispanic and Latina justice, and in 2010 by Elena Kagan. After Ginsburg's death on September 18, 2020, Amy Coney Barrett was confirmed as the fifth woman in the Court's history on October 26, 2020.
There have been six foreign-born justices in the Court's history: James Wilson (1789–1798), born in Caskardy, Scotland; James Iredell (1790–1799), born in Lewes, England; William Paterson (1793–1806), born in County Antrim, Ireland (now Northern Ireland); David Brewer (1889–1910), born to American missionaries in Smyrna, Ottoman Empire (now Izmir, Turkey); George Sutherland (1922–1939), born in Buckinghamshire, England; and Felix Frankfurter (1939–1962), born in Vienna, Austria-Hungary (now in Austria).
There are currently three living retired justices of the Supreme Court of the United States: Sandra Day O'Connor, Anthony Kennedy, and David Souter. As retired justices, they no longer participate in the work of the Supreme Court, but may be designated for temporary assignments to sit on lower federal courts, usually the United States Courts of Appeals. Such assignments are formally made by the chief justice, on request of the chief judge of the lower court and with the consent of the retired justice. In recent years, Justice O'Connor has sat with several Courts of Appeals around the country, and Justice Souter has frequently sat on the First Circuit, the court of which he was briefly a member before joining the Supreme Court.
The status of a retired justice is analogous to that of a circuit or district court judge who has taken senior status, and eligibility of a Supreme Court justice to assume retired status (rather than simply resign from the bench) is governed by the same age and service criteria.
In recent times, justices tend to strategically plan their decisions to leave the bench with personal, institutional, ideological, partisan and sometimes even political factors playing a role. The fear of mental decline and death often motivates justices to step down. The desire to maximize the Court's strength and legitimacy through one retirement at a time, when the Court is in recess, and during non-presidential election years suggests a concern for institutional health. Finally, especially in recent decades, many justices have timed their departure to coincide with a philosophically compatible president holding office, to ensure that a like-minded successor would be appointed.
|Justice||Birthdate and place||Appointed by||Retired under||Age at start||Age at retirement||Age at present||Start date||End date||Length of tenure|
|Sandra Day O'Connor||March 26, 1930, El Paso, Texas||Reagan||G. W. Bush||51||75||90||September 25, 1981||January 31, 2006||24 years, 128 days|
|Anthony Kennedy||July 23, 1936, Sacramento, California||Reagan||Trump||51||82||84||February 18, 1988||July 31, 2018||30 years, 163 days|
|David Souter||September 17, 1939, Melrose, Massachusetts||G. H. W. Bush||Obama||51||69||81||October 9, 1990||June 29, 2009||18 years, 263 days|
Seniority and seating
For the most part, the day-to-day activities of the justices are governed by rules of protocol based upon the seniority of justices. The chief justice always ranks first in the order of precedence—regardless of the length of their service. The associate justices are then ranked by the length of their service. The chief justice sits in the center on the bench, or at the head of the table during conferences. The other justices are seated in order of seniority. The senior-most associate justice sits immediately to the chief justice's right; the second most senior sits immediately to their left. The seats alternate right to left in order of seniority, with the most junior justice occupying the last seat. Therefore, starting in the middle of the October 2020 term, the court will sit as follows from left to right, from the perspective of those facing the Court: Kavanaugh, Kagan, Alito, Thomas (most senior associate justice), Roberts (chief justice), Breyer, Sotomayor, Gorsuch, and Barrett. Likewise, when the members of the Court gather for official group photographs, justices are arranged in order of seniority, with the five most senior members seated in the front row in the same order as they would sit during Court sessions, and the four most junior justices standing behind them, again in the same order as they would sit during Court sessions.
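A minimal sketch of this alternation rule, assuming only the seniority ordering given above (the function name and structure are invented for illustration and are not any official procedure):

```python
# Minimal sketch (illustrative only) of the seating rule described above:
# the most senior associate sits to the chief justice's right, the next most
# senior to the left, and so on, alternating outward by seniority.

def bench_order(chief, associates_by_seniority):
    right_of_chief = []  # chief's right side (audience's left)
    left_of_chief = []   # chief's left side (audience's right)
    for i, justice in enumerate(associates_by_seniority):
        (right_of_chief if i % 2 == 0 else left_of_chief).append(justice)
    # Viewed from the audience, the chief's right side appears on the left,
    # with the most junior justice on each side seated farthest from center.
    return list(reversed(right_of_chief)) + [chief] + left_of_chief

# October 2020 term lineup, associates listed from most to least senior:
associates = ["Thomas", "Breyer", "Alito", "Sotomayor",
              "Kagan", "Gorsuch", "Kavanaugh", "Barrett"]
print(bench_order("Roberts", associates))
# ['Kavanaugh', 'Kagan', 'Alito', 'Thomas', 'Roberts',
#  'Breyer', 'Sotomayor', 'Gorsuch', 'Barrett']
```

The output matches the October 2020 seating listed above, from the perspective of those facing the Court.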
In the justices' private conferences, current practice is for them to speak and vote in order of seniority, beginning with the chief justice first and ending with the most junior associate justice. By custom, the most junior associate justice in these conferences is charged with any menial tasks the justices may require as they convene alone, such as answering the door of their conference room, serving beverages and transmitting orders of the court to the clerk. Justice Joseph Story served the longest as junior justice, from February 3, 1812, to September 1, 1823, for a total of 4,228 days. Justice Stephen Breyer follows closely behind, serving from August 3, 1994, to January 31, 2006, for a total of 4,199 days. Justice Elena Kagan is a distant third, serving from August 6, 2010, to April 10, 2017, for a total of 2,439 days.
As of 2018, associate justices receive a yearly salary of $255,300 and the chief justice is paid $267,000 per year. Article III, Section 1 of the U.S. Constitution prohibits Congress from reducing the pay for incumbent justices. Once a justice meets age and service requirements, the justice may retire. Judicial pensions are based on the same formula used for federal employees, but a justice's pension, as with other federal courts judges, can never be less than their salary at the time of retirement.
Although justices are nominated by the president in power and confirmed by the Senate, justices do not represent or receive official endorsements from political parties, as is accepted practice in the legislative and executive branches. Jurists are, however, informally categorized in legal and political circles as being judicial conservatives, moderates, or liberals. Such leanings generally refer to legal outlook rather than a political or legislative one. Nominations are nonetheless supported or opposed by individual senators, who vote to approve or reject the nominated justice. The ideologies of jurists can be measured and compared with several metrics, including the Segal–Cover score, Martin–Quinn score, and Judicial Common Space score.
Following the confirmation of Amy Coney Barrett in 2020, the Court currently consists of six justices appointed by Republican presidents and three appointed by Democratic presidents. It is popularly accepted that Chief Justice Roberts and associate justices Thomas, Alito, Gorsuch, Kavanaugh, and Barrett, appointed by Republican presidents, compose the Court's conservative wing. Justices Breyer, Sotomayor and Kagan, appointed by Democratic presidents, compose the Court's liberal wing. Gorsuch had a track record as a reliably conservative judge on the 10th Circuit. Kavanaugh was considered one of the more conservative judges on the D.C. Circuit prior to his appointment to the Supreme Court. Likewise, Barrett's brief track record on the Seventh Circuit is conservative. Prior to Justice Ginsburg's death, Chief Justice Roberts was considered the Court's median justice (in the middle of the ideological spectrum, with four justices more liberal and four more conservative than him), making him the ideological center of the Court.
Tom Goldstein argued in a 2010 SCOTUSblog article that the popular view of the Supreme Court as sharply divided along ideological lines, with each side pushing an agenda at every turn, is "in significant part a caricature designed to fit certain preconceptions". He pointed out that in the 2009 term, almost half the cases were decided unanimously, and only about 20% were decided by a 5-to-4 vote. Barely one in ten cases involved the narrow liberal/conservative divide (fewer if the cases where Sotomayor recused herself are not included). He also pointed to several cases that defied the popular conception of the ideological lines of the Court. Goldstein further argued that the large number of pro-criminal-defendant summary dismissals (usually cases where the justices decide that the lower courts significantly misapplied precedent and reverse the case without briefing or argument) were an illustration that the conservative justices had not been aggressively ideological. Likewise, Goldstein stated that the critique that the liberal justices are more likely to invalidate acts of Congress, show inadequate deference to the political process, and be disrespectful of precedent, also lacked merit: Thomas has most often called for overruling prior precedent (even if long standing) that he views as having been wrongly decided, and during the 2009 term Scalia and Thomas voted most often to invalidate legislation.
According to statistics compiled by SCOTUSblog, in the twelve terms from 2000 to 2011, an average of 19 of the opinions on major issues (22%) were decided by a 5–4 vote, with an average of 70% of those split opinions decided by a Court divided along the traditionally perceived ideological lines (about 15% of all opinions issued). Over that period, the conservative bloc has been in the majority about 62% of the time that the Court has divided along ideological lines, which represents about 44% of all the 5–4 decisions.
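As a rough consistency check on how these figures relate (approximate arithmetic, not part of the cited statistics):

$$0.70 \times 0.22 \approx 0.15, \qquad 0.62 \times 0.70 \approx 0.44.$$

That is, ideologically divided 5–4 decisions are about 70% of the 22% of opinions decided 5–4 (roughly 15% of all opinions), and conservative-bloc majorities are about 62% of those ideologically divided cases (roughly 44% of all 5–4 decisions).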
In the October 2010 term, the Court decided 86 cases, including 75 signed opinions and 5 summary reversals (where the Court reverses a lower court without arguments and without issuing an opinion on the case). Four were decided with unsigned opinions, two cases affirmed by an equally divided Court, and two cases were dismissed as improvidently granted. Justice Kagan recused herself from 26 of the cases due to her prior role as United States Solicitor General. Of the 80 cases, 38 (about 48%, the highest percentage since the October 2005 term) were decided unanimously (9–0 or 8–0), and 16 decisions were made by a 5–4 vote (about 20%, compared to 18% in the October 2009 term, and 29% in the October 2008 term). However, in fourteen of the sixteen 5–4 decisions, the Court divided along the traditional ideological lines (with Ginsburg, Breyer, Sotomayor, and Kagan on the liberal side, and Roberts, Scalia, Thomas, and Alito on the conservative, and Kennedy providing the "swing vote"). This represents 87% of those 16 cases, the highest rate in the past 10 years. The conservative bloc, joined by Kennedy, formed the majority in 63% of the 5–4 decisions, the highest cohesion rate of that bloc in the Roberts Court.
The October 2017 term had a low rate of unanimity, with only 39% of cases decided unanimously, the lowest percentage since the October 2008 term, when 30% of rulings were unanimous. Chief Justice Roberts was in the majority most often (68 out of 73 cases, or 93.2%), with retiring Justice Anthony Kennedy in second (67 out of 73 cases, or 91.8%); this was typical of the Roberts Court, in which Roberts and Kennedy have been in the majority most frequently in all terms except for the 2013 and 2014 terms (though Kennedy was among the justices most often in the majority in both of those terms). Justice Sotomayor was the justice least likely to be in the majority (in 50 out of 73 cases, or 68.5%). The highest agreement between justices was between Ginsburg and Sotomayor, who agreed on 95.8% of the cases, followed by Thomas and Alito agreeing on 93% of cases. There were 19 cases that were decided by a 5–4 vote (26% of the total cases); 74% of those cases (14 out of 19) broke along ideological lines, and for the first time in the Roberts Court, all of those resulted in a conservative majority, with Roberts, Kennedy, Thomas, Alito, and Gorsuch in the majority.
The October 2018 term, which saw the replacement of Anthony Kennedy by Brett Kavanaugh, once again saw a low rate of unanimity: only 28 of 71 decided cases were decided by a unanimous court, about 39% of the cases. Of these, only 19 cases had the Justices in total agreement. Chief Justice Roberts was once again the justice most often in the majority (61 out of 72 cases, or 85% of the time). Though Kavanaugh had a higher percentage of times in the majority, he did not participate in all cases, voting in the majority 58 out of 64 times, or 91% of the cases in which he participated. Of the justices who participated in all 72 cases, Kagan and Alito tied in second place, voting in the majority 59 out of 72 times (or 82% of the time). Looking only at cases that were not decided unanimously, Roberts and Kavanaugh were the most frequently in the majority (33 cases, with Roberts being in the majority in 75% of the divided cases, and Kavanaugh in 85% of the divided cases he participated in). Of 20 cases that were decided by a vote of 5–4, eight featured the conservative justices in the majority (Roberts, Thomas, Alito, Gorsuch, and Kavanaugh), and eight had the liberal justices (Ginsburg, Breyer, Sotomayor, and Kagan) joined by a conservative: Gorsuch was the most frequent, joining them four times, and the remaining conservative justices joining the liberals once each. The remaining 4 cases were decided by different coalitions. The highest agreement between justices was between Roberts and Kavanaugh, who agreed at least in judgement 94% of the time; the second highest agreement was again between Ginsburg and Sotomayor, who agreed 93% of the time. The highest rate of full agreement was between Ginsburg and Kagan (82% of the time), closely followed by Roberts and Alito, Ginsburg and Sotomayor, and Breyer and Kagan (81% of the time). The largest rate of disagreement was between Thomas and both Ginsburg and Sotomayor; Thomas disagreed with each of them 50% of the time.
The Supreme Court first met on February 1, 1790, at the Merchants' Exchange Building in New York City. When Philadelphia became the capital, the Court met briefly in Independence Hall before settling in Old City Hall from 1791 until 1800. After the government moved to Washington, D.C., the Court occupied various spaces in the United States Capitol building until 1935, when it moved into its own purpose-built home. The four-story building was designed by Cass Gilbert in a classical style sympathetic to the surrounding buildings of the Capitol and Library of Congress, and is clad in marble. The building includes the courtroom, justices' chambers, an extensive law library, various meeting spaces, and auxiliary services including a gymnasium. The Supreme Court building is within the ambit of the Architect of the Capitol, but maintains its own police force separate from the Capitol Police.
Located across First Street from the United States Capitol at One First Street NE and Maryland Avenue, the building is open to the public from 9 am to 4:30 pm weekdays but closed on weekends and holidays. Visitors may not tour the actual courtroom unaccompanied. There is a cafeteria, a gift shop, exhibits, and a half-hour informational film. When the Court is not in session, lectures about the courtroom are held hourly from 9:30 am to 3:30 pm and reservations are not necessary. When the Court is in session, the public may attend oral arguments, which are held twice each morning (and sometimes afternoons) on Mondays, Tuesdays, and Wednesdays in two-week intervals from October through late April, with breaks during December and February. Visitors are seated on a first-come, first-served basis. One estimate puts the number of available seats at about 250. The number of open seats varies from case to case; for important cases, some visitors arrive the day before and wait through the night. From mid-May until the end of June, the court releases orders and opinions beginning at 10 am, and these 15- to 30-minute sessions are open to the public on a similar basis. Supreme Court Police are available to answer questions.
Congress is authorized by Article III of the federal Constitution to regulate the Supreme Court's appellate jurisdiction. The Supreme Court has original and exclusive jurisdiction over cases between two or more states but may decline to hear such cases. It also possesses original but not exclusive jurisdiction to hear "all actions or proceedings to which ambassadors, other public ministers, consuls, or vice consuls of foreign states are parties; all controversies between the United States and a State; and all actions or proceedings by a State against the citizens of another State or against aliens".
In 1906, the Court asserted its original jurisdiction to prosecute individuals for contempt of court in United States v. Shipp. The resulting proceeding remains the only contempt proceeding and only criminal trial in the Court's history. The contempt proceeding arose from the lynching of Ed Johnson in Chattanooga, Tennessee the evening after Justice John Marshall Harlan granted Johnson a stay of execution to allow his lawyers to file an appeal. Johnson was removed from his jail cell by a lynch mob (aided by the local sheriff, who left the prison virtually unguarded) and hanged from a bridge, after which a deputy sheriff pinned a note on Johnson's body reading: "To Justice Harlan. Come get your nigger now." The local sheriff, John Shipp, cited the Supreme Court's intervention as the rationale for the lynching. The Court appointed its deputy clerk as special master to preside over the trial in Chattanooga, with closing arguments made in Washington before the Supreme Court justices, who found nine individuals guilty of contempt, sentencing three to 90 days in jail and the rest to 60 days in jail.
In all other cases, however, the Court has only appellate jurisdiction, including the ability to issue writs of mandamus and writs of prohibition to lower courts. It considers cases based on its original jurisdiction very rarely; almost all cases are brought to the Supreme Court on appeal. In practice, the only original jurisdiction cases heard by the Court are disputes between two or more states.
The Court's appellate jurisdiction consists of appeals from federal courts of appeals (through certiorari, certiorari before judgment, and certified questions), the United States Court of Appeals for the Armed Forces (through certiorari), the Supreme Court of Puerto Rico (through certiorari), the Supreme Court of the Virgin Islands (through certiorari), the District of Columbia Court of Appeals (through certiorari), and "final judgments or decrees rendered by the highest court of a State in which a decision could be had" (through certiorari). In the last case, an appeal may be made to the Supreme Court from a lower state court if the state's highest court declined to hear an appeal or lacks jurisdiction to hear an appeal. For example, a decision rendered by one of the Florida District Courts of Appeal can be appealed to the U.S. Supreme Court if (a) the Supreme Court of Florida declined to grant certiorari, e.g. Florida Star v. B. J. F., or (b) the district court of appeal issued a per curiam decision simply affirming the lower court's decision without discussing the merits of the case, since the Supreme Court of Florida lacks jurisdiction to hear appeals of such decisions. The power of the Supreme Court to consider appeals from state courts, rather than just federal courts, was created by the Judiciary Act of 1789 and upheld early in the Court's history, by its rulings in Martin v. Hunter's Lessee (1816) and Cohens v. Virginia (1821). The Supreme Court is the only federal court that has jurisdiction over direct appeals from state court decisions, although there are several devices that permit so-called "collateral review" of state cases. Notably, this "collateral review" is often available only to individuals on death row and not through the regular judicial system.
Since Article Three of the United States Constitution stipulates that federal courts may only entertain "cases" or "controversies", the Supreme Court cannot decide cases that are moot and it does not render advisory opinions, as the supreme courts of some states may do. For example, in DeFunis v. Odegaard, 416 U.S. 312 (1974), the Court dismissed a lawsuit challenging the constitutionality of a law school affirmative action policy because the plaintiff student had graduated since he began the lawsuit, and a decision from the Court on his claim would not be able to redress any injury he had suffered. However, the Court recognizes some circumstances where it is appropriate to hear a case that is seemingly moot. If an issue is "capable of repetition yet evading review", the Court will address it even though the party before the Court would not themselves be made whole by a favorable result. In Roe v. Wade, 410 U.S. 113 (1973), and other abortion cases, the Court addresses the merits of claims pressed by pregnant women seeking abortions even if they are no longer pregnant because it takes longer than the typical human gestation period to appeal a case through the lower courts to the Supreme Court. Another mootness exception is voluntary cessation of unlawful conduct, in which the Court considers the probability of recurrence and plaintiff's need for relief.
Justices as circuit justices
The United States is divided into thirteen circuit courts of appeals, each of which is assigned a "circuit justice" from the Supreme Court. Although this concept has been in continuous existence throughout the history of the republic, its meaning has changed through time.
Under the Judiciary Act of 1789, each justice was required to "ride circuit", or to travel within the assigned circuit and consider cases alongside local judges. This practice encountered opposition from many justices, who cited the difficulty of travel. Moreover, there was a potential for a conflict of interest on the Court if a justice had previously decided the same case while riding circuit. Circuit riding largely ended with the passage of the Circuit Court of Appeals Act in 1891, and it was officially abolished by Congress in 1911.
The circuit justice for each circuit is responsible for dealing with certain types of applications that, under the Court's rules, may be addressed by a single justice. These include applications for emergency stays (including stays of execution in death-penalty cases) and injunctions pursuant to the All Writs Act arising from cases within that circuit, as well as routine requests such as requests for extensions of time. In the past, circuit justices also sometimes ruled on motions for bail in criminal cases, writs of habeas corpus, and applications for writs of error granting permission to appeal.
A circuit justice may sit as a judge on the Court of Appeals of that circuit, but over the past hundred years, this has rarely occurred. A circuit justice sitting with the Court of Appeals has seniority over the chief judge of the circuit.
The chief justice has traditionally been assigned to the District of Columbia Circuit, the Fourth Circuit (which includes Maryland and Virginia, the states surrounding the District of Columbia), and since it was established, the Federal Circuit. Each associate justice is assigned to one or two judicial circuits.
As of November 20, 2020, the allotment of the justices among the circuits is as follows:
|District of Columbia Circuit||Chief Justice Roberts|
|First Circuit||Justice Breyer|
|Second Circuit||Justice Sotomayor|
|Third Circuit||Justice Alito|
|Fourth Circuit||Chief Justice Roberts|
|Fifth Circuit||Justice Alito|
|Sixth Circuit||Justice Kavanaugh|
|Seventh Circuit||Justice Barrett|
|Eighth Circuit||Justice Kavanaugh|
|Ninth Circuit||Justice Kagan|
|Tenth Circuit||Justice Gorsuch|
|Eleventh Circuit||Justice Thomas|
|Federal Circuit||Chief Justice Roberts|
Six of the current justices are assigned to circuits on which they previously sat as circuit judges: Chief Justice Roberts (D.C. Circuit), Justice Breyer (First Circuit), Justice Sotomayor (Second Circuit), Justice Alito (Third Circuit), Justice Barrett (Seventh Circuit), and Justice Gorsuch (Tenth Circuit).
A term of the Supreme Court commences on the first Monday of each October, and continues until June or early July of the following year. Each term consists of alternating periods of around two weeks known as "sittings" and "recesses". Justices hear cases and deliver rulings during sittings; they discuss cases and write opinions during recesses.
Nearly all cases come before the court by way of petitions for writs of certiorari, commonly referred to as "cert". The Court may review any case in the federal courts of appeals "by writ of certiorari granted upon the petition of any party to any civil or criminal case". The Court may only review "final judgments rendered by the highest court of a state in which a decision could be had" if those judgments involve a question of federal statutory or constitutional law. The party that appealed to the Court is the petitioner and the non-mover is the respondent. All case names before the Court are styled petitioner v. respondent, regardless of which party initiated the lawsuit in the trial court. For example, criminal prosecutions are brought in the name of the state and against an individual, as in State of Arizona v. Ernesto Miranda. If the defendant is convicted, and his conviction then is affirmed on appeal in the state supreme court, when he petitions for cert the name of the case becomes Miranda v. Arizona.
There are situations where the Court has original jurisdiction, such as when two states have a dispute against each other, or when there is a dispute between the United States and a state. In such instances, a case is filed with the Supreme Court directly. Examples of such cases include United States v. Texas, a case to determine whether a parcel of land belonged to the United States or to Texas, and Virginia v. Tennessee, a case turning on whether an incorrectly drawn boundary between two states can be changed by a state court, and whether the setting of the correct boundary requires Congressional approval. Although it has not happened since 1794 in the case of Georgia v. Brailsford, parties in an action at law in which the Supreme Court has original jurisdiction may request that a jury determine issues of fact. Georgia v. Brailsford remains the only case in which the court has empaneled a jury, in this case a special jury. Two other original jurisdiction cases involve colonial era borders and rights under navigable waters in New Jersey v. Delaware, and water rights between riparian states upstream of navigable waters in Kansas v. Colorado.
A cert petition is voted on at a session of the court called a conference. A conference is a private meeting of the nine Justices by themselves; the public and the Justices' clerks are excluded. The rule of four permits four of the nine justices to grant a writ of certiorari. If it is granted, the case proceeds to the briefing stage; otherwise, the case ends. Except in death penalty cases and other cases in which the Court orders briefing from the respondent, the respondent may, but is not required to, file a response to the cert petition.
The court grants a petition for cert only for "compelling reasons", spelled out in the court's Rule 10. Such reasons include:
- Resolving a conflict in the interpretation of a federal law or a provision of the federal Constitution
- Correcting an egregious departure from the accepted and usual course of judicial proceedings
- Resolving an important question of federal law, or expressly reviewing a decision of a lower court that conflicts directly with a previous decision of the Court.
When a conflict of interpretations arises from differing interpretations of the same law or constitutional provision issued by different federal circuit courts of appeals, lawyers call this situation a "circuit split". If the court votes to deny a cert petition, as it does in the vast majority of such petitions that come before it, it does so typically without comment. A denial of a cert petition is not a judgment on the merits of a case, and the decision of the lower court stands as the case's final ruling.
To manage the high volume of cert petitions received by the Court each year (of the more than 7,000 petitions the Court receives each year, it will usually request briefing and hear oral argument in 100 or fewer), the Court employs an internal case management tool known as the "cert pool". Currently, all justices except for Justices Alito and Gorsuch participate in the cert pool.
When the Court grants a cert petition, the case is set for oral argument. Both parties will file briefs on the merits of the case, as distinct from the reasons they may have argued for granting or denying the cert petition. With the consent of the parties or approval of the Court, amici curiae, or "friends of the court", may also file briefs. The Court holds two-week oral argument sessions each month from October through April. Each side has thirty minutes to present its argument (the Court may choose to give more time, though this is rare), and during that time, the Justices may interrupt the advocate and ask questions. The petitioner gives the first presentation, and may reserve some time to rebut the respondent's arguments after the respondent has concluded. Amici curiae may also present oral argument on behalf of one party if that party agrees. The Court advises counsel to assume that the Justices are familiar with and have read the briefs filed in a case.
Supreme Court bar
In order to plead before the court, an attorney must first be admitted to the court's bar. Approximately 4,000 lawyers join the bar each year. The bar contains an estimated 230,000 members. In reality, pleading is limited to several hundred attorneys. The rest join for a one-time fee of $200, earning the court about $750,000 annually. Attorneys can be admitted as either individuals or as groups. The group admission is held before the current justices of the Supreme Court, wherein the chief justice approves a motion to admit the new attorneys. Lawyers commonly apply for the cosmetic value of a certificate to display in their office or on their resume. They also receive access to better seating if they wish to attend an oral argument. Members of the Supreme Court Bar are also granted access to the collections of the Supreme Court Library.
At the conclusion of oral argument, the case is submitted for decision. Cases are decided by majority vote of the Justices. It is the Court's practice to issue decisions in all cases argued in a particular term by the end of that term. Within that term, however, the Court is under no obligation to release a decision within any set time after oral argument.
After the oral argument is concluded, usually in the same week as the case was submitted, the Justices retire to another conference at which the preliminary votes are tallied and the Court sees which side has prevailed. One of the Justices in the majority is then assigned to write the Court's opinion—also known as the "majority opinion". This assignment is made by the most senior Justice in the majority (with the Chief Justice always being considered the most senior). Drafts of the Court's opinion circulate among the Justices until the Court is prepared to announce the judgment in a particular case. Justices are free to change their votes on a case up until the decision is finalized and published. In any given case, a Justice is free to choose whether or not to author an opinion or else simply join the majority or another Justice's opinion. There are several primary types of opinions:
- Opinion of the Court: this is the binding decision of the Supreme Court. An opinion that more than half of the Justices join (usually at least five Justices, since there are nine Justices in total; but in cases where some Justices do not participate it could be fewer) is known as a "majority opinion" and creates binding precedent in American law. An opinion that fewer than half of the Justices join is known as a "plurality opinion" and is only partially binding as precedent.
- Concurring: when a Justice "concurs", he or she agrees with and joins the majority opinion but authors a separate concurrence to give additional explanations, rationales, or commentary. Concurrences do not create binding precedent.
- Concurring in the judgment: when a justice "concurs in the judgment", he or she agrees with the outcome the Court reached but disagrees with its reasons for doing so. A justice in this situation does not join the majority opinion. Like regular concurrences, these do not create binding precedent.
- Dissent: a dissenting Justice disagrees with the outcome the Court reached and its reasoning. Justices who dissent from a decision may author their own dissenting opinions or, if there are multiple dissenting Justices in a decision, may join another Justice's dissent. Dissents do not create binding precedent.
A justice may also join only parts of a particular decision, and may even agree with some parts of the outcome and disagree with others.
Since recording devices are banned inside the courtroom of the Supreme Court Building, the delivery of the decision to the media is done via paper copies and is known as the "Running of the Interns".
It is possible that, through recusals or vacancies, the Court divides evenly on a case. If that occurs, then the decision of the court below is affirmed, but does not establish binding precedent. In effect, it results in a return to the status quo ante. For a case to be heard, there must be a quorum of at least six justices. If a quorum is not available to hear a case and a majority of qualified justices believes that the case cannot be heard and determined in the next term, then the judgment of the court below is affirmed as if the Court had been evenly divided. For cases brought to the Supreme Court by direct appeal from a United States District Court, the chief justice may order the case remanded to the appropriate U.S. Court of Appeals for a final decision there. This has only occurred once in U.S. history, in the case of United States v. Alcoa (1945).
The Court's opinions are published in three stages. First, a slip opinion is made available on the Court's web site and through other outlets. Next, several opinions and lists of the court's orders are bound together in paperback form, called a preliminary print of United States Reports, the official series of books in which the final version of the Court's opinions appears. About a year after the preliminary prints are issued, a final bound volume of U.S. Reports is issued. The individual volumes of U.S. Reports are numbered so that users may cite this set of reports (or a competing version published by another commercial legal publisher but containing parallel citations) to allow those who read their pleadings and other briefs to find the cases quickly and easily.
As of January 2019, there are:
- Final bound volumes of U.S. Reports: 569 volumes, covering cases through June 13, 2013 (part of the October 2012 term).
- Slip opinions: 21 volumes (565–585 for 2011–2017 terms, three two-part volumes each), plus part 1 of volume 586 (2018 term).
As of March 2012, the U.S. Reports have published a total of 30,161 Supreme Court opinions, covering the decisions handed down from February 1790 to March 2012. This figure does not reflect the number of cases the Court has taken up, as several cases can be addressed by a single opinion (see, for example, Parents v. Seattle, where Meredith v. Jefferson County Board of Education was also decided in the same opinion; by a similar logic, Miranda v. Arizona actually decided not only Miranda but also three other cases: Vignera v. New York, Westover v. United States, and California v. Stewart). A more unusual example is The Telephone Cases, which are a single set of interlinked opinions that take up the entire 126th volume of the U.S. Reports.
Opinions are also collected and published in two unofficial, parallel reporters: Supreme Court Reporter, published by West (now a part of Thomson Reuters), and United States Supreme Court Reports, Lawyers' Edition (simply known as Lawyers' Edition), published by LexisNexis. In court documents, legal periodicals and other legal media, case citations generally contain cites from each of the three reporters; for example, citation to Citizens United v. Federal Election Commission is presented as Citizens United v. Federal Election Com'n, 558 U.S. 310, 130 S. Ct. 876, 175 L. Ed. 2d 753 (2010), with "S. Ct." representing the Supreme Court Reporter, and "L. Ed." representing the Lawyers' Edition.
Citations to published opinions
Lawyers use an abbreviated format to cite cases, in the form "vol U.S. page, pin (year)", where vol is the volume number, page is the page number on which the opinion begins, and year is the year in which the case was decided. Optionally, pin is used to "pinpoint" to a specific page number within the opinion. For instance, the citation for Roe v. Wade is 410 U.S. 113 (1973), which means the case was decided in 1973 and appears on page 113 of volume 410 of U.S. Reports. For opinions or orders that have not yet been published in the preliminary print, the volume and page numbers may be replaced with "___".
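A minimal sketch of this citation pattern, assuming the "vol U.S. page, pin (year)" form described above (the helper function and its example values are invented for illustration and are not part of any official citation tool):

```python
# Minimal sketch (illustrative only) of assembling a U.S. Reports citation
# in the "vol U.S. page, pin (year)" form described above.

def us_reports_citation(volume, page, year, pin=None):
    vol_str = volume if volume is not None else "___"    # blanks for opinions not yet published
    page_str = page if page is not None else "___"
    pin_part = f", {pin}" if pin is not None else ""      # optional pinpoint page
    return f"{vol_str} U.S. {page_str}{pin_part} ({year})"

print(us_reports_citation(410, 113, 1973))            # Roe v. Wade: "410 U.S. 113 (1973)"
print(us_reports_citation(410, 113, 1973, pin=120))   # with a pinpoint: "410 U.S. 113, 120 (1973)"
print(us_reports_citation(None, None, 2023))          # not yet published: "___ U.S. ___ (2023)"
```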
Institutional powers and constraints
The federal court system and the judicial authority to interpret the Constitution received little attention in the debates over the drafting and ratification of the Constitution. The power of judicial review, in fact, is nowhere mentioned in it. In the ensuing years, the question of whether the drafters of the Constitution even intended a power of judicial review was clouded by the lack of evidence bearing on the question either way. Nevertheless, the power of the judiciary to overturn laws and executive actions it determines to be unlawful or unconstitutional is a well-established precedent. Many of the Founding Fathers accepted the notion of judicial review; in Federalist No. 78, Alexander Hamilton wrote: "A Constitution is, in fact, and must be regarded by the judges, as a fundamental law. It therefore belongs to them to ascertain its meaning, as well as the meaning of any particular act proceeding from the legislative body. If there should happen to be an irreconcilable variance between the two, that which has the superior obligation and validity ought, of course, to be preferred; or, in other words, the Constitution ought to be preferred to the statute."
The Supreme Court firmly established its power to declare laws unconstitutional in Marbury v. Madison (1803), consummating the American system of checks and balances. In explaining the power of judicial review, Chief Justice John Marshall stated that the authority to interpret the law was the particular province of the courts, part of the duty of the judicial department to say what the law is. His contention was not that the Court had privileged insight into constitutional requirements, but that it was the constitutional duty of the judiciary, as well as the other branches of government, to read and obey the dictates of the Constitution.
Since the founding of the republic, there has been a tension between the practice of judicial review and the democratic ideals of egalitarianism, self-government, self-determination and freedom of conscience. At one pole are those who view the Federal Judiciary and especially the Supreme Court as being "the most separated and least checked of all branches of government". Indeed, federal judges and justices on the Supreme Court are not required to stand for election by virtue of their tenure "during good behavior", and their pay may "not be diminished" while they hold their position (Section 1 of Article Three). Though subject to the process of impeachment, only one Justice has ever been impeached and no Supreme Court Justice has been removed from office. At the other pole are those who view the judiciary as the least dangerous branch, with little ability to resist the exhortations of the other branches of government.
The Supreme Court, it is noted, cannot directly enforce its rulings; instead, it relies on respect for the Constitution and for the law for adherence to its judgments. One notable instance of nonacquiescence came in 1832, when the state of Georgia ignored the Supreme Court's decision in Worcester v. Georgia. President Andrew Jackson, who sided with the Georgia courts, is supposed to have remarked, "John Marshall has made his decision; now let him enforce it!" Some state governments in the South also resisted the desegregation of public schools after the 1954 judgment Brown v. Board of Education. More recently, many feared that President Nixon would refuse to comply with the Court's order in United States v. Nixon (1974) to surrender the Watergate tapes. Nixon, however, ultimately complied with the Supreme Court's ruling.
Supreme Court decisions can be (and have been) purposefully overturned by constitutional amendment, which has happened on five occasions:
- Chisholm v. Georgia (1793) – overturned by the Eleventh Amendment (1795)
- Dred Scott v. Sandford (1857) – overturned by the Thirteenth Amendment (1865) and the Fourteenth Amendment (1868)
- Pollock v. Farmers' Loan & Trust Co. (1895) – overturned by the Sixteenth Amendment (1913)
- Minor v. Happersett (1875) – overturned by the Nineteenth Amendment (1920)
- Oregon v. Mitchell (1970) – overturned by the Twenty-sixth Amendment (1971)
When the Court rules on matters involving the interpretation of laws rather than of the Constitution, simple legislative action can reverse the decisions (for example, in 2009 Congress passed the Lilly Ledbetter Fair Pay Act, superseding the limitations given in Ledbetter v. Goodyear Tire & Rubber Co. in 2007). Also, the Supreme Court is not immune from political and institutional considerations: lower federal courts and state courts sometimes resist doctrinal innovations, as do law enforcement officials.
In addition, the other two branches can restrain the Court through other mechanisms. Congress can increase the number of justices, giving the President power to influence future decisions by appointments (as in Roosevelt's Court Packing Plan discussed above). Congress can pass legislation that restricts the jurisdiction of the Supreme Court and other federal courts over certain topics and cases: this is suggested by language in Section 2 of Article Three, where the appellate jurisdiction is granted "with such Exceptions, and under such Regulations as the Congress shall make." The Court sanctioned such congressional action in the Reconstruction case ex parte McCardle (1869), though it rejected Congress' power to dictate how particular cases must be decided in United States v. Klein (1871).
On the other hand, through its power of judicial review, the Supreme Court has defined the scope and nature of the powers and separation between the legislative and executive branches of the federal government; for example, in United States v. Curtiss-Wright Export Corp. (1936), Dames & Moore v. Regan (1981), and notably in Goldwater v. Carter (1979), where it effectively gave the presidency the power to terminate ratified treaties without the consent of Congress. The Court's decisions can also impose limitations on the scope of executive authority, as in Humphrey's Executor v. United States (1935), the Steel Seizure Case (1952), and United States v. Nixon (1974).
Each Supreme Court justice hires several law clerks to review petitions for writs of certiorari, research them, prepare bench memoranda, and draft opinions. Associate justices are allowed four clerks. The chief justice is allowed five clerks, but Chief Justice Rehnquist hired only three per year, and Chief Justice Roberts usually hires only four. Generally, law clerks serve a term of one to two years.
The first law clerk was hired by Associate Justice Horace Gray in 1882. Oliver Wendell Holmes, Jr. and Louis Brandeis were the first Supreme Court justices to use recent law school graduates as clerks, rather than hiring a "stenographer-secretary". Most law clerks are recent law school graduates.
The first female clerk was Lucile Lomen, hired in 1944 by Justice William O. Douglas. The first African-American, William T. Coleman, Jr., was hired in 1948 by Justice Felix Frankfurter. A disproportionately large number of law clerks have obtained law degrees from elite law schools, especially Harvard, Yale, the University of Chicago, Columbia, and Stanford. From 1882 to 1940, 62% of law clerks were graduates of Harvard Law School. Those chosen to be Supreme Court law clerks usually have graduated in the top of their law school class and were often an editor of the law review or a member of the moot court board. By the mid-1970s, clerking previously for a judge in a federal court of appeals had also become a prerequisite to clerking for a Supreme Court justice.
Nine Supreme Court justices previously clerked for other justices: Byron White for Frederick M. Vinson, John Paul Stevens for Wiley Rutledge, William Rehnquist for Robert H. Jackson, Stephen Breyer for Arthur Goldberg, John Roberts for William Rehnquist, Elena Kagan for Thurgood Marshall, Neil Gorsuch for both Byron White and Anthony Kennedy, Brett Kavanaugh also for Kennedy, and Amy Coney Barrett for Antonin Scalia. Justices Gorsuch and Kavanaugh served under Kennedy during the same term. Gorsuch is the first justice to serve alongside a justice for whom he or she clerked, serving alongside Kennedy from April 2017 through Kennedy's retirement in 2018. With the confirmation of Justice Kavanaugh, for the first time a majority of the Supreme Court was composed of former Supreme Court law clerks (Roberts, Breyer, Kagan, Gorsuch and Kavanaugh, now joined by Barrett).
Several current Supreme Court justices have also clerked in the federal courts of appeals: John Roberts for Judge Henry Friendly of the United States Court of Appeals for the Second Circuit, Justice Samuel Alito for Judge Leonard I. Garth of the United States Court of Appeals for the Third Circuit, Elena Kagan for Judge Abner J. Mikva of the United States Court of Appeals for the District of Columbia Circuit, Neil Gorsuch for Judge David B. Sentelle of the United States Court of Appeals for the District of Columbia, Brett Kavanaugh for Judge Walter Stapleton of the United States Court of Appeals for the Third Circuit and Judge Alex Kozinski of the United States Court of Appeals for the Ninth Circuit, and Amy Coney Barrett for Judge Laurence Silberman of the U.S. Court of Appeals for the D.C. Circuit.
Politicization of the Court
Clerks hired by each of the justices of the Supreme Court are often given considerable leeway in the opinions they draft. "Supreme Court clerkship appeared to be a nonpartisan institution from the 1940s into the 1980s," according to a study published in 2009 by the law review of Vanderbilt University Law School. "As law has moved closer to mere politics, political affiliations have naturally and predictably become proxies for the different political agendas that have been pressed in and through the courts," former federal court of appeals judge J. Michael Luttig said. David J. Garrow, professor of history at the University of Cambridge, stated that the Court had thus begun to mirror the political branches of government. "We are getting a composition of the clerk workforce that is getting to be like the House of Representatives," Professor Garrow said. "Each side is putting forward only ideological purists."
According to the Vanderbilt Law Review study, this politicized hiring trend reinforces the impression that the Supreme Court is "a superlegislature responding to ideological arguments rather than a legal institution responding to concerns grounded in the rule of law". A poll conducted in June 2012 by The New York Times and CBS News showed just 44% of Americans approve of the job the Supreme Court is doing. Three-quarters said justices' decisions are sometimes influenced by their political or personal views.
The Supreme Court has been the object of criticisms on a range of issues. Among them:
The Supreme Court has been criticized for not keeping within Constitutional bounds by engaging in judicial activism, rather than merely interpreting law and exercising judicial restraint. Claims of judicial activism are not confined to any particular ideology. An often cited example of conservative judicial activism is the 1905 decision in Lochner v. New York, which has been criticized by many prominent thinkers, including Robert Bork, Justice Antonin Scalia, and Chief Justice John Roberts, and which was reversed in the 1930s.
An often cited example of liberal judicial activism is Roe v. Wade (1973), which legalized abortion on the basis of the "right to privacy" inferred from the Fourteenth Amendment, a reasoning that some critics argued was circuitous. Legal scholars, justices, and presidential candidates have criticized the Roe decision. The progressive Brown v. Board of Education decision banning racial segregation in public schools has been criticized by conservatives such as Patrick Buchanan, former Associate Justice nominee and Solicitor General Robert Bork and former presidential contender Barry Goldwater.
More recently, Citizens United v. Federal Election Commission was criticized for expanding upon the precedent in First National Bank of Boston v. Bellotti (1978) that the First Amendment applies to corporations, including campaign spending. President Abraham Lincoln warned, referring to the Dred Scott decision, that if government policy became "irrevocably fixed by decisions of the Supreme Court...the people will have ceased to be their own rulers." Former justice Thurgood Marshall justified judicial activism with these words: "You do what you think is right and let the law catch up."
During different historical periods, the Court has leaned in different directions. Critics from both sides complain that activist judges abandon the Constitution and substitute their own views instead. Critics include writers such as Andrew Napolitano, Phyllis Schlafly, Mark R. Levin, Mark I. Sutherland, and James MacGregor Burns. Past presidents from both parties have attacked judicial activism, including Franklin D. Roosevelt, Richard Nixon, and Ronald Reagan. Failed Supreme Court nominee Robert Bork wrote: "What judges have wrought is a coup d'état, slow-moving and genteel, but a coup d'état nonetheless." Brian Leiter wrote that "Given the complexity of the law and the complexity involved in saying what really happened in a given dispute, all judges, and especially those on the Supreme Court, often have to exercise a quasi-legislative power," and "Supreme Court nominations are controversial because the court is a super-legislature, and because its moral and political judgments are controversial."
Failing to protect individual rights
Court decisions have been criticized for failing to protect individual rights: the Dred Scott decision (1857) upheld slavery; Plessy v. Ferguson (1896) upheld segregation under the doctrine of separate but equal; and Kelo v. City of New London (2005) was criticized by prominent politicians, including New Jersey governor Jon Corzine, as undermining property rights. Some critics suggested that the bench, with its conservative majority, had by 2009 "become increasingly hostile to voters" by siding with Indiana's voter identification laws, which, according to one report, tend to "disenfranchise large numbers of people without driver's licenses, especially poor and minority voters". Senator Al Franken criticized the Court for "eroding individual rights". However, others argue that the Court is too protective of some individual rights, particularly those of people accused of crimes or in detention. For example, Chief Justice Warren Burger was an outspoken critic of the exclusionary rule, and Justice Scalia criticized the Court's decision in Boumediene v. Bush for being too protective of the rights of Guantanamo detainees, on the grounds that habeas corpus was "limited" to sovereign territory.
This criticism is related to complaints about judicial activism. George Will wrote that the Court has an "increasingly central role in American governance". It was criticized for intervening in bankruptcy proceedings regarding ailing carmaker Chrysler Corporation in 2009. A reporter wrote that "Justice Ruth Bader Ginsburg's intervention in the Chrysler bankruptcy" left open the "possibility of further judicial review" but argued overall that the intervention was a proper use of Supreme Court power to check the executive branch. Warren E. Burger, before becoming Chief Justice, argued that since the Supreme Court has such "unreviewable power" it is likely to "self-indulge itself" and unlikely to "engage in dispassionate analysis". Larry Sabato wrote "excessive authority has accrued to the federal courts, especially the Supreme Court."
Courts are a poor check on executive power
British constitutional scholar Adam Tomkins sees flaws in the American system of having courts (and specifically the Supreme Court) act as checks on the Executive and Legislative branches; he argues that because the courts must wait, sometimes for years, for cases to navigate their way through the system, their ability to restrain other branches is severely weakened. In contrast, various other countries have a dedicated constitutional court that has original jurisdiction on constitutional claims brought by persons or political institutions; for example, the Federal Constitutional Court of Germany, which can declare a law unconstitutional when challenged.
Federal versus state power
There has been debate throughout American history about the boundary between federal and state power. While Framers such as James Madison and Alexander Hamilton argued in The Federalist Papers that their then-proposed Constitution would not infringe on the power of state governments, others argue that expansive federal power is good and consistent with the Framers' wishes. The Tenth Amendment to the United States Constitution explicitly states that powers "not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people."
The Court has been criticized for giving the federal government too much power to interfere with state authority. One criticism is that it has allowed the federal government to misuse the Commerce Clause by upholding regulations and legislation which have little to do with interstate commerce, but that were enacted under the guise of regulating interstate commerce; and by voiding state legislation for allegedly interfering with interstate commerce. For example, the Commerce Clause was used by the Fifth Circuit Court of Appeals to uphold the Endangered Species Act, thus protecting six endemic species of insect near Austin, Texas, despite the fact that the insects had no commercial value and did not travel across state lines; the Supreme Court let that ruling stand without comment in 2005. Chief Justice John Marshall asserted Congress's power over interstate commerce was "complete in itself, may be exercised to its utmost extent, and acknowledges no limitations, other than are prescribed in the Constitution". Justice Alito said congressional authority under the Commerce Clause is "quite broad". Modern day theorist Robert B. Reich suggests debate over the Commerce Clause continues today.
Advocates of states' rights such as constitutional scholar Kevin Gutzman have also criticized the Court, saying it has misused the Fourteenth Amendment to undermine state authority. Justice Brandeis, in arguing for allowing the states to operate without federal interference, suggested that states should be laboratories of democracy. One critic wrote that "the great majority of Supreme Court rulings of unconstitutionality involve state, not federal, law." However, others see the Fourteenth Amendment as a positive force that extends "protection of those rights and guarantees to the state level". More recently, the issue of federal power was central in Gamble v. United States, which examined the doctrine of "separate sovereigns", whereby a criminal defendant can be prosecuted by a state court and then by a federal court.
Secretive proceedings
The Court has been criticized for keeping its deliberations hidden from public view. According to a review of Jeffrey Toobin's 2007 exposé The Nine: Inside the Secret World of the Supreme Court: "Its inner workings are difficult for reporters to cover, like a closed 'cartel', only revealing itself through 'public events and printed releases, with nothing about its inner workings.'" The reviewer writes: "few (reporters) dig deeply into court affairs. It all works very neatly; the only ones hurt are the American people, who know little about nine individuals with enormous power over their lives." Larry Sabato complains about the Court's "insularity". A Fairleigh Dickinson University poll conducted in 2010 found that 61% of American voters agreed that televising Court hearings would "be good for democracy", and 50% of voters stated they would watch Court proceedings if they were televised. More recently, several justices have appeared on television, written books and made public statements to journalists. In a 2009 interview on C-SPAN, journalists Joan Biskupic (of USA Today) and Lyle Denniston (of SCOTUSblog) argued that the Court is a "very open" institution, with only the justices' private conferences inaccessible to others. In October 2010, the Court began the practice of posting on its website recordings and transcripts of oral arguments on the Friday after they occur.
Judicial interference in political disputes
Some Court decisions have been criticized for injecting the Court into the political arena and deciding questions that are the purview of the other two branches of government. The Bush v. Gore decision, in which the Supreme Court intervened in the 2000 presidential election and effectively chose George W. Bush over Al Gore, has been criticized extensively, particularly by liberals. Another example is the Court's decisions on apportionment and redistricting: in Baker v. Carr, the Court decided it could rule on apportionment questions; Justice Frankfurter, in a "scathing dissent", argued against the Court wading into so-called political questions.
Not choosing enough cases to review
Senator Arlen Specter said the Court should "decide more cases". On the other hand, although Justice Scalia acknowledged in a 2009 interview that the number of cases that the Court heard then was smaller than when he first joined the Supreme Court, he also stated that he had not changed his standards for deciding whether to review a case, nor did he believe his colleagues had changed their standards. He attributed the high volume of cases in the late 1980s, at least in part, to an earlier flurry of new federal legislation that was making its way through the courts.
Lifetime tenure
Critic Larry Sabato wrote: "The insularity of lifetime tenure, combined with the appointments of relatively young attorneys who give long service on the bench, produces senior judges representing the views of past generations better than views of the current day." Sanford Levinson has been critical of justices who, counting on their longevity, stayed in office despite medical deterioration. James MacGregor Burns stated that lifelong tenure has "produced a critical time lag, with the Supreme Court institutionally almost always behind the times". Proposals to solve these problems include term limits for justices, as proposed by Levinson and Sabato, as well as a mandatory retirement age, proposed by Richard Epstein, among others. However, others suggest lifetime tenure brings substantial benefits, such as impartiality and freedom from political pressure. Alexander Hamilton in Federalist 78 wrote that "nothing can contribute so much to its firmness and independence as permanency in office."
Accepting gifts and outside income
The 21st century has seen increased scrutiny of justices accepting expensive gifts and travel. All of the members of the Roberts Court have accepted travel or gifts. In 2012, Justice Sonia Sotomayor received $1.9 million in advances from her publisher Knopf Doubleday. Justice Scalia and others took dozens of expensive trips to exotic locations paid for by private donors. Private events sponsored by partisan groups that are attended by both the justices and those who have an interest in their decisions have raised concerns about access and inappropriate communications. Stephen Spaulding, the legal director at Common Cause, said: "There are fair questions raised by some of these trips about their commitment to being impartial."
See also
- Judicial appointment history for United States federal courts
- List of presidents of the United States by judicial appointments
- List of law schools attended by United States Supreme Court justices
- Lists of United States Supreme Court cases
- Oyez Project
- Reporter of Decisions of the Supreme Court of the United States
Landmark Supreme Court decisions (selection)
- Marbury v. Madison (1803, judicial review)
- McCulloch v. Maryland (1819, implied powers)
- Gibbons v. Ogden (1824, interstate commerce)
- Dred Scott v. Sandford (1857, slavery)
- Plessy v. Ferguson (1896, separate but equal treatment of races)
- Wickard v. Filburn (1942, federal regulation of economic activity)
- Brown v. Board of Education (1954, school segregation of races)
- Engel v. Vitale (1962, state-sponsored prayers in public schools)
- Abington School District v. Schempp (1963, Bible readings and recitation of the Lord's prayer in U.S. public schools)
- Gideon v. Wainwright (1963, right to an attorney)
- Griswold v. Connecticut (1965, contraception)
- Miranda v. Arizona (1966, rights of those detained by police)
- In re Gault (1967, rights of juvenile suspects)
- Loving v. Virginia (1967, interracial marriage)
- Lemon v. Kurtzman (1971, religious activities in public schools)
- New York Times Co. v. United States (1971, freedom of the press)
- Eisenstadt v. Baird (1972, privacy for unmarried people)
- Roe v. Wade (1973, abortion)
- Miller v. California (1973, obscenity)
- United States v. Nixon (1974, executive privilege)
- Buckley v. Valeo (1976, campaign finance)
- Bowers v. Hardwick (1986, sodomy)
- Bush v. Gore (2000, presidential election)
- Lawrence v. Texas (2003, sodomy)
- District of Columbia v. Heller (2008, gun rights)
- Citizens United v. FEC (2010, campaign finance)
- United States v. Windsor (2013, same-sex marriage)
- Shelby County v. Holder (2013, voting rights)
- Obergefell v. Hodges (2015, same-sex marriage)
- Bostock v. Clayton County (2020, employment discrimination against LGBT workers)
References
- Lawson, Gary; Seidman, Guy (2001). "When Did the Constitution Become Law?". Notre Dame Law Review. 77: 1–37.
- U.S. Constitution, Article III, Section 2. This was narrowed by the Eleventh Amendment to exclude suits against states that are brought by persons who are not citizens of that state.
- "About the Supreme Court". Washington, D.C.: Administrative Office of the United States Courts. Retrieved September 3, 2018.
- Turley, Jonathan. "Essays on Article III: Good Behavior Clause". Heritage Guide to the Constitution. Washington, D.C.: The Heritage Foundation. Retrieved September 3, 2018.
- Pushaw Jr., Robert J. "Essays on Article III: Judicial Vesting Clause". Heritage Guide to the Constitution. Washington, D.C.: The Heritage Foundation. Retrieved September 3, 2018.
- Watson, Bradley C. S. "Essays on Article III: Supreme Court". Heritage Guide to the Constitution. Washington, D.C.: The Heritage Foundation. Retrieved September 3, 2018.
- "The Court as an Institution". Washington, D.C.: Supreme Court of the United States. Retrieved September 3, 2018.
- "Supreme Court Nominations: present–1789". Washington, D.C.: Office of the Secretary, United States Senate. Retrieved September 3, 2018.
- Hodak, George (February 1, 2011). "February 2, 1790: Supreme Court Holds Inaugural Session". abajournal.com. Chicago, Illinois: American Bar Association. Retrieved September 3, 2018.
- Pigott, Robert (2014). New York's Legal Landmarks: A Guide to Legal Edifices, Institutions, Lore, History, and Curiosities on the City's Streets. New York: Attorney Street Editions. p. 7. ISBN 978-0-61599-283-9.
- "Building History". Washington, D.C.: Supreme Court of the United States. Retrieved September 3, 2018.
- Ashmore, Anne (August 2006). "Dates of Supreme Court decisions and arguments, United States Reports volumes 2–107 (1791–1882)" (PDF). Library, Supreme Court of the United States. Retrieved April 26, 2009.
- Shugerman, Jed. "A Six-Three Rule: Reviving Consensus and Deference on the Supreme Court". Georgia Law Review. 37: 893.
- Irons, Peter. A People's History of the Supreme Court, p. 101 (Penguin 2006).
- Scott Douglas Gerber, ed. (1998). "Seriatim: The Supreme Court Before John Marshall". New York University Press. p. 3. ISBN 0-8147-3114-7. Retrieved October 31, 2009.
Finally many scholars cite the absence of a separate Supreme Court building as evidence that the early Court lacked prestige.
- Manning, John F. (2004). "The Eleventh Amendment and the Reading of Precise Constitutional Texts". Yale Law Journal. 113 (8): 1663–1750. doi:10.2307/4135780. JSTOR 4135780.
- Epps, Garrett (October 24, 2004). "Don't Do It, Justices". The Washington Post. Retrieved October 31, 2009.
The court's prestige has been hard-won. In the early 1800s, Chief Justice John Marshall made the court respected
- The Supreme Court had first used the power of judicial review in the case Ware v. Hylton, (1796), wherein it overturned a state law that conflicted with a treaty between the United States and Great Britain.
- Rosen, Jeffrey (July 5, 2009). "Black Robe Politics" (book review of Packing the Court by James MacGregor Burns). The Washington Post. Retrieved October 31, 2009.
From the beginning, Burns continues, the Court has established its "supremacy" over the president and Congress because of Chief Justice John Marshall's "brilliant political coup" in Marbury v. Madison (1803): asserting a power to strike down unconstitutional laws.
- "The People's Vote: 100 Documents that Shaped America – Marbury v. Madison (1803)". U.S. News & World Report. 2003. Archived from the original on September 20, 2003. Retrieved October 31, 2009.
With his decision in Marbury v. Madison, Chief Justice John Marshall established the principle of judicial review, an important addition to the system of "checks and balances" created to prevent any one branch of the Federal Government from becoming too powerful...A Law repugnant to the Constitution is void.
- Sloan, Cliff; McKean, David (February 21, 2009). "Why Marbury V. Madison Still Matters". Newsweek. Retrieved October 31, 2009.
More than 200 years after the high court ruled, the decision in that landmark case continues to resonate.
- "The Constitution in Law: Its Phases Construed by the Federal Supreme Court" (PDF). The New York Times. February 27, 1893. Retrieved October 31, 2009.
The decision … in Martin vs. Hunter's Lessee is the authority on which lawyers and Judges have rested the doctrine that where there is in question, in the highest court of a State, and decided adversely to the validity of a State statute... such claim is reviewable by the Supreme Court ...
- Ginsburg, Ruth Bader; Stevens, John P.; Souter, David; Breyer, Stephen (December 13, 2000). "Dissenting opinions in Bush v. Gore". USA Today. Archived from the original on May 25, 2010. Retrieved December 8, 2019.
Rarely has this Court rejected outright an interpretation of state law by a state high court … The Virginia court refused to obey this Court's Fairfax's Devisee mandate to enter judgment for the British subject's successor in interest. That refusal led to the Court's pathmarking decision in Martin v. Hunter's Lessee, 1 Wheat. 304 (1816).
- "Decisions of the Supreme Court – Historic Decrees Issued in One Hundred an Eleven Years" (PDF). The New York Times. February 3, 1901. Retrieved October 31, 2009.
Very important also was the decision in Martin vs. Hunter's lessee, in which the court asserted its authority to overrule, within certain limits, the decisions of the highest State courts.
- "The Supreme Quiz". The Washington Post. October 2, 2000. Archived from the original on May 30, 2012. Retrieved October 31, 2009.
According to the Oxford Companion to the Supreme Court of the United States, Marshall's most important innovation was to persuade the other justices to stop seriatim opinions—each issuing one—so that the court could speak in a single voice. Since the mid-1940s, however, there's been a significant increase in individual "concurring" and "dissenting" opinions.
- Slater, Dan (April 18, 2008). "Justice Stevens on the Death Penalty: A Promise of Fairness Unfulfilled". The Wall Street Journal. Retrieved October 31, 2009.
The first Chief Justice, John Marshall set out to do away with seriatim opinions–a practice originating in England in which each appellate judge writes an opinion in ruling on a single case. (You may have read old tort cases in law school with such opinions). Marshall sought to do away with this practice to help build the Court into a coequal branch.
- Suddath, Claire (December 19, 2008). "A Brief History of Impeachment". Time. Retrieved October 31, 2009.
Congress tried the process again in 1804, when it voted to impeach Supreme Court Justice Samuel Chase on charges of bad conduct. As a judge, Chase was overzealous and notoriously unfair … But Chase never committed a crime—he was just incredibly bad at his job. The Senate acquitted him on every count.
- Greenhouse, Linda (April 10, 1996). "Rehnquist Joins Fray on Rulings, Defending Judicial Independence". The New York Times. Retrieved October 31, 2009.
the 1805 Senate trial of Justice Samuel Chase, who had been impeached by the House of Representatives … This decision by the Senate was enormously important in securing the kind of judicial independence contemplated by Article III" of the Constitution, Chief Justice Rehnquist said
- Edward Keynes; Randall K. Miller (1989). "The Court vs. Congress: Prayer, Busing, and Abortion". Duke University Press. ISBN 0822309688. Retrieved October 31, 2009.
(page 115)... Grier maintained that Congress has plenary power to limit the federal courts' jurisdiction.
- Ifill, Sherrilyn A. (May 27, 2009). "Sotomayor's Great Legal Mind Long Ago Defeated Race, Gender Nonsense". U.S. News & World Report. Retrieved October 31, 2009.
But his decision in Dred Scott v. Sandford doomed thousands of black slaves and freedmen to a stateless existence within the United States until the passage of the 14th Amendment. Justice Taney's coldly self-fulfilling statement in Dred Scott, that blacks had "no rights which the white man [was] bound to respect," has ensured his place in history—not as a brilliant jurist, but as among the most insensitive
- Irons, Peter (2006). A People's History of the Supreme Court: The Men and Women Whose Cases and Decisions Have Shaped Our Constitution. United States: Penguin Books. pp. 176–177. ISBN 978-0-14-303738-5.
The rhetorical battle that followed the Dred Scott decision, as we know, later erupted into the gunfire and bloodshed of the Civil War (p. 176)... his opinion (Taney's) touched off an explosive reaction on both sides of the slavery issue... (p. 177)
- "Liberty of Contract?". Exploring Constitutional Conflicts. October 31, 2009. Archived from the original on November 22, 2009. Retrieved October 31, 2009.
The term "substantive due process" is often used to describe the approach first used in Lochner—the finding of liberties not explicitly protected by the text of the Constitution to be impliedly protected by the liberty clause of the Fourteenth Amendment. In the 1960s, long after the Court repudiated its Lochner line of cases, substantive due process became the basis for protecting personal rights such as the right of privacy, the right to maintain intimate family relationships.
- "Adair v. United States 208 U.S. 161". Cornell University Law School. 1908. Retrieved October 31, 2009.
No. 293 Argued: October 29, 30, 1907 – Decided: January 27, 1908
- Bodenhamer, David J.; James W. Ely (1993). The Bill of Rights in modern America. Bloomington, Indiana: Indiana University Press. p. 245. ISBN 978-0-253-35159-3.
… of what eventually became the 'incorporation doctrine,' by which various federal Bill of Rights guarantees were held to be implicit in the Fourteenth Amendment due process or equal protection.
- White, Edward Douglass. "Opinion for the Court, Arver v. U.S. 245 U.S. 366".
Finally, as we are unable to conceive upon what theory the exaction by government from the citizen of the performance of his supreme and noble duty of contributing to the defense of the rights and honor of the nation, as the result of a war declared by the great representative body of the people, can be said to be the imposition of involuntary servitude in violation of the prohibitions of the Thirteenth Amendment, we are constrained to the conclusion that the contention to that effect is refuted by its mere statement.
- Siegan, Bernard H. (1987). The Supreme Court's Constitution. Transaction Publishers. p. 146. ISBN 978-0-88738-671-8. Retrieved October 31, 2009.
In the 1923 case of Adkins v. Children's Hospital, the court invalidated a classification based on gender as inconsistent with the substantive due process requirements of the fifth amendment. At issue was congressional legislation providing for the fixing of minimum wages for women and minors in the District of Columbia. (p. 146)
- Biskupic, Joan (March 29, 2005). "Supreme Court gets makeover". USA Today. Retrieved October 31, 2009.
The building is getting its first renovation since its completion in 1935.
- Justice Roberts (September 21, 2005). "Responses of Judge John G. Roberts, Jr. to the Written Questions of Senator Joseph R. Biden" (PDF). The Washington Post. Retrieved October 31, 2009.
I agree that West Coast Hotel Co. v. Parrish correctly overruled Adkins. Lochner era cases—Adkins in particular—evince an expansive view of the judicial role inconsistent with what I believe to be the appropriately more limited vision of the Framers.
- Lipsky, Seth (October 22, 2009). "All the News That's Fit to Subsidize". The Wall Street Journal. Retrieved October 31, 2009.
He was a farmer in Ohio ... during the 1930s, when subsidies were brought in for farmers. With subsidies came restrictions on how much wheat one could grow—even, Filburn learned in a landmark Supreme Court case, Wickard v. Filburn (1942), wheat grown on his modest farm.
- Cohen, Adam (December 14, 2004). "What's New in the Legal World? A Growing Campaign to Undo the New Deal". The New York Times. Retrieved October 31, 2009.
Some prominent states' rights conservatives were asking the court to overturn Wickard v. Filburn, a landmark ruling that laid out an expansive view of Congress's power to legislate in the public interest. Supporters of states' rights have always blamed Wickard ... for paving the way for strong federal action...
- "Justice Black Dies at 85; Served on Court 34 Years". The New York Times. United Press International (UPI). September 25, 1971. Retrieved October 31, 2009.
Justice Black developed his controversial theory, first stated in a lengthy, scholarly dissent in 1947, that the due process clause applied the first eight amendments of the Bill of Rights to the states.
- "100 Documents that Shaped America Brown v. Board of Education (1954)". U.S. News & World Report. May 17, 1954. Archived from the original on November 6, 2009. Retrieved October 31, 2009.
On May 17, 1954, U.S. Supreme Court Justice Earl Warren delivered the unanimous ruling in the landmark civil rights case Brown v. Board of Education of Topeka, Kansas. State-sanctioned segregation of public schools was a violation of the 14th amendment and was therefore unconstitutional. This historic decision marked the end of the "separate but equal" … and served as a catalyst for the expanding civil rights movement...
- "Essay: In defense of privacy". Time. July 15, 1966. Retrieved October 31, 2009.
The biggest legal milestone in this field was last year's Supreme Court decision in Griswold v. Connecticut, which overthrew the state's law against the use of contraceptives as an invasion of marital privacy, and for the first time declared the "right of privacy" to be derived from the Constitution itself.
- Gibbs, Nancy (December 9, 1991). "America's Holy War". Time. Retrieved October 31, 2009.
In the landmark 1962 case Engel v. Vitale, the high court threw out a brief nondenominational prayer composed by state officials that was recommended for use in New York State schools. "It is no part of the business of government," ruled the court, "to compose official prayers for any group of the American people to recite."
- Mattox, William R., Jr; Trinko, Katrina (August 17, 2009). "Teach the Bible? Of course". USA Today. Archived from the original on August 20, 2009. Retrieved October 31, 2009.
Public schools need not proselytize—indeed, must not—in teaching students about the Good Book … In Abington School District v. Schempp, decided in 1963, the Supreme Court stated that "study of the Bible or of religion, when presented objectively as part of a secular program of education," was permissible under the First Amendment.
- "The Law: The Retroactivity Riddle". Time. June 18, 1965. Retrieved October 31, 2009.
Last week, in a 7 to 2 decision, the court refused for the first time to give retroactive effect to a great Bill of Rights decision—Mapp v. Ohio (1961).
- "The Supreme Court: Now Comes the Sixth Amendment". Time. April 16, 1965. Retrieved October 31, 2009.
Sixth Amendment's right to counsel (Gideon v. Wainwright in 1963). … the court said flatly in 1904: 'The Sixth Amendment does not apply to proceedings in state criminal courts." But in the light of Gideon … ruled Black, statements 'generally declaring that the Sixth Amendment does not apply to states can no longer be regarded as law.'
- "Guilt and Mr. Meese". The New York Times. January 31, 1987. Retrieved October 31, 2009.
1966 Miranda v. Arizona decision. That's the famous decision that made confessions inadmissible as evidence unless an accused person has been warned by police of the right to silence and to a lawyer, and waived it.
- Graglia, Lino A. (October 2008). "The Antitrust Revolution" (PDF). Engage. 9 (3). Archived from the original (PDF) on June 21, 2017. Retrieved February 6, 2016.
- Earl M. Maltz, The Coming of the Nixon Court: The 1972 Term and the Transformation of Constitutional Law (University Press of Kansas; 2016)
- O'Connor, Karen (January 22, 2009). "Roe v. Wade: On Anniversary, Abortion Is out of the Spotlight". U.S. News & World Report. Retrieved October 31, 2009.
The shocker, however, came in 1973, when the Court, by a vote of 7 to 2, relied on Griswold's basic underpinnings to rule that a Texas law prohibiting abortions in most situations was unconstitutional, invalidating the laws of most states. Relying on a woman's right to privacy...
- "Bakke Wins, Quotas Lose". Time. July 10, 1978. Retrieved October 31, 2009.
Split almost exactly down the middle, the Supreme Court last week offered a Solomonic compromise. It said that rigid quotas based solely on race were forbidden, but it also said that race might legitimately be an element in judging students for admission to universities. It thus approved the principle of 'affirmative action'…
- "Time to Rethink Buckley v. Valeo". The New York Times. November 12, 1998. Retrieved October 31, 2009.
...Buckley v. Valeo. The nation's political system has suffered ever since from that decision, which held that mandatory limits on campaign spending unconstitutionally limit free speech. The decision did much to promote the explosive growth of campaign contributions from special interests and to enhance the advantage incumbents enjoy over underfunded challengers.
- Staff writer (June 29, 1972). "Supreme Court Justice Rehnquist's Key Decisions". The Washington Post. Retrieved October 31, 2009.
Furman v. Georgia … Rehnquist dissents from the Supreme Court conclusion that many state laws on capital punishment are capricious and arbitrary and therefore unconstitutional.
- History of the Court, in Hall, Ely Jr., Grossman, and Wiecek (eds) The Oxford Companion to the Supreme Court of the United States. Oxford University Press, 1992, ISBN 0-19-505835-6
- "A Supreme Revelation". The Wall Street Journal. April 19, 2008. Retrieved October 31, 2009.
Thirty-two years ago, Justice John Paul Stevens sided with the majority in a famous "never mind" ruling by the Supreme Court. Gregg v. Georgia, in 1976, overturned Furman v. Georgia, which had declared the death penalty unconstitutional only four years earlier.
- Greenhouse, Linda (January 8, 2009). "The Chief Justice on the Spot". The New York Times. Retrieved October 31, 2009.
The federalism issue at the core of the new case grows out of a series of cases from 1997 to 2003 in which the Rehnquist court applied a new level of scrutiny to Congressional action enforcing the guarantees of the Reconstruction amendments.
- Greenhouse, Linda (September 4, 2005). "William H. Rehnquist, Chief Justice of Supreme Court, Is Dead at 80". The New York Times. Retrieved October 31, 2009.
United States v. Lopez in 1995 raised the stakes in the debate over federal authority even higher. The decision declared unconstitutional a Federal law, the Gun Free School Zones Act of 1990, that made it a federal crime to carry a gun within 1,000 feet of a school.
- Greenhouse, Linda (June 12, 2005). "The Rehnquist Court and Its Imperiled States' Rights Legacy". The New York Times. Retrieved October 31, 2009.
Intrastate activity that was not essentially economic was beyond Congress's reach under the Commerce Clause, Chief Justice Rehnquist wrote for the 5-to-4 majority in United States v. Morrison.
- Greenhouse, Linda (March 22, 2005). "Inmates Who Follow Satanism and Wicca Find Unlikely Ally". The New York Times. Retrieved October 31, 2009.
His (Rehnquist's) reference was to a landmark 1997 decision, City of Boerne v. Flores, in which the court ruled that the predecessor to the current law, the Religious Freedom Restoration Act, exceeded Congress's authority and was unconstitutional as applied to the states.
- Amar, Vikram David (July 27, 2005). "Casing John Roberts". The New York Times. Retrieved October 31, 2009.
Seminole Tribe v. Florida (1996) In this seemingly technical 11th Amendment dispute about whether states can be sued in federal courts, Justice O'Connor joined four others to override Congress's will and protect state prerogatives, even though the text of the Constitution contradicts this result.
- Greenhouse, Linda (April 1, 1999). "Justices Seem Ready to Tilt More Toward States in Federalism". The New York Times. Retrieved October 31, 2009.
The argument in this case, Alden v. Maine, No. 98-436, proceeded on several levels simultaneously. On the surface … On a deeper level, the argument was a continuation of the Court's struggle over an even more basic issue: the Government's substantive authority over the states.
- Lindenberger, Michael A. "The Court's Gay Rights Legacy". Time. Retrieved October 31, 2009.
The decision in the Lawrence v. Texas case overturned convictions against two Houston men, whom police had arrested after busting into their home and finding them engaged in sex. And for the first time in their lives, thousands of gay men and women who lived in states where sodomy had been illegal were free to be gay without being criminals.
- Justice Sotomayor (July 16, 2009). "Retire the 'Ginsburg rule' – The 'Roe' recital". USA Today. Archived from the original on August 22, 2009. Retrieved October 31, 2009.
The court's decision in Planned Parenthood v. Casey reaffirmed the court holding of Roe. That is the precedent of the court and settled, in terms of the holding of the court.
- Kamiya, Gary (July 4, 2001). "Against the Law". Salon. Retrieved November 21, 2012.
...the remedy was far more harmful than the problem. By stopping the recount, the high court clearly denied many thousands of voters who cast legal votes, as defined by established Florida law, their constitutional right to have their votes counted. … It cannot be a legitimate use of law to disenfranchise legal voters when recourse is available. …
- Krauthammer, Charles (December 18, 2000). "The Winner in Bush v. Gore?". Time. Retrieved October 31, 2009.
Re-enter the Rehnquist court. Amid the chaos, somebody had to play Daddy. … the Supreme Court eschewed subtlety this time and bluntly stopped the Florida Supreme Court in its tracks—and stayed its willfulness. By, mind you, …
- Babington, Charles; Baker, Peter (September 30, 2005). "Roberts Confirmed as 17th Chief Justice". The Washington Post. Retrieved November 1, 2009.
John Glover Roberts Jr. was sworn in yesterday as the 17th chief justice of the United States, enabling President Bush to put his stamp on the Supreme Court for decades to come, even as he prepares to name a second nominee to the nine-member court.
- Greenhouse, Linda (July 1, 2007). "In Steps Big and Small, Supreme Court Moved Right". The New York Times. Retrieved November 1, 2009.
It was the Supreme Court that conservatives had long yearned for and that liberals feared … This was a more conservative court, sometimes muscularly so, sometimes more tentatively, its majority sometimes differing on methodology but agreeing on the outcome in cases big and small.
- Liptak, Adam (July 24, 2010). "Court Under Roberts Is Most Conservative in Decades". The New York Times. Retrieved February 1, 2019.
When Chief Justice John G. Roberts Jr. and his colleagues on the Supreme Court left for their summer break at the end of June, they marked a milestone: the Roberts court had just completed its fifth term. In those five years, the court not only moved to the right but also became the most conservative one in living memory, based on an analysis of four sets of political science data.
- Caplan, Lincoln (October 10, 2016). "A new era for the Supreme Court: the transformative potential of a shift in even one seat". The American Prospect. Retrieved February 1, 2019.
The Court has gotten increasingly more conservative with each of the Republican-appointed chief justices—Warren E. Burger (1969–1986), William H. Rehnquist (1986–2005), and John G. Roberts Jr. (2005–present). All told, Republican presidents have appointed 12 of the 16 most recent justices, including the chiefs. During Roberts's first decade as chief, the Court was the most conservative in more than a half-century and likely the most conservative since the 1930s.
- Savage, Charlie (July 14, 2009). "Respecting Precedent, or Settled Law, Unless It's Not Settled". The New York Times. Retrieved November 1, 2009.
Gonzales v. Carhart—in which the Supreme Court narrowly upheld a federal ban on the late-term abortion procedure opponents call "partial birth abortion"—to be settled law.
- "A Bad Day for Democracy". The Christian Science Monitor. January 22, 2010. Retrieved January 22, 2010.
- Barnes, Robert (October 1, 2009). "Justices to Decide if State Gun Laws Violate Rights". The Washington Post. Retrieved November 1, 2009.
The landmark 2008 decision to strike down the District of Columbia's ban on handgun possession was the first time the court had said the amendment grants an individual right to own a gun for self-defense. But the 5 to 4 opinion in District of Columbia v. Heller...
- Greenhouse, Linda (April 18, 2008). "Justice Stevens Renounces Capital Punishment". The New York Times. Retrieved November 1, 2009.
His renunciation of capital punishment in the lethal injection case, Baze v. Rees, was likewise low key and undramatic.
- Greenhouse, Linda (June 26, 2008). "Supreme Court Rejects Death Penalty for Child Rape". The New York Times. Retrieved November 1, 2009.
The death penalty is unconstitutional as a punishment for the rape of a child, a sharply divided Supreme Court ruled Wednesday … The 5-to-4 decision overturned death penalty laws in Louisiana and five other states.
- Federal Judiciary Act (1789), National Archives and Records Administration, retrieved September 12, 2017
- 16 Stat. 44
- Mintz, S. (2007). "The New Deal in Decline". Digital History. University of Houston. Archived from the original on May 5, 2008. Retrieved October 27, 2009.
- Hodak, George (2007). "February 5, 1937: FDR Unveils Court Packing Plan". ABAjournal.com. American Bar Association. Retrieved January 29, 2009.
- "Justices, Number of", in Hall, Ely Jr., Grossman, and Wiecek (editors), The Oxford Companion to the Supreme Court of the United States. Oxford University Press 1992, ISBN 0-19-505835-6
- McGinnis, John O. "Essays on Article II: Appointments Clause". The Heritage Guide To The Constitution. Heritage Foundation. Retrieved June 19, 2019.
- "United States Senate. "Nominations"".
- Brunner, Jim (March 24, 2017). "Sen. Patty Murray will oppose Neil Gorsuch for Supreme Court". The Seattle Times. Retrieved April 9, 2017.
In a statement Friday morning, Murray cited Republicans' refusal to confirm or even seriously consider President Obama's nomination of Judge Merrick Garland, a similarly well-qualified jurist – and went on to lambaste President Trump's conduct in his first few months in office. [...] And Murray added she's "deeply troubled" by Gorsuch's "extreme conservative perspective on women's health", citing his "inability" to state a clear position on Roe v. Wade, the landmark abortion-legalization decision, and his comments about the "Hobby Lobby" decision allowing employers to refuse to provide birth-control coverage.
- Flegenheimer, Matt (April 6, 2017). "Senate Republicans Deploy 'Nuclear Option' to Clear Path for Gorsuch". The New York Times.
After Democrats held together Thursday morning and filibustered President Trump's nominee, Republicans voted to lower the threshold for advancing Supreme Court nominations from 60 votes to a simple majority.
- "U.S. Senate: Supreme Court Nominations, Present-1789". United States Senate. Retrieved April 8, 2017.
- See 5 U.S.C. § 2902.
- 28 U.S.C. § 4. If two justices are commissioned on the same date, then the oldest one has precedence.
- Balkin, Jack M. "The passionate intensity of the confirmation process". Jurist. Archived from the original on December 18, 2007. Retrieved February 13, 2008.
- "The Stakes of the 2016 Election Just Got Much, Much Higher". The Huffington Post. Retrieved February 14, 2016.
- McMillion, Barry J. (October 19, 2015). "Supreme Court Appointment Process: Senate Debate and Confirmation Vote" (PDF). Congressional Research Service. Retrieved February 14, 2016.
- Hall, Kermit L., ed. (1992). "Appendix Two". Oxford Companion to the Supreme Court of the United States. Oxford University Press. pp. 965–971. ISBN 978-0-19-505835-2.
- See, e.g., Evans v. Stephens, 387 F.3d 1220 (11th Cir. 2004), which concerned the recess appointment of William Pryor. Concurring in denial of certiorari, Justice Stevens observed that the case involved "the first such appointment of an Article III judge in nearly a half century" 544 U.S. 942 (2005) (Stevens, J., concurring in denial of cert) (internal quotation marks deleted).
- Fisher, Louis (September 5, 2001). "Recess Appointments of Federal Judges" (PDF). CRS Report for Congress. Congressional Research Service. RL31112: 16. Retrieved August 6, 2010.
Resolved, That it is the sense of the Senate that the making of recess appointments to the Supreme Court of the United States may not be wholly consistent with the best interests of the Supreme Court, the nominee who may be involved, the litigants before the Court, nor indeed the people of the United States, and that such appointments, therefore, should not be made except under unusual circumstances and for the purpose of preventing or ending a demonstrable breakdown in the administration of the Court's business.
- The resolution passed by a vote of 48 to 37, mainly along party lines; Democrats supported the resolution 48–4, and Republicans opposed it 33–0.
- "National Relations Board v. Noel Canning et al" (PDF). pp. 34, 35. The Court continued, "In our view, however, the pro forma sessions count as sessions, not as periods of recess. We hold that, for purposes of the Recess Appointments Clause, the Senate is in session when it says it is, provided that, under its own rules, it retains the capacity to transact Senate business. The Senate met that standard here." Later, the opinion states: "For these reasons, we conclude that we must give great weight to the Senate's own determination of when it is and when it is not in session. But our deference to the Senate cannot be absolute. When the Senate is without the capacity to act, under its own rules, it is not in session even if it so declares."
- "Obama Won't Appoint Scalia Replacement While Senate Is Out This Week". NPR. Retrieved January 25, 2017.
- "How the Federal Courts Are Organized: Can a federal judge be fired?". Federal Judicial Center. fjc.gov. Archived from the original on September 15, 2012. Retrieved March 18, 2012.
- "History of the Federal Judiciary: Impeachments of Federal Judges". Federal Judicial Center fjc.gov. Retrieved March 18, 2012.
- Appel, Jacob M. (August 22, 2009). "Anticipating the Incapacitated Justice". The Huffington Post. Retrieved August 23, 2009.
- "Press Release Regarding The Honorable Amy Coney Barrett Oath Ceremony" (Press release). Washington, D.C.: Press Office of the Supreme Court of the United States. October 26, 2020.
- "Current Members". www.supremecourt.gov. Washington, D.C.: Supreme Court of the United States. Retrieved October 21, 2018.
- Walther, Matthew (April 21, 2014). "Sam Alito: A Civil Man". The American Spectator. Retrieved June 15, 2017 – via The ANNOTICO Reports.
- DeMarco, Megan (February 14, 2008). "Growing up Italian in Jersey: Alito reflects on ethnic heritage". The Times. Trenton, New Jersey. Archived from the original on July 30, 2017. Retrieved June 15, 2017.
- Neil Gorsuch was raised Catholic, but attends an Episcopalian church. It is unclear if he considers himself a Catholic or a Protestant. Burke, Daniel (March 22, 2017). "What is Neil Gorsuch's religion? It's complicated". CNN.
Springer said she doesn't know whether Gorsuch considers himself a Catholic or an Episcopalian. "I have no evidence that Judge Gorsuch considers himself an Episcopalian, and likewise no evidence that he does not." Gorsuch's younger brother, J.J., said he too has "no idea how he would fill out a form. He was raised in the Catholic Church and confirmed in the Catholic Church as an adolescent, but he has been attending Episcopal services for the past 15 or so years."
- "Religion of the Supreme Court". adherents.com. January 31, 2006. Retrieved July 9, 2010.
- Segal, Jeffrey A.; Spaeth, Harold J. (2002). The Supreme Court and the Attitudinal Model Revisited. Cambridge Univ. Press. p. 183. ISBN 978-0-521-78971-4.
- Schumacher, Alvin. "Roger B. Taney". Encyclopædia Britannica. Retrieved May 3, 2017.
He was the first Roman Catholic to serve on the Supreme Court.
- "Frequently Asked Questions (FAQ)". Supreme Court of the United States. Archived from the original on March 20, 2017. Retrieved May 3, 2017.
- Baker, Peter (August 7, 2010). "Kagan Is Sworn in as the Fourth Woman, and 112th Justice, on the Supreme Court". The New York Times. Retrieved August 8, 2010.
- Mark Sherman, Is Supreme Court in need of regional diversity? (May 1, 2010).
- Shane, Scott; Eder, Steve; Ruiz, Rebecca R.; Liptak, Adam; Savage, Charlie; Protess, Ben (July 15, 2018). "Influential Judge, Loyal Friend, Conservative Warrior – and D.C. Insider". The New York Times. p. A1. Retrieved July 16, 2018.
- O'Brien, David M. (2003). Storm Center: The Supreme Court in American Politics (6th ed.). W.W. Norton & Company. p. 46. ISBN 978-0-393-93218-8.
- de Vogue, Ariane (October 22, 2016). "Clarence Thomas' Supreme Court legacy". CNN. Retrieved May 3, 2017.
- "The Four Justices". Smithsonian Institution. October 21, 2015. Archived from the original on August 20, 2016. Retrieved May 3, 2017.
- David N. Atkinson, Leaving the Bench (University Press of Kansas 1999) ISBN 0-7006-0946-6
- Greenhouse, Linda (September 9, 2010). "An Invisible Chief Justice". The New York Times. Retrieved September 9, 2010.
Had [O'Connor] anticipated that the chief justice would not serve out the next Supreme Court term, she told me after his death, she would have delayed her own retirement for a year rather than burden the court with two simultaneous vacancies. […] Her reason for leaving was that her husband, suffering from Alzheimer's disease, needed her care at home.
- Ward, Artemus (2003). Deciding to Leave: The Politics of Retirement from the United States Supreme Court. SUNY Press. p. 358. ISBN 978-0-7914-5651-4.
One byproduct of the increased [retirement benefit] provisions [in 1954], however has been a dramatic rise in the number of justices engaging in succession politics by trying to time their departures to coincide with a compatible president. The most recent departures have been partisan, some more blatantly than others, and have bolstered arguments to reform the process. A second byproduct has been an increase in justices staying on the Court past their ability to adequately contribute. p. 9
- Stolzenberg, Ross M.; Lindgren, James (May 2010). "Retirement and Death in Office of U.S. Supreme Court Justices". Demography. 47 (2): 269–298. doi:10.1353/dem.0.0100. PMC 3000028. PMID 20608097.
If the incumbent president is of the same party as the president who nominated the justice to the Court, and if the incumbent president is in the first two years of a four-year presidential term, then the justice has odds of resignation that are about 2.6 times higher than when these two conditions are not met.
- See for example Sandra Day O'Connor:How the first woman on the Supreme Court became its most influential justice, by Joan Biskupic, Harper Collins, 2005, p. 105. Also Rookie on the Bench: The Role of the Junior Justice by Clare Cushman, Journal of Supreme Court History 32 no. 3 (2008), pp. 282–296.
- "Breyer Just Missed Record as Junior Justice". Retrieved January 11, 2008.
- "Judicial Compensation". United States Courts. Retrieved May 15, 2017.
- Hasen, Richard L. (May 11, 2019). "Polarization and the Judiciary". Annual Review of Political Science. 22 (1): 261–276. doi:10.1146/annurev-polisci-051317-125141. ISSN 1094-2939.
- Harris, Allison P.; Sen, Maya (May 11, 2019). "Bias and Judging". Annual Review of Political Science. 22 (1): 241–259. doi:10.1146/annurev-polisci-051617-090650. ISSN 1094-2939.
- Mears, Bill (March 20, 2017). "Take a look through Neil Gorsuch's judicial record". Fox News.
A Fox News analysis of that record – including some 3,000 rulings he has been involved with – reveals a solid, predictable conservative philosophy, something President Trump surely was attuned to when he nominated him to fill the open ninth seat. The record in many ways mirrors the late Justice Antonin Scalia's approach to constitutional and statutory interpretation.
- Cope, Kevin; Fischman, Joshua (September 5, 2018). "It's hard to find a federal judge more conservative than Brett Kavanaugh". The Washington Post.
Kavanaugh served a dozen years on the D.C. Circuit Court of Appeals, a court viewed as first among equals of the 12 federal appellate courts. Probing nearly 200 of Kavanaugh's votes and over 3000 votes by his judicial colleagues, our analysis shows that his judicial record is significantly more conservative than that of almost every other judge on the D.C. Circuit. That doesn't mean that he'd be the most conservative justice on the Supreme Court, but it strongly suggests that he is no judicial moderate.
- Chamberlain, Samuel (July 9, 2018). "Trump nominates Brett Kavanaugh to the Supreme Court". Fox News.
Trump may have been swayed in part because of Kavanaugh's record of being a reliable conservative on the court – and reining in dozens of administrative decisions of the Obama White House. There are some question marks for conservatives, particularly an ObamaCare ruling years ago.
- Thomson-Devaux, Amelia; Bronner, Laura; Wiederkehr, Anna (October 14, 2020). "How conservative is Amy Coney Barrett?". FiveThirtyEight. Retrieved October 27, 2020.
We can look to her track record on the 7th U.S. Circuit Court of Appeals, though, for clues. Barrett has served on that court for almost three years now, and two different analyses of her rulings point to the same conclusion: Barrett is one of the more conservative judges on the circuit — and maybe even the most conservative.
- Betz, Bradford (March 2, 2019). "Chief Justice Roberts' recent votes raise doubts about 'conservative revolution' on Supreme Court". Fox News.
Erwin Chemerinsky, a law professor at the University of California at Berkeley, told Bloomberg that Roberts' recent voting record may indicate that he is taking his role as the median justice "very seriously" and that the recent period was "perhaps the beginning of his being the swing justice."
- Roeder, Oliver (October 6, 2018). "How Kavanaugh will change the Supreme Court". FiveThirtyEight.
Based on what we know about measuring the ideology of justices and judges, the Supreme Court will soon take a hard and quick turn to the right. It's a new path that is likely to last for years. Chief Justice John Roberts, a George W. Bush appointee, will almost certainly become the new median justice, defining the court's new ideological center.
- Goldstein, Tom (June 30, 2010). "Everything you read about the Supreme Court is wrong (except here, maybe)". SCOTUSblog. Retrieved July 7, 2010.
- Among the examples mentioned by Goldstein for the 2009 term were:
- Dolan v. United States, 560 U.S. 605 (2010), which interpreted judges' prerogatives broadly, typically a "conservative" result. The majority consisted of the five junior Justices: Thomas, Ginsburg, Breyer, Alito, and Sotomayor.
- Magwood v. Patterson, 561 U.S. 320 (2010), which expanded habeas corpus petitions, a "liberal" result, in an opinion by Thomas, joined by Stevens, Scalia, Breyer, and Sotomayor.
- Shady Grove Orthopedic Associates, P. A. v. Allstate Ins. Co., 559 U.S. 393 (2010), which yielded a pro-plaintiff result in an opinion by Scalia joined by Roberts, Stevens, Thomas, and Sotomayor.
- "October 2011 Term, Five to Four Decisions" (PDF). SCOTUSblog. June 30, 2012. Retrieved July 2, 2012.
- "Final October 2010 Stat Pack available". SCOTUSblog. June 27, 2011. Retrieved June 28, 2011.
- "End of Term statistical analysis – October 2010" (PDF). SCOTUSblog. July 1, 2011. Retrieved July 2, 2011.
- "Cases by Vote Split" (PDF). SCOTUSblog. June 27, 2011. Retrieved June 28, 2011.
- "Justice agreement – Highs and Lows" (PDF). SCOTUSblog. June 27, 2011. Retrieved June 28, 2011.
- Bhatia, Kedar (June 29, 2018). "Final October Term 2017 Stat Pack and key takeaways". SCOTUSBlog. Retrieved June 29, 2018.
- Bhatia, Kedar S. (June 29, 2018). "Stat Pack for October Term 2017" (PDF). SCOTUSBlog. pp. 17–18. Retrieved June 29, 2018.
- Feldman, Adam (June 28, 2019). "Final Stat Pack for October Term 2018". SCOTUSBlog. Retrieved June 30, 2019.
- Feldman, Adam (June 28, 2019). "Stat Pack for October Term 2018" (PDF). pp. 5, 19, 23. Retrieved June 30, 2019.
- "Plan Your Trip (quote:) "In mid-May, after the oral argument portion of the Term has concluded, the Court takes the Bench Mondays at 10AM for the release of orders and opinions."". US Senator John McCain. October 24, 2009. Retrieved October 24, 2009.
- "Visiting the Court". Supreme Court of the United States. March 18, 2010. Retrieved March 19, 2010.
- "Visiting-Capitol-Hill". docstoc. October 24, 2009. Archived from the original on August 21, 2016. Retrieved October 24, 2009.
- "How The Court Works". The Supreme Court Historical Society. October 24, 2009. Retrieved January 31, 2014.
- Liptak, Adam (March 21, 2016). "Supreme Court Declines to Hear Challenge to Colorado's Marijuana Laws". The New York Times. Retrieved April 27, 2017.
- United States v. Shipp, 203 U.S. 563 (Supreme Court of the United States 1906).
- Curriden, Mark (June 2, 2009). "A Supreme Case of Contempt". ABA Journal. American Bar Association. Retrieved April 27, 2017.
On May 28, [U.S. Attorney General William] Moody did something unprecedented, then and now. He filed a petition charging Sheriff Shipp, six deputies and 19 leaders of the lynch mob with contempt of the Supreme Court. The justices unanimously approved the petition and agreed to retain original jurisdiction in the matter. ... May 24, 1909, stands out in the annals of the U.S. Supreme Court. On that day, the court announced a verdict after holding the first and only criminal trial in its history.
- Hindley, Meredith (November 2014). "Chattanooga versus the Supreme Court: The Strange Case of Ed Johnson". Humanities. 35 (6). Retrieved April 27, 2017.
United States v. Shipp stands out in the history of the Supreme Court as an anomaly. It remains the only time the Court has conducted a criminal trial.
- Linder, Douglas. "United States v. Shipp (U.S. Supreme Court, 1909)". Famous Trials. Retrieved April 27, 2017.
- 28 U.S.C. § 1254
- 28 U.S.C. § 1259
- 28 U.S.C. § 1258
- 28 U.S.C. § 1260
- 28 U.S.C. § 1257
- Brannock, Steven; Weinzierl, Sarah (2003). "Confronting a PCA: Finding a Path Around a Brick Wall" (PDF). Stetson Law Review. XXXII: 368–369, 387–390. Retrieved April 27, 2017.
- 🖉"Teague v. Lane, 489 U.S. 288 (1989)". Justia Law.
- Gutman, Jeffrey. "Federal Practice Manual for Legal Aid Attorneys: 3.3 Mootness". Federal Practice Manual for Legal Aid Attorneys. Sargent Shriver National Center on Poverty Law. Retrieved April 27, 2017.
- Glick, Joshua (April 2003). "On the road: The Supreme Court and the history of circuit riding" (PDF). Cardozo Law Review. 24. Retrieved September 24, 2018.
Gradually, however, circuit riding lost support. The Court's increasing business in the nation's capital following the Civil War made the circuit riding seem anachronistic and impractical and a slow shift away from the practice began. The Judiciary Act of 1869 established a separate circuit court judiciary. The justices retained nominal circuit riding duties until 1891 when the Circuit Court of Appeals Act was passed. With the Judicial Code of 1911, Congress officially ended the practice. The struggle between the legislative and judicial branches over circuit riding was finally concluded.
- "Miscellaneous Order (11/20/2020)" (PDF). Supreme Court of the United States. Retrieved November 20, 2020.
- 28 U.S.C. § 1254
- 28 U.S.C. § 1257; see also Adequate and independent state grounds
- James, Robert A. (1998). "Instructions in Supreme Court Jury Trials" (PDF). The Green Bag. 2d. 1 (4): 378. Retrieved February 5, 2013.
- 28 U.S.C. § 1872 See Georgia v. Brailsford, 3 U.S. 1 (1794), in which the Court conducted a jury trial.
- Shelfer, Lochlan F. (October 2013). "Special Juries in the Supreme Court". Yale Law Journal. 123 (1): 208–252. Archived from the original on June 30, 2017. Retrieved October 2, 2018.
- Mauro, Tony (October 21, 2005). "Roberts Dips Toe into Cert Pool". Legal Times. Retrieved October 31, 2007.
- Mauro, Tony (July 4, 2006). "Justice Alito Joins Cert Pool Party". Legal Times. Retrieved October 31, 2007.
- Liptak, Adam (September 25, 2008). "A Second Justice Opts Out of a Longtime Custom: The 'Cert. Pool'". The New York Times. Retrieved October 17, 2008.
- Liptak, Adam (May 1, 2017). "Gorsuch, in sign of independence, is out of Supreme Court's clerical pool". The New York Times. Retrieved May 2, 2017.
- For example, the arguments on the constitutionality of the Patient Protection and Affordable Care Act took place over three days and lasted over six hours, covering several issues; the arguments for Bush v. Gore were 90 minutes long; oral arguments in United States v. Nixon lasted three hours; and the Pentagon papers case was given a two-hour argument. Christy, Andrew (November 15, 2011). "'Obamacare' will rank among the longest Supreme Court arguments ever". NPR. Retrieved March 31, 2011. The longest modern-day oral arguments were in the case of California v. Arizona, in which oral arguments lasted over sixteen hours over four days in 1962.Bobic, Igor (March 26, 2012). "Oral arguments on health reform longest in 45 years". Talking Points Memo. Retrieved January 31, 2014.
- Glazer, Eric M.; Zachary, Michael (February 1997). "Joining the Bar of the U.S. Supreme Court". Volume LXXI, No. 2. Florida Bar Journal. p. 63. Retrieved February 3, 2014.
- Gresko, Jessica (March 24, 2013). "For lawyers, the Supreme Court bar is vanity trip". Florida Today. Melbourne, Florida. pp. 2A. Archived from the original on March 23, 2013.
- "How The Court Works; Library Support". The Supreme Court Historical Society. Retrieved February 3, 2014.
- See generally, Tushnet, Mark, ed. (2008) I Dissent: Great Opposing Opinions in Landmark Supreme Court Cases, Malaysia: Beacon Press, pp. 256, ISBN 978-0-8070-0036-6
- Kessler, Robert. "Why Aren't Cameras Allowed at the Supreme Court Again?". The Atlantic. Retrieved March 24, 2017.
- 28 U.S.C. § 1
- 28 U.S.C. § 2109
- Pepall, Lynne; Richards, Daniel L.; Norman, George (1999). Industrial Organization: Contemporary Theory and Practice. Cincinnati: South-Western College Publishing. pp. 11–12.
- "Bound Volumes". Supreme Court of the United States. Retrieved January 9, 2019.
- "Cases adjudged in the Supreme Court at October Term, 2012 – March 26 through June 13, 2013" (PDF). United States Reports. 569. 2018. Retrieved January 9, 2019.
- "Sliplists". Supreme Court of the United States. Retrieved January 1, 2019.
- "Supreme Court Research Guide". law.georgetown.edu. Georgetown Law Library. Retrieved August 22, 2012.
- "How to Cite Cases: U.S. Supreme Court Decisions". lib.guides.umd.edu. University of Maryland University Libraries. Retrieved August 22, 2012.
- Hall, Kermit L.; McGuire, Kevin T., eds. (2005). Institutions of American Democracy: The Judicial Branch. New York City: Oxford University Press. pp. 117–118. ISBN 978-0-19-530917-1.
- Mendelson, Wallace (1992). "Separation of Powers". In Hall, Kermit L. (ed.). The Oxford Companion to the Supreme Court of the United States. Oxford University Press. p. 775. ISBN 978-0-19-505835-2.
- The American Conflict by Horace Greeley (1873), p. 106; also in The Life of Andrew Jackson (2001) by Robert Vincent Remini
- Brokaw, Tom; Stern, Carl (July 8, 1974). "Supreme Court hears case of United States v. Nixon". NBC Universal Media LLC. Retrieved February 20, 2019.
But there is no guarantee that when the decision comes, it will end the matter. It may just set the stage for the next legal wrangle over compliance with the Court's decision.
- Vile, John R. (1992). "Court curbing". In Hall, Kermit L. (ed.). The Oxford Companion to the Supreme Court of the United States. Oxford University Press. p. 202. ISBN 978-0-19-505835-2.
- Peppers, Todd C. (2006). Courtiers of the Marble Palace: The Rise and Influence of the Supreme Court Law Clerk. Stanford University Press. pp. 195, 1, 20, 22, and 22–24 respectively. ISBN 978-0-8047-5382-1.
- Weiden, David; Ward, Artemus (2006). Sorcerers' Apprentices: 100 Years of Law Clerks at the United States Supreme Court. NYU Press. ISBN 978-0-8147-9404-3.
- Chace, James (2007). Acheson: The Secretary of State Who Created the American World. New York City: Simon & Schuster (published 1998). p. 44. ISBN 978-0-684-80843-7.
- List of law clerks of the Supreme Court of the United States
- Liptak, Adam (September 7, 2010). "Polarization of Supreme Court Is Reflected in Justices' Clerks". The New York Times. Retrieved September 7, 2010.
- William E. Nelson; Harvey Rishikof; I. Scott Messinger; Michael Jo (November 2009). "The Liberal Tradition of the Supreme Court Clerkship: Its Rise, Fall, and Reincarnation?" (PDF). Vanderbilt Law Review. p. 1749. Archived from the original (PDF) on July 27, 2010. Retrieved September 7, 2010.
- Liptak, Adam; Kopicki, Allison (June 7, 2012). "Approval Rating for Supreme Court Hits Just 44% in Poll". The New York Times. Retrieved June 28, 2019.
- See for example "Judicial activism" in The Oxford Companion to the Supreme Court of the United States, edited by Kermit Hall; article written by Gary McDowell
- Root, Damon W. (September 21, 2009). "Lochner and Liberty". The Wall Street Journal. Retrieved October 23, 2009.
- Bernstein, David. Only One Place of Redress: African Americans, Labor Regulations, and the Courts from Reconstruction to the New Deal, p. 100 (Duke University Press, 2001): "The Court also directly overturned Lochner by adding that it is no 'longer open to question that it is within the legislative power to fix maximum hours.'"
- Dorf, Michael and Morrison, Trevor. Constitutional Law, p. 18 (Oxford University Press, 2010).
- Patrick, John. The Supreme Court of the United States: A Student Companion, p. 362 (Oxford University Press, 2006).
- Steinfels, Peter (May 22, 2005). "'A Church That Can and Cannot Change': Dogma". The New York Times. Retrieved October 22, 2009.
- Savage, David G. (October 23, 2008). "Roe vs. Wade? Bush vs. Gore? What are the worst Supreme Court decisions?". Los Angeles Times. Archived from the original on October 23, 2008. Retrieved October 23, 2009.
a lack of judicial authority to enter an inherently political question that had previously been left to the states
- Lewis, Neil A. (September 19, 2002). "Judicial Nominee Says His Views Will Not Sway Him on the Bench". The New York Times. Retrieved October 22, 2009.
he has written scathingly of Roe v. Wade
- "Election Guide 2008: The Issues: Abortion". The New York Times. 2008. Retrieved October 22, 2009.
- Buchanan, Pat (July 6, 2005). "The judges war: an issue of power". Townhall.com. Retrieved October 23, 2009.
The Brown decision of 1954, desegregating the schools of 17 states and the District of Columbia, awakened the nation to the court's new claim to power.
- Sunstein, Carl R. (1991). "What Judge Bork Should Have Said". Connecticut Law Review. 23: 2 – via University of Chicago Law School - Chicago Unbound.
- Clymer, Adam (May 29, 1998). "Barry Goldwater, Conservative and Individualist, Dies at 89". The New York Times. Retrieved October 22, 2009.
- Stone, Geoffrey R. (March 26, 2012). "Citizens United and conservative judicial activism" (PDF). University of Illinois Law Review. 2012 (2): 485–500.
- Lincoln, Abraham (March 4, 1861). "First Inaugural Address". National Center. Archived from the original on October 9, 2009. Retrieved October 23, 2009.
At the same time, the candid citizen must confess that if the policy of the Government upon vital questions affecting the whole people is to be irrevocably fixed by decisions of the Supreme Court, the instant they are made in ordinary litigation between parties in personal actions the people will have ceased to be their own rulers, having to that extent practically resigned their Government into the hands of that eminent tribunal.
- Will, George F. (May 27, 2009). "Identity Justice: Obama's Conventional Choice". The Washington Post. Retrieved October 22, 2009.
Thurgood Marshall quote taken from the Stanford Law Review, summer 1992
- Irons, Peter. A People's History of the Supreme Court. London: Penguin, 1999. ISBN 0-670-87006-4
- Liptak, Adam (January 31, 2009). "To Nudge, Shift or Shove the Supreme Court Left". The New York Times. Retrieved October 23, 2009.
Every judge who's been appointed to the court since Lewis Powell...in 1971...has been more conservative than his or her predecessor
- Babington, Charles (April 5, 2005). "Senator Links Violence to 'Political' Decisions". The Washington Post. Retrieved October 22, 2009.
- Liptak, Adam (February 2, 2006). "A Court Remade in the Reagan Era's Image". The New York Times. Retrieved October 22, 2009.
- Savage, David G. (July 13, 2008). "Supreme Court finds history is a matter of opinions". Los Angeles Times. Retrieved October 22, 2009.
- Napolitano, Andrew P. (February 17, 2005). "No Defense". The New York Times. Retrieved October 23, 2009.
- Edsall, Thomas B.; Fletcher, Michael A. (September 5, 2005). "Again, Right Voices Concern About Gonzales". The Washington Post. Retrieved October 23, 2009.
- Lane, Charles (March 20, 2005). "Conservative's Book on Supreme Court Is a Bestseller". The Washington Post. Retrieved October 23, 2009.
- Mark I. Sutherland; Dave Meyer; William J. Federer; Alan Keyes; Ed Meese; Phyllis Schlafly; Howard Phillips; Alan E. Sears; Ben DuPre; Rev. Rick Scarborough; David C. Gibbs III; Mathew D. Staver; Don Feder; Herbert W. Titus (2005). Judicial Tyranny: The New Kings of America. St. Louis, Missouri: Amerisearch Inc. p. 242. ISBN 978-0-9753455-6-6.
- Kakutani, Michiko (July 6, 2009). "Appointees Who Really Govern America". The New York Times. Retrieved October 27, 2009.
- Bazelon, Emily (July 6, 2009). "The Supreme Court on Trial: James MacGregor Burns takes aim at the bench". Slate. Retrieved October 27, 2009.
- Special keynote address by President Ronald Reagan, November 1988, at the second annual lawyers convention of the Federalist Society, Washington, D.C.
- Taylor Jr., Stuart (October 15, 1987). "Reagan Points to a Critic, Who Points Out It Isn't So". The New York Times. Retrieved October 23, 2009.
- Kelley Beaucar Vlahos (September 11, 2003). "Judge Bork: Judicial Activism Is Going Global". Fox News. Archived from the original on May 23, 2010. Retrieved October 23, 2009.
What judges have wrought is a coup d'état – slow moving and genteel, but a coup d'état nonetheless.
- Leiter, Brian (March 19, 2017). "Let's start telling the truth about what the Supreme Court does". Washingtonpost.com. Retrieved September 29, 2019.
- Safire, William (April 24, 2005). "Dog Whistle". The New York Times Magazine. Retrieved October 22, 2009.
- Savage, David G. (October 23, 2008). "Roe vs. Wade? Bush vs. Gore? What are the worst Supreme Court decisions?". Los Angeles Times. Archived from the original on October 23, 2008. Retrieved October 23, 2009.
- Mansnerus, Laura (October 16, 2005). "Diminished Eminence in a Changed Domain". The New York Times. Retrieved October 22, 2009.
- Smothers, Ronald (October 16, 2005). "In Long Branch, No Olive Branches". The New York Times. Retrieved October 22, 2009.
- Cohen, Adam (January 15, 2008). "Editorial Observer – A Supreme Court Reversal: Abandoning the Rights of Voters". The New York Times. Retrieved October 23, 2009.
- Bendavid, Naftali (July 13, 2009). "Franken: 'An Incredible Honor to Be Here'". The Wall Street Journal. Retrieved October 22, 2009.
- Savage, David G. (July 13, 2008). "Supreme Court finds history is a matter of opinions". Los Angeles Times. Retrieved October 30, 2009.
This suggests that the right of habeas corpus was not limited to English subjects … protects people who are captured … at Guantanamo … Wrong, Justice Antonin Scalia wrote in dissent. He said English history showed that the writ of habeas corpus was limited to sovereign English territory
- Will, George F. (May 27, 2009). "Identity Justice: Obama's Conventional Choice". The Washington Post. Retrieved October 22, 2009.
- Taranto, James (June 9, 2009). "Speaking Ruth to Power". The Wall Street Journal. Retrieved October 22, 2009.
- Woodward, Bob; Scott Armstrong (1979). The Brethren: Inside the Supreme Court. United States of America: Simon & Schuster. p. 541. ISBN 978-0-7432-7402-9.
A court which is final and unreviewable needs more careful scrutiny than any other
- Sabato, Larry (September 26, 2007). "It's Time to Reshape the Constitution and Make America a Fairer Country". The Huffington Post. Retrieved October 23, 2009.
- Christopher Moore (November 1, 2008). "Our Canadian Republic – Do we display too much deference to authority … or not enough?". Literary Review of Canada. Retrieved October 23, 2009.
- Tomkins, Adam (2002). "In Defence of the Political Constitution". United Kingdom: 22 Oxford Journal of Legal Studies 157.
Bush v. Gore
- Madison, James (1789).
the States will retain, under the proposed Constitution, a very extensive portion of active sovereignty– via Wikisource.
- Alexander Hamilton (aka Publius) (1789). "Federalist No. 28". Independent Journal. Retrieved October 24, 2009.
Power being almost always the rival of power; the General Government will at all times stand ready to check the usurpations of the state government; and these will have the same disposition toward the General Government.
- Madison, James (January 25, 1788). "The Federalist". Independent Journal (44 (quote: 8th para)). Retrieved October 27, 2009.
seems well calculated at once to secure to the States a reasonable discretion in providing for the conveniency of their imports and exports, and to the United States a reasonable check against the abuse of this discretion.
- Madison, James (February 16, 1788). "The Federalist No. 56 (quote: 6th para)". Independent Journal. Retrieved October 27, 2009.
In every State there have been made, and must continue to be made, regulations on this subject which will, in many cases, leave little more to be done by the federal legislature, than to review the different laws, and reduce them in one general act.
- Hamilton, Alexander (December 14, 1787). "The Federalist No. 22 (quote: 4th para)". New York Packet. Retrieved October 27, 2009.
The interfering and unneighborly regulations of some States, contrary to the true spirit of the Union, have, in different instances, given just cause of umbrage and complaint to others, and it is to be feared that examples of this nature, if not restrained by a national control, would be multiplied and extended till they became not less serious sources of animosity and discord than injurious impediments to the intercourse between the different parts of the Confederacy.
- Madison, James (January 22, 1788). "The Federalist Papers". New York Packet. Retrieved October 27, 2009.
The regulation of commerce with the Indian tribes is very properly unfettered from two limitations in the articles of Confederation, which render the provision obscure and contradictory. The power is there restrained to Indians, not members of any of the States, and is not to violate or infringe the legislative right of any State within its own limits.
- Akhil Reed Amar (1998). "The Bill of Rights – Creation and Reconstruction". The New York Times: Books. Retrieved October 24, 2009.
many lawyers embrace a tradition that views state governments as the quintessential threat to individual and minority rights, and federal officials—especially federal courts—as the special guardians of those rights.
- Gold, Scott (June 14, 2005). "Justices Swat Down Texans' Effort to Weaken Species Protection Law". Los Angeles Times. Retrieved March 24, 2012.
Purcell filed a $60-million lawsuit against the U.S. government in 1999, arguing that cave bugs could not be regulated through the commerce clause because they had no commercial value and did not cross state lines. 'I'm disappointed,' Purcell said.
- Reich, Robert B. (September 13, 1987). "The Commerce Clause; The Expanding Economic Vista". The New York Times Magazine. Retrieved October 27, 2009.
- FDCH e-Media (January 10, 2006). "U.S. Senate Judiciary Committee Hearing on Judge Samuel Alito's Nomination to the Supreme Court". The Washington Post. Retrieved October 30, 2009.
I don't think there's any question at this point in our history that Congress' power under the commerce clause is quite broad, and I think that reflects a number of things, including the way in which our economy and our society has developed and all of the foreign and interstate activity that takes place – Samuel Alito
- Cohen, Adam (December 7, 2003). "Editorial Observer; Brandeis's Views on States' Rights, and Ice-Making, Have New Relevance". The New York Times. Retrieved October 30, 2009.
But Brandeis's dissent contains one of the most famous formulations in American law: that the states should be free to serve as laboratories of democracy
- Graglia, Lino (July 19, 2005). "Altering 14th Amendment would curb court's activist tendencies". University of Texas School of Law. Archived from the original on December 4, 2010. Retrieved October 23, 2009.
- Hornberger, Jacob C. (October 30, 2009). "Freedom and the Fourteenth Amendment". The Future of Freedom Foundation. Retrieved October 30, 2009.
Fourteenth Amendment. Some argue that it is detrimental to the cause of freedom because it expands the power of the federal government. Others contend that the amendment expands the ambit of individual liberty. I fall among those who believe that the Fourteenth Amendment has been a positive force for freedom.
- "Gamble v. United States". ScotusBlog. Retrieved September 28, 2018.
- Vazquez, Maegan (June 28, 2018). "Supreme Court agrees to hear 'double jeopardy' case in the fall". CNN. Retrieved September 28, 2018.
- Vicini, James (April 24, 2008). "Justice Scalia defends Bush v. Gore ruling". Reuters. Retrieved October 23, 2009.
The nine-member Supreme Court conducts its deliberations in secret and the justices traditionally won't discuss pending cases in public
- Margolick, David (September 23, 2007). "Meet the Supremes". The New York Times. Retrieved October 23, 2009.
Beat reporters and academics initially denounced the court's involvement in that case, its hastiness to enter the political thicket and the half-baked and strained decision that resulted.
- "Public Says Televising Court Is Good for Democracy". PublicMind.fdu.edu. March 9, 2010. Retrieved December 14, 2010.
- Mauro, Tony (March 9, 2010). "Poll Shows Public Support for Cameras at the High Court". The National Law Journal. Retrieved December 18, 2010.
- "C-SPAN Supreme Court Week". CSPAN. October 4, 2009. Retrieved October 25, 2009.
- Vicini, James (April 24, 2008). "Justice Scalia defends Bush v. Gore ruling". Reuters. Retrieved October 23, 2009.
Scalia was interviewed for the CBS News show "60 Minutes
- Savage, David G. (October 23, 2008). "Roe vs. Wade? Bush vs. Gore? What are the worst Supreme Court decisions?". Los Angeles Times. Archived from the original on October 23, 2008. Retrieved October 23, 2009.
UC Berkeley law professor Goodwin Liu described the decision as 'utterly lacking in any legal principle" and added that the court was "remarkably unashamed to say so explicitly.'
- McConnell, Michael W. (June 1, 2001). "Two-and-a-Half Cheers for Bush v Gore". University of Chicago Law Review. Retrieved February 16, 2016.
- CQ Transcriptions (Senator Kohl) (July 14, 2009). "Key Excerpt: Sotomayor on Bush v. Gore". The Washington Post. Retrieved October 23, 2009.
Many critics saw the Bush v. Gore decision as an example of the judiciary improperly injecting itself into a political dispute"
- Adam Cohen (March 21, 2004). "Justice Rehnquist Writes on Hayes vs. Tilden, With His Mind on Bush v. Gore". Opinion section. The New York Times. Archived from the original on May 11, 2011. Retrieved October 23, 2009.
The Bush v. Gore majority, made up of Mr. Rehnquist and his fellow conservatives, interpreted the equal protection clause in a sweeping way they had not before, and have not since. And they stated that the interpretation was 'limited to the present circumstances,' words that suggest a raw exercise of power, not legal analysis.
- Kevin McNamara (June 3, 2009). "Letters – Supreme Court Activism?". Letters to the editor. The New York Times. Retrieved October 23, 2009.
- CQ Transcriptions (January 13, 2006). "U.S. Senate Judiciary Committee Hearing on Judge Samuel Alito's Nomination to the Supreme Court". The Washington Post. Retrieved October 28, 2009.
...Baker v. Carr, the reapportionment case. We heard Justice Frankfurter who delivered a scathing dissent in that...
- Greenhouse, Linda (September 10, 2007). "New Focus on the Effects of Life Tenure". The New York Times. Retrieved October 10, 2009.
- Levinson, Sanford (February 9, 2009). "Supreme court prognosis – Ruth Bader Ginsburg's surgery for pancreatic cancer highlights why US supreme court justices shouldn't serve life terms". The Guardian. Manchester. Retrieved October 10, 2009.
- See also Arthur D. Hellman, "Reining in the Supreme Court: Are Term Limits the Answer?," in Roger C. Cramton and Paul D. Carrington, eds., Reforming the Court: Term Limits for Supreme Court Justices (Carolina Academic Press, 2006), p. 291.
- Richard Epstein, "Mandatory Retirement for Supreme Court Justices," in Roger C. Cramton and Paul D. Carrington, eds., Reforming the Court: Term Limits for Supreme Court Justices (Carolina Academic Press, 2006), p. 415.
- Brian Opeskin, "Models of Judicial Tenure: Reconsidering Life Limits, Age Limits and Term Limits for Judges", Oxford J Legal Studies 2015 35: 627–663.
- Hamilton, Alexander (June 14, 1788). "The Federalist No. 78". Independent Journal. Retrieved October 28, 2009.
and that as nothing can contribute so much to its firmness and independence as permanency in office, this quality may therefore be justly regarded as an indispensable ingredient in its constitution, and, in a great measure, as the citadel of the public justice and the public security.
- Liptak, Adam (June 22, 2016). "Justices Disclose Privately Paid Trips and Gifts". The New York Times. ISSN 0362-4331. Retrieved February 13, 2020.
- O'Brien, Reity (June 20, 2014). "Justice Obscured: Supreme court justices earn quarter-million in cash on the side". Center for Public Integrity.
- Lipton, Eric (February 26, 2016). "Scalia Took Dozens of Trips Funded by Private Sponsors". The New York Times.
- Berman, Mark; Markon, Jerry (February 17, 2016). "Why Justice Scalia was staying for free at a Texas resort". The Washington Post.
- Encyclopedia of the Supreme Court of the United States, 5 vols., Detroit [etc.] : Macmillan Reference USA, 2008
- The Rules of the Supreme Court of the United States (2013 ed.) (PDF).
- Biskupic, Joan and Elder Witt. (1997). Congressional Quarterly's Guide to the U.S. Supreme Court. Washington, D.C.: Congressional Quarterly. ISBN 1-56802-130-5
- Hall, Kermit L., ed. (1992). The Oxford Companion to the Supreme Court of the United States. New York: Oxford University Press. ISBN 978-0-19-505835-2.
- Hall, Kermit L.; McGuire, Kevin T., eds. (2005). Institutions of American Democracy: The Judicial Branch. New York, New York: Oxford University Press. ISBN 978-0-19-530917-1.
- Harvard Law Review Assn., (2000). The Bluebook: A Uniform System of Citation, 17th ed. [18th ed., 2005. ISBN 978-600-01-4329-9]
- Irons, Peter. (1999). A People's History of the Supreme Court. New York: Viking Press. ISBN 0-670-87006-4.
- Rehnquist, William. (1987). The Supreme Court. New York: Alfred A. Knopf. ISBN 0-375-40943-2.
- Skifos, Catherine Hetos. (1976)."The Supreme Court Gets a Home", Supreme Court Historical Society 1976 Yearbook. [in 1990, renamed The Journal of Supreme Court History (ISSN 1059-4329)]
- Warren, Charles. (1924). The Supreme Court in United States History. (3 volumes). Boston: Little, Brown and Co.
- Woodward, Bob and Armstrong, Scott. The Brethren: Inside the Supreme Court (1979). ISBN 978-0-7432-7402-9.
- Supreme Court Historical Society. "The Court Building" (PDF). Retrieved February 13, 2008.
- Abraham, Henry J. (1992). Justices and Presidents: A Political History of Appointments to the Supreme Court (1st ed.). New York: Oxford University Press. ISBN 978-0-19-506557-2.
- Beard, Charles A. (1912). The Supreme Court and the Constitution. New York: Macmillan Company. Reprinted Dover Publications, 2006. ISBN 0-486-44779-0.
- Corley, Pamela C.; Steigerwalt, Amy; Ward, Artemus. (2013). The Puzzle of Unanimity: Consensus on the United States Supreme Court. Stanford University Press. ISBN 978-0-8047-8472-6.
- Cushman, Barry. (1998). Rethinking the New Deal Court. Oxford University Press.
- Cushman, Clare (2001). The Supreme Court Justices: Illustrated Biographies, 1789–1995 (2nd ed.). (Supreme Court Historical Society, Congressional Quarterly Books). ISBN 978-1-56802-126-3.
- Frank, John P. (1995). Friedman, Leon; Israel, Fred L. (eds.). The Justices of the United States Supreme Court: Their Lives and Major Opinions. Chelsea House Publishers. ISBN 978-1-56802-126-3.
- Garner, Bryan A. (2004). Black's Law Dictionary. Deluxe 8th ed. Thomson West. ISBN 0-314-15199-0.
- Greenburg, Jan Crawford, Jan. (2007). Supreme Conflict: The Inside Story of the Struggle for Control for the United States Supreme Court. New York: Penguin Press. ISBN 978-1-59420-101-1.
- Martin, Fenton S.; Goehlert, Robert U. (1990). The U.S. Supreme Court: A Bibliography. Washington, D.C.: Congressional Quarterly Books. ISBN 978-0-87187-554-9.
- Lewis, Thomas Tandy, ed. The U.S. Supreme Court. 2nd ed. 3 volumes. Ipswich: Salem/Grey House, 2016. ISBN 978-168217-180-6.
- McCloskey, Robert G. (2005). The American Supreme Court. 4th ed. Chicago: University of Chicago Press. ISBN 0-226-55682-4.
- O'Brien, David M. (2008). Storm Center: The Supreme Court in American Politics (8th ed.). New York: W. W. Norton & Company. ISBN 978-0-393-93218-8.
- Spaeth, Harold J. (1979). Supreme Court Policy Making: Explanation and Prediction (3rd ed.). New York: W.H.Freeman & Co Ltd. ISBN 978-0-7167-1012-7.
- Toobin, Jeffrey. The Nine: Inside the Secret World of the Supreme Court. Doubleday, 2007. ISBN 0-385-51640-1.
- Urofsky, Melvin and Finkelman, Paul. (2001). A March of Liberty: A Constitutional History of the United States. 2 vols. New York: Oxford University Press. ISBN 0-19-512637-8 & ISBN 0-19-512635-1.
- Urofsky, Melvin I. (1994). The Supreme Court Justices: A Biographical Dictionary. New York: Garland Publishing. p. 590. ISBN 978-0-8153-1176-8.
- Supreme Court Historical Society. "The Court Building" (PDF). Retrieved February 13, 2008.
- "A General Approach for Predicting the Behavior of the Supreme Court of the United States".
This article's use of external links may not follow Wikipedia's policies or guidelines. (September 2020) (Learn how and when to remove this template message)
|Wikimedia Commons has media related to Supreme Court of the United States.|
|Wikiquote has quotations related to: Supreme Court of the United States|
|Wikisource has original text related to this article:|
- Official website
- Supreme Court decisions from World Legal Information Institution (contains no advertisements)
- Supreme Court Collection from the Legal Information Institute
- Supreme Court Opinions from FindLaw
- U.S. Supreme Court Decisions (v. 1+) from Justia, Oyez and U.S. Court Forms
- Supreme Court Records and Briefs from Cornell Law Library
- Milestone Cases in Supreme Court History from InfoPlease
- Supreme Court Nominations, present–1789
- Scales of Justice: The History of Supreme Court Nominations – Radio program explores history of appointments and confirmations
- Supreme Court Historical Society
- Complete/Searchable 1991–2004 Opinions and Orders
- The Supreme Court Database A research database with information about cases from 1946 to 2011
- The Oyez Project – audio recordings of oral arguments
- "U.S. Supreme Court collected news and commentary". The New York Times.
- U.S. Supreme Court collected news and commentary at The Washington Post
- C-SPAN's The Supreme Court: Home to America's Highest Court
- Supreme Court Briefs Hosted by the American Bar Association
- Works by Supreme Court of the United States at Project Gutenberg
- Works by or about Supreme Court of the United States at Internet Archive
- Works by Supreme Court of the United States at LibriVox (public domain audiobooks) |
In the solar system’s early days, a first Earth is thought to have been pulverised by a planet that scientists call Theia. We don’t know what it was made of or where it came from, only that it may have been the size of Mars. The powerful collision destroyed both planets so completely that scientists can only guess what they were like.
What scientists are more certain of is that the two planets became a mass of molten material that gradually cooled to form the Earth and moon.
‘This molten mass rotated around and formed a disc, which existed for a few days. The temperature, which was very high, cooled slowly and everything heavy coalesced to form the Earth today,’ said Dr Razvan Caracas, a physicist who studies the inside of planets at the French National Centre for Scientific Research in Lyon, France.
Dr Caracas is running computer simulations of what happened to this mass of atoms and materials immediately after the collision as part of a project called IMPACT. He uses computer power equivalent to 200 desktop PCs running for two weeks to calculate what occurs under one set of conditions after the collision. Three supercomputing centres in France will provide the project’s required 120 million hours of computing time in just five years.
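As a rough back-of-the-envelope check of those figures, the sketch below converts the article's numbers into machine-hours per simulated scenario and asks how many scenarios the stated budget could cover. Only the figures quoted above come from the article; the conversion (treating "200 desktop PCs for two weeks" as continuous compute) is an illustrative assumption, not the project's actual accounting.

```python
# Rough arithmetic behind the computing figures quoted above.
# Assumption: "200 desktop PCs running for two weeks" is read as
# 200 machines * 14 days * 24 h of continuous compute per scenario.
hours_per_scenario = 200 * 14 * 24            # ~67,200 machine-hours per run
total_budget_hours = 120_000_000              # 120 million hours over five years

scenarios_possible = total_budget_hours / hours_per_scenario
print(f"Machine-hours per simulated scenario: {hours_per_scenario:,}")
print(f"Scenarios the five-year budget could cover: {scenarios_possible:,.0f}")
```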
The initial shockwave after the collision of proto-Earth and Theia generated crushing pressures and temperatures perhaps as high as 10,000°C at the centre.
‘It is hard to imagine what conditions were like,’ said Dr Caracas. Almost the entire periodic table of elements would have been in a ‘super critical condition’ - a fog of disassembled atoms neither in gas nor liquid form.
Over the next few days, heavy materials like iron began to form the centre of a new planet - Earth.
‘The metals would slowly separate as droplets as they cooled, and later the silicates (minerals) would liquefy. The heaviest inner part of Earth formed, then material would have fallen down onto the planet,’ Dr Caracas said. ‘The outer part of the disc would have formed a ring and eventually accreted to form the moon.’
While the Earth came together within days, the moon probably took weeks or months to take shape, according to Dr Caracas. There may even have been two moons circling the early Earth, with one crashing into the other to create the moon we see today. Both newly minted bodies then had an ocean of molten rock.
Dr Joshua Snape, a lunar geologist at VU Amsterdam, the Netherlands, is interested in the early moon. He hopes to determine its age within a range of tens or hundreds of millions of years. The traditional view is that it formed about 4.5 billion years ago.
‘Something profound was happening on the moon between 4.35 and 4.4 billion years ago. The simplest explanation is that the lunar magma ocean (covering the moon) cooled,’ said Dr Snape.
Thanks to the moon’s lack of tectonic activity, all of its rocks can tell us about this magma period – an important stage in the moon’s formation.
Just how long the magma took to cool is a crucial question, Dr Snape says. ‘It is important to understand how long this takes, because we scale up what we know about the moon to other planets.’
‘This is why we love the Moon so much. It is a treasure trove, geologically speaking.’
Dr Joshua Snape, VU Amsterdam, the Netherlands
As the moon is the only substantial body in the solar system that we have travelled to and retrieved rocks from, its samples are valuable to scientists. Dr Snape has studied the ratios of isotopes of lead and uranium in rocks returned by the Apollo missions and from lunar meteorites. This ratio acts as a deep-time clock that he has used to calculate when a rock formed.
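The "deep-time clock" works through radioactive decay: for a parent isotope with decay constant λ, the age follows from the measured daughter-to-parent ratio as t = ln(1 + D/P)/λ. Below is a minimal sketch of that relation using the well-known uranium-238 half-life of about 4.47 billion years; the specific ratio in the example is made up purely for illustration and is not a value from Dr Snape's samples.

```python
import math

# Half-life of uranium-238 (years); decay constant lambda = ln(2) / t_half.
T_HALF_U238 = 4.468e9
LAM = math.log(2) / T_HALF_U238

def u_pb_age(daughter_to_parent_ratio: float) -> float:
    """Age in years implied by a measured 206Pb/238U ratio,
    assuming no initial daughter and a closed system."""
    return math.log(1.0 + daughter_to_parent_ratio) / LAM

# Hypothetical ratio, chosen only to show the calculation.
example_ratio = 1.0
print(f"Implied age: {u_pb_age(example_ratio) / 1e9:.2f} billion years")
```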
‘The moon has a record and acts as a beautiful lab for understanding early planetary processes. This will be applicable to Mars, Mercury or Venus, places that are hard for us to access, and it can even tell us about our own planet,’ said Dr Snape.
Earth is not quite so useful because plate tectonics bury and recycle rocks.
‘This is why we love the moon so much,’ he said. ‘It is a treasure trove, geologically speaking.’
His studies may be able to reveal, for instance, how long a planetary body remains active with volcanic eruptions when there are no plate tectonics to drive them. This could be important when it comes to studying planets around other stars.
Dr Snape is currently working on a project called MoonDiff that involves trying to recreate rock compositions that existed in the lunar magma ocean. The minerals would not have crystallised immediately but would have done so in a sequence that Dr Snape is now trying to reconstruct.
He crushes and heats the recreated rocks under conditions to match those on the moon when its surface was a molten mass of rock. ‘This week I’m running experiments at one gigapascal (one billion pascals, a unit of pressure) and 1,200°C,’ he said.
Knowing the sequence that minerals separated out from the magma ocean would help explain the moon’s history and its present geology.
‘The oldest rocks on the moon (the visible lighter parts) are formed primarily of a mineral called feldspar, which would have floated to the top of the liquid magma ocean,’ said Dr Snape.
‘On the other hand, that Apollo 12 basalt sample and other rocks like it (which make up the darker grey parts) are formed primarily of minerals that would have been more dense and sunk to the bottom.’
Dr Maud Boyet, a geochemist at the University of Clermont Auvergne, France, studies the Earth’s early molten period and is hoping to pin down when it cooled and first became habitable. To do this, she is examining Earth and lunar rocks as well as meteorites using new mass spectrometry techniques, among others, for a project called ISOREE.
She says moon rocks could tell us when the huge collision took place, but to do that we still need to understand the early history of the moon. Rocks from the far side of the moon could help.
Lunar rocks collected during Apollo missions come from the side of the moon facing Earth. The side facing away from us has a different surface make-up. This could be because the moon underwent a second melting event, perhaps caused by a second massive collision.
‘We have no samples (gathered) from the far side of the moon,’ Boyet said. ‘But we have some meteorites (that landed on Earth) that we think came from there.’
The research in this article was funded by the EU.
College Physics
General Relativity and Quantum Gravity
When we talk of black holes or the unification of forces, we are actually discussing aspects of general relativity and quantum gravity. We know from Special Relativity that relativity is the study of how different observers measure the same event, particularly if they move relative to one another. Einstein’s theory of general relativity describes all types of relative motion including accelerated motion and the effects of gravity. General relativity encompasses special relativity and classical relativity in situations where acceleration is zero and relative velocity is small compared with the speed of light. Many aspects of general relativity have been verified experimentally, some of which are better than science fiction in that they are bizarre but true. Quantum gravity is the theory that deals with particle exchange of gravitons as the mechanism for the force, and with extreme conditions where quantum mechanics and general relativity must both be used. A good theory of quantum gravity does not yet exist, but one will be needed to understand how all four forces may be unified. If we are successful, the theory of quantum gravity will encompass all others, from classical physics to relativity to quantum mechanics—truly a Theory of Everything (TOE).
Einstein first considered the case of no observer acceleration when he developed the revolutionary special theory of relativity, publishing his first work on it in 1905. By 1916, he had laid the foundation of general relativity, again almost on his own. Much of what Einstein did to develop his ideas was to mentally analyze certain carefully and clearly defined situations—doing this is to perform a thought experiment. [link] illustrates a thought experiment like the ones that convinced Einstein that light must fall in a gravitational field. Think about what a person feels in an elevator that is accelerated upward. It is identical to being in a stationary elevator in a gravitational field. The feet of a person are pressed against the floor, and objects released from hand fall with identical accelerations. In fact, it is not possible, without looking outside, to know what is happening—acceleration upward or gravity. This led Einstein to correctly postulate that acceleration and gravity will produce identical effects in all situations. So, if acceleration affects light, then gravity will, too. [link] shows the effect of acceleration on a beam of light shone horizontally at one wall. Since the accelerated elevator moves up during the time light travels across the elevator, the beam of light strikes low, seeming to the person to bend down. (Normally a tiny effect, since the speed of light is so great.) The same effect must occur due to gravity, Einstein reasoned, since there is no way to tell the effects of gravity acting downward from acceleration of the elevator upward. Thus gravity affects the path of light, even though we think of gravity as acting between masses and photons are massless.
Einstein’s theory of general relativity got its first verification in 1919 when starlight passing near the Sun was observed during a solar eclipse. (See [link].) During an eclipse, the sky is darkened and we can briefly see stars. Those in a line of sight nearest the Sun should have a shift in their apparent positions. Not only was this shift observed, but it agreed with Einstein’s predictions well within experimental uncertainties. This discovery created a scientific and public sensation. Einstein was now a folk hero as well as a very great scientist. The bending of light by matter is equivalent to a bending of space itself, with light following the curve. This is another radical change in our concept of space and time. It is also further confirmation that anything carrying mass or energy, even massless photons, is affected by gravity.
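For a sense of scale, general relativity predicts a deflection angle of 4GM/(c²R) for a light ray grazing a mass M at closest approach R; for the Sun this is the famous value of about 1.75 arcseconds confirmed in 1919. The sketch below is a minimal check of that number using standard approximate constants.

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.963e8      # solar radius, m

# General-relativistic deflection for a ray grazing the solar limb.
deflection_rad = 4 * G * M_sun / (c**2 * R_sun)
deflection_arcsec = deflection_rad * (180 / 3.141592653589793) * 3600
print(f"Deflection of starlight grazing the Sun: {deflection_arcsec:.2f} arcseconds")
```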
There are several current forefront efforts related to general relativity. One is the observation and analysis of gravitational lensing of light. Another is the search for definitive proof of the existence of black holes. A third is the search for direct observation of gravitational waves, moving wrinkles in space. Theoretical efforts are also aimed at the possibility of time travel and of wormholes into other parts of space created by black holes.
Gravitational lensing

As you can see in [link], light is bent toward a mass, producing an effect much like a converging lens (large masses are needed to produce observable effects). On a galactic scale, the light from a distant galaxy could be “lensed” into several images when passing close by another galaxy on its way to Earth. Einstein predicted this effect, but he considered it unlikely that we would ever observe it. A number of cases of this effect have now been observed; one is shown in [link]. This effect is a much larger scale verification of general relativity. But such gravitational lensing is also useful in verifying that the red shift is proportional to distance. The red shift of the intervening galaxy is always less than that of the one being lensed, and each image of the lensed galaxy has the same red shift. This verification supplies more evidence that red shift is proportional to distance. Confidence that the multiple images are not different objects is bolstered by the observations that if one image varies in brightness over time, the others also vary in the same manner.
Black holes

Black holes are objects having such large gravitational fields that things can fall in, but nothing, not even light, can escape. Bodies, like the Earth or the Sun, have what is called an escape velocity. If an object moves straight up from the body, starting at the escape velocity, it will just be able to escape the gravity of the body. The greater the acceleration of gravity on the body, the greater is the escape velocity. As long ago as the late 1700s, it was proposed that if the escape velocity is greater than the speed of light, then light cannot escape. Pierre-Simon Laplace (1749–1827), the French astronomer and mathematician, even incorporated this idea of a dark star into his writings. But the idea was dropped after Young’s double slit experiment showed light to be a wave. For some time, light was thought not to have particle characteristics and, thus, could not be acted upon by gravity. The idea of a black hole was very quickly reincarnated in 1916 after Einstein’s theory of general relativity was published. It is now thought that black holes can form in the supernova collapse of a massive star, forming an object perhaps 10 km across and having a mass greater than that of our Sun. It is interesting that several prominent physicists who worked on the concept, including Einstein, firmly believed that nature would find a way to prohibit such objects.
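The "dark star" argument rests on the Newtonian escape velocity, v_esc = sqrt(2GM/r): if v_esc at a body's surface were to reach the speed of light, light could not get away. A minimal sketch of that formula follows, using Earth and the Sun as familiar checks; the constants are standard approximate values.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(mass_kg: float, radius_m: float) -> float:
    """Newtonian escape velocity from the surface of a body (m/s)."""
    return math.sqrt(2 * G * mass_kg / radius_m)

print(f"Earth: {escape_velocity(5.97e24, 6.37e6) / 1e3:.1f} km/s")   # ~11.2 km/s
print(f"Sun:   {escape_velocity(1.99e30, 6.96e8) / 1e3:.0f} km/s")   # ~618 km/s
```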
Black holes are difficult to observe directly, because they are small and no light comes directly from them. In fact, no light comes from inside the event horizon, which is defined to be at a distance from the object at which the escape velocity is exactly the speed of light. The radius of the event horizon is known as the Schwarzschild radius $R_S$ and is given by

$$R_S = \frac{2GM}{c^2},$$

where $G$ is the universal gravitational constant, $M$ is the mass of the body, and $c$ is the speed of light. The event horizon is the edge of the black hole and $R_S$ is its radius (that is, the size of a black hole is twice $R_S$). Since $G$ is small and $c^2$ is large, you can see that black holes are extremely small, only a few kilometers for masses a little greater than the Sun’s. The object itself is inside the event horizon.
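A minimal sketch of the Schwarzschild radius $R_S = 2GM/c^2$ for a couple of masses; the numbers bear out the statement above that stellar-mass black holes are only a few kilometres across. Constants are standard approximate values.

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    """Event-horizon radius R_S = 2GM/c^2 in metres."""
    return 2 * G * mass_kg / c**2

for label, mass in [("1 solar mass", M_SUN), ("10 solar masses", 10 * M_SUN)]:
    print(f"{label}: R_S = {schwarzschild_radius(mass) / 1e3:.1f} km")
```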
Physics near a black hole is fascinating. Gravity increases so rapidly that, as you approach a black hole, the tidal effects tear matter apart, with matter closer to the hole being pulled in with much more force than that only slightly farther away. This can pull a companion star apart and heat inflowing gases to the point of producing X rays. (See [link].) We have observed X rays from certain binary star systems that are consistent with such a picture. This is not quite proof of black holes, because the X rays could also be caused by matter falling onto a neutron star. These objects were first discovered in 1967 by the British astrophysicists Jocelyn Bell and Anthony Hewish. Neutron stars are literally stars composed of neutrons. They are formed by the collapse of a star’s core in a supernova, during which electrons and protons are forced together to form neutrons (the reverse of neutron decay). Neutron stars are slightly larger than a black hole of the same mass and will not collapse further because of resistance by the strong force. However, neutron stars cannot have a mass greater than about two to three solar masses or they must collapse to a black hole. With recent improvements in our ability to resolve small details, such as with the orbiting Chandra X-ray Observatory, it has become possible to measure the masses of X-ray-emitting objects by observing the motion of companion stars and other matter in their vicinity. What has emerged is a plethora of X-ray-emitting objects too massive to be neutron stars. This evidence is considered conclusive and the existence of black holes is widely accepted. These black holes are concentrated near galactic centers.
We also have evidence that supermassive black holes may exist at the cores of many galaxies, including the Milky Way. Such a black hole might have a mass millions or even billions of times that of the Sun, and it would probably have formed when matter first coalesced into a galaxy billions of years ago. Supporting this is the fact that very distant galaxies are more likely to have abnormally energetic cores. Some of the moderately distant galaxies, and hence among the younger, are known as quasars and emit as much or more energy than a normal galaxy but from a region less than a light year across. Quasar energy outputs may vary in times less than a year, so that the energy-emitting region must be less than a light year across. The best explanation of quasars is that they are young galaxies with a supermassive black hole forming at their core, and that they become less energetic over billions of years. In closer superactive galaxies, we observe tremendous amounts of energy being emitted from very small regions of space, consistent with stars falling into a black hole at the rate of one or more a month. The Hubble Space Telescope (1994) observed an accretion disk in the galaxy M87 rotating rapidly around a region of extreme energy emission. (See [link].) A jet of material being ejected perpendicular to the plane of rotation gives further evidence of a supermassive black hole as the engine.
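The claim that a source varying over less than a year must be smaller than about a light year follows from a simple causality (light-crossing-time) argument: the emitting region can be no larger than roughly the speed of light times the variability timescale. A minimal sketch of that estimate:

```python
c = 2.998e8                          # speed of light, m/s
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def max_source_size(variability_time_s: float) -> float:
    """Causality limit: emitting-region size <= c * variability timescale (metres)."""
    return c * variability_time_s

size_m = max_source_size(SECONDS_PER_YEAR)   # brightness changes over one year
print(f"Maximum emitting-region size: {size_m:.2e} m (about one light year)")
```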
Gravitational waves

If a massive object distorts the space around it, like the foot of a water bug on the surface of a pond, then movement of the massive object should create waves in space like those on a pond. Gravitational waves are mass-created distortions in space that propagate at the speed of light and are predicted by general relativity. Since gravity is by far the weakest force, extreme conditions are needed to generate significant gravitational waves. Gravity near binary neutron star systems is so great that significant gravitational wave energy is radiated as the two neutron stars orbit one another. American astronomers Joseph Taylor and Russell Hulse measured changes in the orbit of such a binary neutron star system. They found its orbit to change precisely as predicted by general relativity, a strong indication of gravitational waves, and were awarded the 1993 Nobel Prize. But direct detection of gravitational waves on Earth would be conclusive. For many years, various attempts have been made to detect gravitational waves by observing vibrations induced in matter distorted by these waves. American physicist Joseph Weber pioneered this field in the 1960s, but no conclusive events have been observed. (No gravity wave detectors were in operation at the time of the 1987A supernova, unfortunately.) There are now several ambitious systems of gravitational wave detectors in use around the world. These include the LIGO (Laser Interferometer Gravitational Wave Observatory) system with two laser interferometer detectors, one in the state of Washington and another in Louisiana (see [link]), and the Virgo facility in Italy with a single detector.
Black holes radiate

Quantum gravity is important in those situations where gravity is so extremely strong that it has effects on the quantum scale, where the other forces are ordinarily much stronger. The early universe was such a place, but black holes are another. The first significant connection between gravity and quantum effects was made by the Russian physicist Yakov Zel’dovich in 1971, and other significant advances followed from the British physicist Stephen Hawking. (See [link].) These two showed that black holes could radiate away energy by quantum effects just outside the event horizon (nothing can escape from inside the event horizon). Black holes are, thus, expected to radiate energy and shrink to nothing, although extremely slowly for most black holes. The mechanism is the creation of a particle-antiparticle pair from energy in the extremely strong gravitational field near the event horizon. One member of the pair falls into the hole and the other escapes, conserving momentum. (See [link].) When a black hole loses energy and, hence, rest mass, its event horizon shrinks, creating an even greater gravitational field. This increases the rate of pair production so that the process grows exponentially until the black hole is nuclear in size. A final burst of particles and γ rays ensues. This is an extremely slow process for black holes about the mass of the Sun (produced by supernovas) or larger ones (like those thought to be at galactic centers), taking on the order of $10^{67}$ years or longer! Smaller black holes would evaporate faster, but they are only speculated to exist as remnants of the Big Bang. Searches for characteristic γ-ray bursts have produced events attributable to more mundane objects like neutron stars accreting matter.
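The order-of-magnitude lifetime quoted above can be checked with the standard leading-order Hawking evaporation estimate, t ≈ 5120·π·G²·M³/(ħ·c⁴). The sketch below is offered only as an order-of-magnitude check with standard approximate constants, not as a full calculation of black-hole evaporation.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
HBAR = 1.055e-34     # reduced Planck constant, J s
M_SUN = 1.989e30     # solar mass, kg
SECONDS_PER_YEAR = 3.156e7

def evaporation_time_years(mass_kg: float) -> float:
    """Leading-order Hawking evaporation time, in years."""
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * c**4)
    return t_seconds / SECONDS_PER_YEAR

print(f"Solar-mass black hole: ~{evaporation_time_years(M_SUN):.1e} years")  # ~2e67 years
```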
Wormholes and time travel

The subject of time travel captures the imagination. Theoretical physicists, such as the American Kip Thorne, have treated the subject seriously, looking into the possibility that falling into a black hole could result in popping up in another time and place – a trip through a so-called wormhole. Time travel and wormholes appear in innumerable science fiction dramatizations, but the consensus is that time travel is not possible in theory. While still debated, it appears that quantum gravity effects inside a black hole prevent time travel due to the creation of particle pairs. Direct evidence is elusive.
The shortest time

Theoretical studies indicate that, at extremely high energies and correspondingly early in the universe, quantum fluctuations may make time intervals meaningful only down to some finite time limit. Early work indicated that this might be the case for times as long as $10^{-43}\ \text{s}$, the time at which all forces were unified. If so, then it would be meaningless to consider the universe at times earlier than this. Subsequent studies indicate that the crucial time may be even shorter. But the point remains – quantum gravity seems to imply that there is no such thing as a vanishingly short time. Time may, in fact, be grainy with no meaning to time intervals shorter than some tiny but finite size.
The future of quantum gravity

Not only is quantum gravity in its infancy, but no one knows how to get started on a theory of gravitons and the unification of forces. The energies at which a TOE should be valid may be so high (at least the Planck energy of about $10^{19}\ \text{GeV}$) and the necessary particle separation so small (less than the Planck length of about $10^{-35}\ \text{m}$) that only indirect evidence can provide clues. For some time, the common lament of theoretical physicists was one so familiar to struggling students – how do you even get started? But Hawking and others have made a start, and the approach many theorists have taken is called Superstring theory, the topic of Superstrings.
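The energy and length scales referred to here are the Planck scale, built from G, ħ and c: the Planck energy sqrt(ħc⁵/G), the Planck length sqrt(ħG/c³) and the Planck time sqrt(ħG/c⁵). A minimal sketch computing them with standard approximate constants:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
HBAR = 1.055e-34     # reduced Planck constant, J s
JOULES_PER_GEV = 1.602e-10

planck_energy_gev = math.sqrt(HBAR * c**5 / G) / JOULES_PER_GEV
planck_length_m = math.sqrt(HBAR * G / c**3)
planck_time_s = math.sqrt(HBAR * G / c**5)

print(f"Planck energy: {planck_energy_gev:.2e} GeV")   # ~1.2e19 GeV
print(f"Planck length: {planck_length_m:.2e} m")       # ~1.6e-35 m
print(f"Planck time:   {planck_time_s:.2e} s")         # ~5.4e-44 s
```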
- Einstein’s theory of general relativity includes accelerated frames and, thus, encompasses special relativity and gravity. Created by use of careful thought experiments, it has been repeatedly verified by real experiments.
- One direct result of this behavior of nature is the gravitational lensing of light by massive objects, such as galaxies, also seen in the microlensing of light by smaller bodies in our galaxy.
- Another prediction is the existence of black holes, objects for which the escape velocity is greater than the speed of light and from which nothing can escape.
- The event horizon is the distance from the object at which the escape velocity equals the speed of light $c$. It is called the Schwarzschild radius $R_S$ and is given by $R_S = \frac{2GM}{c^2}$, where $G$ is the universal gravitational constant and $M$ is the mass of the body.
- Physics is unknown inside the event horizon, and the possibility of wormholes and time travel is being studied.
- Candidates for black holes may power the extremely energetic emissions of quasars, distant objects that seem to be early stages of galactic evolution.
- Neutron stars are stellar remnants, having the density of a nucleus, that hint that black holes could form from supernovas, too.
- Gravitational waves are wrinkles in space, predicted by general relativity but not yet observed, caused by changes in very massive objects.
- Quantum gravity is an incompletely developed theory that strives to include general relativity, quantum mechanics, and unification of forces (thus, a TOE).
- One unconfirmed connection between general relativity and quantum mechanics is the prediction of characteristic radiation from just outside black holes.
Quantum gravity, if developed, would be an improvement on both general relativity and quantum mechanics, but more mathematically difficult. Under what circumstances would it be necessary to use quantum gravity? Similarly, under what circumstances could general relativity be used? When could special relativity, quantum mechanics, or classical physics be used?
Does observed gravitational lensing correspond to a converging or diverging lens? Explain briefly.
Suppose you measure the red shifts of all the images produced by gravitational lensing, such as in [link]. You find that the central image has a red shift less than the outer images, and those all have the same red shift. Discuss how this not only shows that the images are of the same object, but also implies that the red shift is not affected by taking different paths through space. Does it imply that cosmological red shifts are not caused by traveling through space (light getting tired, perhaps)?
What are gravitational waves, and have they yet been observed either directly or indirectly?
Is the event horizon of a black hole the actual physical surface of the object?
Suppose black holes radiate their mass away and the lifetime of a black hole created by a supernova is about $10^{67}$ years. How does this lifetime compare with the accepted age of the universe? Is it surprising that we do not observe the predicted characteristic radiation?
Problems & Exercises
What is the Schwarzschild radius of a black hole that has a mass eight times that of our Sun? Note that stars must be more massive than the Sun to form black holes as a result of a supernova.
Black holes with masses smaller than those formed in supernovas may have been created in the Big Bang. Calculate the radius of one that has a mass equal to the Earth’s.
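A hedged worked sketch for the two exercises above, reusing $R_S = 2GM/c^2$ with the stated masses (eight solar masses and one Earth mass); constants are standard approximate values and the printed figures are only order-of-magnitude checks.

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # kg
M_EARTH = 5.972e24  # kg

def r_s(mass_kg: float) -> float:
    """Schwarzschild radius in metres."""
    return 2 * G * mass_kg / c**2

print(f"8 solar masses: R_S is about {r_s(8 * M_SUN) / 1e3:.1f} km")   # ~23.6 km
print(f"Earth's mass:   R_S is about {r_s(M_EARTH) * 1e3:.2f} mm")     # ~8.9 mm
```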
Supermassive black holes are thought to exist at the center of many galaxies.
(a) What is the radius of such an object if it has a mass of Suns?
(b) What is this radius in light years?
Construct Your Own Problem
Consider a supermassive black hole near the center of a galaxy. Calculate the radius of such an object based on its mass. You must consider how much mass is reasonable for these large objects, which are now nearly directly observed. (Information on black holes posted on the Web by NASA and other agencies is reliable, for example.)
Sighting may help improve understanding of the early Universe
Scientists in Australia believe they’ve identified a quasar in the process of lighting up, for the very first time. This discovery could help scientists answer lingering questions about how these exceptionally bright celestial bodies form, and how they helped the ancient Universe shape today’s galaxies.
An artist’s impression of an active quasar. Image Credit: NASA
“I don’t think we’ve really seen one of these objects in this stage,” said Ray Norris, an astrophysicist at the Australia Telescope National Facility and leader of the research team. “We don’t understand how they evolve or form.”
Quasars are mostly found in the far reaches of the ancient Universe. Some formed only a few hundred million years after the Big Bang, making it difficult to observe their creation.
Though quasars shine, they’re not stars. They’re intensely bright spots near the edges of supermassive black holes. While no light can escape from a black hole itself, its accretion disk — the churning mass of dust and gas spiraling down into the black hole — can shine brightly.
As dust and gas fall into the black hole, the mass speeds up, like water draining down a whirlpool. Simultaneously, matter smashes against other matter also falling into the black hole and heats up due to friction. Once the hot material is corkscrewing downward near the speed of light, it reaches millions of degrees and energized charged particles shoot off in enormous jets perpendicular to the spiraling disk.
These jets can be hundreds of thousands of light-years long, and emit powerful radio signals that can be heard by receivers billions of light-years away. Norris and his team think that they’ve found two quasar jets just starting up after the collision of two galaxies. These “new” quasars actually formed about 3.2 billion years ago. Their radio signals are just now reaching Earth.
“These two spiral galaxies are crashing into each other, there’s all this debris going everywhere and right down at the middle is this black hole with these enormously powerful jets which are blowing their way up,” Norris said of the radio source located in the Southern Hemisphere constellation Tucana the Toucan.
The jets are still relatively small, only a few thousand light-years long, and remain completely enveloped by the dust and debris from the two galaxies. The dust and gas keep their source mostly obscured from visual and infrared telescopes, but their radio signatures are making it through. That dust and gas won’t be there for long. The two jets are burrowing through their gaseous envelopes, dispersing them in the process.
“What we have here is the very early stages,” Norris said. “When it bursts out it will indeed unearth the fully fledged quasar.”
Henrik Spoon, an astrophysicist at Cornell University in Ithaca, N.Y. wasn’t part of Norris’s team, but studies colliding galaxies and interstellar dust. He said: “Usually these very deeply obscured galaxies are not associated with having radio jets. To actually see a galaxy that is still deeply buried, where the collision is ongoing, where the jets are still buried, that may be unique at this point.”
Spoon said that it was also remarkable because of its relative proximity to Earth — for a quasar. “These kinds of sources are so rare in the local Universe, we are happy that this one exists. Collisions between galaxies occurred much more frequently in the early Universe,” Spoon said.
Astrophysicists are intrigued by Norris’s results, but they are also cautious.
“It’s really not a slam dunk yet, but it looks exciting,” said Martin Elvis, a scientist at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass. “They really need better data.”
Norris and his team are working on getting more data about the burgeoning quasar. He applied for time on the Atacama Large Millimeter/submillimeter Array radio telescope in Chile hoping to get a better picture of the two jets, and has presented his results at several scientific conferences.
Understanding how a quasar grows and matures could answer lingering questions about how the Universe began to take shape billions of years ago.
In 2005, scientists at the Max Planck Institute in Germany developed a supercomputer-based simulation to recreate the evolution of the Universe.
“It was actually successful; it reproduced many of the main features of the Universe,” Norris said. “But some things didn’t work, and in particular it shows galaxies much more massive than we see, they grow more quickly and there should be more of them than we’re seeing. Something is slowing down the process of galaxy formation.”
Astrophysicists now think that the gigantic plumes from quasars heated up the swirling dust and gas in primordial galaxies. Hot gas can’t coalesce into stars as efficiently as cold gas, slowing star formation as a result.
Norris hopes that by observing the formation of a quasar and its jets, they can better understand whether quasars first helped to form galaxies or vice versa. “Hopefully we’ll find many more examples of these,” Norris said, “If we get enough objects, all at different stages, we can then see how one evolves into another.”
These results are described in a paper posted on the arXiv website.
Source: Mike Lucibella, Inside Science News Service/American Institute of Physics |
This video gives more detail about the mathematical principles presented in Median.
This video shows how to work step-by-step through one or more of the examples in Median.
Explains how to determine mean, median, and mode. It also provides examples.
This lesson plan covers The Median and includes Teaching Tips, Common Errors, Differentiated Instruction, Enrichment, and Problem Solving.
A list of student-submitted discussion questions for Median.
To stress understanding of a concept by summarizing the main idea and applying that understanding to create visual aids and generate questions and comments using a Concept Matrix.
To reinforce and increase concept comprehension, and to analyze similarities and differences between topics using a Two Column Table.
Develop understanding of concepts by studying them in a relational manner. Analyze and refine the concept by summarizing the main idea, creating visual aids, and generating questions and comments using a Four Square Concept Matrix.
Summarize the main idea of a reading, create visual aids, and come up with new questions using a Four Square Concept Matrix.
Students will examine the Median, Mean, Mode, and Range of the salaries of MLS players. They will explain what each measure of central tendency says about the salaries of all MLS players.
Students will examine the Median, Mean, Mode, and Range of the salaries of MLS players. They will explain what each measure of central tendency says about the salaries of all MLS players. Answer Key.
This study guide looks at levels of measurement and the shape, measures of center (median, mean, mode), and measures of spread (standard deviation) of a data set. It also compares the measures for population vs the measures for sample.
These flashcards help you study important terms and vocabulary from Mean, Ungrouped Data to Find the Mean, Grouped Data to Find the Mean, Median, and Mode. |
Introduction to Binary Tree – Data Structure and Algorithm Tutorials
A tree is a popular non-linear data structure. Unlike arrays, stacks, queues, and linked lists, which are linear, a tree represents a hierarchical structure, and its elements are not stored in any sequential order. In a binary tree, each node holds data and two pointers: one to its left child and one to its right child. Let us understand the terminology of trees in detail.
- Root: The root of a tree is the topmost node of the tree that has no parent node. There is only one root node in every tree.
- Parent Node: The node which is a predecessor of a node is called the parent node of that node.
- Child Node: The node which is the immediate successor of a node is called the child node of that node.
- Sibling: Children of the same parent node are called siblings.
- Edge: Edge acts as a link between the parent node and the child node.
- Leaf: A node that has no child is known as the leaf node. It is the last node of the tree. There can be multiple leaf nodes in a tree.
- Subtree: The subtree of a node is the tree considering that particular node as the root node.
- Depth: The depth of the node is the distance from the root node to that particular node.
- Height: The height of the node is the distance from that node to the deepest node of that subtree.
- Height of tree: The Height of the tree is the maximum height of any node. This is the same as the height of the root node.
- Level: A level is the number of parent nodes corresponding to a given node of the tree.
- Degree of node: The degree of a node is the number of its children.
- NULL: The number of NULL nodes in a binary tree is (N+1), where N is the number of nodes in a binary tree.
Why to use Tree Data Structure?
1. One reason to use trees might be because you want to store information that naturally forms a hierarchy. For example, the file system on a computer:
2. Trees (with some ordering e.g., BST) provide moderate access/search (quicker than Linked List and slower than arrays).
3. Trees provide moderate insertion/deletion (quicker than Arrays and slower than Unordered Linked Lists).
4. Like Linked Lists and unlike Arrays, Trees don’t have an upper limit on the number of nodes as nodes are linked using pointers.
The main applications of tree data structure:
- Manipulate hierarchical data.
- Make information easy to search (see tree traversal).
- Manipulate sorted lists of data.
- As a workflow for compositing digital images for visual effects.
- Router algorithms
- Form of multi-stage decision-making (see business chess).
What is a Binary Tree?
A binary tree is a tree data structure composed of nodes, each of which has at most two children, referred to as the left and right child; the tree begins at the root node.
Representation of Binary Tree:
Each node in the tree contains the following:
- Data
- Pointer to the left child
- Pointer to the right child
In C, we can represent a tree node using structures. In other languages, we can use classes as part of their OOP feature. Below is an example of a tree node with integer data.
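The code listing from the original article is not reproduced here; a minimal sketch of such a node in C (field names are illustrative, not taken from the source) is:

struct Node {
    int data;               /* the integer payload stored in this node */
    struct Node* left;      /* pointer to the left child (NULL if absent) */
    struct Node* right;     /* pointer to the right child (NULL if absent) */
};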
Basic Operations On Binary Tree:
- Inserting an element.
- Removing (deleting) an element.
- Searching for an element.
- Traversing the tree. There are four (mainly three) types of traversals in a binary tree, which will be discussed ahead.
Auxiliary Operations On Binary Tree:
- Finding the height of the tree
- Finding the level of a node
- Finding the size of the entire tree
Applications of Binary Tree:
- In compilers, expression trees (an application of binary trees) are used to represent and evaluate expressions.
- Huffman coding trees are used in data compression algorithms.
- Priority queues are another application of binary trees: a binary heap can return the maximum or minimum element in O(1) time.
- Representing hierarchical data.
- Used in editing software such as Microsoft Excel and other spreadsheets.
- Useful for indexing in databases and for storing caches in the system.
- Syntax trees are used in well-known compilers such as GCC and AOCL to perform arithmetic operations.
- Used to find elements quickly (binary search trees).
- Used to enable fast memory allocation in computers.
- Used to perform encoding and decoding operations.
Binary Tree Traversals:
Tree Traversal algorithms can be classified broadly into two categories:
- Depth-First Search (DFS) Algorithms
- Breadth-First Search (BFS) Algorithms
Tree Traversal using Depth-First Search (DFS) algorithm can be further classified into three categories:
- Preorder Traversal (current-left-right): Visit the current node before visiting any nodes inside the left or right subtrees. Here, the order is root – left child – right child: the root node is traversed first, then its left child, and finally the right child.
- Inorder Traversal (left-current-right): Visit the current node after visiting all nodes inside the left subtree but before visiting any node within the right subtree. Here, the order is left child – root – right child: the left child is traversed first, then the root node, and finally the right child.
- Postorder Traversal (left-right-current): Visit the current node after visiting all the nodes of the left and right subtrees. Here, the order is left child – right child – root: the left child is traversed first, then the right child, and finally the root node.
Tree Traversal using the Breadth-First Search (BFS) algorithm has one main form:
- Level Order Traversal: Visit nodes level by level, from left to right within each level. Here, the traversal is level-wise: the leftmost node of a level is traversed first, followed by the other nodes of that level from left to right.
Let us traverse the following tree with all four traversal methods:
Pre-order Traversal of the above tree: 1-2-4-5-3-6-7
In-order Traversal of the above tree: 4-2-5-1-6-3-7
Post-order Traversal of the above tree: 4-5-2-6-7-3-1
Level-order Traversal of the above tree: 1-2-3-4-5-6-7
Implementation of Binary Tree:
Let us create a simple tree with 4 nodes. The created tree would be as follows.
Below is the Implementation of the binary tree:
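The original listing is likewise missing, so the following is a hedged, self-contained sketch in C. It assumes the four-node tree is the root 1 with children 2 and 3 and a left grandchild 4 (the usual introductory shape); the exact tree in the source figure may differ. The node structure from above is repeated for completeness.

#include <stdio.h>
#include <stdlib.h>

/* A binary tree node: integer data plus pointers to the left and right children. */
struct Node {
    int data;
    struct Node* left;
    struct Node* right;
};

/* Allocate a new node with the given data and no children. */
struct Node* newNode(int data) {
    struct Node* node = (struct Node*)malloc(sizeof(struct Node));
    node->data = data;
    node->left = NULL;
    node->right = NULL;
    return node;
}

/* Preorder: root - left - right */
void preorder(struct Node* node) {
    if (node == NULL) return;
    printf("%d ", node->data);
    preorder(node->left);
    preorder(node->right);
}

/* Inorder: left - root - right */
void inorder(struct Node* node) {
    if (node == NULL) return;
    inorder(node->left);
    printf("%d ", node->data);
    inorder(node->right);
}

/* Postorder: left - right - root */
void postorder(struct Node* node) {
    if (node == NULL) return;
    postorder(node->left);
    postorder(node->right);
    printf("%d ", node->data);
}

int main(void) {
    /* Assumed four-node tree:
             1
            / \
           2   3
          /
         4                       */
    struct Node* root = newNode(1);
    root->left = newNode(2);
    root->right = newNode(3);
    root->left->left = newNode(4);

    printf("Preorder:  "); preorder(root);  printf("\n");
    printf("Inorder:   "); inorder(root);   printf("\n");
    printf("Postorder: "); postorder(root); printf("\n");
    return 0;
}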
Summary: A tree is a hierarchical data structure. The main uses of trees include maintaining hierarchical data and providing moderate-cost access and insert/delete operations. Binary trees are special cases of trees in which every node has at most two children.
Below are set 2 and set 3 of this post.
Properties of Binary Tree
Types of Binary Tree
In geometric algebra, a blade is a generalization of the concept of scalars and vectors to include simple bivectors, trivectors, etc. Specifically, a k-blade is any object that can be expressed as the exterior product (informally wedge product) of k vectors, and is of grade k.
- A 0-blade is a scalar. The inner product or dot product of two vectors a and b is a 0-blade, written a · b.
- A 1-blade is a vector. Every vector is simple.
- A 2-blade is a simple bivector. Linear combinations of 2-blades also are bivectors, but need not be simple, and are hence not necessarily 2-blades. A 2-blade may be expressed as the wedge product of two vectors a and b, written a ∧ b.
- A 3-blade is a simple trivector, that is, it may be expressed as the wedge product of three vectors a, b, and c, written a ∧ b ∧ c.
- In a space of dimension n, a blade of grade n − 1 is called a pseudovector.
- The highest grade element in a space is called a pseudoscalar, and in a space of dimension n is an n-blade.
- In a space of dimension n, there are k(n − k) + 1 dimensions of freedom in choosing a k-blade, of which one dimension is an overall scaling multiplier.
In an n-dimensional space, there are blades of grade 0 through n. A vector subspace of finite dimension k may be represented by the k-blade formed as a wedge product of all the elements of a basis for that subspace.
For example, in 2-dimensional space scalars are described as 0-blades, vectors are 1-blades, and area elements are 2-blades known as pseudoscalars, in that they are one-dimensional objects distinct from regular scalars.
In three-dimensional space, 0-blades are again scalars and 1-blades are three-dimensional vectors, but in three dimensions areas have an orientation, so while 2-blades are area elements, they are oriented. 3-blades (trivectors) represent volume elements, and in three-dimensional space these are scalar-like – i.e., 3-blades in three dimensions form a one-dimensional vector space.
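As a small worked illustration (not part of the original text), in three dimensions with orthonormal basis vectors $e_1, e_2, e_3$, the 2-blade formed from vectors $\mathbf{a}$ and $\mathbf{b}$ expands onto the three basis bivectors:

$$\mathbf{a}\wedge\mathbf{b} = (a_1 b_2 - a_2 b_1)\,e_1\wedge e_2 + (a_2 b_3 - a_3 b_2)\,e_2\wedge e_3 + (a_1 b_3 - a_3 b_1)\,e_1\wedge e_3 .$$

Since every bivector in three dimensions can be written in this form, every 3D bivector is simple, i.e., a 2-blade; in four or more dimensions this is no longer true.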
- Marcos A. Rodrigues (2000). "§1.2 Geometric algebra: an outline". Invariants for pattern recognition and classification. World Scientific. p. 3 ff. ISBN 981-02-4278-6.
- William E Baylis (2004). "§4.2.3 Higher-grade multivectors in Cℓn: Duals". Lectures on Clifford (geometric) algebras and applications. Birkhäuser. p. 100. ISBN 0-8176-3257-3.
- John A. Vince (2008). Geometric algebra for computer graphics. Springer. p. 85. ISBN 1-84628-996-3.
- For Grassmannians (including the result about dimension) a good book is: Griffiths, Phillip; Harris, Joseph (1994), Principles of Algebraic Geometry, Wiley Classics Library, New York: John Wiley & Sons, ISBN 978-0-471-05059-9, MR 1288523. The proof of the dimensionality is actually straightforward: take k vectors and wedge them together, then perform elementary column operations on these (factoring the pivots out) until the top k × k block consists of elementary basis vectors. The wedge product is then parametrized by the product of the pivots and the lower k × (n − k) block.
- David Hestenes (1999). New foundations for classical mechanics: Fundamental Theories of Physics. Springer. p. 54. ISBN 0-7923-5302-1.
General references
- David Hestenes, Garret Sobczyk (1987). "Chapter 1: Geometric algebra". Clifford Algebra to Geometric Calculus: A Unified Language for Mathematics and Physics. Springer. p. 1 ff. ISBN 90-277-2561-6.
- Chris Doran and Anthony Lasenby (2003). Geometric algebra for physicists. Cambridge University Press. ISBN 0-521-48022-1.
- A Lasenby, J Lasenby & R Wareham (2004) A covariant approach to geometry using geometric algebra Technical Report. University of Cambridge Department of Engineering, Cambridge, UK.
- R Wareham, J Cameron, & J Lasenby (2005). "Applications of conformal geometric algebra to computer vision and graphics". In Hongbo Li, Peter J. Olver, Gerald Sommer. Computer algebra and geometric algebra with applications. Springer. p. 329 ff. ISBN 3-540-26296-2.
- A Geometric Algebra Primer, especially for computer scientists. |
Types of Correlation
1. Positive and Negative Correlation
2. Linear and Non-linear Correlation
3. Simple, Partial and Multiple Correlation
1. Positive and Negative Correlation: If changes in two variables are in the same direction, that is, an increase in one variable is associated with a corresponding increase in the other, the correlation is said to be positive. For example: an increase in price and an increase in supply; an increase in a father's age and an increase in his son's age; a higher amount of capital employed associated with a higher expected profit.
On the other hand, if the variations or fluctuations in the two variables are in opposite directions, in other words, if an increase in one variable is associated with a corresponding decrease in the other or vice versa, the correlation is said to be negative. For example, an increase in price is associated with a decrease in demand and vice versa; thus price and demand have a negative correlation.
2. Linear and Non-linear Correlation: The distinction between linear and non-linear correlation is based upon the constancy of the ratio of change between the two variables. If the amount of change in one variable tends to bear a constant ratio to the change in the other variable, the correlation is said to be linear. For example, if doubling the raw material or the number of direct workers in a factory also doubles production, and vice versa, the correlation is linear.
On the other hand, the correlation is called curvilinear (non-linear) if the amount of change in one variable does not bear a constant ratio to the change in the other variable. For example, the amount spent on advertisement will not change the amount of sales in the same ratio; the variations in the two variables are not in a constant ratio.
Thus, linear and non-linear correlation may each be positive or negative, as is clear from the following chart.
Thus it is clear from the above that:
1. If changes in the two variables are in the same direction and in a constant ratio, the correlation is linear and positive. For example, if every 10% increase in inflation results in a 15% increase in the general price level, the correlation between inflation and the general price level is linear and positive.
2. If changes in the two variables are in opposite directions and in a constant ratio, the correlation is linear and negative. For example, if every 5% increase in the price of a commodity is associated with a 10% decrease in demand, the correlation between price and demand is linear and negative.
3. If changes in the two variables are in the same direction but not in a constant ratio, the correlation is positive and non-linear (curvilinear). For example, if for every 10% increase in the quantity of money in circulation the general price level rises by 5% or 6%, the correlation between inflation and the general price level is positive and curvilinear.
4. If changes in the two variables are in opposite directions and not in a constant ratio, the correlation is negative and curvilinear. For example, if every 5% increase in the price of a commodity is associated with a 2% to 10% decrease in demand, the correlation between price and demand is said to be negative and curvilinear.
A small numerical sketch of the positive/negative distinction follows below.
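As a brief illustration (not part of the original text; the data values are made up), the sign of the Pearson correlation coefficient r captures the positive/negative distinction for linear correlation. A self-contained C sketch:

#include <stdio.h>
#include <math.h>

/* Pearson correlation coefficient r for paired samples x[i], y[i].
   r > 0 indicates positive linear correlation, r < 0 negative. */
double pearson(const double *x, const double *y, int n) {
    double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        sx += x[i];  sy += y[i];
        sxx += x[i] * x[i];  syy += y[i] * y[i];
        sxy += x[i] * y[i];
    }
    double cov = sxy - sx * sy / n;   /* n times the sample covariance */
    double vx  = sxx - sx * sx / n;   /* n times the variance of x */
    double vy  = syy - sy * sy / n;   /* n times the variance of y */
    return cov / sqrt(vx * vy);
}

int main(void) {
    /* Hypothetical data: price rises while demand falls. */
    double price[]  = {10, 12, 14, 16, 18};
    double demand[] = {95, 90, 84, 80, 75};
    printf("r = %.3f\n", pearson(price, demand, 5));  /* close to -1 */
    return 0;
}

Run on this sample, r comes out close to -1, matching the price and demand example of negative linear correlation.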
3. Simple, Partial and Multiple Correlation: The distinction between simple, partial and multiple correlation is based upon the number of variables studied. When only two variables are studied, it is a case of simple correlation. When three or more variables are studied, it is a problem of either multiple or partial correlation.
When three or more variables are studied simultaneously, it is called multiple correlation. When the yield of wheat per acre is studied with respect to both a unit change in fertilizer and the rainfall, it is a problem of multiple correlation. In partial correlation, more than two variables are studied, but the influence of the remaining variables is held constant while the relationship between two of them is considered. For example, if the change in the yield of wheat is studied with reference to a unit of fertilizer while rainfall is held constant, it is a case of partial correlation. |
How to Write a Critical Essay
A critical essay is a form of academic writing that analyzes, interprets, and/or evaluates a text. In a critical essay, an author makes a claim about how particular ideas or themes are conveyed in a text, then supports that claim with evidence from primary and/or secondary sources.
In casual conversation, we often associate the word "critical" with a negative perspective. However, in the context of a critical essay, the word "critical" simply means discerning and analytical. Critical essays analyze and evaluate the meaning and significance of a text, rather than making a judgment about its content or quality.
What Makes an Essay "Critical"?
Imagine you've just watched the movie "Willy Wonka and the Chocolate Factory." If you were chatting with friends in the movie theater lobby, you might say something like, "Charlie was so lucky to find a Golden Ticket. That ticket changed his life." A friend might reply, "Yeah, but Willy Wonka shouldn't have let those raucous kids into his chocolate factory in the first place. They caused a big mess."
These comments make for an enjoyable conversation, but they do not belong in a critical essay. Why? Because they respond to (and pass judgment on) the raw content of the movie, rather than analyzing its themes or how the director conveyed those themes.
On the other hand, a critical essay about "Willy Wonka and the Chocolate Factory" might take the following topic as its thesis: "In 'Willy Wonka and the Chocolate Factory,' director Mel Stuart intertwines money and morality through his depiction of children: the angelic appearance of Charlie Bucket, a good-hearted boy of modest means, is sharply contrasted against the physically grotesque portrayal of the wealthy, and thus immoral, children."
This thesis includes a claim about the themes of the film, what the director seems to be saying about those themes, and what techniques the director employs in order to communicate his message. In addition, this thesis is both supportable and disputable using evidence from the film itself, which means it's a strong central argument for a critical essay.
Characteristics of a Critical Essay
Critical essays are written across many academic disciplines and can have wide-ranging textual subjects: films, novels, poetry, video games, visual art, and more. However, despite their diverse subject matter, all critical essays share the following characteristics.
Central claim. All critical essays contain a central claim about the text. This argument is typically expressed at the beginning of the essay in a thesis statement, then supported with evidence in each body paragraph. Some critical essays bolster their argument even further by including potential counterarguments, then using evidence to dispute them.
Evidence. The central claim of a critical essay must be supported by evidence. In many critical essays, most of the evidence comes in the form of textual support: particular details from the text (dialogue, descriptions, word choice, structure, imagery, et cetera) that bolster the argument. Critical essays may also include evidence from secondary sources, often scholarly works that support or strengthen the main argument.
Conclusion. After making a claim and supporting it with evidence, critical essays offer a succinct conclusion. The conclusion summarizes the trajectory of the essay's argument and emphasizes the essay's most important insights.
Tips for Writing a Critical Essay
Writing a critical essay requires rigorous analysis and a meticulous argument-building process. If you're struggling with a critical essay assignment, these tips will help you get started.
Practice active reading strategies. These strategies for staying focused and retaining information will help you identify specific details in the text that will serve as evidence for your main argument. Active reading is an essential skill, especially if you're writing a critical essay for a literature class.
Read example essays. If you're unfamiliar with critical essays as a form, writing one is going to be extremely challenging. Before you dive into the writing process, read a variety of published critical essays, paying careful attention to their structure and writing style. (As always, remember that paraphrasing an author's ideas without proper attribution is a form of plagiarism.)
Resist the urge to summarize. Critical essays should consist of your own analysis and interpretation of a text, not a summary of the text in general. If you find yourself writing lengthy plot or character descriptions, pause and consider whether these summaries are in the service of your main argument or whether they are simply taking up space. |
A sphere has a diameter of 18 centimetres. Work out the volume of the sphere, giving your answer in terms of 𝜋.
So in this problem, we’re told the diameter of a sphere and asked to calculate its volume. Let’s recall first the formula for calculating the volume of a sphere. The formula is that the volume is four-thirds 𝜋𝑟 cubed, where 𝑟 is the radius of the sphere.
Now, we haven’t been given the radius of the sphere; we’ve been given the diameter, but that’s not a problem because the two are very closely related. The radius of a sphere is half of the diameter. So if the diameter of the sphere is 18 centimetres, then the radius must be nine centimetres. Let’s substitute this value of 𝑟 into our formula for calculating the volume.
We have that the volume of the sphere is equal to four-thirds multiplied by 𝜋 multiplied by nine cubed. Now, the question has asked us to give our answer in terms of 𝜋. This suggests we may not have access to a calculator for this problem, so we need to evaluate the numerical part of the volume by hand.
Let’s write that nine cubed as nine multiplied by nine multiplied by nine. Now, there is a factor of three in the denominator and I can cancel that with one of the nines in the numerator. If I divide them both by three, then I now have four multiplied by 𝜋 multiplied by three multiplied by nine multiplied by nine. Let’s think about how to perform this multiplication easily without a calculator.
The four and the three multiply together to give 12, and the nine and the nine multiply together to give 81. So I now have that the volume of the sphere is equal to 12 multiplied by 81 multiplied by 𝜋. So we just need to work out what 12 multiplied by 81 is. Well, we can do this by splitting the 12 up into 10 plus two. 10 times 81 is 810; two times 81 is 162. So we can add these together to find that 12 times 81 is 972. Therefore, our answer for the volume of the sphere in terms of 𝜋 is 972𝜋, and the units for this volume are centimetres cubed. |
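As a compact summary of the working above, written out in LaTeX:

$$V = \frac{4}{3}\pi r^{3} = \frac{4}{3}\pi\left(\frac{18}{2}\right)^{3} = \frac{4}{3}\pi \cdot 9^{3} = 12 \cdot 81\,\pi = 972\pi\ \text{cm}^{3}.$$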
habit slips - definition
unconscious intrusions of a habit when an alternative behavior had been consciously intended
habit slips - example
- - putting sugar in your cereal when you were intending to cut back
- - driving past an intended stop only to realize you didn't get gas when you get home
operant response - definition
label coined by BF Skinner indicating that the subject's response operates on the environment to produce a certain outcome and the consequences of the response can modify future responses; once conditioned, these responses can be extinguished
operant response - example
- - bar press-to-food should lead to an increase in bar pressing (positive reinforcement)
- - running a red light and getting hit should reduce red light running in the future (punishment)
discrete trial - definition
instrumental learning technique in which the subject is given separate occasions during which the response may be performed and in which the beginning and the end of the trial is known; used by Thorndike
discrete trial - example
maze, puzzle box, runway, two-goal box
intervention - definition
action taken to cause a change in behavior, cognition, and/or emotional state
intervention - example
- - types of reinforcement given for a behavior: positive, negative, omission and punishment
- - giving 3 M&M's to a child for making it to the bathroom on time
learned helplessness - definition
learning that there is an explicit lack of contingency between responses and an aversive outcome: there is no response that is causing punishment, nor is there one that prevents it
learned helplessness - example
an individual in a situation where all their actions had no influence over the outcome will become passive and not do anything next time they are in the same (or similar) situation - two separate groups of students are exposed to loud tone pulses and told they have the ability to stop the tones; the learned-helplessness group's button to turn off the tone did not work, and in the second phase of the study this group was slower to turn off the tone when they were given the real ability to stop it
primary reinforcer - definition
innate reinforcer; reduces the biological needs of the organism
primary reinforcer - example
food, water, relief from excessive heat, cold or pain
negative contrast - definition
current rf is smaller or nonexistent in contrast with the previous rf, which alters the level of responding of the organism
negative contrast - example
switching from a larger to a smaller reward, the rats run slower than they did when the reward was larger
social reinforcement - definition
powerful class of reinforcers for human behavior in which praise, attention, physical contact and/or facial expressions are used
social reinforcement - example
teacher praising an otherwise uncooperative/defiant student for completing her math problems increases the student's performance on her math problems
nonreward - definition
contingencies in which the response is not followed by a positive reinforcer - extinction
nonreward - example
in lab 3, bar presses by Faraday ceased to produce her expected rf of water, so her bar presses decreased, then disappeared (until the very end in which she had an extinction burst)
shaping - definition
method to train a behavior that is not presently in the organism's repertoire - start by reinforcing a response that is performed and that approximates the desired behavior; once this response occurs at a higher frequency, we reinforce certain deviations in the direction of the target behavior; step by step, the response that is reinforced is shifted slightly away from the one that will no longer be reinforced, through successive approximations
shaping - example
in lab 2, Faraday was reinforced for going to the corner with the bar in it, then for being by the bar, then for touching the bar, then for actually pressing the bar; eventually she was only reinforced for bar presses
chaining - definition
form of instrumental conditioning in which reinforcement occurs only after the final response in the sequence
chaining - example
in lab 5, Faraday learned to hit the lever which turned the light on, then press the bar by the lights which activated the dipper which then gave her a reinforcement of water; if she did any of these responses out of order, she didn't receive any water
antecedents - definition
any behavior or occurrence that precedes a behavior/event
antecedents - example
organization of all supplies needed before beginning to study for a test (books, pens, notes, snacks, etc...)
avoidance-avoidance - definition
an organism is faced with two choices/goals, both of which are negative; the conflict arises when the organism is required to choose between the two negative options; movement away from one goal is countered by an increase in the repellence of the other goal, so that the individual returns to the point where he was at the beginning of the conflict
avoidance-avoidance - example
I can choose to study for my nursing test or write an APA paper for my nursing class
frustration hypothesis - definition
the frustrating aftereffects of nonreward become associated with the subsequent occurrence of reward; the frustration experienced after nonreward on one trial is followed by reward on the next trial and frustration becomes a discriminative stimulus for reward; this is found in PREE
frustration hypothesis - example
in PREE trials, a rat is only partially reinforced for bar presses and the bar press behavior increases in response to emotional frustration for not being reinforced; when a reinforcement is given, it reinforces the emotional frustration level so the rat continues to bar press when there is no water reinforcement because they are increasing the frustration level in expectation of the next water reinforcement
S-R learning - definition
a discriminative stimulus leads to an instrumental response; stimulus-response conditioning; predicts that the discriminative stimuli come to elicit the previously reinforced instrumental responses; association of a new stimulus with a pre-existing stimulus; the response seems to have become separated from its reinforcing consequences and has become an automatic reaction to the stimulus; the instrumental response sometimes persists even though reinforcement is freely available and the response is no longer needed to obtain reward
S-R learning - example
advertising strategies: a customer didn't previously believe they needed a certain pair of shoes or jeans, but when they see them on a mannequin or model in a window, they now believe they NEED that item of clothing
Edward Thorndike began exploring the concept of learning by studying animals, specifically cats. He developed a theory called trial-and-error learning. Explain his theory including the process he used to demonstrate his theory of learning. Include explanations for the law of effect and the law of exercise. Why was this type of learning included in the category of instrumental response learning and stimulus-response learning?
- - Trial-and-error learning: Now known as instrumental conditioning; Sought to systematize the principles involved in the development of adaptive behavior; The organism tries many behaviors at first, and then the ineffective responses cease over time and the effective responses increase
- - Three elements: Discriminative stimulus: environmental stimuli present at the time the response occurs; Response: organism’s action based on the environmental (discriminative) stimuli; Consequence: the result of the organism’s action
- - Cats in a puzzle box:Placed a cat in a wooden crate with a hinged door and a trip mechanism somewhere in the box; A disguised mechanism would open the door and the cat could escape; By using responses not already in the cat’s repertoire he was able to study how the new response developed with practice; Each time the cat tripped the mechanism and escaped, he would place the cat back into the box right away and time how quickly each successive escape took - the faster the cat was, the more efficient the learning had been
- - Law of effect: Associations were strengthened when the behavior resulted in the goal; Responses that produced a satisfying consequence became connected to the situation and become more likely to occur during the next trials, while responses that produced unsatisfying consequences dropped out and disappeared over time; Statement of the principle of reinforcement: behavior, in its form, timing, and probability of occurrence, is modified by the consequences of the behavior
- - Law of exercise:Associative shifting: after you have learned something, you don’t really have to think about the actions involved each time you perform them, you just do them; “Use it or lose it”: The more practice the stronger the connection, and the less practice the less likely the connection will be maintained; True for many cases but the ability to ride a bike after decades of not riding one would be a situation where this doesn’t hold true
- - Why instrumental response and stimulus-response learning?: Thorndike’s trial-and-error learning was the foundation for S-R learning, in which the discriminative stimulus is connected to the instrumental response and reinforcement is what conditions or strengthens the S-R connection. Because of Thorndike’s laws of effect and exercise, S-R learning highlighted the difference between mechanistic and insight-based learning - Thorndike believed in mechanistic learning and his trial-and-error learning trials demonstrate that.
Based upon the theories of B.F. Skinner, particularly as he explained them in his book “Walden Two,” a community of people began “Twin Oaks.” Explain the principles used to establish this community. Provide examples of how the principles were put into practice in the community. How do they relate to Skinner’s theories of learning and rewards? What did Skinner think about Twin Oaks? Why?
- - Twin Oaks principles: Skinner’s behaviorism; Based on community sharing, no violence, no aggression, no jealousy, no competition; Everything based on reinforcement as primary mover of human behavior; Basic values: cooperation, egalitarianism, income-sharing and non-violence
- - How the principles were put into practice: Everyone works 40 or so hours a week; Children are raised by Metas in a children’s house modeled after Israeli kibbutzim; Labor-credit work system; Walden Two Planner-Management system
- - Relations to Skinner’s theories of learning and rewards: The Twin Oaks community used fixed ratio reinforcement schedules, whereas Skinner believed variable ratio schedules were more effective
- - Skinner’s opinion of Twin Oaks and why: Liked Twin Oaks but they didn’t follow his behaviorism as much as he would have liked; Wouldn’t have set up the government the way they did - they used one person to delegate jobs, but that person wasn’t ever really in charge, just there by default; Impressed by how self-sufficient they were; Would have liked to see variable ratio reinforcement instead of the fixed ratio (which seemed to work better for the people in this community)
Terry argues, as do other learning theorists, that “instrumental conditioning” and “operant learning” are different. Terry suggests that the difference is important but not recognizable outside the field. Explain the differences between instrumental and operant learning. Why do you think these two theories are often seen as the same?
- - Instrumental conditioning: Tends to adopt a particular form of theorizing in its attempts to explain learning, often postulating theoretical constructs; Discrete Trials: the subject is given separate occasions during which the response may be performed; the beginning and the end of the trial is known; Averages out individual variations in performance by using groups of subjects
- - Operant learning: Strictly functional approach: the frequency of responding is a function of the amount of reinforcement, or of its delay, or its schedule, and so on; Continuous availability to the response; Seeks to demonstrate lawful relationships in a single subject
- - Why are they seen as the same?: They both use reinforcement for behaviors; Behavior occurs because of the consequence it produces; The response is always voluntary and under conscious control; The behavior is goal-directed; So without knowledge about the discrete trial and how the information is interpreted, they look the same
Clark Hull developed the drive reduction theory of learning. Explain his theory and how it can help us understand what it takes for people to change. Make sure you include the definitions for D, K, H, V, and I in your explanation.
- - Drive reduction theory of learning: Reinforcers are stimuli that reduce drives based on biological needs; Once the biological need is satiated, the drive is reduced (though it is only temporary); The drive reduction serves as a reinforcer for learning; Uses a combined influence of various factors
- - D x H x K x V - I = response
- - D: drive, level of motivation
- - K: incentive motivation, quantity or quality of goal
- - H: habit, past response (SUR = innate and SHR = acquired)
- - V: stimulus intensity, salience
- - I: inhibition, fatigue level
- - How can this theory help us understand what it takes for people to change?: People change because of a need to change, and when that need is biological, their motivational factor is intensified. However, as people’s drives are satisfied, the reinforcements become less effective, and one must wait until there is a deprivation state again. The higher each factor, the greater the response.
Explain the basic concepts of Skinner’s reward or reinforcement theory including the four basic types of reinforcements, explanations for each, and examples. Provide information regarding what type of reinforcement works most efficiently and why? What type of reinforcement is least effective and why?
- - Reward/reinforcement theory: All behavior occurs because of the types of reinforcement it receives: positive, negative, omission training and/or punishment, along with the schedule of reinforcements. Reinforcers are pleasant/appetitive or unpleasant/aversive and vary based on addition or subtraction of the reinforcer.
- - Positive: Increases behavior; Most effective: because positivity lasts longer and makes us feel better about our behaviors and it doesn’t cause detrimental side effects; Example: Child receives M&M’s each time she makes it to the bathroom on time
- - Negative: Behavior stops aversive stimulus; aversive stimulus is removed because of the behavior; Escape: A behavior can stop a continuous, aversive stimulus; Avoidance: A behavior prevents the occurrence of an aversive stimulus; Example: escape: click your seatbelt to turn off the annoying buzzer; avoidance: click you seatbelt before the annoying buzzer has a chance to start
- - Omission training: Behavior presents the delivery of a pleasant stimulus, then the pleasant stimulus is taken away to decrease the response; Example: Time out away from toys and attention from others
- - Punishment: Presenting an aversive (usually physical) stimulus to decrease a response; Least effective: because the punishment has to be very severe, immediate and consistent; its side effects are usually not worth the actual behavioral outcomes, and it can actually cause the unwanted behavior to increase rather than decrease. Also, if the behavior was violent in some way, using punishment (violence) for violence can be confusing and contradictory; Example: Spanking a child after they have hit another child
Part of Skinner’s theory of learning has to do with schedules of reinforcement. Define the 5 types. Give an example of each type. When would each schedule be used most effectively and why?
- - Continuous: reinforcement for each instance of a response; Effectiveness: at the onset of training, continuous reinforcement usually produces more rapid conditioning but over time continuous reinforcement doesn’t encourage the continuation of the behavior if the reinforcement is taken away so extinction is very quick; Example: M&M each time the child goes to potty in the toilet might cause the child to only use the toilet when there is a guarantee of candy
- - Fixed ratio: a reinforcement is given after a certain number of tasks are performed; Effectiveness: lead to high response rates and is based solely on the participant’s efforts; however there are pauses after the delivery of the reinforcement before the behavior increases; Example: for every 5 bar presses, Faraday receives a rf of water
- - Variable ratio: a reinforcement is delivered after an average number of performances; Effectiveness: not highly effective at the beginning of conditioning but it encourages ongoing strong responses later on; Example: sales associates’ attempts to help customers are sometimes rewarded with sales. Which customer will buy may be unpredictable, but more attempts should produce more sales.
- - Fixed interval: reinforcement is delivered after a set time; Effectiveness: this produces a gradual increase in performance; Example: the bar press won’t produce any water for 60 seconds when the light is off, no matter how many times Faraday presses the bar, so she waits for the lights to come on before she presses the bar again
- - Variable interval: reinforcement is delivered on an average time schedule; Effectiveness: not highly effective at the beginning of conditioning but it encourages ongoing strong responses later on; Example: checking Facebook for updates - the updates may arrive unpredictably, but the recipient won’t know unless she checks for them
Human behavior is complex which makes explaining and changing it challenging. Explain the concepts of multiple schedules of reinforcement including concurrent schedules, under-matching, overmatching, tandem schedules, and chaining. Give examples of how a person might be responding to more than one schedule of reinforcement at a time. How do people make decisions about what should be done now or what should be done later based upon these concepts?
- - Concurrent schedules: two or more responses reinforced on different schedules --> short delay reward = cake while long delay reward = healthy teeth; the short delay reward becomes less appealing the longer the person waits; Under-matching: the response proportion is less than the available reinforcement proportion for that stimulus; reinforcement is still left over; the organism doesn’t work enough to receive all the rf possible; Example: Person works out only 30 minutes every other day and not very hard when they know if they worked out 45 minutes every day with more intensity they would have better, faster results; Overmatching: the response proportions are greater than the available reinforcement proportions; the organism works too hard given the available rf; Example: I could have printed off a calendar and used it to keep track of my behavior project; instead I spent an hour and a half scrap-booking a calendar to keep track of my behavior project
- - Chaining: two or more cues presented successively - responding to cue #1 results in the presentation of cue #2, and responding to cue #2 results in a rf; Tandem schedules: no external stimuli are used as cues and there is no discrete ending to one event; the first sequence gives information about the next sequence; tends to work more effectively than chaining alone because the end of the trial is not as evident - Faraday hits the lever, which turns the light on, which tells her to press the bar, and bar pressing will produce the rf; Example: In soccer, you have the ball and know that your goal is to dribble it down to the goal and score, and your score is reinforced by the cheering of your teammates, coach and crowd, but there was no external stimulus telling you to dribble, shoot, score
- - How might a person be responding to more than one schedule of reinforcement at a time?: Multiple schedules can be seen in the everyday of life of a college student: when this class is over, I get to eat, but when this class is over, I get to go home, when I get in my car, I can listen to music, when I get home, I can do homework, when my alarm goes off, I get up, but when it goes off again it means I have to leave for a class
- - How do people make decisions about what should be done now or what should be done later?: We make decisions by prioritizing - sometimes our priorities are based on immediate gratification (cake now) or they are based on long-term benefits (work out now, no cake and better abs)
Explain the Premack Principle using the information presented in class. What does this principle have to do with the effectiveness of reinforcers? Make sure you define each element in the principle. What did Timberlake and Allison add to the Premack Principle?
- - Premack Principle: the opportunity to perform the higher-probability response will serve as a reinforcer for the lower-probability response
- - Effectiveness of reinforcers?: the reinforcers have to be effective to increase the likelihood of the low-probability response; the reinforcer can be any activity the person is more likely to engage in than the instrumental response (TV, video games, play time, shopping, dancing, etc...)
- - Elements: High probability action: watching an episode of Gilmore Girls; Low probability action: reviewing my notes for at least 30 minutes; How to get the low probability action to happen more than the high: I would have to review my notes for at least 30 minutes before I could watch an episode of Gilmore Girls, thus I would be using the high-probability response to reinforce the low-probability response
- - Timberlake and Allison’s additions: Said the principle didn’t give all the information and the organisms weren’t really free to choose (what if they didn’t like either option); Developed the behavioral bliss point to distribute activities among available response options - given this environment and this time what are the possible reinforcements? - you have to figure out what they will respond to - response restrictions are imposed to increase the low-probability response
Terry discusses that punishment can work. What is necessary in order for punishment to be effective in training? Explain why human beings are not very good at using punishment as an effective modification method? What is meant by “paradoxical rewarding effects” of punishment?
- - What is necessary for punishment to be effective?: Punishment must occur in a timely manner - the more immediate the better; The punishment must be meaningful to the behavior; The degree of response suppression is a function of intensity so punishment cannot gradually increase or it is not effective; More effective if on a continuous reinforcement schedule than any interval or variable; Estes, Miller and Masserman worked with punishment; Estes found that extinction without punishment was more effective than punishment; Miller found sudden, intense punishment is most effective; Masserman found that punishment must be intense to decrease behavior
- - Why humans aren't very good at using it: We tend to use punishment as an outlet for our own anger, rather than as a consequence intended to decrease a behavior; we aren't consistent or timely enough, and the punishment may be not aversive enough, or too aversive, for learning to occur; Punishment can lead to harmful/unwanted side effects: fear, aggression, and avoidance
- - Paradoxical rewarding effects: pairing a punishing stimulus with a positive reinforcer can convert it into a secondary reinforcer; a punishing event (an air blast) may become a conditioned reinforcer by virtue of being paired with a positive reinforcer (food). Then, the gradual increase in the intensity of punishment minimizes its power to suppress behavior. Eventually, the cat is bar pressing for blasts of air instead of food.
There are some behaviors that we want to extinguish. Explain how extinction can be accomplished with operant conditioning. What schedules of reinforcement are most susceptible to extinction and what schedules of reinforcement are most resistant to extinction? Why? Make sure you include the concept of partial reinforcement extinction effect in your discussion.
- - Extinction: a nonreward contingency in which reward is omitted after responses that once produced positive reinforcement; by withholding the positive reinforcement, the organism should stop performing the response - in lab 3, Faraday was not given any reinforcement for bar presses in the dark, and eventually, by the very end, she stopped pressing the bar
- - Side effects: Extinction bursts: temporary increase of the nonreinforced behavior; Spontaneous recovery: after a delay interval, the response recovers; When the old response no longer produces reward, the organism engages in new behaviors to try to restore reward and behavioral variability increases - can lead to adaptation
- - Susceptible schedules of reinforcement: continuous reinforcement schedules are very susceptible to extinction
- - Resistant schedules of reinforcement: intermittent reinforcement (PRE)
- - Partial reinforcement extinction effect: variable schedule of partial reinforcement which causes resistance to extinction later on; Discrimination hypothesis: when the reinforcement doesn’t come, the partially reinforced rat doesn’t discriminate this difference until several nonrewarded trials have occurred; Frustration hypothesis: frustrating aftereffects of nonreward become associated with the subsequent occurrence of reward - this requires frequent transitions between nonrewarded and rewarded trials for this association to occur; Sequential hypothesis: memory of nonreward on one trial becomes associated with the occurrence of reward on a later trial so at the start of a new trial, the participant remembers the outcome of the previous trial and associates it with the outcome of the current trial |
In statistics, the likelihood function (often simply called the likelihood) measures the goodness of fit of a statistical model to a sample of data for given values of the unknown parameters. It is formed from the joint probability distribution of the sample, but viewed and used as a function of the parameters only, thus treating the random variables as fixed at the observed values.
The likelihood function describes a hypersurface whose peak, if it exists, represents the combination of model parameter values that maximize the probability of drawing the sample obtained. The procedure for obtaining these arguments of the maximum of the likelihood function is known as maximum likelihood estimation, which for computational convenience is usually done using the natural logarithm of the likelihood, known as the log-likelihood function. Additionally, the shape and curvature of the likelihood surface represent information about the stability of the estimates, which is why the likelihood function is often plotted as part of a statistical analysis.
The case for using likelihood was first made by R. A. Fisher, who believed it to be a self-contained framework for statistical modelling and inference. Later, Barnard and Birnbaum led a school of thought that advocated the likelihood principle, postulating that all relevant information for inference is contained in the likelihood function. But even in frequentist and Bayesian statistics, the likelihood function plays a fundamental role.
The likelihood function is usually defined differently for discrete and continuous probability distributions. A general definition is also possible, as discussed below.
Let $X$ be a discrete random variable with probability mass function $p$ depending on a parameter $\theta$. Then the function
$$\mathcal{L}(\theta \mid x) = p_\theta(x) = P_\theta(X = x),$$
considered as a function of $\theta$, is the likelihood function, given the outcome $x$ of the random variable $X$. Sometimes the probability of "the value $x$ of $X$ for the parameter value $\theta$" is written as P(X = x | θ) or P(X = x; θ). $\mathcal{L}(\theta \mid x)$ should not be confused with $P(\theta \mid x)$; the likelihood is equal to the probability that a particular outcome $x$ is observed when the true value of the parameter is $\theta$, and hence it is equal to a probability density over the outcome $x$, not over the parameter $\theta$.
Consider a simple statistical model of a coin flip: a single parameter $p_\text{H}$ that expresses the "fairness" of the coin. This parameter is the probability that the coin lands heads up ("H") when tossed. $p_\text{H}$ can take on any value within the range 0.0 to 1.0. For a perfectly fair coin, $p_\text{H} = 0.5$.
Imagine flipping a fair coin twice, and observing the following data: two heads in two tosses ("HH"). Assuming that each successive coin flip is i.i.d., the probability of observing HH is
$$P(\text{HH} \mid p_\text{H} = 0.5) = 0.5^2 = 0.25.$$
Hence, given the observed data HH, the likelihood that the model parameter $p_\text{H}$ equals 0.5 is 0.25. Mathematically, this is written as
$$\mathcal{L}(p_\text{H} = 0.5 \mid \text{HH}) = 0.25.$$
This is not the same as saying that the probability that $p_\text{H} = 0.5$, given the observation HH, is 0.25. (For that, we could apply Bayes' theorem, which implies that the posterior probability is proportional to the likelihood times the prior probability.)
Suppose that the coin is not a fair coin, but instead has $p_\text{H} = 0.3$. Then the probability of getting two heads is
$$P(\text{HH} \mid p_\text{H} = 0.3) = 0.3^2 = 0.09.$$
More generally, for each value of $p_\text{H}$, we can calculate the corresponding likelihood $\mathcal{L}(p_\text{H} \mid \text{HH}) = p_\text{H}^2$. The result of such calculations is displayed in Figure 1.
In Figure 1, the integral of the likelihood over the interval [0, 1] is $\int_0^1 p_\text{H}^2 \, dp_\text{H} = 1/3$. That illustrates an important aspect of likelihoods: likelihoods do not have to integrate (or sum) to 1, unlike probabilities.
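As a concrete check of these numbers, here is a minimal Python sketch (my own illustration, not part of the original text); the function name `likelihood_hh` and the grid size are arbitrary choices:

```python
# Likelihood of p_H given two observed heads ("HH") is p_H**2.
import numpy as np

def likelihood_hh(p_heads):
    """Likelihood of the fairness parameter p_heads given two observed heads."""
    return p_heads ** 2

print(likelihood_hh(0.5))   # 0.25, as in the text
print(likelihood_hh(0.3))   # 0.09

# The likelihood need not integrate to 1 over the parameter:
grid = np.linspace(0.0, 1.0, 100001)
print(np.trapz(likelihood_hh(grid), grid))  # ~0.3333, i.e. 1/3
```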
Let $X$ be a random variable following an absolutely continuous probability distribution with density function $f$ depending on a parameter $\theta$. Then the function
$$\mathcal{L}(\theta \mid x) = f_\theta(x),$$
considered as a function of $\theta$, is the likelihood function (of $\theta$, given the outcome $x$ of $X$). Sometimes the density function for "the value $x$ of $X$ for the parameter value $\theta$" is written as $f(x \mid \theta)$ or $f(x; \theta)$. $\mathcal{L}(\theta \mid x)$ should not be confused with $P(\theta \mid x)$; the likelihood is equal to the probability density at a particular outcome $x$ when the true value of the parameter is $\theta$, and hence it is equal to a probability density over the outcome $x$, not over the parameter $\theta$.
In measure-theoretic probability theory, the density function is defined as the Radon–Nikodym derivative of the probability distribution relative to a common dominating measure. The likelihood function is that density interpreted as a function of the parameter (possibly a vector), rather than the possible outcomes. This provides a likelihood function for any statistical model with all distributions, whether discrete, absolutely continuous, a mixture or something else. (Likelihoods will be comparable, e.g. for parameter estimation, only if they are Radon–Nikodym derivatives with respect to the same dominating measure.)
The discussion above of likelihood with discrete probabilities is a special case of this using the counting measure, which makes the probability of any single outcome equal to the probability density for that outcome.
Given no event (no data), the probability and thus likelihood is 1; any non-trivial event will have a lower likelihood.
Among many applications, we consider here one of broad theoretical and practical importance. Given a parameterized family of probability density functions (or probability mass functions in the case of discrete distributions)
$$x \mapsto f(x \mid \theta),$$
where $\theta$ is the parameter, the likelihood function is
$$\theta \mapsto f(x \mid \theta),$$
written
$$\mathcal{L}(\theta \mid x) = f(x \mid \theta),$$
where $x$ is the observed outcome of an experiment. In other words, when $f(x \mid \theta)$ is viewed as a function of $x$ with $\theta$ fixed, it is a probability density function, and when viewed as a function of $\theta$ with $x$ fixed, it is a likelihood function.
This is not the same as the probability that those parameters are the right ones, given the observed sample. Attempting to interpret the likelihood of a hypothesis given observed evidence as the probability of the hypothesis is a common error, with potentially disastrous consequences. See prosecutor's fallacy for an example of this.
From a geometric standpoint, if we consider $f(x, \theta)$ as a function of two variables, then the family of probability distributions can be viewed as the family of curves parallel to the $x$-axis, while the family of likelihood functions is the orthogonal family of curves parallel to the $\theta$-axis.
The use of the probability density in specifying the likelihood function above is justified as follows. Given an observation $x_j$, the likelihood for the interval $[x_j, x_j + h]$, where $h > 0$ is a constant, is given by $\mathcal{L}(\theta \mid x \in [x_j, x_j + h])$. Observe that
$$\arg\max_\theta \mathcal{L}(\theta \mid x \in [x_j, x_j + h]) = \arg\max_\theta \frac{1}{h}\, \mathcal{L}(\theta \mid x \in [x_j, x_j + h]),$$
since $h$ is positive and constant. Because
$$\frac{1}{h}\, \mathcal{L}(\theta \mid x \in [x_j, x_j + h]) = \frac{1}{h} \Pr(x_j \le x \le x_j + h \mid \theta) = \frac{1}{h} \int_{x_j}^{x_j + h} f(x \mid \theta)\, dx,$$
where $f(x \mid \theta)$ is the probability density function, it follows that
$$\arg\max_\theta \mathcal{L}(\theta \mid x \in [x_j, x_j + h]) = \arg\max_\theta \frac{1}{h} \int_{x_j}^{x_j + h} f(x \mid \theta)\, dx.$$
The first fundamental theorem of calculus and l'Hôpital's rule together provide that
$$\lim_{h \to 0^+} \frac{1}{h} \int_{x_j}^{x_j + h} f(x \mid \theta)\, dx = f(x_j \mid \theta),$$
and so maximizing the probability density at $x_j$ amounts to maximizing the likelihood of the specific observation $x_j$.
The above can be extended in a simple way to allow consideration of distributions which contain both discrete and continuous components. Suppose that the distribution consists of a number of discrete probability masses $p_k(\theta)$ and a density $f(x \mid \theta)$, where the sum of all the $p_k$'s added to the integral of $f$ is always one. Assuming that it is possible to distinguish an observation corresponding to one of the discrete probability masses from one which corresponds to the density component, the likelihood function for an observation from the continuous component can be dealt with in the manner shown above. For an observation from the discrete component, the likelihood function is simply
$$\mathcal{L}(\theta \mid x) = p_k(\theta),$$
where $k$ is the index of the discrete probability mass corresponding to observation $x$, because maximizing the probability mass (or probability) at $x$ amounts to maximizing the likelihood of the specific observation.
The fact that the likelihood function can be defined in a way that includes contributions that are not commensurate (the density and the probability mass) arises from the way in which the likelihood function is defined up to a constant of proportionality, where this "constant" can change with the observation $x$, but not with the parameter $\theta$.
In the context of parameter estimation, the likelihood function is usually assumed to obey certain conditions, known as regularity conditions. These conditions are assumed in various proofs involving likelihood functions, and need to be verified in each particular application. For maximum likelihood estimation, the existence of a global maximum of the likelihood function is of the utmost importance. By the extreme value theorem, a continuous likelihood function on a compact parameter space suffices for the existence of a maximum likelihood estimator. While the continuity assumption is usually met, the compactness assumption about the parameter space is often not, as the bounds of the true parameter values are unknown. In that case, concavity of the likelihood function plays a key role.
More specifically, if the likelihood function is twice continuously differentiable on the $k$-dimensional parameter space $\Theta$, assumed to be an open connected subset of $\mathbb{R}^k$, there exists a unique maximum $\hat{\theta} \in \Theta$ if the matrix of second partial derivatives is negative definite at every $\theta \in \Theta$ at which the gradient vanishes, and if the likelihood function approaches a constant on the boundary of the parameter space (which may include points at infinity if $\Theta$ is unbounded).
Mäkeläinen et al. prove this result using Morse theory while informally appealing to a mountain pass property. Mascarenhas restates their proof using the mountain pass theorem.
In the proofs of consistency and asymptotic normality of the maximum likelihood estimator, additional assumptions are made about the probability densities that form the basis of a particular likelihood function. These conditions were first established by Chanda. In particular, for almost all $x$ and for all $\theta$ in the parameter space, the derivatives
$$\frac{\partial \log f}{\partial \theta_r}, \qquad \frac{\partial^2 \log f}{\partial \theta_r \, \partial \theta_s}, \qquad \frac{\partial^3 \log f}{\partial \theta_r \, \partial \theta_s \, \partial \theta_t}$$
must exist for all $r$, $s$, $t$ in order to ensure the existence of a Taylor expansion. Second, for almost all $x$ and for every $\theta$, these derivatives must be bounded in absolute value by functions of $x$ alone whose integrals (or expectations) are finite. This boundedness of the derivatives is needed to allow for differentiation under the integral sign. And lastly, it is assumed that the information matrix,
$$\mathcal{I}(\theta) = \operatorname{E}\!\left[ \left( \frac{\partial \log f}{\partial \theta} \right)\! \left( \frac{\partial \log f}{\partial \theta} \right)^{\!\mathsf{T}} \right],$$
is positive definite and its elements are finite. This ensures that the score has a finite variance.
The above conditions are sufficient, but not necessary. That is, a model that does not meet these regularity conditions may or may not have a maximum likelihood estimator of the properties mentioned above. Further, in case of non-independently or non-identically distributed observations additional properties may need to be assumed.
A likelihood ratio is the ratio of any two specified likelihoods, frequently written as:
$$\Lambda(\theta_1 : \theta_2 \mid x) = \frac{\mathcal{L}(\theta_1 \mid x)}{\mathcal{L}(\theta_2 \mid x)}.$$
The likelihood ratio is central to likelihoodist statistics: the law of likelihood states that the degree to which data (considered as evidence) supports one parameter value versus another is measured by the likelihood ratio.
In frequentist inference, the likelihood ratio is the basis for a test statistic, the so-called likelihood-ratio test. By the Neyman–Pearson lemma, this is the most powerful test for comparing two simple hypotheses at a given significance level. Numerous other tests can be viewed as likelihood-ratio tests or approximations thereof. The asymptotic distribution of the log-likelihood ratio, considered as a test statistic, is given by Wilks' theorem.
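To make the Wilks'-theorem statement concrete, the following is a hedged Python sketch of a likelihood-ratio test for a binomial proportion; the sample counts (62 heads in 100 tosses) and the null value p0 = 0.5 are invented purely for illustration:

```python
# Likelihood-ratio test for a binomial proportion, illustrating Wilks' theorem:
# 2 * (log-likelihood difference) is approximately chi-squared with 1 df.
import numpy as np
from scipy import stats

k, n, p0 = 62, 100, 0.5           # 62 heads in 100 tosses; null hypothesis: fair coin
p_hat = k / n                     # unrestricted maximum likelihood estimate

def log_lik(p):
    return k * np.log(p) + (n - k) * np.log(1 - p)

lr_stat = 2 * (log_lik(p_hat) - log_lik(p0))
p_value = stats.chi2.sf(lr_stat, df=1)
print(lr_stat, p_value)           # roughly 5.8 and 0.016 for this made-up sample
```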
The likelihood ratio is also of central importance in Bayesian inference, where it is known as the Bayes factor, and is used in Bayes' rule. Stated in terms of odds, Bayes' rule says that the posterior odds of two alternatives, $A_1$ and $A_2$, given an event $B$, are the prior odds times the likelihood ratio. As an equation:
$$O(A_1 : A_2 \mid B) = O(A_1 : A_2) \cdot \Lambda(A_1 : A_2 \mid B).$$
The likelihood ratio is not directly used in AIC-based statistics. Instead, what is used is the relative likelihood of models (see below).
The likelihood ratio of two models, given the same event, may be contrasted with the odds of two events, given the same model. In terms of a parametrized probability mass function $P(x \mid \theta)$, the likelihood ratio of two values of the parameter, $\theta_1$ and $\theta_2$, given an outcome $x$, is:
$$\Lambda(\theta_1 : \theta_2 \mid x) = \frac{P(x \mid \theta_1)}{P(x \mid \theta_2)},$$
while the odds of two outcomes, $x_1$ and $x_2$, given a value of the parameter $\theta$, is:
$$O(x_1 : x_2 \mid \theta) = \frac{P(x_1 \mid \theta)}{P(x_2 \mid \theta)}.$$
This highlights the difference between likelihood and odds: in likelihood, one compares models (parameters), holding data fixed; while in odds, one compares events (outcomes, data), holding the model fixed.
The odds ratio is a ratio of two conditional odds (of an event, given another event being present or absent). However, the odds ratio can also be interpreted as a ratio of two likelihood ratios, if one considers one of the events to be more easily observable than the other. See the diagnostic odds ratio, where the result of a diagnostic test is more easily observable than the presence or absence of an underlying medical condition.
Since the actual value of the likelihood function depends on the sample, it is often convenient to work with a standardized measure. Suppose that the maximum likelihood estimate for the parameter $\theta$ is $\hat{\theta}$. Relative plausibilities of other $\theta$ values may be found by comparing the likelihoods of those other values with the likelihood of $\hat{\theta}$. The relative likelihood of $\theta$ is defined to be
$$R(\theta) = \frac{\mathcal{L}(\theta \mid x)}{\mathcal{L}(\hat{\theta} \mid x)}.$$
Thus, the relative likelihood is the likelihood ratio (discussed above) with the fixed denominator $\mathcal{L}(\hat{\theta} \mid x)$. This corresponds to standardizing the likelihood to have a maximum of 1.
A likelihood region is the set of all values of $\theta$ whose relative likelihood is greater than or equal to a given threshold. In terms of percentages, a $p\%$ likelihood region for $\theta$ is defined to be
$$\left\{ \theta : R(\theta) \ge \frac{p}{100} \right\}.$$
If θ is a single real parameter, a p% likelihood region will usually comprise an interval of real values. If the region does comprise an interval, then it is called a likelihood interval.
Likelihood intervals, and more generally likelihood regions, are used for interval estimation within likelihoodist statistics: they are similar to confidence intervals in frequentist statistics and credible intervals in Bayesian statistics. Likelihood intervals are interpreted directly in terms of relative likelihood, not in terms of coverage probability (frequentism) or posterior probability (Bayesianism).
Given a model, likelihood intervals can be compared to confidence intervals. If θ is a single real parameter, then under certain conditions, a 14.65% likelihood interval (about 1:7 likelihood) for θ will be the same as a 95% confidence interval (19/20 coverage probability). In a slightly different formulation suited to the use of log-likelihoods (see Wilks' theorem), the test statistic is twice the difference in log-likelihoods and the probability distribution of the test statistic is approximately a chi-squared distribution with degrees of freedom (df) equal to the difference in df's between the two models (therefore, the $e^{-2}$ likelihood interval is the same as the 0.954 confidence interval, assuming the difference in df's to be 1).
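The correspondence quoted above can be checked numerically; this small Python sketch (my own, assuming the one-degree-of-freedom chi-squared approximation of Wilks' theorem) converts relative-likelihood cutoffs into approximate coverage probabilities:

```python
# Relative-likelihood cutoff r corresponds to test statistic -2*log(r) under Wilks' theorem.
import numpy as np
from scipy import stats

for r in (np.exp(-2), 0.1465):
    coverage = stats.chi2.cdf(-2 * np.log(r), df=1)
    print(f"cutoff {r:.4f} -> coverage {coverage:.4f}")
# cutoff 0.1353 -> coverage ~0.9545   (the e^{-2} interval vs. the 0.954 CI)
# cutoff 0.1465 -> coverage ~0.9500   (the 14.65% interval vs. the 95% CI)
```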
In many cases, the likelihood is a function of more than one parameter but interest focuses on the estimation of only one, or at most a few of them, with the others being considered as nuisance parameters. Several alternative approaches have been developed to eliminate such nuisance parameters, so that a likelihood can be written as a function of only the parameter (or parameters) of interest: the main approaches are profile, conditional, and marginal likelihoods. These approaches are also useful when a high-dimensional likelihood surface needs to be reduced to one or two parameters of interest in order to allow a graph.
It is possible to reduce the dimensions by concentrating the likelihood function for a subset of parameters by expressing the nuisance parameters as functions of the parameters of interest and replacing them in the likelihood function. In general, for a likelihood function depending on the parameter vector $\theta$ that can be partitioned into $\theta = (\theta_1, \theta_2)$, and where a correspondence $\hat{\theta}_2 = \hat{\theta}_2(\theta_1)$ can be determined explicitly, concentration reduces the computational burden of the original maximization problem.
For instance, in a linear regression with normally distributed errors, $y = X\beta + u$, the coefficient vector $\beta$ could be partitioned into $\beta = (\beta_1, \beta_2)$ (and consequently the design matrix into $X = [X_1 \; X_2]$). Maximizing with respect to $\beta_2$ yields an optimal value function $\beta_2(\beta_1)$. Using this result, the maximum likelihood estimator for $\beta_1$ can then be derived as
$$\hat{\beta}_1 = \left( X_1^{\mathsf{T}} M_2 X_1 \right)^{-1} X_1^{\mathsf{T}} M_2 \, y,$$
where $M_2 = I - X_2 (X_2^{\mathsf{T}} X_2)^{-1} X_2^{\mathsf{T}}$ is the projection matrix onto the orthogonal complement of the columns of $X_2$. This result is known as the Frisch–Waugh–Lovell theorem.
Since graphically the procedure of concentration is equivalent to slicing the likelihood surface along the ridge of values of the nuisance parameter $\beta_2$ that maximizes the likelihood function, creating an isometric profile of the likelihood function for a given $\beta_1$, the result of this procedure is also known as profile likelihood. In addition to being graphed, the profile likelihood can also be used to compute confidence intervals that often have better small-sample properties than those based on asymptotic standard errors calculated from the full likelihood.
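As an illustration of concentration, here is a minimal Python sketch (my own construction, not from the source) that profiles out the variance of a normal model, leaving a one-dimensional profile likelihood in the mean; the simulated data and parameter values are assumptions made only for the demonstration:

```python
# Profile likelihood for i.i.d. normal data: the nuisance parameter sigma^2 is
# concentrated out analytically, leaving a curve in mu only.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=200)   # simulated data (assumed values)
n = len(x)

def profile_loglik(mu):
    """log L(mu, sigma_hat^2(mu)) with sigma_hat^2(mu) = mean((x - mu)^2)."""
    s2_hat = np.mean((x - mu) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * s2_hat) + 1.0)

mus = np.linspace(1.0, 3.0, 201)
pl = np.array([profile_loglik(m) for m in mus])
print("profile MLE for mu:", mus[np.argmax(pl)])   # close to the sample mean
print("sample mean:", x.mean())
```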
Sometimes it is possible to find a sufficient statistic for the nuisance parameters, and conditioning on this statistic results in a likelihood which does not depend on the nuisance parameters.
One example occurs in 2×2 tables, where conditioning on all four marginal totals leads to a conditional likelihood based on the non-central hypergeometric distribution. This form of conditioning is also the basis for Fisher's exact test.
Sometimes we can remove the nuisance parameters by considering a likelihood based on only part of the information in the data, for example by using the set of ranks rather than the numerical values. Another example occurs in linear mixed models, where considering a likelihood for the residuals only after fitting the fixed effects leads to residual maximum likelihood estimation of the variance components.
A partial likelihood is an adaptation of the full likelihood such that only a part of the parameters (the parameters of interest) occur in it. It is a key component of the proportional hazards model: using a restriction on the hazard function, the likelihood does not contain the shape of the hazard over time.
The likelihood, given two or more independent events, is the product of the likelihoods of each of the individual events:
$$\Lambda(A \mid X_1 \land X_2) = \Lambda(A \mid X_1) \cdot \Lambda(A \mid X_2).$$
This follows from the definition of independence in probability: the probabilities of two independent events happening, given a model, is the product of the probabilities.
This is particularly important when the events are from independent and identically distributed random variables, such as independent observations or sampling with replacement. In such a situation, the likelihood function factors into a product of individual likelihood functions.
The empty product has value 1, which corresponds to the likelihood, given no event, being 1: before any data, the likelihood is always 1. This is similar to a uniform prior in Bayesian statistics, but in likelihoodist statistics this is not an improper prior because likelihoods are not integrated.
The log-likelihood function is a logarithmic transformation of the likelihood function, often denoted by a lowercase $l$ or $\ell$, to contrast with the uppercase $L$ or $\mathcal{L}$ for the likelihood. Since concavity plays a key role in the maximization, and as the most common probability distributions (in particular the exponential family) are only logarithmically concave, it is usually more convenient to work with the log-likelihood function. Also, the log-likelihood is particularly convenient for maximum likelihood estimation. Because logarithms are strictly increasing functions, maximizing the likelihood is equivalent to maximizing the log-likelihood.
Given the independence of each event, the overall log-likelihood of an intersection equals the sum of the log-likelihoods of the individual events. This is analogous to the fact that the overall log-probability is the sum of the log-probabilities of the individual events. In addition to the mathematical convenience of this, the adding process of log-likelihood has an intuitive interpretation, often expressed as "support" from the data. When the parameters are estimated using the log-likelihood for maximum likelihood estimation, each data point is used by being added to the total log-likelihood. As the data can be viewed as evidence that supports the estimated parameters, this process can be interpreted as "support from independent evidence adds", and the log-likelihood is the "weight of evidence". Interpreting negative log-probability as information content or surprisal, the support (log-likelihood) of a model, given an event, is the negative of the surprisal of the event, given the model: a model is supported by an event to the extent that the event is unsurprising, given the model.
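A short Python sketch (added here as an illustration, with a made-up sample and an assumed exponential model) showing that the log-likelihood of independent observations is the sum of the per-observation log-likelihoods:

```python
# For independent observations, the log of the product of densities equals the
# sum of the log-densities, so each data point "adds its support".
import numpy as np
from scipy import stats

x = np.array([1.2, 0.7, 2.9, 1.5, 0.3])      # made-up i.i.d. sample
theta = 1.0                                   # rate of an assumed exponential model

per_point = stats.expon.logpdf(x, scale=1/theta)
print(per_point.sum())                                      # sum of log-likelihoods
print(np.log(np.prod(stats.expon.pdf(x, scale=1/theta))))   # log of the product: same value
```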
The choice of base $b$ for the logarithm corresponds to a choice of scale; generally the natural logarithm is used and the base is fixed, but sometimes the base is varied, in which case, writing the base as $b = e^\beta$, the factor $\beta$ can be interpreted as the coldness.
A logarithm of a likelihood ratio is equal to the difference of the log-likelihoods:
$$\log \frac{\mathcal{L}(A)}{\mathcal{L}(B)} = \log \mathcal{L}(A) - \log \mathcal{L}(B) = \ell(A) - \ell(B).$$
Just as the likelihood given no event is 1, the log-likelihood given no event is 0, which corresponds to the value of the empty sum: without any data, there is no support for any model.
If the log-likelihood function is smooth, its gradient with respect to the parameter, known as the score and written $s(\theta) \equiv \nabla_\theta \ell(\theta)$, exists and allows for the application of differential calculus. The basic way to maximize a differentiable function is to find the stationary points (the points where the derivative is zero); since the derivative of a sum is just the sum of the derivatives, but the derivative of a product requires the product rule, it is easier to compute the stationary points of the log-likelihood of independent events than of the likelihood of independent events.
The equations defined by the stationary point of the score function serve as estimating equations for the maximum likelihood estimator.
In that sense, the maximum likelihood estimator is implicitly defined by the value at $0$ of the inverse function $s^{-1}: \mathbb{E}^d \to \Theta$, where $\mathbb{E}^d$ is the $d$-dimensional Euclidean space and $\Theta$ is the parameter space. Using the inverse function theorem, it can be shown that $s^{-1}$ is well-defined in an open neighborhood about $0$ with probability going to one, and $\hat{\theta} = s^{-1}(0)$ is a consistent estimate of the true parameter. As a consequence there exists a sequence of roots of the score at which the score vanishes asymptotically almost surely, and which converges in probability to the true parameter value. A similar result can be established using Rolle's theorem.
The second derivative of the log-likelihood evaluated at $\hat{\theta}$, the negative of which is known as the observed Fisher information, determines the curvature of the likelihood surface, and thus indicates the precision of the estimate.
The log-likelihood is also particularly useful for exponential families of distributions, which include many of the common parametric probability distributions. The probability distribution function (and thus likelihood function) for exponential families contain products of factors involving exponentiation. The logarithm of such a function is a sum of products, again easier to differentiate than the original function.
An exponential family is one whose probability density function is of the form (for some functions, writing $\langle \cdot , \cdot \rangle$ for the inner product):
$$f(x \mid \theta) = h(x) \exp\big( \langle \eta(\theta), T(x) \rangle - A(\theta) \big).$$
Each of these terms has an interpretation, but simply switching from probability to likelihood and taking logarithms yields the sum:
$$\ell(\theta \mid x) = \langle \eta(\theta), T(x) \rangle - A(\theta) + \log h(x).$$
The $\eta(\theta)$ and $h(x)$ each correspond to a change of coordinates, so in these coordinates, the log-likelihood of an exponential family is given by the simple formula:
$$\ell(\eta \mid x) = \langle \eta, T(x) \rangle - A(\eta).$$
In words, the log-likelihood of an exponential family is the inner product of the natural parameter $\eta$ and the sufficient statistic $T(x)$, minus the normalization factor (log-partition function) $A(\eta)$. Thus for example the maximum likelihood estimate can be computed by taking derivatives of the sufficient statistic $T$ and the log-partition function $A$.
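As one worked instance (my own addition, not from the original), the normal distribution with known variance can be written in this form, which makes the inner-product structure of its log-likelihood explicit:

```latex
% Sketch: the N(\mu, \sigma^2) density with \sigma^2 known, in exponential-family form,
% and its log-likelihood (up to the term \log h(x), which does not depend on \mu).
\[
  f(x \mid \mu) = \underbrace{\tfrac{1}{\sqrt{2\pi\sigma^2}}\, e^{-x^2/(2\sigma^2)}}_{h(x)}
  \exp\!\Big( \underbrace{\tfrac{\mu}{\sigma^2}}_{\eta} \cdot \underbrace{x}_{T(x)}
  - \underbrace{\tfrac{\mu^2}{2\sigma^2}}_{A(\mu)} \Big),
  \qquad
  \ell(\mu \mid x) = \frac{\mu}{\sigma^2}\, x - \frac{\mu^2}{2\sigma^2} + \log h(x).
\]
```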
The gamma distribution is an exponential family with two parameters, $\alpha$ and $\beta$. The likelihood function is
$$\mathcal{L}(\alpha, \beta \mid x) = \frac{\beta^\alpha}{\Gamma(\alpha)}\, x^{\alpha - 1} e^{-\beta x}.$$
Finding the maximum likelihood estimate of $\beta$ for a single observed value $x$ looks rather daunting. Its logarithm is much simpler to work with:
$$\log \mathcal{L}(\alpha, \beta \mid x) = \alpha \log \beta - \log \Gamma(\alpha) + (\alpha - 1) \log x - \beta x.$$
To maximize the log-likelihood, we first take the partial derivative with respect to $\beta$:
$$\frac{\partial \log \mathcal{L}(\alpha, \beta \mid x)}{\partial \beta} = \frac{\alpha}{\beta} - x.$$
If there are a number of independent observations $x_1, \ldots, x_n$, then the joint log-likelihood will be the sum of individual log-likelihoods, and the derivative of this sum will be a sum of derivatives of each individual log-likelihood:
$$\frac{\partial \log \mathcal{L}(\alpha, \beta \mid x_1, \ldots, x_n)}{\partial \beta} = \sum_{i=1}^{n} \left( \frac{\alpha}{\beta} - x_i \right) = \frac{n\alpha}{\beta} - \sum_{i=1}^{n} x_i.$$
To complete the maximization procedure for the joint log-likelihood, the equation is set to zero and solved for $\beta$:
$$\hat{\beta} = \frac{\alpha}{\bar{x}}.$$
Here $\hat{\beta}$ denotes the maximum-likelihood estimate, and $\bar{x} = \tfrac{1}{n} \sum_{i=1}^{n} x_i$ is the sample mean of the observations.
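A quick numeric sanity check of this closed-form result, written as a hedged Python sketch; the simulated sample and the "true" values alpha = 3 and beta = 2 are assumptions made only for the demonstration:

```python
# With alpha held fixed, the ML estimate of the rate beta is alpha / sample mean.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
alpha, beta = 3.0, 2.0                       # assumed "true" values for the simulation
x = rng.gamma(shape=alpha, scale=1/beta, size=5000)

closed_form = alpha / x.mean()

neg_loglik = lambda b: -np.sum(stats.gamma.logpdf(x, a=alpha, scale=1/b))
numeric = optimize.minimize_scalar(neg_loglik, bounds=(1e-6, 50), method="bounded").x

print(closed_form, numeric)                  # both close to the true rate beta = 2
```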
The term "likelihood" has been in use in English since at least late Middle English.Its formal use to refer to a specific function in mathematical statistics was proposed by Ronald Fisher, in two research papers published in 1921 and 1922. The 1921 paper introduced what is today called a "likelihood interval"; the 1922 paper introduced the term "method of maximum likelihood". Quoting Fisher:
“[I]n 1922, I proposed the term ‘likelihood,’ in view of the fact that, with respect to [the parameter], it is not a probability, and does not obey the laws of probability, while at the same time it bears to the problem of rational choice among the possible values of [the parameter] a relation similar to that which probability bears to the problem of predicting events in games of chance. . . . Whereas, however, in relation to psychological judgment, likelihood has some resemblance to probability, the two concepts are wholly distinct. . . .”
The concept of likelihood should not be confused with probability, as Sir Ronald Fisher stressed:
“I stress this because in spite of the emphasis that I have always laid upon the difference between probability and likelihood there is still a tendency to treat likelihood as though it were a sort of probability. The first result is thus that there are two different measures of rational belief appropriate to different cases. Knowing the population we can express our incomplete knowledge of, or expectation of, the sample in terms of probability; knowing the sample we can express our incomplete knowledge of the population in terms of likelihood.”
Fisher's invention of statistical likelihood was in reaction against an earlier form of reasoning called inverse probability. His use of the term "likelihood" fixed the meaning of the term within mathematical statistics.
A. W. F. Edwards (1972) established the axiomatic basis for use of the log-likelihood ratio as a measure of relative support for one hypothesis against another. The support function is then the natural logarithm of the likelihood function. Both terms are used in phylogenetics, but were not adopted in a general treatment of the topic of statistical evidence.
Among statisticians, there is no consensus about what the foundation of statistics should be. There are four main paradigms that have been proposed for the foundation: frequentism, Bayesianism, likelihoodism, and AIC-based inference. For each of the proposed foundations, the interpretation of likelihood is different. The four interpretations are described in the subsections below.
In Bayesian inference, although one can speak about the likelihood of any proposition or random variable given another random variable (for example, the likelihood of a parameter value or of a statistical model given specified data or other evidence; see marginal likelihood), the likelihood function remains the same entity, with the additional interpretations of (i) a conditional density of the data given the parameter (since the parameter is then a random variable) and (ii) a measure or amount of information brought by the data about the parameter value or even the model. Due to the introduction of a probability structure on the parameter space or on the collection of models, it is possible that a parameter value or a statistical model have a large likelihood value for given data, and yet have a low probability, or vice versa. This is often the case in medical contexts. Following Bayes' rule, the likelihood, when seen as a conditional density, can be multiplied by the prior probability density of the parameter and then normalized, to give a posterior probability density. More generally, the likelihood of an unknown quantity $X$ given another unknown quantity $Z$ is proportional to the probability of $Z$ given $X$.
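For reference, the relation described in this paragraph is the familiar proportionality of Bayes' rule (a standard formula, stated here for convenience rather than quoted from the article):

```latex
% Posterior density as normalized product of likelihood and prior.
\[
  p(\theta \mid x) \;=\; \frac{\mathcal{L}(\theta \mid x)\, p(\theta)}
                              {\int \mathcal{L}(\theta' \mid x)\, p(\theta')\, d\theta'}
  \;\propto\; \mathcal{L}(\theta \mid x)\, p(\theta).
\]
```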
In frequentist statistics, the likelihood function is itself a statistic that summarizes a single sample from a population, whose calculated value depends on a choice of several parameters θ1 ... θp, where p is the count of parameters in some already-selected statistical model. The value of the likelihood serves as a figure of merit for the choice used for the parameters, and the parameter set with maximum likelihood is the best choice, given the data available.
The specific calculation of the likelihood is the probability that the observed sample would be assigned, assuming that the model chosen and the values of the several parameters θ give an accurate approximation of the frequency distribution of the population that the observed sample was drawn from. Heuristically, it makes sense that a good choice of parameters is the one that renders the sample actually observed the maximum possible post-hoc probability of having happened. Wilks' theorem quantifies the heuristic rule by showing that the difference between the logarithm of the likelihood generated by the estimate's parameter values and the logarithm of the likelihood generated by the population's "true" (but unknown) parameter values is asymptotically χ² distributed.
Each independent sample's maximum likelihood estimate is a separate estimate of the "true" parameter set describing the population sampled. Successive estimates from many independent samples will cluster together with the population’s "true" set of parameter values hidden somewhere in their midst. The difference in the logarithms of the maximum likelihood and adjacent parameter sets’ likelihoods may be used to draw a confidence region on a plot whose co-ordinates are the parameters θ1 ... θp. The region surrounds the maximum-likelihood estimate, and all points (parameter sets) within that region differ at most in log-likelihood by some fixed value. The χ² distribution given by Wilks' theorem converts the region's log-likelihood differences into the "confidence" that the population's "true" parameter set lies inside. The art of choosing the fixed log-likelihood difference is to make the confidence acceptably high while keeping the region acceptably small (narrow range of estimates).
As more data are observed, instead of being used to make independent estimates, they can be combined with the previous samples to make a single combined sample, and that large sample may be used for a new maximum likelihood estimate. As the size of the combined sample increases, the size of the likelihood region with the same confidence shrinks. Eventually, either the size of the confidence region is very nearly a single point, or the entire population has been sampled; in both cases, the estimated parameter set is essentially the same as the population parameter set.
Under the AIC paradigm, likelihood is interpreted within the context of information theory.
In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of a probability distribution by maximizing a likelihood function, so that under the assumed statistical model the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.
In statistics, a statistic is sufficient with respect to a statistical model and its associated unknown parameter if "no other statistic that can be calculated from the same sample provides any additional information as to the value of the parameter". In particular, a statistic is sufficient for a family of probability distributions if the sample from which it is calculated gives no additional information than does the statistic, as to which of those probability distributions is that of the population from which the sample was taken.
In probability theory and statistics, the beta distribution is a family of continuous probability distributions defined on the interval [0, 1] parametrized by two positive shape parameters, denoted by α and β, that appear as exponents of the random variable and control the shape of the distribution. The generalization to multiple variables is called a Dirichlet distribution.
In probability theory and statistics, the gamma distribution is a two-parameter family of continuous probability distributions. The exponential distribution, Erlang distribution, and chi-squared distribution are special cases of the gamma distribution. There are three different parametrizations in common use:
In statistics, the logistic model is used to model the probability of a certain class or event existing such as pass/fail, win/lose, alive/dead or healthy/sick. This can be extended to model several classes of events such as determining whether an image contains a cat, dog, lion, etc. Each object being detected in the image would be assigned a probability between 0 and 1 and the sum adding to one.
In probability and statistics, an exponential family is a parametric set of probability distributions of a certain form, specified below. This special form is chosen for mathematical convenience, based on some useful algebraic properties, as well as for generality, as exponential families are in a sense very natural sets of distributions to consider. The term exponential class is sometimes used in place of "exponential family", or the older term Koopman-Darmois family. The terms "distribution" and "family" are often used loosely: properly, an exponential family is a set of distributions, where the specific distribution varies with the parameter; however, a parametric family of distributions is often referred to as "a distribution", and the set of all exponential families is sometimes loosely referred to as "the" exponential family.
In statistics, an expectation–maximization (EM) algorithm is an iterative method to find maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables. The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. These parameter-estimates are then used to determine the distribution of the latent variables in the next E step.
In statistics, the score is the gradient of the log-likelihood function with respect to the parameter vector. Evaluated at a particular point of the parameter vector, the score indicates the steepness of the log-likelihood function and thereby the sensitivity to infinitesimal changes to the parameter values. If the log-likelihood function is continuous over the parameter space, the score will vanish at a local maximum or minimum; this fact is used in maximum likelihood estimation to find the parameter values that maximize the likelihood function.
In mathematical statistics, the Fisher information is a way of measuring the amount of information that an observable random variable X carries about an unknown parameter θ of a distribution that models X. Formally, it is the variance of the score, or the expected value of the observed information. In Bayesian statistics, the asymptotic distribution of the posterior mode depends on the Fisher information and not on the prior. The role of the Fisher information in the asymptotic theory of maximum-likelihood estimation was emphasized by the statistician Ronald Fisher. The Fisher information is also used in the calculation of the Jeffreys prior, which is used in Bayesian statistics.
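For convenience, the standard single-parameter forms of the score and the Fisher information referred to above are (stated here as a summary under the usual regularity conditions, not quoted from the article):

```latex
% The score as the derivative of the log-likelihood, and the Fisher information
% as the variance of the score (equivalently, minus the expected second derivative).
\[
  s(\theta) = \frac{\partial}{\partial\theta} \log f(X;\theta),
  \qquad
  \mathcal{I}(\theta) = \operatorname{E}\!\big[\, s(\theta)^2 \,\big]
                      = -\operatorname{E}\!\left[ \frac{\partial^2}{\partial\theta^2} \log f(X;\theta) \right].
\]
```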
In statistics, the generalized linear model (GLM) is a flexible generalization of ordinary linear regression that allows for response variables that have error distribution models other than a normal distribution. The GLM generalizes linear regression by allowing the linear model to be related to the response variable via a link function and by allowing the magnitude of the variance of each measurement to be a function of its predicted value.
In statistics, the score test assesses constraints on statistical parameters based on the gradient of the likelihood function—known as the score—evaluated at the hypothesized parameter value under the null hypothesis. Intuitively, if the restricted estimator is near the maximum of the likelihood function, the score should not differ from zero by more than sampling error. While the finite sample distributions of score tests are generally unknown, it has an asymptotic χ2-distribution under the null hypothesis as first proved by C. R. Rao in 1948, a fact that can be used to determine statistical significance.
In Bayesian probability, the Jeffreys prior, named after Sir Harold Jeffreys, is a non-informative (objective) prior distribution for a parameter space; it is proportional to the square root of the determinant of the Fisher information matrix:
$$p(\theta) \propto \sqrt{\det \mathcal{I}(\theta)}.$$
In statistics, Poisson regression is a generalized linear model form of regression analysis used to model count data and contingency tables. Poisson regression assumes the response variable Y has a Poisson distribution, and assumes the logarithm of its expected value can be modeled by a linear combination of unknown parameters. A Poisson regression model is sometimes known as a log-linear model, especially when used to model contingency tables.
In statistics, the delta method is a result concerning the approximate probability distribution for a function of an asymptotically normal statistical estimator from knowledge of the limiting variance of that estimator.
In statistics, M-estimators are a broad class of extremum estimators for which the objective function is a sample average. Both non-linear least squares and maximum likelihood estimation are special cases of M-estimators. The definition of M-estimators was motivated by robust statistics, which contributed new types of M-estimators. The statistical procedure of evaluating an M-estimator on a data set is called M-estimation.
Proportional hazards models are a class of survival models in statistics. Survival models relate the time that passes, before some event occurs, to one or more covariates that may be associated with that quantity of time. In a proportional hazards model, the unique effect of a unit increase in a covariate is multiplicative with respect to the hazard rate. For example, taking a drug may halve one's hazard rate for a stroke occurring, or, changing the material from which a manufactured component is constructed may double its hazard rate for failure. Other types of survival models such as accelerated failure time models do not exhibit proportional hazards. The accelerated failure time model describes a situation where the biological or mechanical life history of an event is accelerated.
In computer vision and pattern recognition, point set registration, also known as point matching, is the process of finding a spatial transformation that aligns two point sets. The purpose of finding such a transformation includes merging multiple data sets into a globally consistent model, and mapping a new measurement to a known data set to identify features or to estimate its pose. A point set may be raw data from 3D scanning or an array of rangefinders. For use in image processing and feature-based image registration, a point set may be a set of features obtained by feature extraction from an image, for example corner detection. Point set registration is used in optical character recognition, augmented reality and aligning data from magnetic resonance imaging with computer aided tomography scans.
In statistics, the variance function is a smooth function which depicts the variance of a random quantity as a function of its mean. The variance function plays a large role in many settings of statistical modelling. It is a main ingredient in the generalized linear model framework and a tool used in non-parametric regression, semiparametric regression and functional data analysis. In parametric modeling, variance functions take on a parametric form and explicitly describe the relationship between the variance and the mean of a random quantity. In a non-parametric setting, the variance function is assumed to be a smooth function.
In econometrics, the information matrix test is used to determine whether a regression model is misspecified. The test was developed by Halbert White, who observed that in a correctly specified model and under standard regularity assumptions, the Fisher information matrix can be expressed in either of two ways: as the outer product of the gradient, or as a function of the Hessian matrix of the log-likelihood function.
In statistics, suppose that we have been given some data, and we are constructing a statistical model of that data. The relative likelihood compares the relative plausibilities of different candidate models or of different values of a parameter of a single model.
Area Of Polygons Worksheets
Print area of polygons worksheets: click the buttons to print each worksheet and its associated answer key. Skill introduction: determine the value of x; find the area of the rectangle. Worksheet 1: find the area of the given figure (in many cases you will be given a height to start with). Area of polygons and circles worksheet 2: yes, there is a circle up in here. Review sheet: the area of the
Perimeter And Area Of Polygons Worksheets Math Goodies
Perimeter and area of polygons worksheets: our perimeter and area worksheets are designed to supplement our perimeter and area lessons. Solve the problems below using your knowledge of perimeter and area concepts. Be sure to also check out the fun perimeter interactive activities below. Worksheets to supplement our lessons: Worksheet 1, Worksheet 1 Key, Worksheet 2
Area Of Polygons Worksheets Math Worksheets 4 Kids
Area of a polygon worksheets: meticulously designed for grade 6 through high school, these calculate-the-area-of-polygons worksheets feature the formulas used, examples and adequate exercises to find the area of regular polygons such as triangles and quadrilaterals, and of irregular polygons, using the given side lengths, circumradius and apothem.
Polygons Worksheets Practice Questions And Answers Cazoomy
Polygons maths worksheet 1 identifies the properties of regular polygons. Polygons maths worksheet 2 works on irregular polygons. Polygons maths worksheet 3 accurately completes the polygons. Polygons maths worksheet 4 and polygons maths worksheet 5 work with tessellation. Polygons maths worksheet 6 is an investigation. Polygons maths worksheet 7 calculates angles in irregular polygons and
Area And Perimeter Of Polygons Worksheets Kiddy Math
Area and perimeter of polygons: displaying the top 8 worksheets found for this concept. Some of the worksheets for this concept are: 6 Area of Regular Polygons Work, Area and Perimeter 3rd Answer Key, Area and Perimeter, 6 Area of Triangles and Quadrilaterals, Perimeter of a Polygon, Perimeter, Geo HW, Area of Polygons and Complex Figures.
Area of polygons and complex figures: 12. Find the area of the shaded region. Answers: 1) 158 sq ft; 2) 225 sq m; 3) 303 sq in; 4) 42 sq yd; 5) 95 sq m; 6) 172.5 sq m; 7) 252 sq cm; 8) 310 sq ft; 9) 23 sq cm; 10) 264 sq m; 11) 148.5 sq in; 12) 112 sq ft. Prisms, volume and surface area: the surface area of a prism is the sum of the areas of all of the
Calculating area, other polygons worksheets (Teacher): showing the top 8 worksheets in the category Calculating Area Other Polygons. Some of the worksheets displayed are: Work 6 Area of Triangles and Quadrilaterals, Determining the Area of Regular/Irregular Polygons, Geometry Notes Perimeter Area and Volume of Regular Shapes, Geometry Word Problems No Problem, Answer Key Area and Perimeter, Perimeter of a Polygon.
Calculating area, other polygons: displaying all worksheets related to Calculating Area Other Polygons. Worksheets are: Work 6 Area of Triangles and Quadrilaterals, Determining the Area of Regular/Irregular Polygons, Geometry Notes Perimeter Area and Volume of Regular Shapes, Geometry Word Problems No Problem, Answer Key Area and Perimeter, Perimeter of a Polygon.
Area Of Polygons Formulas Examples Solutions Games
Area of polygons formulas: the area of a polygon measures the size of the region enclosed by the polygon. It is measured in units squared. The following table gives the formulas for the area of polygons. Scroll down the page if you need more explanations about the formulas, how to use them, as well as worksheets.
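For reference, the standard formulas that such worksheets typically practice are the regular-polygon formula and, for irregular polygons given by coordinates, the shoelace formula (added here as a summary, not taken from the worksheet pages):

```latex
% Area of a regular n-gon with side s, apothem a and perimeter P = n s,
% and the shoelace formula for a simple polygon with vertices (x_1, y_1), ..., (x_n, y_n).
\[
  A = \tfrac{1}{2} P a = \tfrac{1}{4} n s^2 \cot\!\left(\frac{\pi}{n}\right),
  \qquad
  A = \tfrac{1}{2}\,\Big|\sum_{i=1}^{n} \big(x_i y_{i+1} - x_{i+1} y_i\big)\Big|,
  \quad (x_{n+1}, y_{n+1}) \equiv (x_1, y_1).
\]
```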
Polygons Worksheets Math Worksheets 4 Kids
Explore this batch of printable area of polygons worksheets, ideal for grade 6 through high school, to determine the area of regular and irregular polygons using the given side lengths, apothem and circumradius. Practice finding the apothem as well. 21 worksheets. Angles in polygons worksheets: fine-tune your skills using the angles in polygons worksheets, with skills to find the sum of
Area Of Polygons Worksheet Answers. The worksheet is an assortment of 4 intriguing pursuits that will enhance your kid's knowledge and abilities. The worksheets are offered in developmentally appropriate versions for kids of different ages. Adding and subtracting integers worksheets in many ranges including a number of choices for parentheses use.
You can begin with the uppercase cursives and after that move forward with the lowercase cursives. Handwriting for kids will also be rather simple to develop in such a fashion. If you're an adult and wish to increase your handwriting, it can be accomplished. As a result, in the event that you really wish to enhance handwriting of your kid, hurry to explore the advantages of an intelligent learning tool now!
Consider how you wish to compose your private faith statement. Sometimes letters have to be adjusted to fit in a particular space. When a letter does not have any verticals like a capital A or V, the very first diagonal stroke is regarded as the stem. The connected and slanted letters will be quite simple to form once the many shapes re learnt well. Even something as easy as guessing the beginning letter of long words can assist your child improve his phonics abilities. Area Of Polygons Worksheet Answers.
There isn't anything like a superb story, and nothing like being the person who started a renowned urban legend. Deciding upon the ideal approach route Cursive writing is basically joined-up handwriting. Practice reading by yourself as often as possible.
Research urban legends to obtain a concept of what's out there prior to making a new one. You are still not sure the radicals have the proper idea. Naturally, you won't use the majority of your ideas. If you've got an idea for a tool please inform us. That means you can begin right where you are no matter how little you might feel you've got to give. You are also quite suspicious of any revolutionary shift. In earlier times you've stated that the move of independence may be too early.
Each lesson in handwriting should start on a fresh new page, so the little one becomes enough room to practice. Every handwriting lesson should begin with the alphabets. Handwriting learning is just one of the most important learning needs of a kid. Learning how to read isn't just challenging, but fun too.
The use of grids The use of grids is vital in earning your child learn to Improve handwriting. Also, bear in mind that maybe your very first try at brainstorming may not bring anything relevant, but don't stop trying. Once you are able to work, you might be surprised how much you get done. Take into consideration how you feel about yourself. Getting able to modify the tracking helps fit more letters in a little space or spread out letters if they're too tight. Perhaps you must enlist the aid of another man to encourage or help you keep focused.
Area Of Polygons Worksheet Answers. Try to remember, you always have to care for your child with amazing care, compassion and affection to be able to help him learn. You may also ask your kid's teacher for extra worksheets. Your son or daughter is not going to just learn a different sort of font but in addition learn how to write elegantly because cursive writing is quite beautiful to check out. As a result, if a kid is already suffering from ADHD his handwriting will definitely be affected. Accordingly, to be able to accomplish this, if children are taught to form different shapes in a suitable fashion, it is going to enable them to compose the letters in a really smooth and easy method. Although it can be cute every time a youngster says he runned on the playground, students want to understand how to use past tense so as to speak and write correctly. Let say, you would like to boost your son's or daughter's handwriting, it is but obvious that you want to give your son or daughter plenty of practice, as they say, practice makes perfect.
Without phonics skills, it's almost impossible, especially for kids, to learn how to read new words. Techniques to Handle Attention Issues It is extremely essential that should you discover your kid is inattentive to his learning especially when it has to do with reading and writing issues you must begin working on various ways and to improve it. Use a student's name in every sentence so there's a single sentence for each kid. Because he or she learns at his own rate, there is some variability in the age when a child is ready to learn to read. Teaching your kid to form the alphabets is quite a complicated practice. |
The substitution method, commonly introduced to Algebra I students, is a method for solving simultaneous equations. This means the equations have the same variables and, when solved, the variables have the same values. The method is the foundation for Gauss elimination in linear algebra, which is used to solve larger systems of equations with more variables.
You can make things a little easier by setting the problem up properly. Rewrite the equations so all the variables are on the left side and the solutions are on the right. Then write the equations, one above the other, so the variables line up in columns. For example:
x + y = 10
-3x + 2y = 5
In the first equation, 1 is an implied coefficient for both x and y and 10 is the constant in the equation. In the second equation, -3 and 2 are the x and y coefficients, respectively, and 5 is the constant in the equation.
Solve an Equation
Choose an equation to solve and decide which variable you will solve for. Choose one that will require the least amount of calculation or, if possible, one that will not produce a fractional coefficient. In this example, if you solve the second equation for y, then the x-coefficient will be 3/2 and the constant will be 5/2, both fractions, making the math a little more difficult and creating a greater chance for error. If you solve the first equation for x, however, you end up with x = 10 - y. The equations will not always be that easy, but try to find the easiest path for solving the problem right from the very beginning.
Since you solved the equation for a variable, x = 10 - y, you can now substitute it into the other equation. Then you will have an equation with a single variable, which you should simplify and solve. In this case:
-3(10 - y) + 2y = 5
-30 + 3y + 2y = 5
5y = 35
y = 7
Now that you have a value for y, you can substitute it back into the first equation and determine x:
x = 10 - 7
x = 3
Always double check your answers by plugging them back into the original equations and verifying the equality.
3 + 7 = 10
10 = 10
-3(3) + 2(7) = 5
-9 + 14 = 5
5 = 5
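The same steps can be mirrored in a short Python sketch using SymPy (my own addition, not part of the original article), which substitutes, solves, and then re-checks both equations:

```python
# Solve x + y = 10 and -3x + 2y = 5 by substitution, then verify the solution.
from sympy import symbols, Eq, solve

x, y = symbols("x y")
eq1 = Eq(x + y, 10)
eq2 = Eq(-3 * x + 2 * y, 5)

# Substitution step: solve eq1 for x, substitute into eq2, solve for y.
x_expr = solve(eq1, x)[0]                   # x = 10 - y
y_val = solve(eq2.subs(x, x_expr), y)[0]    # y = 7
x_val = x_expr.subs(y, y_val)               # x = 3
print(x_val, y_val)                         # 3 7

# Double-check by plugging back into both original equations.
assert eq1.subs({x: x_val, y: y_val}) and eq2.subs({x: x_val, y: y_val})
```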
This book uses the graphing utility to enhance the study of mathematics. Technology is used as a tool to solve problems, motivate concepts, and explore mathematical ideas. Trigonometry Enhanced with Graphing Utilities provides clear and focused coverage. Many of the problems are solved using both algebra and a graphing utility, and the text illustrates the advantages and benefits of each approach. Technology is used to solve problems when no algebraic solution is available and to help students visualize certain concepts.
1. Functions and Graphs.
Rectangular Coordinates; Graphing Utilities. Graphs of Equations. The Straight Line; Circles. Functions. More about Functions. Graphing Techniques. One- to-One Functions; Inverse Functions. 2. Trigonometric Functions.
Angles and Their Measure. Trigonometric Functions; Unit Circle Approach. Properties of the Trigonometric Functions. Right Triangle Trigonometry. Graphs of the Trigonometric Functions. The Inverse Trigonometric Functions. 3. Analytic Trigonometry.
Trigonometric Identities. Sum and Difference Formulas. Double-angle and Half-angle Formulas. Product-to-Sum and Sum-to-Product Formulas. Trigonometric Equations. 4. Applications of Trigonometric Functions.
Solving Right Triangles. The Law of Sines. The Law of Cosines. The Area of a Triangle. Sinusoidal Graphs; Simple Harmonic Motion. Damped Vibrations. 5. Polar Coordinates; Vectors.
Polar Coordinates. Polar Equations and Graphs. The Complex Plane; De Moivre's Theorem. Vectors. The Dot Product. 6. Analytic Geometry.
Conics. The Parabola. The Ellipse. The Hyperbola. Rotation of Axes; General Form of a Conic. Polar Equations of Conics. Plane Curves and Parametric Equations. 7. Exponential and Logarithmic Functions.
Exponential Functions. Logarithmic Functions. Properties of Logarithms. Logarithmic and Exponential Equations. Compound Interest. Growth and Decay. Nonlinear Curve Fitting. Logarithmic Scales. Appendix.
Topics from Algebra and Geometry. Solving Equations. Completing the Square. Complex Numbers. Linear Curve Fitting. Answers. Index. |
Carbon dioxide sink
A carbon dioxide (CO2) sink is a carbon reservoir that is increasing in size, and is the opposite of a carbon dioxide "source". The main natural sinks are the oceans and plants and other organisms that use photosynthesis to remove carbon from the atmosphere by incorporating it into biomass and release oxygen into the atmosphere. This concept of CO2 sinks has become more widely known because the Kyoto Protocol allows the use of carbon dioxide sinks as a form of carbon offset.
Carbon sequestration is the term describing processes that remove carbon dioxide from the atmosphere. To help mitigate global warming, a variety of means of artificially capturing and storing carbon (while releasing oxygen) — as well as of enhancing natural sequestration processes — are being explored.
Carbon dioxide is incorporated into forests and forest soils by trees and other plants. Through photosynthesis, plants absorb carbon dioxide from the atmosphere, store the carbon in sugars, starch and cellulose, and release the oxygen into the atmosphere. A young forest, composed of growing trees, absorbs carbon dioxide and acts as a sink. Mature forests, made up of a mix of various aged trees as well as dead and decaying matter, may be carbon neutral above ground. In the soil, however, the gradual build-up of slowly decaying organic material will continue to accumulate carbon, but at a slower rate than an immature forest. Organic material in the form of humus in the forest floor accumulates in greater quantity in cooler regions such as the boreal and taiga forests. At warmer temperatures humus is oxidized rapidly; this, in addition to high rainfall levels, is the reason why tropical jungles have very thin organic soils. The forest eco-system may eventually become carbon neutral. Forest fires release absorbed carbon back into the atmosphere, as does deforestation due to rapidly increased oxidation of soil organic matter.
The dead trees, plants, and moss in peat bogs undergo slow anaerobic decomposition below the surface of the bog. This process is slow enough that in many cases the bog grows rapidly and fixes more carbon from the atmosphere than is released. Over time, the peat grows deeper. Peat bogs inter approximately one-quarter of the carbon stored in land plants and soils.
Under some conditions, forests and peat bogs may become sources of CO2, such as when a forest is flooded by the construction of a hydroelectric dam. Unless the forests and peat are harvested before flooding, the rotting vegetation is a source of CO2 and methane comparable in magnitude to the amount of carbon released by a fossil-fuel power plant of equivalent power.
Oceans are natural CO2 sinks, and represent the largest active carbon sink on Earth. This role as a sink for CO2 is driven by two processes, the solubility pump and the biological pump. The former is primarily a function of differential CO2 solubility in seawater and the thermohaline circulation, while the latter is the sum of a series of biological processes that transport carbon (in organic and inorganic forms) from the surface euphotic zone to the ocean's interior. A small fraction of the organic carbon transported by the biological pump to the seafloor is buried in anoxic conditions under sediments and ultimately forms fossil fuels such as oil and natural gas.
At the present time, approximately one third of anthropogenic emissions are estimated to be entering the ocean. The solubility pump is the primary mechanism driving this, with the biological pump playing a negligible role. This stems from the limitation of the biological pump by ambient light and the nutrients required by the phytoplankton that ultimately drive it. Total inorganic carbon is not believed to limit primary production in the oceans, so its increasing availability in the ocean does not directly affect production (the situation on land is different, since enhanced atmospheric levels of CO2 essentially "fertilize" land plant growth). However, ocean acidification by invading anthropogenic CO2 may affect the biological pump by negatively impacting calcifying organisms such as coccolithophores, foraminiferans and pteropods. Climate change may also affect the biological pump in the future by warming and stratifying the surface ocean, thus reducing the supply of limiting nutrients to surface waters. Although the buffering capacity of sea water is keeping the pH nearly constant at present, eventually the pH will drop. At this point, the disruption of life in the sea may turn it into a carbon source rather than a carbon sink. The characteristic of buffered systems is to hold the pH reasonably constant over a large introduction of acid and then to drop suddenly with a small additional amount.
Carbon as plant organic matter is sequestered in soils: soils contain more carbon than is contained in vegetation and the atmosphere combined. Soils' organic carbon (humus) levels in many agricultural areas have been severely depleted. Organic material in the form of humus accumulates below about 25 degrees Celsius. Above this temperature, humus is oxidized much more rapidly. This is part of the reason why tropical soils under jungles are so thin, despite the rapid accumulation of organic material on the jungle floor (the other being extensive rainfall leaching soluble components vital to organic soil structure). Areas where shifting cultivation or "slash-and-burn" agriculture is practised are generally only fertile for 2-3 years before they are abandoned. These tropical jungles are similar to coral reefs in that they are highly efficient at conserving and circulating necessary nutrients, which explains their lushness in a nutrient desert.
Grasslands contribute to soil organic matter, mostly in the form of their extensive fibrous root mats. Much of this organic matter can remain unoxidized for long periods of time, depending on rainfall conditions, the length of the winter season, and the frequency of naturally occurring lightning-induced grass fires necessary to recycle inorganic compounds from existing plant material. While these fires release carbon dioxide, they improve the quality of the grasslands overall, in turn increasing the amount of carbon retained in the humic material. They also deposit carbon directly into the soil in the form of char that does not significantly degrade back to carbon dioxide.
Future sea level rise
In 2001, the Intergovernmental Panel on Climate Change's Third Assessment Report predicted that by 2100, global warming will lead to a sea level rise of 9 to 88 cm. At that time no significant acceleration in the rate of sea level rise during the 20th century had been detected. Subsequently, Church and White found an acceleration of 0.013 ± 0.006 mm/yr².
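As a purely illustrative back-of-envelope calculation (not a figure from the IPCC or from Church and White), a constant acceleration can be combined with a baseline rate in the same way as constant acceleration in kinematics, rise ≈ r0·t + 0.5·a·t². The sketch below uses an assumed baseline rate of 1.7 mm/yr to show how much extra rise an acceleration of 0.013 mm/yr² would add over a century.

```python
# Back-of-envelope sketch (not an IPCC or Church-and-White calculation):
# extra sea level rise contributed by a constant acceleration over t years.
def sea_level_rise_mm(rate_mm_per_yr, accel_mm_per_yr2, years):
    """Total rise from a constant baseline rate plus a constant acceleration."""
    return rate_mm_per_yr * years + 0.5 * accel_mm_per_yr2 * years ** 2

baseline = 1.7   # assumed example baseline rate, mm/yr
accel = 0.013    # acceleration estimated by Church and White, mm/yr^2

print(sea_level_rise_mm(baseline, 0.0, 100))    # linear trend only: 170 mm
print(sea_level_rise_mm(baseline, accel, 100))  # with acceleration: 235 mm
```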
These sea level rises could lead to difficulties for shore-based communities: for example, many major cities such as London and New Orleans already need storm-surge defenses, and would need more if sea level rose, though they also face issues such as sinking land.
Future sea level rise, like the recent rise, is not expected to be globally uniform (details below). Some regions show a sea level rise substantially more than the global average (in many cases of more than twice the average), and others a sea level fall. However, models disagree as to the likely pattern of sea level change.
Intergovernmental Panel on Climate Change results
The results from the IPCC (TAR) sea level chapter (convening authors John A. Church and Jonathan M. Gregory) are given below.
The sum of these components indicates a rate of eustatic sea level rise (corresponding to a change in ocean volume) from 1910 to 1990 ranging from –0.8 to 2.2 mm/yr, with a central value of 0.7 mm/yr. The upper bound is close to the observational upper bound (2.0 mm/yr), but the central value is less than the observational lower bound (1.0 mm/yr), i.e., the sum of components is biased low compared to the observational estimates. The sum of components indicates an acceleration of only 0.2 (mm/yr)/century, with a range from –1.1 to 0.7 (mm/yr)/century, consistent with observational finding of no acceleration in sea level rise during the 20th century. The estimated rate of sea level rise from anthropogenic climate change from 1910 to 1990 (from modeling studies of thermal expansion, glaciers and ice sheets) ranges from 0.3 to 0.8 mm/yr. It is very likely that 20th century warming has contributed significantly to the observed sea level rise, through thermal expansion of sea water and widespread loss of land ice.
A common perception is that the rate of sea level rise should have accelerated during the latter half of the 20th century, but tide gauge data for the 20th century show no significant acceleration. We have obtained estimates based on AOGCMs for the terms directly related to anthropogenic climate change in the 20th century, i.e., thermal expansion, ice sheets, glaciers and ice caps... The total computed rise indicates an acceleration of only 0.2 (mm/yr)/century, with a range from -1.1 to 0.7 (mm/yr)/century, consistent with observational finding of no acceleration in sea level rise during the 20th century. The sum of terms not related to recent climate change is -1.1 to 0.9 mm/yr (i.e., excluding thermal expansion, glaciers and ice caps, and changes in the ice sheets due to 20th century climate change). This range is less than the observational lower bound of sea level rise. Hence it is very likely that these terms alone are an insufficient explanation, implying that 20th century climate change has made a contribution to 20th century sea level rise.
Uncertainties and criticisms regarding IPCC results
- Tide records with a rate of 180 mm/century going back to the 19th century show no measurable acceleration throughout the late 19th and first half of the 20th century. The IPCC attributes about 60 mm/century to melting and other eustatic processes, leaving a residual of 120 mm of 20th century rise to be accounted for. Global ocean temperatures by Levitus et al are in accord with coupled ocean/atmosphere modeling of greenhouse warming, with heat-related change of 30 mm. Melting of polar ice sheets at the upper limit of the IPCC estimates could close the gap, but severe limits are imposed by the observed perturbations in Earth rotation. (Munk 2002)
- By the time of the IPCC TAR, attribution of sea level changes had a large unexplained gap between direct and indirect estimates of global sea level rise. Most direct estimates from tide gauges give 1.5–2.0 mm/yr, whereas indirect estimates based on the two processes responsible for global sea level rise, namely mass and volume change, are significantly below this range. Estimates of the volume increase due to ocean warming give a rate of about 0.5 mm/yr and the rate due to mass increase, primarily from the melting of continental ice, is thought to be even smaller. One study confirmed tide gauge data is correct, and concluded there must be a continental source of 1.4 mm/yr of fresh water. (Miller 2004)
- From (Douglas 2002): "In the last dozen years, published values of 20th century GSL rise have ranged from 1.0 to 2.4 mm/yr. In its Third Assessment Report, the IPCC discusses this lack of consensus at length and is careful not to present a best estimate of 20th century GSL rise. By design, the panel presents a snapshot of published analysis over the previous decade or so and interprets the broad range of estimates as reflecting the uncertainty of our knowledge of GSL rise. We disagree with the IPCC interpretation. In our view, values much below 2 mm/yr are inconsistent with regional observations of sea-level rise and with the continuing physical response of Earth to the most recent episode of deglaciation."
- The strong 1997-1998 El Niño caused regional and global sea level variations, including a temporary global increase of perhaps 20 mm. The IPCC TAR's examination of satellite trends says the major 1997/98 El Niño-Southern Oscillation (ENSO) event could bias the above estimates of sea level rise and also indicate the difficulty of separating long-term trends from climatic variability.
Effects of sea level rise
Based on the projected increases stated above, the IPCC TAR WG II report notes that current and future climate change would be expected to have a number of impacts, particularly on coastal systems. Such impacts may include increased coastal erosion, higher storm-surge flooding, inhibition of primary production processes, more extensive coastal inundation, changes in surface water quality and groundwater characteristics, increased loss of property and coastal habitats, increased flood risk and potential loss of life, loss of nonmonetary cultural resources and values, impacts on agriculture and aquaculture through decline in soil and water quality, and loss of tourism, recreation, and transportation functions.
There is an implication that many of these impacts will be detrimental. The report does, however, note that owing to the great diversity of coastal environments; regional and local differences in projected relative sea level and climate changes; and differences in the resilience and adaptive capacity of ecosystems, sectors, and countries, the impacts will be highly variable in time and space and will not necessarily be negative in all situations.
Statistical data on the human impact of sea level rise is scarce. A study in the April, 2007 issue of Environment and Urbanization reports that 634 million people live in coastal areas within 30 feet of sea level. The study also reported that about two thirds of the world's cities with over five million people are located in these low-lying coastal areas.
Are islands "sinking"?
IPCC assessments have suggested that deltas and small island states may be particularly vulnerable to sea level rise. Relative sea level rise (mostly caused by subsidence) is causing substantial loss of land in some deltas. However, sea level changes have not yet been implicated in any substantial environmental, humanitarian, or economic losses to small island states. Previous claims have been made that parts of the island nation of Tuvalu were "sinking" as a result of sea level rise. However, subsequent reviews have suggested that the loss of land area was the result of erosion during and following the 1997 cyclones Gavin, Hina, and Keli. According to climate skeptic Patrick J. Michaels, "In fact, areas...such as [the island of] Tuvalu show substantial declines in sea level over that period."
Reuters has reported other Pacific islands are facing a severe risk including Tegua island in Vanuatu. Claims that Vanuatu data shows no net sea level rise, are not substantiated by tide gauge data. Vanuatu tide gauge data show a net rise of ~50 mm from 1994-2004. Linear regression of this short time series suggests a rate of rise of ~7 mm/y, though there is considerable variability and the exact threat to the islands is difficult to assess using such a short time series.
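The Vanuatu rate quoted above comes from fitting a straight line to roughly a decade of gauge readings. As a hedged illustration of why such short records give uncertain trends, the sketch below fits a linear trend with NumPy to synthetic monthly data (made-up values, not real Vanuatu measurements) and shows how much the estimate moves when the window is shortened.

```python
# Illustrative only: fitting a linear trend to a short, noisy sea level series.
# The data are synthetic, NOT real Vanuatu tide gauge readings.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1994, 2004, 1 / 12)      # monthly samples over one decade
true_trend = 5.0                           # assumed underlying rate, mm/yr
noise = rng.normal(0.0, 40.0, years.size)  # interannual/seasonal scatter, mm
level = true_trend * (years - years[0]) + noise

slope_10yr, _ = np.polyfit(years, level, 1)
print(f"Fitted rate over 10 years: {slope_10yr:.1f} mm/yr")

first_3yr = slice(0, 3 * 12)
slope_3yr, _ = np.polyfit(years[first_3yr], level[first_3yr], 1)
print(f"Fitted rate over 3 years:  {slope_3yr:.1f} mm/yr")
```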
Numerous options have been proposed that would assist island nations to adapt to rising sea level.
In Intergovernmental Panel on Climate Change (IPCC) reports, equilibrium climate sensitivity refers to the equilibrium change in global mean surface temperature following a doubling of the atmospheric (equivalent) CO2 concentration. This value is estimated by the IPCC Fourth Assessment Report as likely to be in the range 2 to 4.5°C, with a best estimate of about 3°C, and is very unlikely to be less than 1.5°C. Values substantially higher than 4.5°C cannot be excluded, but agreement of models with observations is not as good for those values. This is a slight change from the IPCC Third Assessment Report, which said it was "likely to be in the range of 1.5 to 4.5°C". More generally, equilibrium climate sensitivity refers to the equilibrium change in surface air temperature following a unit change in radiative forcing, expressed in units of °C/(W/m2). In practice, the evaluation of the equilibrium climate sensitivity from models requires very long simulations with coupled global climate models, or it may be deduced from observations.
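As a hedged numerical illustration of how the two forms of the definition relate, the sketch below converts a per-doubling sensitivity into warming for an arbitrary CO2 change using the commonly cited logarithmic forcing approximation ΔF ≈ 5.35·ln(C/C0) W/m²; the concentrations and the 3°C sensitivity are example inputs, not IPCC results.

```python
# Illustrative sketch relating per-doubling climate sensitivity to warming for
# an arbitrary CO2 change, via the common approximation dF ~ 5.35*ln(C/C0) W/m^2.
import math

def co2_forcing_wm2(c_ppm, c0_ppm):
    """Approximate radiative forcing of a CO2 change, in W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def equilibrium_warming_c(c_ppm, c0_ppm, sensitivity_per_doubling_c):
    """Scale the per-doubling sensitivity by the ratio of forcings."""
    forcing_per_doubling = co2_forcing_wm2(2.0, 1.0)   # ~3.7 W/m^2
    return sensitivity_per_doubling_c * co2_forcing_wm2(c_ppm, c0_ppm) / forcing_per_doubling

# Example inputs: pre-industrial 280 ppm, a 3 C per-doubling sensitivity.
print(equilibrium_warming_c(560, 280, 3.0))  # full doubling: 3.0 C by construction
print(equilibrium_warming_c(385, 280, 3.0))  # partial increase to 385 ppm: ~1.4 C
```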
Gregory et al. (2002) estimate a lower bound of 1.6°C by estimating the change in Earth's radiation budget and comparing it to the global warming observed over the 20th century. Recent work by Annan and Hargreaves combines independent observational and model based estimates to produce a mean of about 3°C, and only a 5% chance of exceeding 4.5°C.
Shaviv (2005) carried out a similar analysis for 6 different time scales, ranging from the 11-yr solar cycle to the climate variations over geological time scales. He found a typical sensitivity of 2.0°C (ranging between 0.9°C and 2.9°C at 99% confidence) if there is no cosmic-ray climate connection, or a typical sensitivity of 1.3°C (between 0.9°C and 2.5°C at 99% confidence), if the cosmic-ray climate link is real.
Andronova and Schlesinger (2001) (using simple climate models) found that it could lie between 1 and 10°C, with a 54 percent likelihood that the climate sensitivity lies outside the IPCC range. The exact range depends on which factors are most important during the instrumental period: "At present, the most likely scenario is one that includes anthropogenic sulfate aerosol forcing but not solar variation. Although the value of the climate sensitivity in that case is most uncertain, there is a 70 percent chance that it exceeds the maximum IPCC value. This is not good news." said Schlesinger.
Forest et al. (2002) using patterns of change and the MIT EMIC estimated a 95% confidence interval of 1.4–7.7°C for the climate sensitivity, and a 30% probability that sensitivity was outside the 1.5 to 4.5°C range.
Frame et al. (2005) and Allen et al. note that the size of the confidence limits depends on the nature of the prior assumptions made.
Climate sensitivity is not the same as the expected climate change at, say, 2100: the TAR reports this to be an increase of 1.4 to 5.8°C over 1990.
The Transient climate response (TCR) - a term first used in the TAR - is the temperature change at the time of CO2 doubling in a run with CO2 increasing at 1%/year.
The effective climate sensitivity is a related measure that circumvents the need for long equilibrium simulations. It is evaluated from model output for evolving non-equilibrium conditions. It is a measure of the strengths of the feedbacks at a particular time and may vary with forcing history and climate state.
Carbon dioxide is a chemical compound composed of two oxygen atoms covalently bonded to a single carbon atom. It is a gas at standard temperature and pressure and is present throughout Earth's atmosphere, currently at a globally averaged concentration of approximately 385 ppm by volume, although this varies with both location and time. Carbon dioxide's chemical formula is CO2.
In general, it is exhaled by animals and utilized by plants during photosynthesis. Additional carbon dioxide is created by the combustion of fossil fuels or vegetable matter, among other chemical processes.
Carbon dioxide is an important greenhouse gas because of its ability to absorb many wavelengths of the infrared radiation emitted by the Earth's surface, and because of the length of time it stays in the Earth's atmosphere. Due to this, and the role it plays in the respiration of plants, it is a major component of the carbon cycle.
In its solid state, carbon dioxide is commonly called dry ice. Carbon dioxide has no liquid state at pressures below 5.1 atm.
In the Earth's atmosphere
Atmospheric CO2 concentrations measured at Mauna Loa Observatory.
Carbon dioxide in earth's atmosphere is considered a trace gas, and is measured in parts per million. Current concentration levels average approximately 385 ppm, which represents a total of around 800 gigatons of carbon. Its concentration can vary considerably on a regional basis: in urban areas it is generally higher, and indoors can reach 10 times the atmospheric concentration.
Due to human activities such as the combustion of fossil fuels and deforestation, the concentration of atmospheric carbon dioxide has increased by about 35% since the beginning of the age of industrialization.
Up to 40% of the gas emitted by a volcano during a subaerial volcanic eruption is carbon dioxide. However, human activities currently release more than 130 times the amount of CO2 emitted by volcanoes. According to the best estimates, volcanoes release about 130-230 million tonnes (145-255 million tons) of CO2 into the atmosphere each year. Emissions of CO2 by human activities amount to about 27 billion tonnes per year (30 billion tons).
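The figures in the two paragraphs above can be cross-checked with simple arithmetic. A commonly cited conversion, assumed here, is that 1 ppm of atmospheric CO2 corresponds to roughly 2.13 gigatonnes of carbon; the sketch below applies it to the numbers quoted in the text as a consistency check rather than a primary calculation.

```python
# Rough consistency check on the figures quoted above.
GT_CARBON_PER_PPM = 2.13   # commonly cited conversion: 1 ppm CO2 ~ 2.13 GtC

# 385 ppm should match the "around 800 gigatons of carbon" quoted earlier.
print(385 * GT_CARBON_PER_PPM)            # ~820 GtC

# Human emissions (~27 billion tonnes CO2/yr) versus volcanic emissions
# (130-230 million tonnes CO2/yr, i.e. 0.13-0.23 billion tonnes).
human_gt_co2_per_yr = 27.0
volcanic_gt_co2_per_yr = (0.13 + 0.23) / 2
print(human_gt_co2_per_yr / volcanic_gt_co2_per_yr)   # ~150x, consistent with "more than 130 times"
```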
Carbon dioxide is an end product in organisms that obtain energy from breaking down sugars, fats and amino acids with oxygen as part of their metabolism, in a process known as cellular respiration. This includes all plants, animals, many fungi and some bacteria. In higher animals, the carbon dioxide travels in the blood from the body's tissues to the lungs where it is exhaled. In plants using photosynthesis, carbon dioxide is absorbed from the atmosphere.
Role in photosynthesis
Plants remove carbon dioxide from the atmosphere by photosynthesis, also called carbon assimilation, which uses light energy to produce organic plant materials by combining carbon dioxide and water. Free oxygen is released as gas from the decomposition of water molecules, while the hydrogen is split into its protons and electrons and used to generate chemical energy via photophosphorylation. This energy is required for the fixation of carbon dioxide in the Calvin cycle to form sugars. These sugars can then be used for growth within the plant through respiration. Carbon dioxide gas must be introduced into greenhouses to maintain plant growth, as even in vented greenhouses the concentration of carbon dioxide can fall during daylight hours to as low as 200 ppm, at which level photosynthesis is significantly reduced. Venting can help offset the drop in carbon dioxide, but will never raise it back to ambient levels of 340 ppm. Carbon dioxide supplementation is the only known method to overcome this deficiency. Direct introduction of pure carbon dioxide is ideal, but rarely done because of cost constraints. Most greenhouses burn methane or propane to supply the additional CO2, but care must be taken to have a clean burning system as increased levels of nitrogen oxides (NOx) result in reduced plant growth. Sensors for sulfur dioxide (SO2) and NOx are expensive and difficult to maintain; accordingly most systems come with a carbon monoxide (CO) sensor under the assumption that high levels of carbon monoxide mean that significant amounts of NOx are being produced. Plants can potentially grow up to 50 percent faster in concentrations of 1,000 ppm CO2 when compared with ambient conditions.
Plants also emit CO2 during respiration, so it is only during growth stages that plants are net absorbers. For example, a growing forest will absorb many tonnes of CO2 each year; a mature forest, however, will produce as much CO2 from respiration and decomposition of dead specimens (e.g. fallen branches) as it uses in biosynthesis in growing plants. Regardless of this, mature forests are still valuable carbon sinks, helping maintain balance in the Earth's atmosphere. Additionally, and crucially to life on earth, phytoplankton photosynthesis absorbs dissolved CO2 in the upper ocean and thereby promotes the absorption of CO2 from the atmosphere.
Carbon dioxide content in fresh air varies between 0.03% (300 ppm) and 0.06% (600 ppm), depending on the location. A person's exhaled breath is approximately 4.5% carbon dioxide. It is dangerous when inhaled in high concentrations (greater than 5% by volume, or 50,000 ppm). The current threshold limit value (TLV) or maximum level that is considered safe for healthy adults for an eight-hour work day is 0.5% (5,000 ppm). The maximum safe level for infants, children, the elderly and individuals with cardio-pulmonary health issues is significantly less.
These figures are valid for pure carbon dioxide. In indoor spaces occupied by people the carbon dioxide concentration will reach higher levels than in pure outdoor air. Concentrations higher than 1,000 ppm will cause discomfort in more than 20% of occupants, and the discomfort will increase with increasing CO2 concentration. The discomfort will be caused by various gases coming from human respiration and perspiration, and not by CO2 itself. At 2,000 ppm the majority of occupants will feel a significant degree of discomfort, and many will develop nausea and headaches. The CO2 concentration between 300 and 2,500 ppm is used as an indicator of indoor air quality.
Acute carbon dioxide toxicity is sometimes known by the names given to it by miners: black damp, choke damp, or stythe. Miners would try to alert themselves to dangerous levels of carbon dioxide in a mine shaft by bringing a caged canary with them as they worked. The canary would die before CO2 reached levels toxic to people. Choke damp caused a great loss of life at Lake Nyos in Cameroon in 1986, when an upwelling of CO2-laden lake water quickly blanketed a large surrounding populated area. The heavier carbon dioxide forced out the life-sustaining oxygen near the surface, killing nearly two thousand people.
Carbon dioxide ppm levels (CDPL) are a surrogate for measuring indoor pollutants that may cause occupants to grow drowsy, get headaches, or function at lower activity levels. To eliminate most indoor air quality complaints, total indoor CDPL must be reduced to below 600. NIOSH considers that indoor air concentrations that exceed 1,000 are a marker suggesting inadequate ventilation. ASHRAE recommends they not exceed 1,000 inside a space. OSHA limits concentrations in the workplace to 5,000 for prolonged periods. The U.S. National Institute for Occupational Safety and Health limits brief exposures (up to ten minutes) to 30,000 and considers CDPL exceeding 40,000 as "immediately dangerous to life and health." People who breathe 50,000 for more than half an hour show signs of acute hypercapnia, while breathing 70,000 – 100,000 can produce unconsciousness in only a few minutes. Accordingly, carbon dioxide, either as a gas or as dry ice, should be handled only in well-ventilated areas.
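The thresholds quoted in this paragraph can be summarized as a simple lookup. The function below is only an illustrative mapping of the levels as stated above; it is not a safety or compliance tool, and each label paraphrases the text rather than the original standards.

```python
# Illustrative mapping of the indoor CO2 thresholds quoted in the text (ppm).
# Not a safety or compliance tool.
def describe_co2_level(ppm):
    if ppm < 600:
        return "below 600: level associated with eliminating most indoor air quality complaints"
    if ppm <= 1000:
        return "600-1,000: within ASHRAE's recommended indoor maximum"
    if ppm <= 5000:
        return "above 1,000: NIOSH marker of inadequate ventilation; within OSHA's 5,000 limit"
    if ppm <= 30000:
        return "above 5,000: exceeds OSHA's limit for prolonged workplace exposure"
    if ppm <= 40000:
        return "above 30,000: exceeds NIOSH's limit for brief (up to ten minute) exposures"
    return "above 40,000: immediately dangerous to life and health (NIOSH)"

for level in (450, 800, 2000, 10000, 50000):
    print(level, "->", describe_co2_level(level))
```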
The greenhouse effect, discovered by Joseph Fourier in 1824 and first investigated quantitatively by Svante Arrhenius in 1896, is the process in which the emission of infrared radiation by the atmosphere warms a planet's surface. The name comes from an analogy with the warming of air inside a greenhouse compared to the air outside the greenhouse. The Earth's average surface temperature is about 20-30°C warmer than it would be without the greenhouse effect. In addition to the Earth, Mars and especially Venus have greenhouse effects.
A schematic representation of the exchanges of energy between outer space, the Earth's atmosphere, and the Earth surface. The ability of the atmosphere to capture and recycle energy emitted by the Earth surface is the defining characteristic of the greenhouse effect.
Anthropogenic greenhouse effect
CO2 production from increased industrial activity (fossil fuel burning) and other human activities such as cement production and tropical deforestation has increased the CO2 concentrations in the atmosphere. Measurements of carbon dioxide amounts from Mauna Loa observatory show that CO2 has increased from about 313 ppm (parts per million) in 1960 to about 375 ppm in 2005. The current observed amount of CO2 exceeds the geological record of CO2 maxima (~300 ppm) from ice core data (Hansen, J., Climatic Change, 68, 269, 2005 ISSN 0165-0009).
Because it is a greenhouse gas, elevated CO2 levels will increase global mean temperature; based on an extensive review of the scientific literature, the Intergovernmental Panel on Climate Change concludes that "most of the observed increase in globally averaged temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations".
Greenhouse gases (GHG) are components of the atmosphere that contribute to the greenhouse effect. Some greenhouse gases occur naturally in the atmosphere, while others result from human activities such as burning of fossil fuels such as coal. Greenhouse gases include water vapor, carbon dioxide, methane, nitrous oxide, and ozone.
The "Greenhouse effect"
When sunlight reaches the surface of the Earth, some of it is absorbed and warms the Earth. Because the Earth's surface is much cooler than the sun, it radiates energy at much longer wavelengths than does the sun. The atmosphere absorbs these longer wavelengths more effectively than it does the shorter wavelengths from the sun. The absorption of this longwave radiant energy warms the atmosphere; the atmosphere also is warmed by transfer of sensible and latent heat from the surface. Greenhouse gases also emit longwave radiation both upward to space and downward to the surface. The downward part of this longwave radiation emitted by the atmosphere is the "greenhouse effect." The term is a misnomer, as this process is not the mechanism that warms greenhouses.
The major natural greenhouse gases are water vapor, which causes about 36-70% of the greenhouse effect on Earth (not including clouds); carbon dioxide, which causes 9-26%; methane, which causes 4-9%, and ozone, which causes 3-7%. It is not possible to state that a certain gas causes a certain percentage of the greenhouse effect, because the influences of the various gases are not additive. (The higher ends of the ranges quoted are for the gas alone; the lower ends, for the gas counting overlaps.)
Other greenhouse gases include, but are not limited to, nitrous oxide, sulfur hexafluoride, hydrofluorocarbons, perfluorocarbons and chlorofluorocarbons.
The major atmospheric constituents (nitrogen, N2 and oxygen, O2) are not greenhouse gases. This is because homonuclear diatomic molecules such as N2 and O2 neither absorb nor emit infrared radiation, as there is no net change in the dipole moment of these molecules when they vibrate. Molecular vibrations occur at energies that are of the same magnitude as the energy of the photons of infrared light. Heteronuclear diatomics such as CO or HCl absorb IR; however, these molecules are short-lived in the atmosphere owing to their reactivity and solubility. As a consequence they do not contribute significantly to the greenhouse effect.
Late 19th century scientists experimentally discovered that N2 and O2 did not absorb infrared radiation (called, at that time, "dark radiation") and that CO2 and many other gases did absorb such radiation. It was recognized in the early 20th century that the known major greenhouse gases in the atmosphere caused the earth's temperature to be higher than it would have been without the greenhouse gases.
Anthropogenic greenhouse gases
The projected temperature increase for a range of greenhouse gas stabilization scenarios (the coloured bands). The black line in the middle of the shaded area indicates 'best estimates'; the red and the blue lines the likely limits. From the work of IPCC AR4 2007.
The concentrations of several greenhouse gases have increased over time. Human activity increases the greenhouse effect primarily through release of carbon dioxide, but human influences on other greenhouse gases can also be important. Some of the main sources of greenhouse gases due to human activity include:
-burning of fossil fuels and deforestation leading to higher carbon dioxide concentrations;
-livestock and paddy rice farming, land use and wetland changes, pipeline losses, and covered vented landfill emissions leading to higher methane atmospheric concentrations. Many of the newer style fully vented septic systems that enhance and target the fermentation process also are major sources of atmospheric methane;
-use of chlorofluorocarbons (CFCs) in refrigeration systems, and use of CFCs and halons in fire suppression systems and manufacturing processes.
-agricultural activities, including the use of fertilizers, that lead to higher nitrous oxide concentrations.
The seven sources of CO2 from fossil fuel combustion are (with percentage contributions for 2000-2004):
1-Solid fuels (e.g. coal): 35%
2-Liquid fuels (e.g. petrol): 36%
3-Gaseous fuels (e.g. natural gas): 20%
4-Flaring gas industrially and at wells: <1%
5-Cement production: 3%
6-Non-fuel hydrocarbons: <1%
7-The "international bunkers" of shipping and air transport not included in national inventories: 4%
Greenhouse gas emissions from industry, transportation (one third of total US global warming pollution) and agriculture are very likely the main cause of recently observed global warming. The major sources of an individual's greenhouse gas emissions include home heating and cooling, electricity consumption, and automobiles. Corresponding conservation measures include improving home insulation, installing cellular shades, switching to compact fluorescent lamps, and choosing vehicles with high fuel economy (miles per gallon).
Carbon dioxide, methane, nitrous oxide and three groups of fluorinated gases (sulfur hexafluoride, HFCs, and PFCs) are the major greenhouse gases and the subject of the Kyoto Protocol, which entered into force in 2005.
CFCs, although greenhouse gases, are regulated by the Montreal Protocol, which was motivated by CFCs' contribution to ozone depletion rather than by their contribution to global warming. Note that ozone depletion has only a minor role in greenhouse warming though the two processes often are confused in the popular media.
A cognitive bias is a pattern of deviation in judgement that occurs in particular situations (see also cognitive distortion and the lists of thinking-related topics). Implicit in the concept of a "pattern of deviation" is a standard of comparison; this may be judgement of people outside those particular situations, or may be a set of independently verifiable facts. The existence of some of these cognitive biases has been verified empirically in the field of psychology, others are widespread beliefs, and may themselves be a consequence of cognitive bias.
Cognitive biases are instances of evolved mental behaviour. Some are presumably adaptive, for example, because they lead to more effective actions or enable faster decisions. Others presumably result from a lack of appropriate mental mechanisms, or from the misapplication of a mechanism that is adaptive under different circumstances.
Decision-making and behavioral biases
Many of these biases are studied for how they affect belief formation and business decisions and scientific research.
- Bandwagon effect — the tendency to do (or believe) things because many other people do (or believe) the same. Related to groupthink, herd behaviour, and manias.
- Base rate fallacy — ignoring available statistical data in favor of particulars
- Bias blind spot — the tendency not to compensate for one's own cognitive biases.
- Closed world assumption - the presumption that what is not currently known to be true is false.
- Choice-supportive bias — the tendency to remember one's choices as better than they actually were.
- Confirmation bias — the tendency to search for or interpret information in a way that confirms one's preconceptions.
- Congruence bias — the tendency to test hypotheses exclusively through direct testing, in contrast to tests of possible alternative hypotheses.
- Contrast effect — the enhancement or diminishment of a weight or other measurement when compared with a recently observed contrasting object.
- Déformation professionnelle — the tendency to look at things according to the conventions of one's own profession, forgetting any broader point of view.
- Distinction bias - the tendency to view two options as more dissimilar when evaluating them simultaneously than when evaluating them separately.
- Endowment effect — "the fact that people often demand much more to give up an object than they would be willing to pay to acquire it".
- Extreme aversion — the tendency to avoid extremes, being more likely to choose an option if it is the intermediate choice.
- Focusing effect — prediction bias occurring when people place too much importance on one aspect of an event; causes error in accurately predicting the utility of a future outcome.
- Framing — using an approach or description of the situation or issue that is too narrow. Also framing effect — drawing different conclusions based on how data are presented.
- Hyperbolic discounting — the tendency for people to have a stronger preference for more immediate payoffs relative to later payoffs, the closer to the present both payoffs are.
- Illusion of control — the tendency for human beings to believe they can control or at least influence outcomes that they clearly cannot.
- Impact bias — the tendency for people to overestimate the length or the intensity of the impact of future feeling states.
- Information bias — the tendency to seek information even when it cannot affect action.
- Irrational escalation — the tendency to make irrational decisions based upon rational decisions in the past or to justify actions already taken.
- Loss aversion — "the disutility of giving up an object is greater than the utility associated with acquiring it". (see also sunk cost effects and Endowment effect).
- Mere exposure effect — the tendency for people to express undue liking for things merely because they are familiar with them.
- Moral credential effect — the tendency of a track record of non-prejudice to increase subsequent prejudice.
- Need for closure — the need to reach a verdict in important matters; to have an answer and to escape the feeling of doubt and uncertainty. The personal context (time or social pressure) might increase this bias.
- Neglect of probability — the tendency to completely disregard probability when making a decision under uncertainty.
- Omission bias — The tendency to judge harmful actions as worse, or less moral, than equally harmful omissions (inactions).
- Open world assumption - emphasising that lack of knowledge does not imply falsity
- Outcome bias — the tendency to judge a decision by its eventual outcome instead of based on the quality of the decision at the time it was made.
- Planning fallacy — the tendency to underestimate task-completion times.
- Post-purchase rationalization — the tendency to persuade oneself through rational argument that a purchase was a good value.
- Pseudocertainty effect — the tendency to make risk-averse choices if the expected outcome is positive, but make risk-seeking choices to avoid negative outcomes.
- Reactance - the urge to do the opposite of what someone wants you to do out of a need to resist a perceived attempt to constrain your freedom of choice.
- Selective perception — the tendency for expectations to affect perception.
- Status quo bias — the tendency for people to like things to stay relatively the same (see also Loss aversion and Endowment effect).
- Unit bias — the tendency to want to finish a given unit of a task or an item, with strong effects on the consumption of food in particular.
- Von Restorff effect — the tendency for an item that "stands out like a sore thumb" to be more likely to be remembered than other items.
- Zero-risk bias — preference for reducing a small risk to zero over a greater reduction in a larger risk.
Biases in probability and belief
Many of these biases are often studied for how they affect business and economic decisions and how they affect experimental research.
- Ambiguity effect — the avoidance of options for which missing information makes the probability seem "unknown".
- Anchoring — the tendency to rely too heavily, or "anchor," on a past reference or on one trait or piece of information when making decisions.
- Attentional bias — neglect of relevant data when making judgments of a correlation or association.
- Availability heuristic — estimating what is more likely by what is more available in memory, which is biased toward vivid, unusual, or emotionally charged examples.
- Availability cascade - a self-reinforcing process in which a collective belief gains more and more plausibility through its increasing repetition in public discourse (or "repeat something long enough and it will become true").
- Clustering illusion — the tendency to see patterns where actually none exist.
- Conjunction fallacy — the tendency to assume that specific conditions are more probable than general ones.
- Gambler's fallacy — the tendency to assume that individual random events are influenced by previous random events. For example, "I've flipped heads with this coin five times consecutively, so the chance of tails coming out on the sixth flip is much greater than heads." (A short simulation illustrating this appears after this list.)
- Hawthorne effect — refers to a phenomenon which is thought to occur when people observed during a research study temporarily change their behavior or performance (this can also be referred to as demand characteristics).
- Hindsight bias — sometimes called the "I-knew-it-all-along" effect, the inclination to see past events as being predictable.
- Illusory correlation — beliefs that inaccurately suppose a relationship between a certain type of action and an effect.
- Ludic fallacy — the analysis of chance-related problems within the narrow frame of games, ignoring the complexity of reality and the non-Gaussian distribution of many things.
- Neglect of prior base rates effect — the tendency to neglect known odds when reevaluating odds in light of weak evidence.
- Observer-expectancy effect — when a researcher expects a given result and therefore unconsciously manipulates an experiment or misinterprets data in order to find it (see also subject-expectancy effect).
- Optimism bias — the systematic tendency to be over-optimistic about the outcome of planned actions.
- Overconfidence effect — the tendency to overestimate one's own abilities.
- Positive outcome bias — a tendency in prediction to overestimate the probability of good things happening to oneself (see also wishful thinking, optimism bias and valence effect).
- Primacy effect — the tendency to weigh initial events more than subsequent events.
- Recency effect — the tendency to weigh recent events more than earlier events (see also peak-end rule).
- Regression toward the mean disregarded — the tendency to expect extreme performance to continue.
- Reminiscence bump — the effect that people tend to recall more personal events from adolescence and early adulthood than from other lifetime periods.
- Repetition bias — a willingness to believe what we have been told most often and by the greatest number of different sources.
- Rosy retrospection — the tendency to rate past events more positively than one actually rated them when the event occurred.
- Stereotyping — expecting a member of a group to have certain characteristics without having actual information about that individual.
- Subadditivity effect — the tendency to judge probability of the whole to be less than the probabilities of the parts.
- Telescoping effect — the effect that recent events appear to have occurred more remotely and remote events appear to have occurred more recently.
- Texas sharpshooter fallacy — the fallacy of selecting or adjusting a hypothesis after the data is collected, making it impossible to test the hypothesis fairly.
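As a quick illustration of the gambler's fallacy entry above, the simulation below (a minimal sketch with made-up parameters) estimates the probability of heads on the flip immediately following a run of five consecutive heads from a fair coin; it stays near one half rather than shifting toward tails.

```python
# Minimal simulation for the gambler's fallacy entry: after five heads in a
# row from a fair coin, the next flip is still roughly 50/50.
import random

random.seed(1)
outcomes_after_streak = []
heads_streak = 0
for _ in range(1_000_000):
    heads = random.random() < 0.5
    if heads_streak >= 5:                 # the previous five flips were all heads
        outcomes_after_streak.append(heads)
    heads_streak = heads_streak + 1 if heads else 0

print(len(outcomes_after_streak))                               # number of 5-heads runs seen
print(sum(outcomes_after_streak) / len(outcomes_after_streak))  # ~0.5
```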
Most of these biases are labeled as attributional biases.
- Actor-observer bias — the tendency for explanations of other individuals' behaviors to overemphasize the influence of their personality and underemphasize the influence of their situation (see also fundamental attribution error). However, this is coupled with the opposite tendency for the self in that explanations for our own behaviors overemphasize the influence of our situation and underemphasize the influence of our own personality.
- Dunning-Kruger effect — "...when people are incompetent in the strategies they adopt to achieve success and satisfaction, they suffer a dual burden: Not only do they reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the ability to realize it. Instead, ...they are left with the mistaken impression that they are doing just fine."(see also Lake Wobegon effect, and overconfidence effect).
- Egocentric bias — occurs when people claim more responsibility for themselves for the results of a joint action than an outside observer would.
- Forer effect (aka Barnum effect) — the tendency to give high accuracy ratings to descriptions of one's personality that supposedly are tailored specifically to the individual, but are in fact vague and general enough to apply to a wide range of people. For example, horoscopes.
- False consensus effect — the tendency for people to overestimate the degree to which others agree with them.
- Fundamental attribution error — the tendency for people to over-emphasize personality-based explanations for behaviors observed in others while under-emphasizing the role and power of situational influences on the same behavior (see also actor-observer bias, group attribution error, positivity effect, and negativity effect).
- Halo effect — the tendency for a person's positive or negative traits to "spill over" from one area of their personality to another in others' perceptions of them (see also physical attractiveness stereotype).
- Herd instinct — Common tendency to adopt the opinions and follow the behaviors of the majority to feel safer and to avoid conflict.
- Illusion of asymmetric insight — people perceive their knowledge of their peers to surpass their peers' knowledge of them.
- Illusion of transparency — people overestimate others' ability to know them, and they also overestimate their ability to know others.
- Ingroup bias — the tendency for people to give preferential treatment to others they perceive to be members of their own groups.
- Just-world phenomenon — the tendency for people to believe that the world is "just" and therefore people "get what they deserve."
- Lake Wobegon effect — the human tendency to report flattering beliefs about oneself and believe that one is above average (see also worse-than-average effect, and overconfidence effect).
- Notational bias — a form of cultural bias in which a notation induces the appearance of a nonexistent natural law.
- Outgroup homogeneity bias — individuals see members of their own group as being relatively more varied than members of other groups.
- Projection bias — the tendency to unconsciously assume that others share the same or similar thoughts, beliefs, values, or positions.
- Self-serving bias — the tendency to claim more responsibility for successes than failures. It may also manifest itself as a tendency for people to evaluate ambiguous information in a way beneficial to their interests (see also group-serving bias).
- Self-fulfilling prophecy — the tendency to engage in behaviors that elicit results which will (consciously or not) confirm our beliefs.
- System justification — the tendency to defend and bolster the status quo, i.e. existing social, economic, and political arrangements tend to be preferred, and alternatives disparaged sometimes even at the expense of individual and collective self-interest.
- Trait ascription bias — the tendency for people to view themselves as relatively variable in terms of personality, behavior and mood while viewing others as much more predictable.
- Further information: Memory bias
- Beneffectance: perceiving oneself as responsible for desirable outcomes but not responsible for undesirable ones. (Term coined by Greenwald (1980))
- Consistency bias: incorrectly remembering one's past attitudes and behaviour as resembling present attitudes and behaviour.
- Cryptomnesia: a form of misattribution where a memory is mistaken for imagination.
- Egocentric bias: recalling the past in a self-serving manner, e.g. remembering one's exam grades as being better than they were, or remembering a caught fish as being bigger than it was
- False memory
- Hindsight bias: filtering memory of past events through present knowledge, so that those events look more predictable than they actually were; also known as the 'I-knew-it-all-along effect'.
- Selective Memory
- Suggestibility: a form of misattribution where ideas suggested by a questioner are mistaken for memory.
Common theoretical causes of some cognitive biases
- Adaptive Bias
- Attribution theory, especially:
- Cognitive dissonance, and related:
- Heuristics, including:
- Introspection illusion
- Attribution theory
- Bias (statistics)
- Cognitive distortion
- Logical fallacy
- Media bias
- Stereotype inevitability
- System justification
- Systematic bias
- Baron, J. (2000). Thinking and deciding (3d. edition). New York: Cambridge University Press. ISBN 0-521-65030-5
- Bishop, Michael A & Trout, J.D. (2004). Epistemology and the Psychology of Human Judgment. New York: Oxford University Press. ISBN 0-19-516229-3
- Gilovich, T. (1993). How We Know What Isn't So: The Fallibility of Human Reason in Everyday Life. New York: The Free Press. ISBN 0-02-911706-2
- Gilovich, T., Griffin D. & Kahneman, D. (Eds.). (2002). Heuristics and biases: The psychology of intuitive judgment. Cambridge, UK: Cambridge University Press. ISBN 0-521-79679-2
- Greenwald, A. (1980). "The Totalitarian Ego: Fabrication and Revision of Personal History" American Psychologist, Vol. 35, No. 7
- Kahneman, D., Slovic, P. & Tversky, A. (Eds.). (1982). Judgment under Uncertainty: Heuristics and Biases. Cambridge, UK: Cambridge University Press. ISBN 0-521-28414-7
- Kahneman, Daniel, Jack L. Knetsch, and Richard H. Thaler. (1991). "Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias." The Journal of Economic Perspectives 5(1):193-206.
- Plous, S. (1993). The Psychology of Judgment and Decision Making. New York: McGraw-Hill. ISBN 0-07-050477-6
- Schacter, D. L. (1999). "The Seven Sins of Memory: Insights From Psychology and Cognitive Neuroscience" American Psychologist Vol. 54. No. 3, 182-203
- Tetlock, Philip E. (2005). Expert Political Judgment: how good is it? how can we know?. Princeton: Princeton University Press. ISBN 978-0-691-12302-8
- Virine, L. and Trumper M., Project Decisions: The Art and Science (2007). Management Concepts. Vienna, VA, ISBN 978-1567262179
- Haselton, M. G. & Funder, D. (in press). The evolution of accuracy and bias in social judgment. In M. Schaller, D. T. Kenrick, & J. A. Simpson (Eds.), Evolution and Social Psychology. New York: Psychology Press. [Volume to be published as part of the Frontiers of Social Psychology series.] Full text
- Haselton, M. G. & Nettle, D. (in press). The paranoid optimist: An integrative evolutionary model of cognitive biases. Personality and Social Psychology Review. Full text
- Haselton, M. G. (in press). Error management theory. In R. Baumeister & K. Vohs (eds.), Encyclopedia of social psychology. Thousand Oaks, CA: Sage. Full text
- Haselton, M. G. & Buss, D. M. (2003). Biases in Social Judgment: Design Flaws or Design Features? In J. Forgas, K. Williams, & B. von Hippel (Eds.) Responding to the Social World: Implicit and Explicit Processes in Social Judgments and Decisions. New York, NY: Cambridge. Full text
- Haselton M. G. & Buss, D. M. (2000). Error management theory: A new perspective on biases in cross-sex mind reading. Journal of Personality and Social Psychology, 78, 81-91.Full text
- (Hsee & Zhang, 2004)
- (Kahneman, Knetsch, and Thaler 1991: 193) Richard Thaler coined the term "endowment effect."
- (Kahneman, Knetsch, and Thaler 1991: 193) Daniel Kahneman, together with Amos Tversky, coined the term "loss aversion."
- Kruglanski, 1989; Kruglanski & Webster, 1996
- (Kahneman, Knetsch, and Thaler 1991: 193)
- Justin Kruger, David Dunning (1999). "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments". Journal of Personality and Social Psychology 77 (6): 1121–34.
What is Asthma?
Asthma is a chronic respiratory condition that causes inflammation and narrowing of the airways. It affects more than 25 million people in the United States alone and can have a significant impact on daily activities. Although asthma cannot be cured, it can be effectively managed through various treatments. The key to managing asthma is to identify triggers that can cause asthma symptoms to flare up and work with your doctor to develop a plan to avoid and manage them.
Asthma is a respiratory disease that affects the airways in the lungs. This chronic condition causes inflammation in the bronchial tubes, which leads to symptoms such as chest tightness, shortness of breath, wheezing, and coughing. While the cause of asthma isn’t fully understood, it is known that several factors can contribute to its development, including genetics, environmental factors, and respiratory infections. Certain triggers such as dust, pollen, exercise, and cold air can also cause asthma symptoms. A doctor can diagnose asthma through various tests, including a physical exam, lung function tests, and medical history review. If left untreated, asthma can lead to complications such as respiratory failure and even death. Individuals with a history of asthma, allergies, or a family history of respiratory diseases are at higher risk of developing asthma, but the condition can affect anyone. While there is no cure for asthma, treatment options such as rescue inhalers and long-term control medications can help manage symptoms and prevent asthma flare-ups. Proper management and prevention of asthma attacks are essential for living an active life with this chronic condition.
Asthma symptoms can range from mild to severe, with varying degrees of intensity depending on the individual. Common symptoms include chest tightness, shortness of breath, wheezing, and coughing. However, not everyone with asthma experiences these symptoms, and some may have other lesser-known signs.
Less common symptoms of asthma include difficulty exercising, fatigue, trouble sleeping due to coughing or wheezing, and rapid breathing. The severity of asthma symptoms can vary greatly from person to person. Some individuals may have mild symptoms and only experience occasional asthma episodes, while others may have severe, life-threatening asthma attacks.
Respiratory infections, allergies, and exposure to environmental triggers such as secondhand smoke, dust mites, and pollution can exacerbate asthma symptoms. When left unmanaged, these triggers can lead to an asthma flare-up, which can cause a rapid and severe worsening of asthma symptoms.
Potential exacerbating factors of asthma to look out for include an increase in coughing or wheezing, difficulty breathing, chest tightness, and shortness of breath, particularly during times of physical activity. It’s important to know the warning signs of a potential asthma attack so that medical assistance can be sought if needed. Some individuals may not experience any noticeable symptoms until they have a severe asthma attack. Therefore, it’s crucial for individuals with a history of asthma to be vigilant about monitoring their symptoms and seeking medical attention if any changes or unusual symptoms arise.
The exact cause of asthma is still unknown, but it's believed that both genetic and environmental factors contribute. While some individuals may inherit genes that make them more susceptible to developing asthma, others may develop it due to exposure to environmental factors that irritate the airways. Exposure to irritants like smoke, air pollution, and chemicals can also cause or worsen asthma symptoms.
Having allergies or a family history of asthma or allergic diseases can increase the risk of developing the condition. If one or both parents have asthma, their child is more likely to develop it. Similarly, individuals who have other allergic conditions such as eczema or hay fever are also at an increased risk of developing asthma.
With continued patient education and awareness of asthma triggers, patients can take proactive steps to minimize the likelihood of an asthma attack. Asthma triggers are defined as any factors that can cause asthma symptoms, and they vary from person to person. The triggers for asthma can be broadly classified into different categories: pollutants, allergens, respiratory infections, physical and mental stress, and climate.
Air pollutants, such as exhaust fumes, smoke, and smog, can irritate the lungs and trigger asthma symptoms. These pollutants inflame the airways, leading to coughing, wheezing, and shortness of breath. Similarly, exposure to allergens such as dust mites, pet dander, mold, and pollen can also trigger asthma. These allergens can cause an allergic reaction in the airways, leading to inflammation and tightness in the chest.
Respiratory infections caused by viruses or bacteria can also trigger asthma symptoms. When an individual with asthma contracts a respiratory infection, the infection exacerbates preexisting asthma symptoms, leading to difficulty breathing, wheezing, and coughing. Physical and mental stress can also trigger asthmatic reactions in certain individuals. Exercise-induced asthma, for instance, is triggered by increased physical activity. Similarly, individuals with asthma may experience symptoms triggered by emotional stress.
Climate can also impact asthma symptoms. Dry, cold air can irritate the airways and trigger tightness in the chest. Hot and humid weather can exacerbate symptoms such as wheezing and shortness of breath.
Diagnosing asthma can be challenging as it shares similar symptoms with other respiratory diseases. Doctors rely on a combination of factors to diagnose asthma accurately, including symptoms, medical history, potential triggers, and test results. All these factors are used to reach a conclusive diagnosis.
When diagnosing asthma, your doctor will typically begin by reviewing your medical history, including a family history of allergies and asthma. Your doctor will also ask about symptoms you may be experiencing. To assist your doctor in making an accurate diagnosis, it’s helpful to keep a log of symptoms, frequency, and potential triggers. This information is valuable in developing an appropriate treatment plan.
Your doctor may also perform tests to help diagnose asthma. These tests may include cough tests, lung function tests, spirometry, and fractional exhaled nitric oxide tests. Lung function tests measure how well the lungs are working, while spirometry measures how much air an individual can breathe in and out. Fractional exhaled nitric oxide tests measure the amount of nitric oxide in the breath, which is a marker for inflammation in the airways.
Asthma can be misdiagnosed as other respiratory diseases. Differential diagnosis is a process used by healthcare providers to distinguish asthma from other conditions with similar symptoms. It’s important to differentiate life-threatening asthma attacks from anaphylactic reactions, which share similar symptoms, including shortness of breath, chest tightness, and wheezing.
When left untreated or poorly managed, asthma can lead to a variety of potential complications that can significantly impact your quality of life. One of the main short-term complications of asthma is interference with daily activities. Asthma can cause chest tightness, coughing, wheezing, and shortness of breath, making it difficult to perform routine tasks like exercising or even walking up the stairs. It can also lead to work absenteeism during flare-ups.
If asthma is left untreated or improperly managed, it can cause long-term effects, including the permanent narrowing of bronchial tubes. Known as airway remodeling, this condition can make it even more challenging for air to pass through the bronchial tubes, leading to breathing difficulties and ultimately decreased lung function.
Complications from asthma can be severe and may require emergency care. In some cases, uncontrolled asthma can lead to serious respiratory complications that require hospitalization, specifically for severe asthma attacks. In severe cases where long-term medication use is necessary to manage asthma symptoms, prolonged use of medication can cause side effects, including headaches, an upset stomach, and trembling. Managing asthma with the help of healthcare providers is essential to minimize complications.
Although anyone can develop asthma, certain factors increase the chances of developing this respiratory disease. Knowing these risk factors can be helpful in understanding who is most at risk. Preventative measures can then be taken to manage or reduce the risk of asthma.
One major risk factor for asthma is smoking and exposure to secondhand smoke. Cigarette smoke contains hundreds of harmful chemicals that irritate the airways and lungs, making them more susceptible to respiratory problems. Exposure to secondhand smoke can be just as damaging, particularly for young children whose lungs are still developing.
Being overweight is another potential risk factor for asthma. Studies have found a correlation between obesity and an increased risk of respiratory conditions like asthma. This correlation may be due to the fact that excess weight places additional pressure on the lungs and airways, making them more prone to inflammation and respiratory distress.
Exposure to pollution or occupational triggers can also be a risk factor for asthma. Poor air quality, whether due to traffic emissions or industrial pollution, can irritate the airways and lead to respiratory disease. Additionally, individuals working in certain jobs that involve exposure to chemicals or dust may also be at higher risk.
Furthermore, a family history of asthma or having another allergic condition like atopic dermatitis or hay fever can also put one at risk. Genes can play a role in the development of asthma, so having a family member with the condition can increase your chances of developing it. Conditions like atopic dermatitis and hay fever are also often linked to asthma and can increase the likelihood of developing this respiratory disease.
Treatment and Management
While there is currently no cure for asthma, various treatment and management options are available to help control and prevent symptoms. It’s important for asthma sufferers to understand that asthma management is an active process that requires ongoing effort and commitment.
Treatment. One of the main types of medication prescribed by healthcare providers for asthma is inhaled corticosteroids. These medications are used as a preventative measure and work to reduce inflammation in the airways, which can help prevent asthma attacks. Other types of medication for asthma include rescue inhalers like albuterol, which can be used during an asthma attack to quickly relieve symptoms.
In addition to medication, conservative measures can be taken to manage asthma attacks and prevent disease progression. These measures include environmental control, such as reducing exposure to triggers like dust mites or pet dander, and weight reduction for those who are overweight or obese.
Management. The management of chronic asthma typically involves a five-step approach. The first step involves the use of an inhaled corticosteroid as a controller medication, with the addition of a rescue inhaler as needed. If symptoms persist, step two involves increasing the dose of the controller medication or adding a long-acting bronchodilator.
If symptoms continue to persist, steps three and four may involve the use of additional medication, such as biologics or leukotriene modifiers, and referral to a specialist for further treatment options. Step five involves the use of oral corticosteroids, which have the potential for more serious side effects and are typically used as a last resort.
In some cases, asthma attacks can be severe enough to require admission to the hospital. Indications for hospital admission may include severe symptoms that are not responding to medication, a significant drop in lung function, or a history of near-fatal asthma attacks. Treatment options in the hospital may include oxygen therapy, intravenous medications, and mechanical ventilation in cases of life-threatening or near-fatal asthma attacks.
How Asthma Affects Your Body
Asthma can affect people of all ages. It causes inflammation and narrowing of the airways, which can make it difficult to breathe. As a chronic respiratory disease, asthma can lead to asthma flare-ups and airway remodeling. By understanding how asthma affects the body, individuals with asthma can better manage their symptoms and prevent disease progression.
One of the biggest challenges for people with asthma is managing asthma flare-ups. These can happen suddenly and can be very scary, but there are steps that can be taken to recognize and manage them to prevent future attacks.
Signs of an Asthma Flare-Up. Also known as an asthma attack, an asthma flare-up can cause a range of symptoms from mild to severe. The most common symptoms include chest tightness, shortness of breath, wheezing, and coughing. It’s important to be aware of these symptoms and take action if they start to worsen.
Managing Asthma Flare-Ups. If you experience an asthma flare-up, use your quick-relief medication, also known as a rescue inhaler. This medication can help to quickly open up your airways and make it easier to breathe. Be sure to follow the instructions that come with your medication and don’t exceed the recommended dose.
In addition to using medication, it’s important to avoid triggers and monitor your symptoms. By identifying what triggers your asthma flare-ups, such as allergens, respiratory infections, physical activity, cold air, and air pollutants, you can take steps to avoid them. You can also use a peak flow meter to monitor your breathing and identify changes in your lung function. This tool can help you to track your symptoms and recognize when you need to take action.
Treatment Changes. If you experience frequent asthma flare-ups, it may be time to talk to your healthcare provider about a change in treatment. Changes may include adjusting your medication, adding new medications, or exploring other treatment options such as bronchial thermoplasty. By working closely with your healthcare provider, you can develop an asthma action plan tailored to your specific needs and manage your symptoms more effectively.
Airway remodeling refers to the long-term changes that occur in the airways of individuals with chronic asthma. Over time, the continuous inflammation and damage to the airways can cause structural changes such as thickening of the airway walls, narrowing of the air passages, and lung scarring. These changes can negatively impact lung function, making it more difficult to breathe and increasing the risk of severe asthma attacks.
The causes of airway remodeling are not entirely understood, but it is known that prolonged and uncontrolled inflammation contributes significantly to this condition. The best way to prevent airway remodeling is to control your asthma symptoms: the fewer asthma symptoms you experience, the less likely airway remodeling will occur. According to allergy specialists at Asthma Canada, some remodeled airways have been found to return to their normal structure when proper treatment is followed.
It all comes down to controlling your asthma and keeping it in check.
What is Asthma Control?
Asthma control refers to the degree to which asthma symptoms are successfully prevented or managed through appropriate medical treatment and self-management. The goal is to achieve a minimal need for quick-relief medication and to maintain normal daily activities, including exercise, without experiencing asthma symptoms.
Monitoring asthma symptoms is an essential part of managing asthma control. It involves tracking and recording asthma symptoms, peak flow readings, and medication use to help identify patterns and triggers that may provoke asthma flare-ups. By monitoring asthma symptoms regularly, individuals with asthma can work with their healthcare provider to adjust their medication and treatment plan accordingly to achieve optimal asthma control.
Why is My Asthma Worse at Night?
While asthma symptoms can occur at any time of the day, many people experience more severe symptoms at night. Several potential triggers can occur in the bedroom that may worsen asthma symptoms at night. Common triggers can include dust mites, pet hair, and mold. Keep the bedroom as clean and allergen-free as possible to reduce the risk of symptoms. Additionally, controlling asthma symptoms during the day can help minimize nighttime symptoms. Staying on top of your medication regimen and monitoring your asthma symptoms can help prevent flare-ups and reduce nighttime symptoms.
Sleeping positions can also affect breathing and worsen asthma symptoms. Lying flat on your back can make it more difficult to breathe. Sleeping on your side or with your upper body elevated can help reduce symptoms.
Some medications used to manage asthma symptoms can also have side effects that may disrupt sleep. For example, inhaled corticosteroids can cause hoarseness or a sore throat, while bronchodilators may cause jitteriness or tremors. Consult with your allergist about the timing of your medication and the best way to manage any side effects that may disrupt your sleep.
Preventive measures can be taken to manage nighttime asthma symptoms. Using allergen-proof bedding can help reduce exposure to dust mites. Keeping pets out of the bedroom or grooming them frequently can also help reduce exposure to pet hair and dander. Maintaining indoor humidity levels between 30 and 50 percent can help prevent mold growth.
Can Asthma be Cured?
Asthma cannot be cured but it can be effectively managed through various treatments. The key to managing asthma is to work with an allergist to identify triggers that can cause asthma symptoms to flare up and develop a plan to avoid and manage them. |
The human fascination with extreme environments has sparked curiosity about our ability to endure and adapt to challenging conditions. In these environments, extreme cold is a formidable test of the human body’s resilience.
Delving into extreme cold survival yields valuable insights into thermoregulation, physiological responses, and the protective measures needed to safeguard ourselves against harsh cold. Join us as we uncover the lowest temperature a human can survive outside.
Definition of Surviving in Extreme Cold
Extreme cold survival means maintaining core body temperature and avoiding life-threatening conditions. It includes the body's ability to tolerate physiological stressors and avoid cold-weather ailments like hypothermia and frostbite.
The Concept of Survival: Factors and Considerations
Survival in life-threatening cold involves many variables. Several factors determine whether a person survives intense cold:
- Environmental Conditions: The severity of the cold environment (temperature, wind chill, humidity, and precipitation) directly influences the risks faced. Harsh temperatures promote heat loss and cold-related injuries.
- Heat Regulation: Survival requires bodily heat regulation. This involves vasoconstriction, shivering, and metabolic rate maintenance. Preventing hypothermia requires heat conservation and creation.
- Personal Health and Fitness: Extreme cold survival depends on one’s health and fitness. Good health, cardiovascular fitness, and nutrition can boost resilience and the body’s ability to handle cold stress.
- Clothing and Insulation: Proper clothing and insulation prevent heat loss and cold. Wool or synthetic clothes can trap heat close to the body and protect against the weather. Keeping extremities warm requires a proper hat, gloves, and footwear.
- Shelter and protection: Extreme cold requires shelter. It protects from wind, rain, and high temperatures. Shelter helps prevent cold and frostbite.
- Preparedness and knowledge: Knowing cold-weather survival tactics increases survival odds. This involves recognizing hypothermia and frostbite, building emergency shelters, and knowing fire-starting and navigation techniques.
- Mental Resilience: Extreme cold requires mental fortitude and decision-making under stress. Staying calm and positive and managing fear and anxiety are essential for overcoming problems and staying safe.
- Rescue and help: In extreme cases, rescue services and prompt help can save lives. Medical help, escape choices, and emergency responders or community support can make a big difference in critical situations.
Thermoregulation: How the Body Maintains Heat Balance
Thermoregulation is the body’s intricate process of maintaining heat balance, ensuring its core temperature remains within a narrow and optimal range. When exposed to extreme cold, the body employs various mechanisms to regulate heat and prevent excessive heat loss. Vasoconstriction, which constricts blood vessels at the skin’s surface, reduces blood flow and heat transfer from the core to the skin. This redirects warm blood to vital organs, preserving their temperature.
Additionally, shivering, a reflexive muscle contraction, generates heat as a byproduct of muscle activity. This internal heat production helps counteract heat loss to the environment. The body also conserves heat through behavioral adaptations such as seeking shelter, curling up, and minimizing exposed skin surface area. These systems keep the body warm and guard against extreme cold.
Hypothermia: The Risks and Implications
Hypothermia poses significant risks for individuals exposed to extreme cold. Shivering, decreased coordination, confusion, drowsiness, and loss of consciousness result from the body losing heat faster than it can produce it.
Cognitive function becomes compromised, impairing decision-making and increasing the risk of accidents. Frostbite, a condition in which body tissues freeze, is another concern, potentially causing tissue damage and necessitating amputation.
Hypothermia also causes cardiac arrhythmias, organ dysfunction, and multi-organ failure. Cold water immersion further accelerates heat loss and intensifies the symptoms. Recognizing and promptly treating hypothermia is crucial to preventing severe complications and potential fatalities.
This underscores the importance of preventive measures and awareness of the signs and risks of this life-threatening condition.
The Human Body’s Response to Cold
When exposed to cold temperatures, the human body undergoes several responses as it strives to maintain its core temperature and protect vital organs. Understanding these responses and the effects of cold stress is crucial for comprehending the immediate and long-term impacts on the body. Some key aspects include the role of blood vessels and circulation and metabolic changes in extremely cold conditions.
Understanding Cold Stress: Immediate and Long-Term Effects
The body responds physiologically to cold temperatures to preserve its core temperature. To prevent heat loss, vasoconstriction narrows skin-surface blood vessels, which may cause pale skin and chills. Shivering, a reflexive muscle contraction, generates heat and raises body warmth.
Hypothermia can arise from prolonged cold exposure. When the core body temperature goes below 35 degrees Celsius (95 degrees Fahrenheit), hypothermia causes confusion, exhaustion, shivering cessation, and loss of consciousness. Cold stress also causes frostbite, which damages the skin and underlying tissues.
Chronic cold exposure can damage the body. Due to poor circulation, long-term cold intolerance causes pain, numbness, and discomfort. Long-term exposure can cause cold-related diseases, including Raynaud's, in which the blood vessels of the extremities spasm. These spasms can cause pain, discoloration, and tissue damage. Cold stress also weakens the immune system, making people more prone to respiratory infections and other ailments.
The Role of Blood Vessels and Circulation
Blood vessels and circulation help maintain core body temperature and protect vital organs in frigid conditions. The body adjusts its circulatory system to distribute heat and reduce heat loss when cold.
- Vasoconstriction: The skin’s blood vessels narrow in response to cold. This restriction redirects warm blood from the skin and extremities to the core organs. Reduced cutaneous blood flow helps maintain body temperature.
- Heat conservation: Vasoconstriction reduces cutaneous blood flow, conserving heat. Less warm blood reaching the skin’s surface reduces the body-environment temperature gradient, reducing heat transfer and loss. This system maintains appropriate physiological core body temperature.
- Blood Shunting: Blood vessels regulate blood flow to different body regions. Cold can cause blood vessels to divert blood to essential organs. This adaptive response keeps the heart, lungs, and brain warm and supplied with blood in cold weather.
- Thermoregulatory Adjustments: The circulatory system regulates body temperature with additional methods. The hypothalamus in the brain causes shivering to generate heat when body temperature lowers. The circulatory system distributes heat.
- Rewarming after Cold Exposure: Vasodilation increases cutaneous blood flow after cold exposure. This dilatation aids in rewarming.
Metabolic Changes in Extreme Cold Conditions
The human body undergoes significant metabolic changes in extremely cold conditions to generate and conserve heat. These adjustments include increased metabolic activity, enhanced thermogenesis through mechanisms like shivering, hormonal regulation to stimulate energy production and breakdown, increased caloric requirements to support the heightened metabolic demands, and using insulation mechanisms and stored fat as an energy source.
These metabolic changes allow the body to adapt to extreme cold and maintain core body temperature, ensuring survival in frigid environments. The body’s ability to modulate its metabolism in response to cold conditions highlights its remarkable resilience and capacity to withstand challenging environmental circumstances.
Factors Affecting Cold Tolerance
Cold tolerance, or the ability to withstand and adapt to cold temperatures, is influenced by various factors. Three important factors that affect cold tolerance are age, gender, and physical fitness.
Age and Cold Tolerance: Infants, Adults, and the Elderly
Different age groups exhibit varying degrees of cold tolerance. Smaller body sizes, larger surface area-to-volume ratios, and immature thermoregulatory systems limit cold tolerance in infants and young children. They lose heat more rapidly and struggle to generate sufficient heat to maintain body temperature in cold environments.
Conversely, adults generally have better cold tolerance as their bodies are larger, have a lower surface area-to-volume ratio and possess more efficient thermoregulatory mechanisms. However, decreasing metabolic rate, circulation, and thermoregulation may affect cold tolerance in older persons, especially those with age-related health issues. Hypothermia and cold-related injuries may affect them more.
Gender Differences in Cold Adaptation
Gender can influence cold tolerance due to physiological differences between males and females. Women have more body fat and less muscle mass than males, which can affect their heat production.
Women may have reduced cold tolerance and feel colder in similar cold conditions compared to men. Hormonal differences can also play a role, as estrogen has been associated with reduced peripheral blood flow, potentially affecting heat distribution. However, it’s important to note that individual variations and environmental factors can also influence gender differences in cold tolerance.
Physical Fitness and Cold Tolerance
Physical fitness levels can significantly impact cold tolerance. Physically fit individuals tend to have better cold tolerance due to several factors. Increased muscle mass allows for more efficient heat generation through shivering. Improved cardiovascular fitness enhances blood circulation, ensuring adequate heat distribution.
Physical fitness also contributes to better overall health and thermoregulatory function, which can positively affect cold tolerance. Regular exercise and physical activity can improve an individual’s ability to adapt to and withstand cold conditions.
Protective Measures for Survival
Surviving in extremely cold conditions requires implementing protective measures to ensure warmth, comfort, and overall well-being. Three essential protective measures are dressing appropriately, creating a warm shelter, and maintaining proper nutrition and hydration.
Dressing for Extreme Cold: Layering and Materials
Dressing in layers is key to maintaining warmth in extreme cold. Layering allows for better insulation and the ability to adjust clothing as needed. The three primary layers include:
- Base Layer: The base layer should consist of moisture-wicking materials like merino wool or synthetic fabrics that keep the skin dry and insulate even when damp.
- Insulating Layer: The middle layer provides insulation by trapping warm air close to the body. It can be fleece, down, or synthetic insulation.
- Outer Layer: The outer layer, often a waterproof and windproof shell, acts as a barrier against harsh weather conditions, preventing heat loss and protecting against moisture.
Shelter and Insulation: Creating a Warm Environment
Having a reliable shelter is crucial for protection against extreme cold. Insulated tents, cabins, or emergency shelters help retain heat and block out wind and moisture. Insulating the shelter with sleeping pads, blankets, or thermal barriers helps prevent heat loss through the ground or walls.
Additionally, using a properly rated sleeping bag and insulating the sleeping area with additional blankets or thermal liners adds an extra layer of warmth. It's also essential to seal any gaps or openings to minimize drafts.
Nutrition and Hydration: Fueling the Body in Cold Conditions
Maintaining proper nutrition and hydration is essential in cold environments. Cold temperatures increase the body’s energy requirements, so consuming calorie-dense foods fuels heat production. Incorporate complex carbohydrates, healthy fats, and proteins into meals to sustain energy levels.
Hot, high-calorie beverages like soups, teas, or warm water with electrolyte supplements can provide warmth and hydration. Staying well-hydrated is crucial, as dehydration can impair thermoregulation and increase the risk of cold-related injuries.
Frequently Asked Questions
Can humans adapt to survive in sub-zero temperatures?
Despite acclimatization, training, and protective measures, survival in extreme sub-zero temperatures without suitable clothing and equipment is difficult. Humans can only adapt so much.
What temperature is too cold for humans?
Wind chill, humidity, clothing, and individual tolerance all affect what counts as too cold for a given person. However, below -40 degrees Celsius (-40 degrees Fahrenheit), frostbite, hypothermia, and other cold-related ailments are extremely likely.
How Long Can a Person Survive in Freezing Temperatures?
Clothing, shelter, health, and environmental circumstances all affect survival in freezing temperatures. Extreme cold can cause severe hypothermia and tissue loss within minutes to hours. Survival requires immediate rewarming and medical care.
Staying safe in very cold weather requires knowing the lowest temperature a person can survive outside and the factors that affect how well they can handle the cold.
Human thermoregulatory reflexes are robust, but the hazards of hypothermia, frostbite, and long-term health repercussions underscore how crucial protective measures are. Wear warm clothing, build a warm shelter, and take in adequate food and fluids to survive intense cold.
We can survive freezing weather by being prepared, staying resilient, and knowing what to do.
|Using exponent rules to evaluate expressions|
|Exercise Name:||Using exponent rules to evaluate expressions|
|Math Missions:||8th grade (U.S.) Math Mission, Pre-algebra Math Mission, Mathematics I Math Mission, Algebra I Math Mission, Mathematics II Math Mission|
|Types of Problems:||2|
The Using exponent rules to evaluate expressions exercise appears under the 8th grade (U.S.) Math Mission, Pre-algebra Math Mission, Mathematics I Math Mission, Algebra I Math Mission and Mathematics II Math Mission. This exercise practices the exponents rules that were developed in earlier missions.
Types of Problems
There are two types of problems in this exercise:
- Simplify into single exponential: This problem provides an exponential expression involving multiplication, division or powers. The user is asked to find the simplified expression and type it in the provided box.
- Simplify into multiple exponentials: This problem provides an exponential expression involving the distributive properties. The user is asked to find the simplified expression and type it in the space provided.
The exponent rules below are key to doing this problem efficiently and accurately; a quick numerical check of the rules follows the list.
- When multiplying with the same base, exponents add.
- When dividing with the same base, exponents subtract.
- When raising a power to another exponent, the exponents multiply.
- The above rules can be viewed as "one step simpler in the order of operations."
- Exponents distribute through multiplication and division (but neither addition nor subtraction).
- The exponent rules are important because complicated concepts (like rational expressions and radicals) can be represented in the world of exponents.
- Knowledge of algebra is essential for higher math levels like trigonometry and calculus. Algebra also has countless applications in the real world.
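The following short sketch (an illustration added here, not part of the original exercise) checks each rule numerically with arbitrary example values, using exact rational arithmetic so the equalities hold exactly:

```python
from fractions import Fraction

base = Fraction(3)
m, n = 4, 2

assert base**m * base**n == base**(m + n)   # multiplying with the same base: exponents add
assert base**m / base**n == base**(m - n)   # dividing with the same base: exponents subtract
assert (base**m)**n == base**(m * n)        # a power of a power: exponents multiply

a, b = Fraction(2), Fraction(5)
assert (a * b)**n == a**n * b**n            # exponents distribute over multiplication
assert (a / b)**n == a**n / b**n            # ...and over division
assert (a + b)**n != a**n + b**n            # ...but not over addition
```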
Distributions and Variability
Type of Unit: Project
Students should be able to:
Represent and interpret data using a line plot.
Understand other visual representations of data.
Students begin the unit by discussing what constitutes a statistical question. In order to answer statistical questions, data must be gathered in a consistent and accurate manner and then analyzed using appropriate tools.
Students learn different tools for analyzing data, including:
Measures of center: mean (average), median, mode
Measures of spread: mean absolute deviation, lower and upper extremes, lower and upper quartile, interquartile range
Visual representations: line plot, box plot, histogram
These tools are compared and contrasted to better understand the benefits and limitations of each. Analyzing different data sets using these tools will develop an understanding for which ones are the most appropriate to interpret the given data.
To demonstrate their understanding of the concepts, students will work on a project for the duration of the unit. The project will involve identifying an appropriate statistical question, collecting data, analyzing data, and presenting the results. It will serve as the final assessment.
Students calculate the mean absolute deviation (MAD) for three data sets and use it to decide which data set is best represented by the mean. The concept of mean absolute deviation (MAD) is introduced. Students understand that the sum of the deviations of the data from the mean is zero. Students calculate the MAD and understand its significance. Students find the mean and MAD of a sample set of data.
Key Concepts
The mean absolute deviation (MAD) is a measure of how much the values in a data set deviate from the mean. It is calculated by finding the distance of each value from the mean and then finding the mean of these distances.
Goals and Learning Objectives
- Gain a deeper understanding of mean.
- Understand that the mean absolute deviation (MAD) is a measure of how well the mean represents the data.
- Compare data sets using measures of center (mode, median, mean) and spread (range and MAD).
- Show that the sum of deviations from the mean is zero.
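As a concrete illustration of the calculation described above (a sketch added here, not part of the unit materials; the data set is made up), the following computes the MAD for a small set of scores:

```python
def mean_absolute_deviation(data):
    mean = sum(data) / len(data)
    deviations = [x - mean for x in data]
    # The signed deviations from the mean always sum to zero.
    assert abs(sum(deviations)) < 1e-9
    # MAD is the mean of the absolute distances from the mean.
    return sum(abs(d) for d in deviations) / len(data)

scores = [82, 90, 76, 88, 94]           # hypothetical data set (mean = 86)
print(mean_absolute_deviation(scores))  # 5.6
```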
A full-year digital course, built from the ground up and fully aligned to the Common Core State Standards, for 7th grade Mathematics. Created using research-based approaches to teaching and learning, the Open Access Common Core Course for Mathematics is designed with student-centered learning in mind, including activities for students to develop valuable 21st century skills and academic mindset.
Samples and Probability
Type of Unit: Conceptual
Prior Knowledge
Students should be able to:
- Understand the concept of a ratio.
- Write ratios as percents.
- Describe data using measures of center.
- Display and interpret data in dot plots, histograms, and box plots.
Lesson Flow
Students begin to think about probability by considering the relative likelihood of familiar events on the continuum between impossible and certain. Students begin to formalize this understanding of probability. They are introduced to the concept of probability as a measure of likelihood, and how to calculate the probability of equally likely events using a ratio. The terms (impossible, certain, etc.) are given numerical values. Next, students compare expected results to actual results by calculating the probability of an event and conducting an experiment. Students explore the probability of outcomes that are not equally likely. They collect data to estimate the experimental probabilities. They use ratio and proportion to predict results for a large number of trials. Students learn about compound events. They use tree diagrams, tables, and systematic lists as tools to find the sample space. They determine the theoretical probability of first independent, and then dependent, events. In Lesson 10 students identify a question to investigate for a unit project and submit a proposal. They then complete a Self Check. In Lesson 11, students review the results of the Self Check, solve a related problem, and take a Quiz. Students are introduced to the concept of sampling as a method of determining characteristics of a population. They consider how a sample can be random or biased, and think about methods for randomly sampling a population to ensure that it is representative. In Lesson 13, students collect and analyze data for their unit project. Students begin to apply their knowledge of statistics learned in sixth grade. They determine the typical class score from a sample of the population, and reason about the representativeness of the sample. Then, students begin to develop intuition about appropriate sample size by conducting an experiment. They compare different sample sizes, and decide whether increasing the sample size improves the results. In Lessons 16 and 17, students compare two data sets using any tools they wish. Students will be reminded of Mean Absolute Deviation (MAD), which will be a useful tool in this situation. Students complete another Self Check, review the results of their Self Check, and solve additional problems. The unit ends with three days for students to work on Gallery problems (possibly using one of the days to complete their project or get help on it), two days for students to present their unit projects to the class, and one day for the End of Unit Assessment.
Students estimate the length of 50 seconds by starting an unseen timer and stopping it when they think 50 seconds has elapsed. The third attempt is recorded and compiled into a data set, which students then compare to the third attempt from the previous lesson, when they estimated the length of 20 seconds. Students analyze the data to make conclusions about how well seventh grade students can estimate lengths of time. Students repeat the timing activity for 50 seconds, but only the third trial is recorded. The task today is to compare this set of data with the third trial for 20 seconds. Students will need to deal with the difference in the spread of data, as well as how to compare the data sets. Students will be reminded of Mean Absolute Deviation (MAD), which will be a useful tool in this situation.
Key Concepts
Students apply the tools learned in Unit 6.8:
- Measures of center and spread
- Mean absolute deviation (MAD)
Goals and Learning Objectives
- Apply knowledge of statistics to compare different sets of data.
- Use measures of center and spread to analyze data.
Surface Area of Spheres - Concept
In general, surface area is the sum of the areas of all the shapes that cover the surface of an object. To calculate the surface area of a sphere we multiply 4 by pi by the radius of the sphere squared. Given this formula, we can find the surface area of a sphere when given the radius. Similarly, we can find the radius of a sphere if we are given the surface area. Notice that the formula is just the area of a circle of the same radius scaled by a factor of 4.
When we're talking about the surface area of the sphere, you can think of it as how much paint would you need to cover a tennis ball or if you'd looked at a baseball and you took all the stitching apart, how much leather would you need to make that ball?
Well, to find the surface area of a sphere, you're going to use the formula that surface area equals 4 times pi times the radius squared. Now, notice the dimensionality here. We have r to the second power which agrees with what we know about surface area which is it's a two dimensional property. So the only thing that you need to know in order to calculate the surface area of a sphere is this formula 4 times pi times the radius squared. Let's look at a very basic example of this application.
If the radius of a sphere is 3 centimetres, what is the surface area? Well we'll start off by writing our surface area formula. Surface area equals 4 pi r squared and then we'll say our radius is 3 centimetres. So then we just need to substitute in and we'll know our surface area.
We'll say that surface area is equal to 4 times pi times 3 squared. 3 squared we know is 9, 9 times 4 is 36. So the surface area of that sphere is going to be 36 pi square centimetres. So when you have a surface area problem and they tell you the radius, all you need to do is to substitute into your formula and simplify.
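The same substitution is easy to carry out programmatically. The short sketch below (an illustration added here, not part of the lesson) computes the surface area from the radius and recovers the radius from a known surface area:

```python
import math

def sphere_surface_area(radius):
    # Surface area of a sphere: 4 * pi * r^2.
    return 4 * math.pi * radius**2

def sphere_radius_from_area(area):
    # Solve 4 * pi * r^2 = area for r.
    return math.sqrt(area / (4 * math.pi))

print(sphere_surface_area(3))                 # ~113.10, i.e. 36*pi square units
print(sphere_radius_from_area(36 * math.pi))  # 3.0
```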
This resource is a Java applet-based module relating to the simple harmonic motion produced by a block on a frictionless spring. It features a rich array of tools: motion graphs, energy graphs, vector components, reference circle, zoom toggle, and a data box that displays amplitude, angular frequency, displacement from equilibrium, phase angle, velocity, and acceleration of the oscillating block. Users control the spring constant, mass of the block, and amplitude of the oscillation. A comprehensive help section provides explicit directions and lesson ideas for instructors.
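The quantities shown in the applet's data box follow from the standard mass-on-a-spring relations. The sketch below (an illustration added here, not the applet's actual code; the function and argument names are assumptions for this example) computes them from the user-controlled inputs:

```python
import math

def shm_state(k, m, amplitude, t, phase=0.0):
    # Angular frequency of a block of mass m on a spring of constant k.
    omega = math.sqrt(k / m)
    # Displacement from equilibrium, velocity, and acceleration at time t.
    x = amplitude * math.cos(omega * t + phase)
    v = -amplitude * omega * math.sin(omega * t + phase)
    a = -omega**2 * x
    return omega, x, v, a

# Example: k = 10 N/m, m = 0.5 kg, amplitude = 0.2 m, at t = 1 s.
print(shm_state(10.0, 0.5, 0.2, 1.0))
```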
This item is part of a larger collection of physics simulations sponsored by the MAP project (Modular Approach to Physics).
Hooke's Law, MAP, SHM, angular frequency, conservation of energy, interactive simulation, lesson plan, mass and spring, oscillating, oscillator, radian, simple harmonic motion, simulation, spring, springs, unit circle
Metadata instance created
May 22, 2008
by Christopher Allen
9-12: 2A/H1. Mathematics is the study of quantities and shapes, the patterns and relationships between quantities or shapes, and operations on either quantities or shapes. Some of these relationships involve natural phenomena, while others deal with abstractions not tied to the physical world.
4. The Physical Setting
4E. Energy Transformations
9-12: 4E/H1. Although the various forms of energy appear very different, each can be measured in a way that makes it possible to keep track of how much of one form is converted into another. Whenever the amount of energy in one place diminishes, the amount in other places or forms increases by the same amount.
6-8: 4F/M3a. An unbalanced force acting on an object changes its speed or direction of motion, or both.
9-12: 4F/H1. The change in motion (direction or speed) of an object is proportional to the applied force and inversely proportional to the mass.
11. Common Themes
6-8: 11B/M2. Mathematical models can be displayed on a computer and then modified to see what happens.
6-8: 11B/M4. Simulations are often useful in modeling events and processes.
Common Core State Standards for Mathematics Alignments
Standards for Mathematical Practice (K-12)
MP.4 Model with mathematics.
Use functions to model relationships between quantities. (8)
8.F.5 Describe qualitatively the functional relationship between two quantities by analyzing a graph (e.g., where the function is increasing or decreasing, linear or nonlinear). Sketch a graph that exhibits the qualitative features of a function that has been described verbally.
High School — Algebra (9-12)
Seeing Structure in Expressions (9-12)
A-SSE.1.a Interpret parts of an expression, such as terms, factors, and coefficients.
High School — Functions (9-12)
Interpreting Functions (9-12)
F-IF.4 For a function that models a relationship between two quantities, interpret key features of graphs and tables in terms of the quantities, and sketch graphs showing key features given a verbal description of the relationship.?
F-IF.5 Relate the domain of a function to its graph and, where applicable, to the quantitative relationship it describes.?
F-IF.6 Calculate and interpret the average rate of change of a function (presented symbolically or as a table) over a specified interval. Estimate the rate of change from a graph.
F-IF.9 Compare properties of two functions each represented in a different way (algebraically, graphically, numerically in tables, or by verbal descriptions).
Building Functions (9-12)
F-BF.3 Identify the effect on the graph of replacing f(x) by f(x) + k, k f(x), f(kx), and f(x + k) for specific values of k (both positive and negative); find the value of k given the graphs. Experiment with cases and illustrate an explanation of the effects on the graph using technology. Include recognizing even and odd functions from their graphs and algebraic expressions for them.
Trigonometric Functions (9-12)
F-TF.1 Understand radian measure of an angle as the length of the arc on the unit circle subtended by the angle.
F-TF.2 Explain how the unit circle in the coordinate plane enables the extension of trigonometric functions to all real numbers, interpreted as radian measures of angles traversed counterclockwise around the unit circle.
F-TF.4 (+) Use the unit circle to explain symmetry (odd and even) and periodicity of trigonometric functions.
F-TF.5 Choose trigonometric functions to model periodic phenomena with specified amplitude, frequency, and midline.?
Common Core State Reading Standards for Literacy in Science and Technical Subjects 6—12
Craft and Structure (6-12)
RST.11-12.4 Determine the meaning of symbols, key terms, and other domain-specific words and phrases as they are used in a specific scientific or technical context relevant to grades 11—12 texts and topics.
Integration of Knowledge and Ideas (6-12)
RST.11-12.9 Synthesize information from a range of sources (e.g., texts, experiments, simulations) into a coherent understanding of a process, phenomenon, or concept, resolving conflicting information when possible.
Range of Reading and Level of Text Complexity (6-12)
RST.11-12.10 By the end of grade 12, read and comprehend science/technical texts in the grades 11—CCR text complexity band independently and proficiently.
University of Calgary. Modular Approach to Physics: Simple Harmonic Motion - Weighted Spring. Calgary: University of Calgary, March 30, 2007. http://canu.ucalgary.ca/map/content/shm/springEnergy/simulate/page2.html (accessed 31 May 2016). |
GHD. [Ax. 1.] Add to each of these the angle BGH. Therefore the angles EGB, BGH are equal to the angles BGH, GHD. [Ax. 2.] But the angles EGB, BGH are together equal to two right angles. [I. 13.] Therefore the angles BGH, GHD are together equal to two right angles. [Ax. 1.]
Therefore, if a straight line, &c. Q.E.D.
Ex. 1. The perpendiculars to two parallel lines are themselves parallel.
2. If the angle ABC be equal to PQR, and AB be parallel to PQ, then if BC and QR lie on the same side of AB and PQ respectively BC shall also be parallel to QR.
3. Any straight line drawn through the middle point of the diagonal of a parallelogram will cut off equal parts from its sides.
PROPOSITION 30. THEOREM.
Straight lines which are parallel to the same straight line are parallel to each other.
Let AB, CD be each of them parallel to EF:
2. Let the straight line GHK cut AB, EF, CD.
Then, because GHK cuts the parallel straight lines AB, EF, the angle AGK is equal to the angle GHF. [I. 29.] Again, because GHK cuts the parallel straight lines EF, CD, the angle GHF is equal
to the angle GKD. [I. 29.] And it was proved that the angle AGK is equal to the angle GHF. Therefore the angle AGK is equal to the angle GKD; [Ax. 1.] and they are alternate angles; therefore AB is parallel to CD. [I. 27.]
Therefore, straight lines, &c.
PROPOSITION 31. PROBLEM.
To draw a straight line through a given point parallel to a given straight line.
Let A be the given point, and BC the given straight line: it is required to draw a straight line through the point A parallel to the straight line BC.
2. In BC take any point D, and join AD; at the point A in the straight line AD, make the angle DAE equal to the angle ADC; [I. 23.] and produce the straight line EA to F. EF shall be parallel to BC.
3. Because the straight line AD, which meets the two straight lines BC, EF, makes the alternate angles EAD, ADC equal to one another, [Const.] EF is parallel to BC. [I. 27.]
Therefore the straight line EAF is drawn through the given point A, parallel to the given straight line BC.
PROPOSITION 32. THEOREM.
If a side of any triangle be produced, the exterior angle is equal to the two interior and opposite angles; and the three interior angles of every triangle are equal to two right angles.
Let ABC be a triangle, and let one of its sides BC be produced to D: the exterior angle ACD shall be equal to the two interior and opposite angles CAB, ABC; and the three interior angles of the triangle, namely, ABC, BCA, CAB shall be equal to two right angles.
2. Through the point C draw CE parallel to AB. [I. 31.]
3. Then, because AB is parallel to CE, and AC meets them, the alternate angles BAC, ACE are equal. [I. 29.] Again, because AB is parallel to CE, and BD falls on them, the exterior angle ECD is equal to the interior and opposite angle ABC. [I. 29.] But the angle ACE was shown to be equal to the angle BAC; therefore, the whole exterior angle ACD is equal to the two interior and opposite angles CAB, ABC. [Ax. 2.] To each of these equals add the angle ACB; therefore, the angles ACD, ACB are equal to the three angles CBA, BAC, ACB. [Ax. 2.] But the angles ACD, ACB are equal to two right angles [I. 13.]; therefore, also the angles CBA, BAC, ACB are equal to two right angles. [Ax. 1.]
Therefore, if a side of any triangle, &c. Q.E.D.
COROLLARY I. All the exterior angles of any rectilineal figure are together equal to four right angles.
1. Let ABCD be any rectilineal figure having the exterior angles A, B, C, D: these shall be together equal to four right angles.
In BC take any point and draw from it lines parallel to the sides of the figure.
3. Then by I. 29 the exterior angle b is equal to the interior and opposite angle B. Similarly the angle C is equal to c. Also d is equal to D, for they are each equal to e; and in like manner A is equal to a. Therefore the exterior angles are equal to the angles a, b, c, and d. But these angles are equal to four right angles. [Cor. I. 15.] Therefore the exterior angles are equal to four right angles.
COROLLARY 2. All the interior angles of any rectilineal figure together with four right angles are equal to twice as many right angles as the figure has sides.
3. For any interior angle ABD, together with its adjacent exterior angle DBC, is equal to two right angles. Therefore, all the interior angles and all the exterior angles are equal to twice as many right angles as the figure has sides. But all the exterior angles are, by the preceding corollary, equal to four right angles. Therefore, all the interior angles and four right angles are equal to twice as many right angles as the figure has sides.
N.B. In the case of a regular figure, as all its angles are equal, the value of an exterior angle will be found by dividing four right angles by the number of sides of the figure; and if the quotient be subtracted from two right angles the result will be the value of an interior angle.
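As a worked instance of the N.B. (added here for illustration, not part of the original text), for a regular figure of n sides the rule reads

$$
\text{exterior angle} = \frac{4 \text{ right angles}}{n} = \frac{360^\circ}{n},
\qquad
\text{interior angle} = 180^\circ - \frac{360^\circ}{n}.
$$

For a regular hexagon, n = 6, giving an exterior angle of 60 degrees and an interior angle of 120 degrees; three such angles exactly fill the space about a point, since 3 x 120 degrees = 360 degrees.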
This proposition and its corollaries show what is the value of the interior and exterior angles of rectilineal figures.
The second corollary is true of all rectilineal figures. The first is only true of convex figures which have no re-entrant angle, i.e,, no angle which is greater than two right angles. Let ABCDE be a concave figure having the re-entrant angle EDC (which is dotted) greater than two right angles by the excess EDF. Then by drawing lines from B parallel to the sides it will be seen, as in Cor. 1, that the exterior angles are greater
than four right angles by the angle aBc (which is taken twice). But aBc is equal to the angle EDF, which is the excess of the re-entrant angle above two right angles. Hence the exterior angles of a concave figure are equal to four right angles together with the excess of every re-entrant angle above two right angles.
EXERCISES. 1. If any angle of a triangle be equal to the sum of the other two, it must be a right angle; but if it be greater than the sum of the other two it must be an obtuse angle, and if less an acute angle.
2. If the side CB of an equilateral triangle ABC be produced to D, making CD=CA, or CB, then DAB shall be a right angle. (Hence show how to draw a perpendicular to a given line from its extremity without producing it.)
3. Find, by the aid of Cor. 1 to this Prop., what is the value of the interior angle of a regular hexagon, and determine how many such figs. would fill up all the space about a point.
4. The middle point of the hypotenuse of a right angled triangle is equidistant from the three angles.
PROPOSITION 33. THEOREM.
The straight lines which join the extremities of two equal and parallel straight lines towards the same parts, are also themselves equal and parallel.
Let AB and CD be equal and parallel straight 1. lines, and let them be joined towards the same parts by the straight lines AC and BD: AC and BD shall be equal and parallel.
2. Join BC. |
Real vs. Nominal
Definitions and Basics
- Definition: The nominal value of a good is its value in terms of money. The real value is its value in terms of some other good, service, or bundle of goods. Examples:
- Nominal: That CD costs $18. Japan's science and technology spending is about 3 trillion yen per year.
- Real: A year of college costs about the value of a Toyota Camry. Those tickets to see Van Halen cost me three weeks' worth of food!
Gross Domestic Product, from the Concise Encyclopedia of Economics: In practice BEA first uses the raw data on production to make estimates of nominal GDP, or GDP in current dollars. It then adjusts these data for inflation to arrive at real GDP. But BEA also uses the nominal GDP figures to produce the "income side" of GDP in double-entry bookkeeping. For every dollar of GDP there is a dollar of income. The income numbers inform us about overall trends in the income of corporations and individuals. Other agencies and private sources report bits and pieces of the income data, but the income data associated with the GDP provide a comprehensive and consistent set of income figures for the United States. These data can be used to address important and controversial issues such as the level and growth of disposable income per capita, the return on investment, and the level of saving.
Interest, from the Concise Encyclopedia of Economics: The real interest rate on money loans will be the stated (or nominal) rate minus the anticipated rate of inflation. In countries that are experiencing rapid growth in the amount of money available, interest rates will be very high. But these will not be high real interest rates. Instead, they will be high nominal interest rates. If expected inflation is 10 percent, for example, and if the real interest rate is 5 percent, the nominal interest rate is 15 percent. But someone who lends money at 15 percent for a year will not be repaid with 15 percent more resources at the end of the year. Rather, the lender will be repaid with 15 percent more money and will be able to use that money to buy only 5 percent more resources.
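A small illustration of the interest example above (a sketch added here, not from the encyclopedia entries; the function names are made up for this example): the real rate can be approximated by subtracting expected inflation from the nominal rate, or computed exactly by dividing out the growth in the price level.

```python
def real_rate_approx(nominal, inflation):
    # Rule of thumb: real rate is roughly the nominal rate minus expected inflation.
    return nominal - inflation

def real_rate_exact(nominal, inflation):
    # Exact relation: (1 + nominal) = (1 + real) * (1 + inflation).
    return (1 + nominal) / (1 + inflation) - 1

print(real_rate_approx(0.15, 0.10))  # ~0.05  -> the 5 percent in the example
print(real_rate_exact(0.15, 0.10))   # ~0.045 -> about 4.5 percent
```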
Analogue phase detectors are essential circuits in RF and mm-wave applications when the phase difference between two signals needs to be found. There are many communication scenarios when one needs to know the phase difference between two signals – such as power combining.
Analogue phase detectors are relatively simple microwave circuits; however, it is not popular knowledge how they exactly work. In this article, basics operational principles of phase detectors are presented.
Nonlinear circuits – basic principles
In order to understand how phase detectors work, one needs to know the basic principles of nonlinear devices, such as diodes and transistors (Fig. 1). In any nonlinear circuit, the output signal depends not only linearly on the input signal, but also on its higher-order contributions, as shown in (1).
(1) is also known as the Taylor series expansion. In (1), 𝑎𝑖 are the coefficients which are usually experimentally determined. To best describe the output signal, the number of polynomial terms should, in theory, be infinite, but, in practice, only a few terms are used. The number of polynomial terms used in practice depends on the level of nonlinearities exhibited by the device (such as a diode or transistor) and the power of the input signal.
As an illustration, let us assume that the input signal is a simple sinewave given by 𝐼𝑖𝑛 = 𝐼1𝑐𝑜𝑠(𝜔𝑡 + 𝜙1), and that the number of terms in (1) is limited to 3. The output signal 𝐼𝑜𝑢𝑡 becomes:
In other words, the output signal contains not only the frequency of the input signal but also its harmonics, namely the products of second- and third-order mixing. The order of harmonic mixing determines the highest frequency of the response, which in this case is 3ω. This basic rule is used in the design of mixers and, as we will see later, in the design of analogue phase detectors too.
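To make the harmonic-generation argument concrete, the short sketch below (an illustration added here, not from the article) passes a sinewave through a third-order polynomial nonlinearity with arbitrary example coefficients and inspects the resulting spectrum; components appear at DC, ω, 2ω and 3ω, and nothing higher, as stated above.

```python
import numpy as np

f0, fs, n = 1e3, 64e3, 64                # 1 kHz tone, 64 samples at 64 kHz
t = np.arange(n) / fs
i_in = np.cos(2 * np.pi * f0 * t)
# Third-order nonlinearity with arbitrary example coefficients a1, a2, a3.
i_out = 1.0 * i_in + 0.5 * i_in**2 + 0.25 * i_in**3

spectrum = np.abs(np.fft.rfft(i_out)) / n  # bin k corresponds exactly to k*f0 here
for k in range(4):
    print(f"component at {k}*f0: {spectrum[k]:.3f}")
# Nonzero values appear at DC, f0, 2*f0 and 3*f0; all higher bins are ~0.
```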
Standard mixers are nonlinear devices and usually have 3 ports: the Local Oscillator (LO) port, the RF port and the Intermediate Frequency (IF) port, as shown in Fig. 2. The signal at the IF port is the product of the multiplication of the LO and RF signals.
If we assume that the RF signal is given by 𝐼𝑅𝐹 = 𝐼𝑅𝐹_𝑀𝐴𝐺𝑐𝑜𝑠(𝜔𝑡 + 𝜙𝑅𝐹) and LO signal by 𝐼𝐿𝑂 = 𝐼𝐿𝑂_𝑀𝐴𝐺𝑐𝑜𝑠(𝜔𝑡 + 𝜙𝐿𝑂) , the multiplication function of the mixers of Fig. 2 produces the following output:
Here 𝑎2 is the conversion coefficient, usually provided in the datasheet of a mixer and, as mentioned earlier, experimentally determined. Index I in (3) refers to the fact that both the RF and LO signals are cosine functions, i.e., "in phase". The composite signal given by (3) contains products of second-order mixing, which in the present case are DC and the second harmonic. The first term of (3), the DC term, is of particular use in the design of phase detectors, as will be explained later.
Phase detectors using mixers
The DC term in (3) is proportional to the phase difference between the LO and RF signals and can, in theory, be used to construct a phase detector. However, the main issue lies with the fact that the extracted phase would then depend on the correct extraction of the "amplitude" of the DC "signal", i.e., 𝑎2 ∗(𝐼𝑅𝐹∗𝐼𝐿𝑂)/2. Theoretically, this could be addressed through a careful calibration of the mixer, but that would make the phase detector constructed in this way highly dependent on the power levels applied to the RF and LO ports, increasing its sensitivity to those levels. It should be noted that the second harmonic in (3) can be easily eliminated using a low-pass filter; in many instances, a grounded capacitor suffices. With the second harmonic eliminated, the DC IF output of (3) becomes:
To eliminate phase difference dependence on LO and RF power levels, one more piece of information is required. For this purpose, let us now assume that the LO signal is phase shifted by 𝜋/2. The LO signal now becomes:
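(Reconstruction of the missing equation (5), under the same naming assumptions:)

I_LO = I_LO_MAG·cos(ωt + φ_LO + π/2) = −I_LO_MAG·sin(ωt + φ_LO)    (5)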
Index Q in (5) refers to the fact that the RF and LO signals are “in quadrature”, i.e., the LO signal is a sine function, while the RF signal is a cosine function. Mixing such an LO signal with an RF signal given by 𝐼𝑅𝐹 = 𝐼𝑅𝐹_𝑀𝐴𝐺 𝑐𝑜𝑠(𝜔𝑡 + 𝜙𝑅𝐹 ) produces the following IF output:
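(Reconstruction of the missing equation (6):)

I_IF,Q = a2·(I_RF_MAG·I_LO_MAG/2)·[sin(φ_RF − φ_LO) − sin(2ωt + φ_RF + φ_LO)]    (6)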
The DC part of (6) is now equal to:
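(Reconstruction of the missing equation (7):)

I_IF,Q,DC = a2·(I_RF_MAG·I_LO_MAG/2)·sin(φ_RF − φ_LO)    (7)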
The ratio of (7) over (4) now gives:
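(Reconstruction of the missing equation (8):)

I_IF,Q,DC / I_IF,I,DC = tan(φ_RF − φ_LO)    (8)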
From which the phase difference can be extracted in a simple manner by:
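(Reconstruction of the missing equation (9):)

φ_RF − φ_LO = arctan(I_IF,Q,DC / I_IF,I,DC)    (9)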
As is obvious from (9), the measured phase difference is no longer a function of the input powers of the LO and RF signals. The circuit that performs this function is shown in Fig. 3.
As such, in theory, a phase detector can be constructed using two mixers and passive RF circuitry. In practice, care must be taken that appropriate power levels are applied to the LO and RF ports; usually the power applied to the RF port needs to be at least 10 dB lower than the power applied to the LO port, so that the mixers do not exceed their maximum power ratings. In addition, the IF ports need not be terminated in 50 Ω; in this configuration the IF outputs behave as current sources, and higher termination resistances are usually recommended, although that is design dependent.
In this short article, the basic principles of operation of nonlinear circuits have been presented, together with their use in the design of mixers and phase detectors. Both mixers and phase detectors are important RF/mm-wave devices used in a variety of telecommunications systems.
Systems of Particles and Rotational Motion Class 11 Notes Physics Chapter 7
• A rigid body is a body with a perfectly definite and unchanging shape. The distances between all pairs of particles of such a body do not change.
• Centre of Mass
For a system of particles, the centre of mass is defined as that point where the entire mass of the system is imagined to be concentrated, for consideration of its translational motion.
If all the external forces acting on the body/system of bodies were to be applied at the centre of mass, the state of rest/ motion of the body/system of bodies shall remain unaffected.
• The centre of mass of a body or a system is its balancing point. The centre of mass of a two-particle system always lies on the line joining the two particles and is somewhere in between them.
• Motion of centre of Mass
The centre of mass of a system of particles moves as if the entire mass of the system were concentrated at the centre of mass and all the external forces were applied at that point. The velocity of the centre of mass of a system of two particles m1 and m2, with velocities v1 and v2, is given by vcm = (m1v1 + m2v2)/(m1 + m2).
• If no external force acts on the body, then the centre of mass will have constant momentum. Its velocity is constant and acceleration is zero, i.e., MVcm = constant.
• Vector Product or Cross Product of two vectors
• Torque
Torque is the moment of force. The torque acting on a particle is defined as the product of the magnitude of the force acting on the particle and the perpendicular distance of the line of action of the force from the axis of rotation.
• Angular Momentum
The angular momentum (or moment of momentum) about an axis of rotation is a vector quantity whose magnitude is equal to the product of the magnitude of the momentum and the perpendicular distance of the line of action of the momentum from the axis of rotation; its direction is perpendicular to the plane containing the momentum and that perpendicular distance.
• Axis of Rotation
A rigid body is said to be rotating if every point mass that makes it up, describes a circular path of a different radius but the same angular speed. The circular paths of all the point masses have a common centre. A line passing through this common centre is the axis of rotation.
• A rigid body is said to be in equilibrium if under the action of forces/torques, the body remains in its position of rest or of uniform motion.
For translational equilibrium, the vector sum of all the forces acting on a body must be zero. For rotational equilibrium, the vector sum of torques of all the forces acting on that body about the reference point must be zero. For complete equilibrium, both these conditions must be fulfilled.
Two equal and opposite forces acting on a body but having different lines of action, form a couple. The net force due to a couple is zero, but they exert a torque and produce rotational motion.
• Moment of Inertia
The rotational inertia of a rigid body is referred to as its moment of inertia.
The moment of inertia of a body about an axis is defined as the sum of the products of the masses of the particles constituting the body and the squares of their respective perpendicular distances from the axis.
It is given by I = Σ miri².
• Radius of Gyration
The radius of gyration is the distance from the axis of rotation at which, if the whole mass of the body were supposed to be concentrated, the moment of inertia about that axis would be the same as that obtained with the actual distribution of mass of the body.
If we consider the whole mass of the body to be concentrated at a distance K from the axis of rotation, then the moment of inertia I can be expressed as I = MK².
• Theorem of Parallel Axes
According to this theorem, the moment of inertia I of a body about any axis is equal to its moment of inertia about a parallel axis through the centre of mass, Icm, plus Ma², where M is the mass of the body and a is the perpendicular distance between the two axes, i.e.,
I = Icm + Ma²
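As a quick check of the theorem, here is a worked example not in the original notes: for a uniform rod of mass M and length L, the moment of inertia about an axis through one end is
Iend = Icm + M(L/2)² = ML²/12 + ML²/4 = ML²/3.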
• Theorem of Perpendicular Axes
According to this theorem, the moment of inertia I of a plane body about an axis perpendicular to its plane is equal to the sum of its moments of inertia about two axes at right angles to each other, lying in the plane of the body and intersecting at the point where the perpendicular axis passes through it, i.e., Iz = Ix + Iy.
• Rolling Motion
The combination of rotational motion and the translational motion of a rigid body is known as rolling motion.
• Law of Conservation of Angular Momentum
According to the law of conservation of angular momentum, if there is no external couple acting, the total angular momentum of a rigid body or a system of particles is conserved.
This learning video deals with a question of geometrical probability. A key idea presented is the fact that a linear equation in three dimensions produces a plane. The video focuses on random triangles that are defined by their three respective angles, chosen randomly subject to the constraint that they must sum to 180 degrees. One class period is required to complete this learning video, and the only prerequisites are a familiarity with geometry and an understanding of the equation for a plane, which is presented in the module. Materials needed for this lesson include a blackboard and chalk; optional materials include a cardboard box and colored paper. An example of the in-class activities between segments of the video: ask six students for numbers and make those numbers the (x, y) coordinates of three points, then have the class try to figure out how to decide whether the triangle with those corners is acute or obtuse.
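The video itself does not give code, but the experiment it describes can be sketched as a small Monte Carlo simulation. The sample count, the fixed seed and the uniform-sampling scheme below are assumptions, not part of the original lesson:

#include <stdio.h>
#include <stdlib.h>

/* Sketch: estimate the probability that a random triangle is acute,
   where the three angles are chosen uniformly subject to A + B + C = 180. */
int main(void)
{
    const int trials = 1000000;   /* assumed sample count */
    int acute = 0;

    srand(42);                    /* fixed seed, for repeatability */
    for (int i = 0; i < trials; i++) {
        double a, b, c;
        /* Sample (a, b) uniformly over the region a > 0, b > 0, a + b < 180
           by rejection; the third angle c is then fixed by the constraint. */
        do {
            a = 180.0 * rand() / (double)RAND_MAX;
            b = 180.0 * rand() / (double)RAND_MAX;
        } while (a + b >= 180.0);
        c = 180.0 - a - b;

        if (a < 90.0 && b < 90.0 && c < 90.0)
            acute++;
    }

    printf("Estimated P(acute) = %f\n", acute / (double)trials);
    return 0;
}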
Professor Strang teaches Linear Algebra and Computational Science at MIT, and both of these classes are videotaped and available on MIT's OpenCourseWare ocw.mit.edu. He also writes research papers and textbooks on these subjects. Click here to read more about Professor Strang.
Professor Strang’s Linear Algebra Class lecture videos
An online discussion of the problem discussed in this learning video
A free interactive math textbook on the web. Initially covering high-school geometry
An interactive lesson connecting probability and geometry
Provides extensive resources for the study of geometry
By: Abinaya in Java Tutorials on 2007-10-13
The Java programming language is strongly-typed, which means that all variables must first be declared before they can be used. This involves stating the variable's type and name, as you've already seen:
int gear = 1;
Doing so tells your program that a field named "gear" exists, holds numerical data, and has an initial value of "1". A variable's data type determines the values it may contain, plus the operations that may be performed on it. In addition to int, the Java programming language supports seven other primitive data types. A primitive type is predefined by the language and is named by a reserved keyword. Primitive values do not share state with other primitive values. The eight primitive data types supported by the Java programming language are:
- byte: The byte data type is an 8-bit signed two's complement integer. It has a minimum value of -128 and a maximum value of 127 (inclusive). The byte data type can be useful for saving memory in large arrays, where the memory savings actually matters. They can also be used in place of int where their limits help to clarify your code; the fact that a variable's range is limited can serve as a form of documentation.
- short: The short data type is a 16-bit signed two's complement integer. It has a minimum value of -32,768 and a maximum value of 32,767 (inclusive). As with byte, the same guidelines apply: you can use a short to save memory in large arrays, in situations where the memory savings actually matters.
- int: The int data type is a 32-bit signed two's complement integer. It has a minimum value of -2,147,483,648 and a maximum value of 2,147,483,647 (inclusive). For integral values, this data type is generally the default choice unless there is a reason (like the above) to choose something else. This data type will most likely be large enough for the numbers your program will use, but if you need a wider range of values, use long instead.
- long: The long data type is a 64-bit signed two's complement integer. It has a minimum value of -9,223,372,036,854,775,808 and a maximum value of 9,223,372,036,854,775,807 (inclusive). Use this data type when you need a range of values wider than those provided by int.
- float: The float data type is a single-precision 32-bit IEEE 754 floating point. Its range of values is beyond the scope of this discussion. As with the recommendations for byte and short, use a float (instead of double) if you need to save memory in large arrays of floating point numbers. This data type should never be used for precise values, such as currency. For that, you will need to use the java.math.BigDecimal class instead. Numbers and Strings covers BigDecimal and other useful classes provided by the Java platform.
- double: The double data type is a double-precision 64-bit IEEE 754 floating point. Its range of values is beyond the scope of this discussion. For decimal values, this data type is generally the default choice. As mentioned above, this data type should never be used for precise values, such as currency.
- boolean: The boolean data type has only two possible values: true and false. Use this data type for simple flags that track true/false conditions. This data type represents one bit of information, but its "size" isn't something that's precisely defined.
- char: The char data type is a single 16-bit Unicode character. It has a minimum value of '\u0000' (or 0) and a maximum value of '\uffff' (or 65,535 inclusive).
In addition to the eight primitive data types listed above, the Java programming language also provides special support for character strings via the java.lang.String class. Enclosing your character string within double quotes will automatically create a new String object; for example, String s = "this is a string";. String objects are immutable, which means that once created, their values cannot be changed. The String class is not technically a primitive data type, but considering the special support given to it by the language, you'll probably tend to think of it as such.
Learn amazing information about black holes by reading these black hole facts. Black holes are concentrated regions of space with extremely strong gravitational pulls. The infinite darkness, strength, birth and mysteries of black holes fascinate the public.
- Most common black holes are created by the death of a star. Stars with masses around 20 times that of the Sun or more will form a black hole at death. While a star is alive, nuclear fusion produces enough outward pressure to balance gravity; when the star dies, gravity compresses the material in the core. The star collapses in on itself, explodes as a supernova and leaves behind a black hole.
- No object can escape a black hole. After the supernova explosion, the remnant collapses toward zero volume and, in the classical picture, infinite density – a singularity – from which nothing can escape. Only an object travelling faster than the speed of light could escape a black hole, and no object can reach a velocity greater than the speed of light.
- Black holes are invisible. Since not even light can escape the gravitational pull of a black hole, black holes are invisible and very difficult to find. Scientists instead observe the effects on gas, planets, stars and dust in the surrounding regions of space. The heat and motion of gas and dust orbiting around the event horizon – the edge of the black hole – is a sign of a black hole's presence.
- There are three types of black holes. Three types of black holes exist: supermassive, miniature and stellar black holes. Supermassive black holes are the largest; they sit at the centers of most galaxies and capture orbiting stars. Stellar black holes are much smaller than supermassive black holes, but larger than miniature black holes. For example, a supermassive black hole may be 4,210,000 solar masses, while a stellar black hole is just 15 solar masses.
- John Michell and Pierre-Simon Laplace theorized black holes. In the 1790s, John Michell and Pierre-Simon Laplace suggested the existence of an "invisible star". They calculated the mass and size such a star would need, and reasoned that an object would require a velocity greater than the speed of light to escape from it. The modern term "black hole" was coined by John Wheeler in 1967.
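The "invisible star" argument can be made concrete with the Newtonian escape-velocity formula (a standard calculation, not spelled out in the text above): setting the escape velocity equal to the speed of light,
v_esc = sqrt(2GM/r) = c  gives  r = 2GM/c²,
which coincides with what is today called the Schwarzschild radius of a black hole of mass M.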
Hello everyone, my name is Atharva and we are continuing with the C programming course.
In the previous video we had discussed function call parameters, in which the first way was call by value method and the second was call by reference method.
Now, after knowing all these things, we are going to talk about what the scope of a variable is.
You might find this topic a little different, because we have already learned a lot about variables in so many videos, but here we are talking about the scope of a variable.
So, what is scope? Does scope sound like something new? Let's understand what exactly the meaning of scope is here.
So, scope is the part of the program where a defined variable can have its existence and beyond that it cannot exist.
So, this basically is the definition of the scope.
Now, let's understand it in layman's terms – what is this definition trying to say? In whichever part of the program we have defined a variable, that variable will be valid in that particular part itself. If we try to use the variable before or after that particular part,
we won't be able to access it, because it won't exist over there.
To understand this thing, we will go ahead and see how these things are of so much importance in the program.
We're going to go ahead and see how the scope can be used to define different things.
So, the scope basically or this boundary within which the variable exists and it doesn't exist out of it.
So, this boundary is defined in three ways, in which the first scope is of the local variables.
Now, what are these local variables? Local variables relate to the curly brackets we have been using – with if statements, with for loops, and even with functions.
So, these curly brackets basically define the scope of the local variables.
Whenever we create and initialise a variable inside these curly brackets,
that variable will be valid only inside those curly brackets.
If we try to print the value of that particular variable outside of those curly brackets, or wish to store any value in it,
the compiler will give us an error, because that variable was local to that curly bracket – it existed only within it.
After that we have global variables.
Now, what is a global variable? It is any variable which is not inside any curly brackets.
It is outside of all curly brackets,
and it is at the top, above the main function.
We can put it this way: these are variables which we do not define inside any function.
We define them outside of every function.
And we call them global variables.
What happens with this? Those variables can be used in any function, anywhere, under any scope.
Third, we will talk about our formal parameters.
We had already learned that in our function definition the parameters list that we pass, those parameters are called the formal parameters because their existence is limited inside the function definition, we cannot access it out of it.
So, we have discussed all these three scopes.
Now, we will see an example of the local variables.
Now, what is this example doing? We have made three simple variables, X, Y and Z; the program adds X and Y, stores the result in Z, and prints all three values.
So, this is a simple program.
Now, it is worth noting that X, Y and Z, the three variables that we have made,
will basically be called local variables.
Why is that? Because you can see that they are written inside this particular pair of curly brackets.
So, what will happen if, outside of the main function – somewhere here – I wish to use these X, Y, Z variables?
I will not be able to use them.
Why? Because they are local to this particular area only, and they will be restricted to it.
They will be available and will exist in this restricted area only.
Outside of it we can make a variable of the same name but it will not possess this particular value.
We will have to put a new value inside it, that is the reason why we call this local variable because it is local to these curly brackets.
Which means it is local to this main function.
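The program shown in the video is not reproduced in the transcript; the following is a minimal sketch consistent with the description (the names X, Y, Z come from the video, the values 10 and 20 are assumed):

#include <stdio.h>

int main(void)
{
    /* x, y and z are local variables: they exist only inside main's curly brackets */
    int x = 10;
    int y = 20;
    int z = x + y;

    printf("x = %d, y = %d, z = %d\n", x, y, z);

    /* Using x, y or z outside of main would be a compile-time error,
       because their scope ends at main's closing curly bracket. */
    return 0;
}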
Now we will talk about our global variables, which is the second type of scope that we saw.
What does the global-variable version look like? It is the same program, except that the Z variable, which we had earlier made inside the main function,
has now been written outside the main function, at the top.
So, here we have defined Z.
That is the reason we call it a global variable.
You can see that it is not in any function, it is not in any function definition, neither is it in the main function.
So, it is right outside of everything.
So, when you're using Z in the main function.
Our compiler will not show an error here.
as it knows that you have declared a global variable named Z.
So, inside the main function, you don't have to define it again unless you want to.
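Again, the exact program is not in the transcript; a minimal sketch of the global-variable version described here (values assumed) is:

#include <stdio.h>

/* z is a global variable: it is declared outside every function, above main,
   so every function in this file can use it. */
int z;

int main(void)
{
    int x = 10;   /* local to main */
    int y = 20;   /* local to main */

    z = x + y;    /* no need to declare z again inside main */
    printf("z = %d\n", z);
    return 0;
}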
So, this is a way to define a variable locally or globally.
Apart from this the third type that we saw of formal parameters, this is what we know very well.
The variables that we add to the parameter list of a function definition are what we call formal parameters.
And you can see that A and B are local to this particular function only; their validity remains inside it.
If you print a and b inside this function, it will give you the same value which is there in this particular function.
But if you wish to print its value outside of the function, it will not print.
In this way, these are our formal parameters, local variables and global variables.
These we call the scope of a variable.
That is, the scope tells us the area to which a particular variable is limited – the area in which it exists, and outside of which it does not exist.
Who tells us this? The scope of the variable.
After reading all these scopes now we have completed our function’s part.
So, we have seen the functions in the C program.
If you think that there is any part of this video or this topic, you might have not understood then you can ask us any query or question related to that topic without any inhibition.
You just have to go on forums.learnvern.com, you have to type your query there in your own language and we will come up with the solution or the answer as soon as possible in your own language.
Apart from this if you wish to have a discussion on any topic.
Even that you will be able to do on forums.learnvern.com.
We will go ahead of this and we will discuss derived data types.
We will go into each and every derived data type and we will explore those and we will understand how they work.
For now, we will keep practising the functions and we will meet in the coming videos.
Till that time.
Thank you so much.
Do you have a friend with whom you would like to share this course?
Centimetre–gram–second system of units
- For a topical guide to this subject, see Outline of the metric system.
The centimetre–gram–second system (abbreviated CGS or cgs) is a variant of the metric system of physical units based on centimetre as the unit of length, gram as a unit of mass, and second as a unit of time. All CGS mechanical units are unambiguously derived from these three base units, but there are several different ways of extending the CGS system to cover electromagnetism.
The CGS system has been largely supplanted by the MKS system, based on metre, kilogram, and second. MKS was in turn extended and replaced by the International System of Units (SI). The latter adopts the three base units of MKS, plus the ampere, mole, candela and kelvin. In many fields of science and engineering, SI is the only system of units in use. However, there remain certain subfields where CGS is prevalent.
In measurements of purely mechanical systems (involving units of length, mass, force, energy, pressure, and so on), the differences between CGS and SI are straightforward and rather trivial; the unit-conversion factors are all powers of 10 arising from the relations 100 cm = 1 m and 1000 g = 1 kg. For example, the CGS-derived unit of force is the dyne, equal to 1 g·cm/s², while the SI-derived unit of force is the newton, 1 kg·m/s². Thus it is straightforward to show that 1 dyne = 10⁻⁵ newtons.
On the other hand, in measurements of electromagnetic phenomena (involving units of charge, electric and magnetic fields, voltage, and so on), converting between CGS and SI is much more subtle and involved. In fact, formulas for physical laws of electromagnetism (such as Maxwell's equations) need to be adjusted depending on which system of units one uses. This is because there is no one-to-one correspondence between electromagnetic units in SI and those in CGS, as is the case for mechanical units. Furthermore, within CGS, there are several plausible choices of electromagnetic units, leading to different unit "sub-systems", including Gaussian, "ESU", "EMU", and Heaviside–Lorentz. Among these choices, Gaussian units are the most common today, and in fact the phrase "CGS units" is often used to refer specifically to CGS-Gaussian units.
- 1 History
- 2 Definition of CGS units in mechanics
- 3 Derivation of CGS units in electromagnetism
- 3.1 CGS approach to electromagnetic units
- 3.2 Alternate derivations of CGS units in electromagnetism
- 3.3 Various extensions of the CGS system to electromagnetism
- 3.4 Electrostatic units (ESU)
- 3.5 Electromagnetic units (EMU)
- 3.6 Relations between ESU and EMU units
- 3.7 Other variants
- 4 Electromagnetic units in various CGS systems
- 5 Physical constants in CGS units
- 6 Pro and contra
- 7 See also
- 8 References and notes
- 9 General literature
The CGS system goes back to a proposal in 1832 by the German mathematician Carl Friedrich Gauss to base a system of absolute units on the three fundamental units of length, mass and time. Gauss chose the units of millimetre, milligram and second. In 1874, it was extended by the British physicists James Clerk Maxwell and William Thomson with a set of electromagnetic units and the selection of centimetre, gram and second and the naming of C.G.S.
The sizes of many CGS units turned out to be inconvenient for practical purposes. For example, many everyday objects are hundreds or thousands of centimetres long, such as humans, rooms and buildings. Thus the CGS system never gained wide general use outside the field of science. Starting in the 1880s, and more significantly by the mid-20th century, CGS was gradually superseded internationally for scientific purposes by the MKS (metre–kilogram–second) system, which in turn developed into the modern SI standard.
Since the international adoption of the MKS standard in the 1940s and the SI standard in the 1960s, the technical use of CGS units has gradually declined worldwide, in the United States more slowly than elsewhere. CGS units are today no longer accepted by the house styles of most scientific journals, textbook publishers, or standards bodies, although they are commonly used in astronomical journals such as the Astrophysical Journal. CGS units are still occasionally encountered in technical literature, especially in the United States in the fields of material science, electrodynamics and astronomy. The continued usage of CGS units is most prevalent in magnetism and related fields, as the primary MKS unit, the tesla, is inconveniently large, leading to the continued common use of the gauss, the CGS equivalent.
The units gram and centimetre remain useful as prefixed units within the SI system, especially for instructional physics and chemistry experiments, where they match the small scale of table-top setups. However, where derived units are needed, the SI ones are generally used and taught instead of the CGS ones today. For example, a physics lab course might ask students to record lengths in centimetres, and masses in grams, but force (a derived unit) in newtons, a usage consistent with the SI system.
Definition of CGS units in mechanics
In mechanics, the CGS and SI systems of units are built in an identical way. The two systems differ only in the scale of two out of the three base units (centimetre versus metre and gram versus kilogram, respectively), while the third unit (second as the unit of time) is the same in both systems.
There is a one-to-one correspondence between the base units of mechanics in CGS and SI, and the laws of mechanics are not affected by the choice of units. The definitions of all derived units in terms of the three base units are therefore the same in both systems, and there is an unambiguous one-to-one correspondence of derived units:
- v = dx/dt (definition of velocity)
- F = m·a (Newton's second law of motion)
- E = F·d (energy defined in terms of work)
- p = F/A (pressure defined as force per unit area)
- μ = τ/(dv/dx) (dynamic viscosity defined as shear stress per unit velocity gradient).
Thus, for example, the CGS unit of pressure, barye, is related to the CGS base units of length, mass, and time in the same way as the SI unit of pressure, pascal, is related to the SI base units of length, mass, and time:
- 1 unit of pressure = 1 unit of force/(1 unit of length)² = 1 unit of mass/(1 unit of length·(1 unit of time)²)
- 1 Ba = 1 g/(cm·s²)
- 1 Pa = 1 kg/(m·s²).
Expressing a CGS derived unit in terms of the SI base units, or vice versa, requires combining the scale factors that relate the two systems:
- 1 Ba = 1 g/(cm·s²) = 10⁻³ kg/(10⁻² m·s²) = 10⁻¹ kg/(m·s²) = 10⁻¹ Pa.
Definitions and conversion factors of CGS units in mechanics
|Quantity||Symbol||CGS unit name||Unit symbol||Definition||In SI units|
|length, position||L, x||centimetre||cm||1/100 of metre||= 10⁻² m|
|mass||m||gram||g||1/1000 of kilogram||= 10⁻³ kg|
|time||t||second||s||1 second||= 1 s|
|velocity||v||centimetre per second||cm/s||cm/s||= 10⁻² m/s|
|acceleration||a||gal||Gal||cm/s²||= 10⁻² m/s²|
|force||F||dyne||dyn||g·cm/s²||= 10⁻⁵ N|
|energy||E||erg||erg||g·cm²/s²||= 10⁻⁷ J|
|power||P||erg per second||erg/s||g·cm²/s³||= 10⁻⁷ W|
|pressure||p||barye||Ba||g/(cm·s²)||= 10⁻¹ Pa|
|dynamic viscosity||μ||poise||P||g/(cm·s)||= 10⁻¹ Pa·s|
|kinematic viscosity||ν||stokes||St||cm²/s||= 10⁻⁴ m²/s|
|wavenumber||k||kayser||cm⁻¹||cm⁻¹||= 100 m⁻¹|
Derivation of CGS units in electromagnetism
CGS approach to electromagnetic units
The conversion factors relating electromagnetic units in the CGS and SI systems are much more complex – so much so that formulae expressing physical laws of electromagnetism are different depending on what system of units one uses. This illustrates the fundamental difference in the ways the two systems are built:
- In SI, the unit of electric current, the ampere (A), was historically defined such that the magnetic force exerted by two infinitely long, thin, parallel wires 1 metre apart and carrying a current of 1 ampere is exactly 2×10⁻⁷ N/m. This definition results in all SI electromagnetic units being consistent (subject to factors of some integer powers of 10) with the EMU CGS system described in further sections. The ampere is a base unit of the SI system, with the same status as the metre, kilogram, and second. Thus the relationship in the definition of the ampere with the metre and newton is disregarded, and the ampere is not treated as dimensionally equivalent to any combination of other base units. As a result, electromagnetic laws in SI require an additional constant of proportionality (see Vacuum permittivity) to relate electromagnetic units to kinematic units. (This constant of proportionality is derivable directly from the above definition of the ampere.) All other electric and magnetic units are derived from these four base units using the most basic common definitions: for example, electric charge q is defined as current I multiplied by time t,
- therefore the unit of electric charge, the coulomb (C), is defined as 1 C = 1 A·s.
- The CGS system avoids introducing new base units and instead derives all electric and magnetic units directly from the centimetre, gram, and second based on the physical laws that relate electromagnetic phenomena to mechanics.
Alternate derivations of CGS units in electromagnetism
Electromagnetic relationships to length, time and mass may be derived by several equally appealing methods. Two of them rely on the forces observed on charges. Two fundamental laws relate (independently of each other) the electric charge or its rate of change (electric current) to a mechanical quantity such as force. They can be written in system-independent form as follows:
- The first is Coulomb's law, F = k_C·q·q′/d², which describes the electrostatic force F between electric charges q and q′, separated by distance d. Here k_C is a constant which depends on how exactly the unit of charge is derived from the CGS base units.
- The second is Ampère's force law, F/L = 2·k_A·I·I′/d, which describes the magnetic force F per unit length L between currents I and I′ flowing in two straight parallel wires of infinite length, separated by a distance d that is much greater than the wire diameters. Since I = q/t and I′ = q′/t, the constant k_A also depends on how the unit of charge is derived from the CGS base units.
Maxwell's theory of electromagnetism relates these two laws to each other. It states that the ratio of the proportionality constants k_C and k_A must obey k_C/k_A = c², where c is the speed of light in vacuum. Therefore, if one derives the unit of charge from Coulomb's law by setting k_C = 1, it is obvious that Ampère's force law will contain a prefactor 2/c². Alternatively, deriving the unit of current, and therefore the unit of charge, from Ampère's force law by setting k_A = 1 or k_A = 1/2, will lead to a constant prefactor in Coulomb's law.
Indeed, both of these mutually exclusive approaches have been practiced by the users of CGS system, leading to the two independent and mutually exclusive branches of CGS, described in the subsections below. However, the freedom of choice in deriving electromagnetic units from the units of length, mass, and time is not limited to the definition of charge. While the electric field can be related to the work performed by it on a moving electric charge, the magnetic force is always perpendicular to the velocity of the moving charge, and thus the work performed by the magnetic field on any charge is always zero. This leads to a choice between two laws of magnetism, each relating magnetic field to mechanical quantities and electric charge:
- The first law describes the Lorentz force produced by a magnetic field B on a charge q moving with velocity v: F = α_L·q·(v × B), where α_L is a proportionality constant whose value depends on the unit system.
- The second describes the creation of a static magnetic field B by an electric current I of finite length dl at a point displaced by a vector r, known as the Biot–Savart law: dB = α_B·I·(dl × r̂)/r², where r and r̂ are the length and the unit vector in the direction of vector r respectively, and α_B is again a system-dependent constant.
These two laws can be used to derive Ampère's force law above, resulting in the relationship k_A = α_L·α_B. Therefore, if the unit of charge is based on Ampère's force law such that k_A = 1, it is natural to derive the unit of magnetic field by setting α_L = α_B = 1. However, if that is not the case, a choice has to be made as to which of the two laws above is a more convenient basis for deriving the unit of magnetic field.
Furthermore, if we wish to describe the electric displacement field D and the magnetic field H in a medium other than vacuum, we need to also define the constants ε₀ and μ₀, which are the vacuum permittivity and permeability, respectively. Then we have (generally) D = ε₀E + λP and H = B/μ₀ − λ′M, where P and M are the polarization density and magnetization vectors. The factors λ and λ′ are rationalization constants, which are usually chosen to be 4π·k_C·ε₀, a dimensionless quantity. If λ = λ′ = 1, the system is said to be "rationalized": the laws for systems of spherical geometry contain factors of 4π (for example, point charges), those of cylindrical geometry – factors of 2π (for example, wires), and those of planar geometry contain no factors of π (for example, parallel-plate capacitors). However, the original CGS system used λ = λ′ = 4π, or, equivalently, k_C·ε₀ = 1. Therefore, the Gaussian, ESU, and EMU subsystems of CGS (described below) are not rationalized.
Various extensions of the CGS system to electromagnetism
The table below shows the values of the above constants used in some common CGS subsystems:
(ESU, esu, or stat-)
(EMU, emu, or ab-)
The constant b in SI system is a unit-based scaling factor defined as: .
Note that of all these variants, only in the Gaussian and Heaviside–Lorentz systems does the Lorentz constant α_L equal c⁻¹ rather than 1. As a result, the vectors E and B of an electromagnetic wave propagating in vacuum have the same units and are equal in magnitude in these two variants of CGS.
Electrostatic units (ESU)
In one variant of the CGS system, electrostatic units (ESU), charge is defined via the force it exerts on other charges, and current is then defined as charge per unit time. This is done by setting the Coulomb force constant k_C = 1, so that Coulomb's law does not contain an explicit prefactor.
Therefore, in electrostatic CGS units, a franklin is equal to a centimetre times the square root of a dyne: 1 Fr = 1 statC = 1 cm·dyn^(1/2) = 1 g^(1/2)·cm^(3/2)·s^(−1).
The unit of current is then defined as 1 Fr/s (one franklin per second).
Dimensionally in the ESU CGS system, charge q is therefore equivalent to m^(1/2)·L^(3/2)·t^(−1). Hence, neither charge nor current is an independent physical quantity in ESU CGS. This reduction of units is a consequence of the Buckingham π theorem.
All electromagnetic units in ESU CGS system that do not have proper names are denoted by a corresponding SI name with an attached prefix "stat" or with a separate abbreviation "esu".
Electromagnetic units (EMU)
In another variant of the CGS system, electromagnetic units (EMU), current is defined via the force existing between two thin, parallel, infinitely long wires carrying it, and charge is then defined as current multiplied by time. (This approach was eventually used to define the SI unit of ampere as well.) In the EMU CGS subsystem, this is done by setting the Ampère force constant k_A = 1, so that Ampère's force law simply contains 2 as an explicit prefactor (this prefactor 2 is itself a result of integrating a more general formulation of Ampère's law over the length of the infinite wire).
The biot is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed one centimetre apart in vacuum, would produce between these conductors a force equal to two dynes per centimetre of length.
Therefore, in electromagnetic CGS units, a biot is equal to the square root of a dyne: 1 Bi = 1 abA = 1 dyn^(1/2) = 1 g^(1/2)·cm^(1/2)·s^(−1).
The unit of charge in CGS EMU is 1 abC = 1 Bi·s.
Dimensionally in the EMU CGS system, charge q is therefore equivalent to m^(1/2)·L^(1/2). Hence, neither charge nor current is an independent physical quantity in EMU CGS.
All electromagnetic units in EMU CGS system that do not have proper names are denoted by a corresponding SI name with an attached prefix "ab" or with a separate abbreviation "emu".
Relations between ESU and EMU units
The ESU and EMU subsystems of CGS are connected by the fundamental relationship k_C/k_A = c² (see above), where c = 29,979,245,800 ≈ 3·10¹⁰ is the speed of light in vacuum in centimetres per second. Therefore, the ratio of the corresponding "primary" electrical and magnetic units (e.g. current, charge, voltage, etc. – quantities proportional to those that enter directly into Coulomb's law or Ampère's force law) is equal either to c⁻¹ or c:
Units derived from these may have ratios equal to higher powers of c.
There were at various points in time about half a dozen systems of electromagnetic units in use, most based on the CGS system. These also include the Gaussian units and the Heaviside–Lorentz units.
Further complicating matters is the fact that some physicists and electrical engineers in North America use hybrid units, such as volts per centimetre for electric fields and amperes per centimetre for magnetic fields. However, these are essentially the same as the SI units, by the simple conversion of all lengths used from metres into centimetres.
Electromagnetic units in various CGS systems
|Quantity||Symbol||SI unit||ESU unit||EMU unit||Gaussian unit|
|electric charge||q||1 C||↔ (10⁻¹ c) statC||↔ (10⁻¹) abC||↔ (10⁻¹ c) Fr|
|electric current||I||1 A||↔ (10⁻¹ c) statA||↔ (10⁻¹) abA||↔ (10⁻¹ c) Fr·s⁻¹|
|electric potential (voltage)||V||1 V||↔ (10⁸ c⁻¹) statV||↔ (10⁸) abV||↔ (10⁸ c⁻¹) statV|
|electric field||E||1 V/m||↔ (10⁶ c⁻¹) statV/cm||↔ (10⁶) abV/cm||↔ (10⁶ c⁻¹) statV/cm|
|magnetic B field||B||1 T||↔ (10⁴ c⁻¹) statT||↔ (10⁴) G||↔ (10⁴) G|
|magnetic H field||H||1 A/m||↔ (4π·10⁻³ c) statA/cm||↔ (4π·10⁻³) Oe||↔ (4π·10⁻³) Oe|
|magnetic dipole moment||μ||1 A·m²||↔ (10³ c) statA·cm²||↔ (10³) abA·cm²||↔ (10³) erg/G|
|magnetic flux||Φm||1 Wb||↔ (10⁸ c⁻¹) statT·cm²||↔ (10⁸) Mx||↔ (10⁸) G·cm²|
|resistance||R||1 Ω||↔ (10⁹ c⁻²) s/cm||↔ (10⁹) abΩ||↔ (10⁹ c⁻²) s/cm|
|resistivity||ρ||1 Ω·m||↔ (10¹¹ c⁻²) s||↔ (10¹¹) abΩ·cm||↔ (10¹¹ c⁻²) s|
|capacitance||C||1 F||↔ (10⁻⁹ c²) cm||↔ (10⁻⁹) abF||↔ (10⁻⁹ c²) cm|
|inductance||L||1 H||↔ (10⁹ c⁻²) cm⁻¹·s²||↔ (10⁹) abH||↔ (10⁹ c⁻²) cm⁻¹·s²|
In this table, c = 29,979,245,800 ≈ 3·10¹⁰ is the speed of light in vacuum in the CGS units of centimetres per second. The symbol "↔" is used instead of "=" as a reminder that the SI and CGS units are corresponding but not equal because they have incompatible dimensions. For example, according to the next-to-last row of the table, if a capacitor has a capacitance of 1 F in SI, then it has a capacitance of (10⁻⁹ c²) cm in ESU; but it is usually incorrect to replace "1 F" with "(10⁻⁹ c²) cm" within an equation or formula. (This warning is a special aspect of electromagnetism units in CGS. By contrast, for example, it is always correct to replace "1 m" with "100 cm" within an equation or formula.)
One can think of the SI value of the Coulomb constant kC as kC = 1/(4πε₀) ≈ 8.988×10⁹ N·m²/C², which is numerically equal to 10⁻⁷ c² when c is expressed in m/s.
This explains why SI to ESU conversions involving factors of c2 lead to significant simplifications of the ESU units, such as 1 statF = 1 cm and 1 statΩ = 1 s/cm: this is the consequence of the fact that in ESU system kC = 1. For example, a centimetre of capacitance is the capacitance between a sphere of radius 1 cm in vacuum and infinity. The capacitance C between two concentric spheres of radii R and r in ESU CGS system is:
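The formula itself is not reproduced in the text; the standard ESU/Gaussian result for two concentric spheres (with R > r) is
C = 1/(1/r − 1/R) = rR/(R − r).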
By taking the limit as R goes to infinity we see C equals r.
Physical constants in CGS units
|Atomic mass unit||u||1.660 538 782 × 10⁻²⁴ g|
|Bohr magneton||μB||9.274 009 15 × 10⁻²¹ erg/G (EMU, Gaussian)|
|||2.780 278 00 × 10⁻¹⁰ statA·cm² (ESU)|
|Bohr radius||a0||5.291 772 0859 × 10⁻⁹ cm|
|Boltzmann constant||k||1.380 6504 × 10⁻¹⁶ erg/K|
|Electron mass||me||9.109 382 15 × 10⁻²⁸ g|
|Elementary charge||e||4.803 204 27 × 10⁻¹⁰ Fr (ESU, Gaussian)|
|||1.602 176 487 × 10⁻²⁰ abC (EMU)|
|Fine-structure constant||α ≈ 1/137||7.297 352 570 × 10⁻³|
|Gravitational constant||G||6.674 28 × 10⁻⁸ cm³/(g·s²)|
|Planck constant||h||6.626 068 85 × 10⁻²⁷ erg·s|
|Reduced Planck constant||ħ||1.054 5716 × 10⁻²⁷ erg·s|
|Speed of light in vacuum||c||≡ 2.997 924 58 × 10¹⁰ cm/s|
Pro and contra
While the absence of explicit prefactors in some CGS subsystems simplifies some theoretical calculations, it has the disadvantage that the units in CGS are sometimes hard to define through experiment. Also, the lack of unique unit names leads to great confusion: thus "15 emu" may mean either 15 abvolts, or 15 emu units of electric dipole moment, or 15 emu units of magnetic susceptibility, sometimes (but not always) per gram, or per mole. On the other hand, SI starts with a unit of current, the ampere, that is easier to determine through experiment, but which requires extra multiplicative factors in the electromagnetic equations. With its system of uniquely named units, the SI also removes any confusion in usage: 1.0 ampere is a fixed value of a specified quantity, and so are 1.0 henry, 1.0 ohm, and 1.0 volt.
A key virtue of the Gaussian CGS system is that electric and magnetic fields have the same units, 4πε₀ is replaced by 1, and the only dimensional constant appearing in the Maxwell equations is c, the speed of light. The Heaviside–Lorentz system has these desirable properties as well (with ε₀ equal to 1), but it is a "rationalized" system (as is SI) in which the charges and fields are defined in such a way that there are many fewer factors of 4π appearing in the formulas, and it is in Heaviside–Lorentz units that the Maxwell equations take their simplest form.
In SI, and other rationalized systems (for example, Heaviside–Lorentz), the unit of current was chosen such that electromagnetic equations concerning charged spheres contain 4π, those concerning coils of current and straight wires contain 2π, and those dealing with charged surfaces lack π entirely, which was the most convenient choice for applications in electrical engineering. However, modern hand calculators and personal computers have eliminated this "advantage". In some fields where formulas concerning spheres are common (for example, in astrophysics), it has been argued that the nonrationalized CGS system can be somewhat more convenient notationally.
In fact, in certain fields, specialized unit systems are used to simplify formulas even further than either SI or CGS, by using some system of natural units. For example, those in particle physics use a system where every quantity is expressed by only one unit, the electron-volt, with lengths, times, and so on all converted into electron-volts by inserting factors of c and the reduced Planck constant ħ. This unit system is very convenient for calculations in particle physics, but it would be impractical in other contexts.
- List of scientific units named after people
- Metre–tonne–second system of units
- United States customary units
In 1971, English astronomers Donald Lynden-Bell and Martin Rees hypothesized that a supermassive black hole (SMBH) resides at the center of our Milky Way Galaxy. This was based on their work with radio galaxies, which showed that the massive amounts of energy radiated by these objects was due to gas and matter being accreted onto a black hole at their center.
By 1974, the first evidence for this SMBH was found when astronomers detected a massive radio source coming from the center of our galaxy. This region, which they named Sagittarius A*, is millions of times as massive as our own Sun. Since its discovery, astronomers have found evidence that there are supermassive black holes at the centers of most spiral and elliptical galaxies in the observable Universe.
Supermassive black holes (SMBHs) are distinct from lower-mass black holes in a number of ways. For starters, since SMBHs have a much higher mass than smaller black holes, they also have a lower average density. This is because, for any spherical object, volume is directly proportional to the cube of the radius, while the minimum average density of a black hole is inversely proportional to the square of its mass.
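This scaling can be made explicit with a standard back-of-the-envelope relation that is not spelled out in the article: taking the Schwarzschild radius as the size of the black hole,
r_s = 2GM/c²,  so  ρ_min ≈ M / ((4/3)π·r_s³) = 3c⁶/(32πG³M²) ∝ 1/M².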
In addition, the tidal forces in the vicinity of the event horizon are significantly weaker for massive black holes. As with density, the tidal force on a body at the event horizon is inversely proportional to the square of the mass. As such, an object would not experience significant tidal force until it was very deep into the black hole.
How SMBHs are formed remains the subject of much scholarly debate. Astrophysicists largely believe that they are the result of black hole mergers and the accretion of matter. But where the “seeds” (i.e. progenitors) of these black holes came from is where disagreement occurs. Currently, the most obvious hypothesis is that they are the remnants of several massive stars that exploded, which were formed by the accretion of matter in the galactic center.
Another theory is that before the first stars formed in our galaxy, a large gas cloud collapsed into a "quasi-star", which became unstable to radial perturbations and turned into a black hole of about 20 solar masses without the need for a supernova explosion. Over time, it rapidly accreted mass to become an intermediate-mass, and then supermassive, black hole.
In yet another model, a dense stellar cluster experienced core collapse as a result of the velocity dispersion in its core, which reached relativistic speeds due to the cluster's negative heat capacity. Last, there is the theory that primordial black holes may have been produced directly by external pressure immediately after the Big Bang. These and other ideas remain theoretical for the time being.
Multiple lines of evidence point towards the existence of a SMBH at the center of our galaxy. While no direct observations have been made of Sagittarius A*, its presence has been inferred from the influence it has on surrounding objects. The most notable of these is S2, a star that follows an elliptical orbit around the Sagittarius A* radio source.
S2 has an orbital period of 15.2 years and reaches a minimal distance of 18 billion km (11.18 billion mi, 120 AU) from the center of the central object. Only a supermassive object could account for this, since no other cause can be discerned. And from the orbital parameters of S2, astronomers have been able to produce estimates on the size and mass of the object.
For instance, S2's motions have led astronomers to calculate that the object at the center of its orbit must have a mass of no less than 4.1 million solar masses (8.2 × 10³³ metric tons; 9.04 × 10³³ US tons). Furthermore, the radius of this object would have to be less than 120 AU, otherwise S2 would collide with it.
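The mass estimate follows from Kepler's third law, which in units of solar masses, astronomical units and years takes the simple form
M/M_sun = (a/AU)³ / (T/yr)².
The semi-major axis of S2's orbit – on the order of 10³ AU – is not quoted in the text above and is an assumption here; with that value and the measured 15.2-year period, the formula yields a central mass of a few million solar masses, consistent with the figure given.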
As Reinhard Genzel, the team leader from the Max-Planck-Institute for Extraterrestrial Physics said:
“Undoubtedly the most spectacular aspect of our long term study is that it has delivered what is now considered to be the best empirical evidence that supermassive black holes do really exist. The stellar orbits in the Galactic Centre show that the central mass concentration of four million solar masses must be a black hole, beyond any reasonable doubt.”
Another indication of Sagittarius A*'s presence came on January 5th, 2015, when NASA reported a record-breaking X-ray flare from the center of our galaxy. Based on readings from the Chandra X-ray Observatory, astronomers reported emissions 400 times brighter than usual, thought to be the result of an asteroid falling into the black hole or of the entanglement of magnetic field lines within the gas flowing into it.
Another indication comes from Active Galactic Nuclei (AGN), where massive bursts of radiation in the radio, microwave, infrared, optical, ultraviolet (UV), X-ray and gamma-ray wavebands are periodically detected coming from the regions of cold matter (gas and dust) at the centers of larger galaxies. While the radiation does not come from the black holes themselves, the influence such a massive object has on the surrounding matter is believed to be the cause.
In short, gas and dust form accretion disks at the center of galaxies that orbit supermassive black holes, gradually feeding them matter. The incredible force of gravity in this region compresses the disk’s material until it reaches millions of degrees kelvin, generating bright radiation and electromagnetic energy. A corona of hot material forms above the accretion disc as well, and can scatter photons up to X-ray energies.
The interaction between the SMBH's rotating magnetic field and the accretion disk also creates powerful magnetic jets that fire material above and below the black hole at relativistic speeds (i.e. at a significant fraction of the speed of light). These jets can extend for hundreds of thousands of light-years, and are a second potential source of observed radiation.
The study of black holes is still in its infancy. And what we have learned over the past few decades alone has been both exciting and awe-inspiring. Whether they are lower-mass or supermassive, black holes are an integral part of our Universe and play an active role in its evolution.
Who knows what we will find as we peer deeper into the Universe? Perhaps some day the technology, and sheer audacity, will exist so that we might attempt to peek beneath the veil of an event horizon. Can you imagine that happening?