Discriminant Activity

Question: How is the discriminant used to determine the number and type of solutions to a quadratic equation?

Launch: Students will be given a half-sheet of paper with 5 quadratic equations that are not in standard form. Students will have to match the a, b and c values to the correct quadratic equation. Students will have 2 minutes to check with a partner and problem solve if there are discrepancies.

Investigation: Discovering the Importance of the Discriminant Activity. For each of the ten functions on the handout (y x² 6x 5, y 2x² 11, y x² 6x 9, y 3x² 8, y 3x² 12x 8, y 3x² x 5, y x² 10x 25, y 4x² 8x 13, y 5x² 8x, y x² 12), students record a, b, c, the discriminant b² - 4ac, a sketch of the graph, the number of times the graph crosses or touches the x-axis, the number of real solutions, and the number of imaginary solutions.

Conclusions/Closure: On a separate sheet of paper, answer the following: Make a general statement about the value of the discriminant and what type of solutions a quadratic has.
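The closure question can also be checked numerically. Below is a minimal Python sketch, not part of the original handout, that classifies a quadratic ax² + bx + c by its discriminant; the sample coefficients at the end are illustrative.

```python
import math

def classify_quadratic(a: float, b: float, c: float) -> str:
    """Classify the roots of ax^2 + bx + c = 0 using the discriminant."""
    d = b**2 - 4*a*c
    if d > 0:
        roots = ((-b + math.sqrt(d)) / (2*a), (-b - math.sqrt(d)) / (2*a))
        return f"discriminant {d} > 0: two distinct real solutions {roots}"
    elif d == 0:
        return f"discriminant 0: one repeated real solution {-b / (2*a)}"
    else:
        return f"discriminant {d} < 0: no real solutions (two imaginary solutions)"

# Example: a quadratic with discriminant zero touches the x-axis exactly once.
print(classify_quadratic(1, 6, 9))
```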
Class 9 Maths syllabus is full of some really tough topics that can make your head spin! Apart from the continuation of old topics being upgraded, like Real Numbers, Trigonometry, Polynomials, and Linear Equations in Two Variables, you will also be introduced to some unique concepts. One such chapter is the Introduction to Euclid's Geometry, which deals with a range of axioms and theorems. Want to know more about it? Check out these amazing study notes on Euclid's Geometry Class 9.
What is Euclid’s Geometry?
Euclid was a famous mathematician and teacher popularly known as the ‘Father of Geometry’. He was the first to introduce methods for proving mathematical concepts using logical reasoning. Euclidean geometry deals with the understanding of geometrical shapes and figures on a flat or plane surface using axioms and theorems. The key definitions for Euclid's Geometry Class 9 are as follows:
- A Point has no component or part
- Anything which has length but does not have any breadth is called Line
- The ends of a line are points; a line with two endpoints is called a line segment
- Anything that has length and breadth but no height is called Surface
- A line which lies evenly with the points on itself is called a straight line
- Edges of the surface are in the form of lines
- A surface with straight lines on itself is referred to as a Plane Surface
Understanding Coordinate Geometry for Competitive Exams
What are Axioms in Euclid’s Geometry?
Axioms are common notions: assumptions used throughout mathematics that are not specific to geometry. The ones used in Euclid's geometry class 9 are as follows:
- If two things are equal to the same thing, then they are equal to one another. If a=b and b=c, then a=c
- If equals are added to equals, then the wholes are also equal. If a is added to b and c where b=c, then a+b=a+c
- If equals are subtracted from equals, then the remainders are also equal. If a is subtracted from b and c where b=c, then b-a=c-a
- If two things coincide with one another, then they are equal
- The whole is always greater than the part
- Things which are double of the same thing are always equal
- Things which are halves of the same thing are equal to one another
Also Read: BODMAS Questions
What are Postulates in Euclid’s Geometry Class 9?
Postulates are assumptions specifically related to Geometry. Euclid gave five postulates, all of which are part of the syllabus for Euclid’s Geometry class 9.
- A straight line may be drawn from any one point to any other point. The axiom related to this postulate states that one and only one line can be drawn through 2 distinct points.
- Euclid referred to a line segment as a terminated line and stated that it can be produced indefinitely. In other words, a line segment can be extended from both sides.
- A circle can be formed with any value of radius and any point as the centre.
- All right angles formed are always equal. For Example Angle A = 90° and Angle B = 90°, then Angle A = Angle B.
- If a straight line falling on two straight lines forms two interior angles on the same side whose sum is less than 180°, then the two lines, if extended indefinitely, will intersect on the side where the sum of the two interior angles is less than 2 right angles.
- If the sum of 2 interior angles on the same side of the line is equal to 2 right angles or 180°, then these two lines will be parallel to each other.
Equivalent Versions of Euclid’s Fifth and Last Postulate
Euclid's Fifth Postulate states that if the sum of the 2 interior angles on the same side is equal to 180° then the lines are parallel, and if the sum is less than 180° then the lines will intersect with each other when extended.
Two equivalent versions of Euclid's Fifth Postulate are:
Playfair's Axiom: This axiom states that if you have any line 'l' and any point 'P' not lying on 'l', then out of the infinitely many lines that can be drawn through point P, there is exactly 1 line parallel to line l passing through point P.
The second version is that two distinct intersecting lines cannot be parallel to the same line. In other words, if two lines intersect each other, then a line parallel to one of them cannot be parallel to the other intersecting line.
Solved Questions for Euclid’s Geometry Class 9
Let’s now understand Euclid’s Geometry class 9 with the help of some solved examples:
If a point C lies on line segment AB such that AC = CB, then prove that AC = ½ AB
Given: AC = CB
Adding AC to both sides,
Now, AC + AC = CB + AC
So, 2AC = AB (since CB + AC = AB, because C lies between A and B)
Hence, AC = ½ AB
Prove that each and every Line Segment has only one midpoint
Let us assume that line segment AB has 2 midpoints, P and Q
So, AP = PB and AQ = QB
Adding AP and AQ to respective equations
So, 2AP = AP + PB and 2AQ = QB + AQ
So, both 2AP and 2AQ are equal to AB (since AP + PB = AB and AQ + QB = AB)
Thus, 2AP = 2AQ (1st Axiom: things equal to the same thing are equal to one another)
So, AP = AQ (7th Axiom)
Thus, P and Q coincide with each other
So, It is proved that each line segment has only one midpoint
Practice Questions Euclid’s Geometry Class 9
- Prove existence of Parallel lines with the help of Euclid’s 5th Postulate.
- If 2 points B and C lie on line segment AD such that AC = BD, then prove that AB = CD
- If point R lies on line segment PQ such that PR = RQ, then prove that PR = ½ PQ
- What are the five postulates of Euclid’s Geometry?
- If 2 salespersons make the same number of sales in the month of October and their sales double in the month of November, what can you say about their sales in November? Which axiom applies?
This was all about Euclid's Geometry Class 9. Hope you liked the blog; do let us know your review in the comment section. For more blogs on careers, courses and top universities, stay tuned to Leverage Edu. If you need any career-related advice, help or guidance, feel free to reach out to Leverage Edu experts!
- To gain new insight, astronomers plan to observe a spiral galaxy that hosts a supermassive black hole at its center.
- They will gather the data through NASA’s James Webb Space Telescope, which is set to be launched in 2021.
- By evaluating motions of stars around the black hole, they can determine the black hole’s mass.
Over the past decade, scientists have realized that most galaxies contain at least one supermassive black hole in their central regions. The galaxy and its central black hole evolve in lockstep, for reasons that we don't know yet. In fact, most things about these central, supermassive black holes are not yet understood.
To extract new information, NASA's James Webb Telescope will turn its infrared gaze to the center of an intermediate spiral Seyfert galaxy, NGC 4151. At a distance of 62 million light years, it is one of the nearest galaxies to contain an actively feeding and glowing supermassive black hole.
The galaxy's X-ray emission was first detected in 1970 by the X-ray observatory satellite Uhuru. It appears as an average spiral that hosts a supermassive black hole (with a mass of nearly 40 million Suns) at its center.
James Webb Telescope (to be launched in 2021) will study every phase in the history of our Universe. A team of astronomers will use it to determine the mass of the black hole situated in the center of NGC 4151. Although the outcome may seem like a piece of trivia, it will help us understand how black holes feed and affect their surrounding galaxy. This will eventually improve our understanding of numerous galaxies in the universe.
How To Weigh Supermassive Black Holes?
There are many techniques to measure the mass of a supermassive black hole: one of them relies on determining the stars' motions in the core of the galaxy. Since the motion of a star is strongly influenced by the gravitational force(s) acting on it, the faster the stars move, the heavier the black hole must be.
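As a rough illustration of the idea (a back-of-the-envelope virial-style estimate, not the team's actual method), the enclosed mass scales roughly as M ≈ σ²r/G for stars moving with velocity dispersion σ at radius r. A minimal Python sketch with illustrative numbers:

```python
# Rough order-of-magnitude estimate of a central black hole mass from
# stellar motions: M ~ sigma^2 * r / G (a simple virial-style relation).
# The numbers used below are illustrative, not measurements of NGC 4151.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
LIGHT_YEAR = 9.461e15  # metres

def black_hole_mass(sigma_km_s: float, radius_ly: float) -> float:
    """Approximate enclosed mass (in solar masses) implied by stars moving
    with velocity dispersion sigma at the given radius."""
    sigma = sigma_km_s * 1e3       # km/s -> m/s
    r = radius_ly * LIGHT_YEAR     # light-years -> metres
    return sigma**2 * r / G / M_SUN

# Stars moving at ~100 km/s within ~15 light-years of the centre:
print(f"{black_hole_mass(100, 15):.2e} solar masses")
```

With these assumed inputs the estimate comes out at roughly ten million solar masses, the same order of magnitude as the figure quoted above.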
This is not as easy as it sounds: NGC 4151's supermassive black hole is engulfing material voraciously. Thus, the matter swirling around its accretion disk glows brightly, overwhelming the fainter light from nearby stars.
The spiral galaxy NGC 4151 | Credit: NASA/ESA
However, Webb's 6.5-meter wide primary mirror will provide vision sharp enough to capture fainter objects in the center of NGC 4151 even though there is an extremely bright disk there.
Astronomers believe that they will be able to examine the central 1,000 light years of the galaxy and resolve stellar motions on a scale of about 15 light years.
How Will They Achieve This Feat?
Astronomers will use the telescope's Near-Infrared Spectrograph (NIRSpec) integral field unit, which will function over a wavelength range of 0.6 to 5 microns. It will analyze the spectrum of an object and estimate its physical properties, like chemical composition, mass, and temperature.
This integral field unit will capture light from every region in a picture and divide it into a rainbow spectrum. To perform this task precisely, the Webb telescope will feature almost 100 mirrors tightly packed into an instrument the size of a laptop. These mirrors can efficiently split a small (square-shaped) portion of the sky into strips and spread the light from those strips out in wavelength.
Thus, one image will yield one thousand spectra. Each spectrum will reveal new insights about the gas and stars in that particular region of the sky, as well as their relative motions.
Rendering of Webb Space Telescope | Credit: NASA
However, data from the Webb telescope alone won't be enough to calculate the motion of each star. It will only provide information about the set of stars very close to NGC 4151's center. The team will use computer models of the gravitational field acting on the stars, which depends on the black hole's mass.
The computer model produces tens of thousands of mock stars that mimic the motions of actual stars in NGC 4151. The team will simulate black holes of different masses to see which best matches the observations.
Then they will compare the outcome with another technique that focuses on the gas (instead of the stars) in the center of the galaxy. Whichever method is applied, the result should be the same as long as they are observing the same black hole.
In Australia, the methodology used for heatwave warnings is different across its states and territories. The Bureau of Meteorology is redesigning its heatwave product suite to provide nationally consistent heatwave forecasts and warnings.
Australia's sequence of unprecedented disasters during the 2019–20 Black Summer was not unexpected. There has been declining rainfall over the southern half of Australia, with Australia's average temperature rising by 1.4°C since 1910 [1]. Record 2- and 3-year rainfall deficits over eastern Australia (Figure 1) created tinder-dry fuels and an environment prone to extreme heatwaves (Figure 2). The subsequent fires and persistent smoke were responsible for 33 [2] and 417 [3] excess deaths, respectively, and an increase in respiratory problems and other health impacts in New South Wales and the Australian Capital Territory [4].
Heatwave mortality takes longer to detect because of strict medical and coronial conventions. However, the death toll may be in the hundreds, noting studies that have demonstrated the disproportionate impact of heatwaves over other climatic hazards [5].
Indirectly, heatwaves played a large part in the size and severity of the Black Summer bushfires. Heatwaves are defined by the combined effect of high minimum and maximum temperatures, with the former playing the greatest role. Higher minimum temperatures reduce the diurnal cooling cycle and set up earlier and more sustained high temperatures, rapidly building the heat stored in the environment. The Bureau of Meteorology combines long- and short-term daily (average of maximum and minimum) temperatures over a 3-day period to determine heatwave severity, as shown in Figure 2 [6].
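The severity metric referred to here is the excess heat factor of Nairn and Fawcett [6]. The Python sketch below is a simplified illustration of that style of calculation, not the Bureau's operational code; the example temperatures and the 95th-percentile threshold are assumed, and the further step of grading severity against local climatology is omitted.

```python
import numpy as np

def excess_heat_factor(daily_mean_temp: np.ndarray, t95: float) -> np.ndarray:
    """Simplified excess-heat-factor (EHF) style calculation.

    daily_mean_temp: daily mean temperatures (average of max and min), degrees C
    t95: long-term 95th percentile of the daily mean temperature for the site
    """
    ehf = np.full(daily_mean_temp.shape, np.nan)
    for i in range(30, len(daily_mean_temp) - 2):
        three_day = daily_mean_temp[i:i + 3].mean()   # short-term (3-day) heat
        prior_30 = daily_mean_temp[i - 30:i].mean()   # recent acclimatisation period
        ehi_sig = three_day - t95                     # how unusual is this heat?
        ehi_accl = three_day - prior_30               # how acclimatised are people?
        ehf[i] = ehi_sig * max(1.0, ehi_accl)         # positive values flag a heatwave
    return ehf

# Toy example: a mild month followed by a 3-day spike in daily mean temperature.
temps = np.concatenate([np.full(35, 22.0), np.full(3, 34.0), np.full(5, 24.0)])
print(np.nanmax(excess_heat_factor(temps, t95=28.0)))   # peak EHF of the spike
```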
Figure 1: Rainfall deciles for the 24 months from January 2018 to December 2019 (left) and 36 months from January 2017 to December 2019 (right), based on all years from 1900.
High minimum temperatures are extremely significant because they make it difficult, if not impossible, for a surface inversion to form, allowing the upper wind structure to remain coupled with the fire overnight. Without the cooling effect and higher relative humidity of a nocturnal surface inversion, fires burn as intensely at night as during the day. Fires expand further and burn more erratically without the normal benefit of reduced overnight fire danger.
The NSW sequence of heatwaves and property impacts is shown in Figure 3. The proportion of NSW that was affected by low-intensity or severe heatwaves can be seen to correlate with property losses in the week ending 23 November. At that time, nearly 80 per cent of NSW experienced a low-intensity heatwave (18 per cent severe) and over 500 homes and structures were destroyed.
The next 2 major destructive bushfire events in January 2020 followed a 6-week heatwave affecting most of NSW, with a major heatwave in early February aligning with further damage.
Antecedent heatwave severity and accumulated heat load is yet to be systematically explored for the relationship with subsequent fire and smoke activity and presents rich grounds for further research.
The Bureau of Meteorology heatwave product is statically displayed and based on a national view of Australia [7]. These forecasts do not support the different needs of stakeholders, their processes, or geographical factors. The Bureau's heatwave project team has carried out extensive interviews with health and emergency services stakeholders from government agencies, as well as not-for-profit groups such as Australian Red Cross, to ensure new products meet their needs. Beta products including town and weather district summaries will be trialled with partners in the 2020–21 summer season. Feedback received will help build an operational product intended for release in 2021–22.
The challenge of quantifying the direct human health impacts of heatwaves has recently been studied through a collaborative research project. A 12-month DIPA [8] funded PEAN [9] project, which aimed to ‘Reduce Illness and Lives Lost from Heatwaves' (RILLH), was completed during 2020. A multi-agency collaboration between the Bureau of Meteorology, Department of Health, Australian Bureau of Statistics, Geoscience Australia, Department of Agriculture, Water and the Environment, Australian Institute of Health and Welfare and the Bushfire and Natural Hazards CRC, the RILLH project used big data to demonstrate the utility of linked social and environmental data from multiple agencies, accessed through the MADIP [10] data asset, for understanding complex, coupled social and environmental problems. Heatwave vulnerability has been calculated at the neighbourhood level and for individual-level factors for mortality and morbidity.
Figure 2: Highest heatwave severity for December 2019 (left) and January 2020 (right).
Figure 3: Chronology of heatwave severity and homes lost in NSW from September 2019 to February 2020. Proportion of NSW affected by all (orange line) and severe (red line) heatwaves (left axis). Homes destroyed (purple bar) sourced from NSW Rural Fire Service Building Impact Assessment.
There is also an opportunity for warning agencies such as the Bureau and partners in health and emergency services to tailor advice to communities, agencies and individuals according to the risks inherent in where they live or their type of health and environmental exposures.
The study determined neighbourhood and individual-level risk factors separately (Table 1). Most of the study’s neighbourhood-level spatial results were validated using the linked individual-level data, demonstrating the value of the neighbourhood-level results.
Table 1: RILLH results show the influence of neighbourhood-level and individual-level factors on the heatwave-mortality relationship in New South Wales.
| Factor group | Factors |
|---|---|
| Household composition and instance of disability | |
| Language and culture | |
| Housing and transportation | |
| Health status and risk factors | |
| Consistent risk factors across levels | Low-equivalised household income, over 65 years and living alone, dwellings with single parents, need assistance, insufficient English language proficiency, no access to a vehicle, diabetes, mental health conditions. |
Figure 4: A) Relative Risk of heatwave-related mortality and B) overall heat health vulnerability index in New South Wales, (2007–17).
Figure 5: A) Relative Risk of heatwave- related mortality (2007–17) and B) overall heat health vulnerability index in the Sydney Greater Capital Area Statistical Area.
As an example, Figure 4 demonstrates spatial variability in mortality risk and heat-health vulnerability for NSW. Relative Risk in Figures 4 and 5 is a measure of increased or decreased impact during heatwaves compared to comparable non-heatwave periods. The heat vulnerability index is a measure of the combined effects of demographic, socioeconomic, health, and natural and built environment factors. The results show there is an opportunity to tailor advice to the needs of different regions.
Similarly, the contrast in vulnerability across Sydney in Figure 5 can help authorities develop policy, mitigation and response strategies to effectively manage exposure, sensitivity and adaptive capacity measures.
Heatwaves impact segments of the population in different ways with impacts related to individual characteristics of people and the types of places they live in (social and built environment). Vulnerability to heatwaves exhibits distinct geographies.
The RILLH project has generated a rich set of results with implications for strategic policy and education programs to position and prepare communities for the dangers of increasingly severe heatwaves.
The RILLH project highlights the value of high-quality multi-agency partnership studies and supports a strategic aim to enhance warnings with local behavioural recommendations to improve the value of future heatwave warnings.
1 Bettio L, Nairn JR, McGibbony SC & Hope P 2019, A heatwave forecast service for Australia, Royal Society of Victoria. doi: 10.1071/RS19006
2 Commonwealth of Australia 2020, Royal Commission into National Natural Disaster Arrangements Report. At: https://naturaldisaster.royalcommission.gov.au/system/files/2020-11/Royal Commission into National Natural Disaster Arrangements - Report %5Baccessible%5D.pdf [17 November 2020].
3 Borchers Arriagada N, Palmer AJ, Bowman DMJS, Morgan GG, Jalaludin BB & Johnson FH 2020, Unprecedented smoke‐related health burden associated with the 2019–20 bushfires in eastern Australia, Research letters, The Medical Journal of Australia, vol. 213, no. 6, pp.282–83. doi: 10.5694/mja2.50545
4 Australian Institute of Health and Welfare 2020, Australian bushfires 2019–20: exploring the short-term health impacts, Summary.
5 Coates L, Haynes K, O’Brien J, McAneney J, Dimer de Olivera F 2014, Exploring 167 years of vulnerability: An examination of extreme heat events in Australia 1844–2010, Environmental Science & Policy. Elsevier, vol. 42, pp.33–44. doi: 10.1016/J.ENVSCI.2014.05.003
6 Nairn JR & Fawcett RJB 2014, The excess heat factor: A metric for heatwave intensity and its use in classifying heatwave severity, International Journal of Environmental Research and Public Health, vol. 12, no. 1. doi: 10.3390/ijerph120100227
7 Bureau of Meteorology 2020, State of The Climate 2020. At: www.bom.gov.au/state-of-the-climate/ [17 November 2020].
8 Data Integration Partnership for Australia is a 3-year (2017-20) investment to maximise the use and value of Australian Government data assets. See www.pmc.gov.au/public-data/data-integration-partnership-australia.
9 Physical Environment Analysis Network, Australian Government agencies working together to analyse government data to generate case studies and insights into complex problems. www.pean.gov.au/
10 Multi-Agency Data Integration Project is a secure data asset combining information on health, education, government payments, income and taxation, employment, and population demographics (including the Census) over time. At: www.abs.gov.au/websitedbs/D3310114.nsf/home/Multi-Agency%20Data%20Integration%20Project%20(MADIP) [15 January 2021].
Name: Date: Period:

Modeling Sea Floor Spreading

Summary: In this activity students will use actual data from historic oceanographic cruises to examine sea floor spreading. They will also model sea floor spreading at a spreading center such as the Mid-Atlantic Ridge.

Learning Objectives: At the completion of this activity you should be able to:
- Plot data on a diagram.
- Draw conclusions from the data.
- Make predictions about future events related to this data.

Materials: Student handout, posterboard/cardstock, paper, scissors.

Background: Using actual data from the Glomar Challenger cruises will allow you to see a real-world connection between classroom science and “real” science. Also, creating a model for sea floor spreading will allow you to see a hands-on example of this phenomenon.

Procedure, Part 1:
1. To create a working model of sea floor spreading, follow the model seen below (Figure 1). The base is best if made from poster board or card stock. The width and length of the base are not critical, as long as they are at least 12 cm wide and 30 cm long. Cut 3 slits, each slightly more than 8 cm wide. One slit needs to be in the center, and the other two at either end of the base, at least 4 cm in from the ends. Label the slits A and B at either end.
2. Cut out the plate strips and place them back to back (marked sides together with number 1 at the top) and tape the end as indicated (at the end nearest number 7).
3. Shade alternating strips to represent the reversals of the earth's magnetic field. Be sure that the shading on either side of the strip matches the alternate side.
4. Put the two strips up through the bottom slit and then off to the slits at either end of the base (Figure 2).
5. Thread the two strips through the center slit of the base, keeping the taped edge at the bottom. Pull the “North American Plate” strip down through slit “A” and the “Eurasian Plate” strip through slit “B”.
6. Push the strips up from below until you can see numbers 3, 4, and 5 on top of the base.

Templates for plate strips

Analysis and Conclusion: Use the data below and write the age of the sediments found at each site on the map below. (Write the age on the line by each site.) Then answer the questions on the next page. (Please write in complete sentences!)
1. Where are the youngest sediments found, compared to the Mid-Atlantic Ridge?
2. Where are the oldest sediments found?
3. Does the data support the theory of sea floor spreading? How?
Use your spreading model to help you answer the following questions:
4. What are you modeling by pushing the strips up through the base?
5. What is being modeled at slits A and B as you pull the strips down through the base?
6. This model shows what happens at sea floor spreading centers, such as the Mid-Atlantic Ridge. List at least 2 good points of this model.
7. What are at least 2 reasons why this model is not a good model?
8. Seafloor spreading is continuing today along mid-ocean ridges such as the Mid-Atlantic Ridge. Predict what effect this will have on the size of the ocean basin.

Calculate the rate of movement in cm per year for the sediment samples in the data table listed within the Analysis and Conclusion section. Use the formula rate = distance/time. Convert kilometers to centimeters. Example: using the data, calculate the speed of the movement of the plates, in centimeters per year. (Show your work!)
Rate = distance / time = (1686 − 221) km / (76 − 11) million years = 1465 km / 65 million years

The North American Plate and the Eurasian Plate are moving apart at approximately _____________ cm per year.

Enrichment: Studying Seafloor Spreading on Land
Hopefully you are beginning to see how seafloor spreading changes the ocean floor. You should know that magma rises at the mid-ocean ridge and flows away from the ridge, forming new oceanic crust. In general, this activity is hidden beneath the ocean's water. But there is a place where seafloor spreading can be seen on land.
1. What is the name of the landmass through which the mid-ocean ridge in the Atlantic Ocean passes? Where is it located?
2. How do the land structures of Iceland help confirm seafloor spreading?
3. Why do you think geologists might find Iceland a useful place to conduct research on seafloor spreading?
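For reference, converting kilometres to centimetres and dividing by the elapsed time in the example gives:

\[
\text{rate} = \frac{1465\ \text{km}}{65\ \text{million years}}
= \frac{1465 \times 10^{5}\ \text{cm}}{65 \times 10^{6}\ \text{yr}}
\approx 2.3\ \text{cm per year}
\]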
By the end of this section, you will be able to:
- Analyze a complex circuit using Kirchhoff’s rules, applying the conventions for determining the correct signs of various terms.
The information presented in this section supports the following AP® learning objectives and science practices:
- 5.B.9.1 The student is able to construct or interpret a graph of the energy changes within an electrical circuit with only a single battery and resistors in series and/or in, at most, one parallel branch as an application of the conservation of energy (Kirchhoff’s loop rule). (S.P. 1.1, 1.4)
- 5.B.9.2 The student is able to apply conservation of energy concepts to the design of an experiment that will demonstrate the validity of Kirchhoff’s loop rule in a circuit with only a battery and resistors either in series or in, at most, one pair of parallel branches. (S.P. 4.2, 6.4, 7.2)
- 5.B.9.3 The student is able to apply conservation of energy (Kirchhoff’s loop rule) in calculations involving the total electric potential difference for complete circuit loops with only a single battery and resistors in series and/or in, at most, one parallel branch. (S.P. 2.2, 6.4, 7.2)
- 5.B.9.4 The student is able to analyze experimental data including an analysis of experimental uncertainty that will demonstrate the validity of Kirchhoff’s loop rule. (S.P. 5.1)
- 5.B.9.5 The student is able to use conservation of energy principles (Kirchhoff’s loop rule) to describe and make predictions regarding electrical potential difference, charge, and current in steady-state circuits composed of various combinations of resistors and capacitors. (S.P. 6.4)
- 5.C.3.1 The student is able to apply conservation of electric charge (Kirchhoff’s junction rule) to the comparison of electric current in various segments of an electrical circuit with a single battery and resistors in series and in, at most, one parallel branch and predict how those values would change if configurations of the circuit are changed. (S.P. 6.4, 7.2)
- 5.C.3.2 The student is able to design an investigation of an electrical circuit with one or more resistors in which evidence of conservation of electric charge can be collected and analyzed. (S.P. 4.1, 4.2, 5.1)
- 5.C.3.3 The student is able to use a description or schematic diagram of an electrical circuit to calculate unknown values of current in various segments or branches of the circuit. (S.P. 1.4, 2.2)
- 5.C.3.4 The student is able to predict or explain current values in series and parallel arrangements of resistors and other branching circuits using Kirchhoff’s junction rule and relate the rule to the law of charge conservation. (S.P. 6.4, 7.2)
- 5.C.3.5 The student is able to determine missing values and direction of electric current in branches of a circuit with resistors and NO capacitors from values and directions of current in other branches of the circuit through appropriate selection of nodes and application of the junction rule. (S.P. 1.4, 2.2)
Many complex circuits, such as the one in Figure 21.23, cannot be analyzed with the series-parallel techniques developed in Resistors in Series and Parallel and Electromotive Force: Terminal Voltage. There are, however, two circuit analysis rules that can be used to analyze any circuit, simple or complex. These rules are special cases of the laws of conservation of charge and conservation of energy. The rules are known as Kirchhoff’s rules, after their inventor Gustav Kirchhoff (1824–1887).
- Kirchhoff’s first rule—the junction rule. The sum of all currents entering a junction must equal the sum of all currents leaving the junction.
- Kirchhoff’s second rule—the loop rule. The algebraic sum of changes in potential around any closed circuit path (loop) must be zero.
Explanations of the two rules will now be given, followed by problem-solving hints for applying Kirchhoff’s rules, and a worked example that uses them.
Kirchhoff’s First Rule
Kirchhoff’s first rule (the junction rule) is an application of the conservation of charge to a junction; it is illustrated in Figure 21.24. Current is the flow of charge, and charge is conserved; thus, whatever charge flows into the junction must flow out. Kirchhoff’s first rule requires that I1 = I2 + I3 (see figure). Equations like this can and will be used to analyze circuits and to solve circuit problems.
Making Connections: Conservation Laws
Kirchhoff’s rules for circuit analysis are applications of conservation laws to circuits. The first rule is the application of conservation of charge, while the second rule is the application of conservation of energy. Conservation laws, even used in a specific application, such as circuit analysis, are so basic as to form the foundation of that application.
Kirchhoff’s Second Rule
Kirchhoff’s second rule (the loop rule) is an application of conservation of energy. The loop rule is stated in terms of potential, V, rather than potential energy, but the two are related since PEelec = qV. Recall that emf is the potential difference of a source when no current is flowing. In a closed loop, whatever energy is supplied by emf must be transferred into other forms by devices in the loop, since there are no other ways in which energy can be transferred into or out of the circuit. Figure 21.25 illustrates the changes in potential in a simple series circuit loop.
Kirchhoff’s second rule requires emf − Ir − IR1 − IR2 = 0. Rearranged, this is emf = Ir + IR1 + IR2, which means the emf equals the sum of the IR (voltage) drops in the loop.
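As a quick numeric illustration of this balance (the component values here are assumed for illustration, not taken from Figure 21.25): a 12 V emf with internal resistance 0.5 Ω drives I = 2.0 A through series resistors of 2.5 Ω and 3.0 Ω, so

\[
\text{emf} = Ir + IR_1 + IR_2 = (2.0\,\text{A})(0.5\,\Omega) + (2.0\,\text{A})(2.5\,\Omega) + (2.0\,\text{A})(3.0\,\Omega) = 1.0\,\text{V} + 5.0\,\text{V} + 6.0\,\text{V} = 12\,\text{V}.
\]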
Applying Kirchhoff’s Rules
By applying Kirchhoff’s rules, we generate equations that allow us to find the unknowns in circuits. The unknowns may be currents, emfs, or resistances. Each time a rule is applied, an equation is produced. If there are as many independent equations as unknowns, then the problem can be solved. There are two decisions you must make when applying Kirchhoff’s rules. These decisions determine the signs of various quantities in the equations you obtain from applying the rules.
- When applying Kirchhoff’s first rule, the junction rule, you must label the current in each branch and decide in what direction it is going. For example, in Figure 21.23, Figure 21.24, and Figure 21.25, currents are labeled I1, I2, I3, and I, and arrows indicate their directions. There is no risk here, for if you choose the wrong direction, the current will be of the correct magnitude but negative.
- When applying Kirchhoff’s second rule, the loop rule, you must identify a closed loop and decide in which direction to go around it, clockwise or counterclockwise. For example, in Figure 21.25 the loop was traversed in the same direction as the current (clockwise). Again, there is no risk; going around the circuit in the opposite direction reverses the sign of every term in the equation, which is like multiplying both sides of the equation by −1.
Figure 21.26 and the following points will help you get the plus or minus signs right when applying the loop rule. Note that the resistors and emfs are traversed by going from a to b. In many circuits, it will be necessary to construct more than one loop. In traversing each loop, one needs to be consistent for the sign of the change in potential. (See Example 21.5.)
- When a resistor is traversed in the same direction as the current, the change in potential is −IR. (See Figure 21.26.)
- When a resistor is traversed in the direction opposite to the current, the change in potential is +IR. (See Figure 21.26.)
- When an emf is traversed from − to + (the same direction it moves positive charge), the change in potential is +emf. (See Figure 21.26.)
- When an emf is traversed from + to − (opposite to the direction it moves positive charge), the change in potential is −emf. (See Figure 21.26.)
Calculating Current: Using Kirchhoff’s Rules
Find the currents flowing in the circuit in Figure 21.27.
This circuit is sufficiently complex that the currents cannot be found using Ohm’s law and the series-parallel techniques; it is necessary to use Kirchhoff’s rules. Currents have been labeled I1, I2, and I3 in the figure, and assumptions have been made about their directions. Locations on the diagram have been labeled with letters a through h. In the solution we will apply the junction and loop rules, seeking three independent equations to allow us to solve for the three unknown currents.
We begin by applying Kirchhoff’s first or junction rule at point a. This gives
I1 = I2 + I3, since I1 flows into the junction, while I2 and I3 flow out. Applying the junction rule at e produces exactly the same equation, so that no new information is obtained. This is a single equation with three unknowns; three independent equations are needed, and so the loop rule must be applied.
Now we consider the loop abcdea. Going from a to b, we traverse R2 in the same (assumed) direction of the current I2, and so the change in potential is −I2R2. Then going from b to c, we go from − to +, so that the change in potential is +emf1. Traversing the internal resistance r1 from c to d gives −I2r1. Completing the loop by going from d to a again traverses a resistor in the same direction as its current, giving a change in potential of −I1R1.
The loop rule states that the changes in potential sum to zero. Thus,
−I2R2 + emf1 − I2r1 − I1R1 = 0.
Substituting values from the circuit diagram for the resistances and emf, and canceling the ampere unit gives
Now applying the loop rule to aefgha (we could have chosen abcdefgha as well) similarly gives
Note that the signs are reversed compared with the other loop, because elements are traversed in the opposite direction. With values entered, this becomes
These three equations are sufficient to solve for the three unknown currents. First, solve the second equation for I2:
Now solve the third equation for I3:
Substituting these two new equations into the first one allows us to find a value for I1:
Combining terms gives
Substituting this value for I1 back into the fourth equation gives
The minus sign means I2 flows in the direction opposite to that assumed in Figure 21.27.
Finally, substituting the value for I1 into the fifth equation gives
Just as a check, we note that indeed I1 = I2 + I3. The results could also have been checked by entering all of the values into the equation for the abcdefgha loop.
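Once the three equations are written down, they can also be solved numerically. The sketch below is a minimal Python illustration of that step; the resistance and emf values are assumptions chosen to match the general form of the circuit (two loops sharing R1, each with its own emf and internal resistance), since the figure itself is not reproduced here.

```python
import numpy as np

# Kirchhoff's rules reduce the circuit to a linear system in the unknown
# currents I1, I2, I3:
#   junction rule at a:   I1 - I2 - I3 = 0
#   loop abcdea:         -I2*R2 + emf1 - I2*r1 - I1*R1 = 0
#   loop aefgha:          I1*R1 + I3*R3 + I3*r2 - emf2 = 0
# Component values are ASSUMED for illustration (the real ones come from
# the circuit diagram).
R1, R2, R3 = 6.0, 2.5, 1.5   # resistances in ohms (assumed)
r1, r2 = 0.5, 0.5            # internal resistances in ohms (assumed)
emf1, emf2 = 18.0, 45.0      # emfs in volts (assumed)

A = np.array([
    [1.0, -1.0, -1.0],          # I1 - I2 - I3 = 0
    [-R1, -(R2 + r1), 0.0],     # -I1*R1 - I2*(R2 + r1) = -emf1
    [R1, 0.0, R3 + r2],         # I1*R1 + I3*(R3 + r2) = emf2
])
b = np.array([0.0, -emf1, emf2])

I1, I2, I3 = np.linalg.solve(A, b)
print(f"I1 = {I1:.2f} A, I2 = {I2:.2f} A, I3 = {I3:.2f} A")
# A negative value simply means that current flows opposite to the
# direction assumed when the circuit was labeled.
```

The junction rule can then be confirmed by checking that the printed values satisfy I1 = I2 + I3.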
Problem-Solving Strategies for Kirchhoff’s Rules
- Make certain there is a clear circuit diagram on which you can label all known and unknown resistances, emfs, and currents. If a current is unknown, you must assign it a direction. This is necessary for determining the signs of potential changes. If you assign the direction incorrectly, the current will be found to have a negative value—no harm done.
- Apply the junction rule to any junction in the circuit. Each time the junction rule is applied, you should get an equation with a current that does not appear in a previous application—if not, then the equation is redundant.
- Apply the loop rule to as many loops as needed to solve for the unknowns in the problem. (There must be as many independent equations as unknowns.) To apply the loop rule, you must choose a direction to go around the loop. Then carefully and consistently determine the signs of the potential changes for each element using the four bulleted points discussed above in conjunction with Figure 21.26.
- Solve the simultaneous equations for the unknowns. This may involve many algebraic steps, requiring careful checking and rechecking.
- Check to see whether the answers are reasonable and consistent. The numbers should be of the correct order of magnitude, neither exceedingly large nor vanishingly small. The signs should be reasonable—for example, no resistance should be negative. Check to see that the values obtained satisfy the various equations obtained from applying the rules. The currents should satisfy the junction rule, for example.
The material in this section is correct in theory. We should be able to verify it by making measurements of current and voltage. In fact, some of the devices used to make such measurements are straightforward applications of the principles covered so far and are explored in the next modules. As we shall see, a very basic, even profound, fact results—making a measurement alters the quantity being measured.
Check Your Understanding
Can Kirchhoff’s rules be applied to simple series and parallel circuits or are they restricted for use in more complicated circuits that are not combinations of series and parallel?
Kirchhoff's rules can be applied to any circuit since they are applications to circuits of two conservation laws. Conservation laws are the most broadly applicable principles in physics. It is usually mathematically simpler to use the rules for series and parallel in simpler circuits so we emphasize Kirchhoff’s rules for use in more complicated situations. But the rules for series and parallel can be derived from Kirchhoff’s rules. Moreover, Kirchhoff’s rules can be expanded to devices other than resistors and emfs, such as capacitors, and are one of the basic analysis devices in circuit analysis.
Making Connections: Parallel Resistors
A simple circuit shown below – with two parallel resistors and a voltage source – is implemented in a laboratory experiment with ɛ = 6.00 ± 0.02 V and R1 = 4.8 ± 0.1 Ω and R2 = 9.6 ± 0.1 Ω. The values include an allowance for experimental uncertainties, as they cannot be measured with perfect certainty. For example, if you measure the value of a resistor a few times, you may get slightly different results. Hence values are expressed with some level of uncertainty.
In the laboratory experiment the currents measured in the two resistors are I1 = 1.27 A and I2 = 0.62 A respectively. Let us examine these values using Kirchhoff’s laws.
For the two loops,
E - I1R1 = 0 or I1 = E/R1
E - I2R2 = 0 or I2 = E/R2
Converting the given uncertainties for voltage and resistances into percentages, we get
E = 6.00 V ± 0.33%
R1 = 4.8 Ω ± 2.08%
R2 = 9.6 Ω ± 1.04%
We now find the currents for the two loops. Since the voltage is divided by the resistance to find the current, the percentage uncertainties in voltage and resistance are added to find the percentage uncertainty in the current value.
I1 = (6.00/4.8) ± (0.33%+2.08%)
= 1.25 ± 2.4%
= 1.25 ± 0.03 A
I2 = (6.00/9.6) ± (0.33%+1.04%)
= 0.63 ± 1.4%
= 0.63 ± 0.01 A
Finally, you can check that the two measured values in this case are within the uncertainty ranges found for the currents. However, there can also be additional experimental uncertainty in the measurements of the currents.
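The same propagation can be scripted. This is a minimal Python sketch of the calculation above (relative uncertainties added for the division); the numbers mirror the worked example.

```python
def current_with_uncertainty(emf, emf_unc, resistance, res_unc):
    """Return (I, absolute uncertainty in I) for I = emf / resistance,
    adding the relative uncertainties of the inputs for the division."""
    current = emf / resistance
    relative_unc = emf_unc / emf + res_unc / resistance   # fractional uncertainties add
    return current, current * relative_unc

# emf = 6.00 +/- 0.02 V, R1 = 4.8 +/- 0.1 ohm, R2 = 9.6 +/- 0.1 ohm (from the example)
for label, resistance, res_unc in [("R1", 4.8, 0.1), ("R2", 9.6, 0.1)]:
    i, di = current_with_uncertainty(6.00, 0.02, resistance, res_unc)
    print(f"I through {label}: {i:.3f} +/- {di:.3f} A")
```

The measured currents (1.27 A and 0.62 A) can then be compared against the printed ranges, as the text does.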
Behavioral ecology, also spelled behavioural ecology, is the study of the evolutionary basis for animal behavior due to ecological pressures. Behavioral ecology emerged from ethology after Niko Tinbergen outlined four questions to address when studying animal behaviors: the proximate causes, ontogeny, survival value, and phylogeny of behavior.
If an organism has a trait that provides a selective advantage (i.e., has adaptive significance) in its environment, then natural selection favors it. Adaptive significance refers to the expression of a trait that affects fitness, measured by an individual's reproductive success. Adaptive traits are those that produce more copies of the individual's genes in future generations. Maladaptive traits are those that leave fewer. For example, if a bird that can call more loudly attracts more mates, then a loud call is an adaptive trait for that species because a louder bird mates more frequently than less loud birds—thus sending more loud-calling genes into future generations.
Individuals are always in competition with others for limited resources, including food, territories, and mates. Conflict occurs between predators and prey, between rivals for mates, between siblings, mates, and even between parents and offspring.
Competing for resources
The value of a social behavior depends in part on the social behavior of an animal's neighbors. For example, the more likely a rival male is to back down from a threat, the more value a male gets out of making the threat. The more likely, however, that a rival will attack if threatened, the less useful it is to threaten other males. When a population exhibits a number of interacting social behaviors such as this, it can evolve a stable pattern of behaviors known as an evolutionarily stable strategy (or ESS). This term, derived from economic game theory, became prominent after John Maynard Smith (1982) recognized the possible application of the concept of a Nash equilibrium to model the evolution of behavioral strategies.
Evolutionarily stable strategy
In short, evolutionary game theory asserts that only a strategy that, when common in the population, cannot be "invaded" by any alternative (mutant) strategy is an ESS, and is thus maintained in the population. In other words, at equilibrium every player should play the best strategic response to each other. When the game is two-player and symmetric, each player should play the strategy that provides the best response for it.
Therefore, the ESS is considered the evolutionary end point of such interactions. As the fitness conveyed by a strategy is influenced by what other individuals are doing (the relative frequency of each strategy in the population), behavior can be governed not only by optimality but also by the frequencies of strategies adopted by others, and is therefore frequency dependent (frequency dependence).
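A classic illustration of an ESS (not described in the passage above, but standard in the game-theory literature cited here) is the Hawk-Dove game, which has a mixed ESS whenever the cost of fighting exceeds the value of the resource. A minimal Python sketch with assumed payoff values:

```python
# Hawk-Dove game: V is the value of the contested resource, C the cost of an
# escalated fight.  The values below are assumed for illustration.
V, C = 2.0, 6.0   # with C > V, pure Hawk is not an ESS; a mixed ESS exists

def payoff_hawk(p_hawk: float) -> float:
    """Expected payoff to a Hawk when a fraction p_hawk of opponents are Hawks."""
    return p_hawk * (V - C) / 2 + (1 - p_hawk) * V

def payoff_dove(p_hawk: float) -> float:
    """Expected payoff to a Dove against the same population."""
    return (1 - p_hawk) * V / 2

# At the mixed ESS the two payoffs are equal; analytically p* = V / C.
p_star = V / C
print(f"ESS fraction of Hawks: {p_star:.2f}")
print(f"Hawk payoff: {payoff_hawk(p_star):.2f}, Dove payoff: {payoff_dove(p_star):.2f}")
```

At the printed mixture neither strategy does better than the other, so no alternative strategy can invade, which is exactly the frequency-dependent stability described above.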
Behavioral evolution is therefore influenced by both the physical environment and interactions between other individuals.
An example of how changes in geography can make a strategy susceptible to alternative strategies is the parasitization of the African honey bee, A. m. scutellata.
The term economic defendability was first introduced by Jerram Brown in 1964. Economic defendability states that defense of a resource has costs, such as energy expenditure or risk of injury, as well as benefits of priority access to the resource. Territorial behavior arises when the benefits are greater than the costs.
Studies of the golden-winged sunbird have validated the concept of economic defendability. Comparing the energetic costs a sunbird expends in a day to the extra nectar gained by defending a territory, researchers showed that birds only became territorial when they were making a net energetic profit. When resources are at low density, the gains from excluding others may not be sufficient to pay for the cost of territorial defense. In contrast, when resource availability is high, there may be so many intruders that the defender would have no time to make use of the resources made available by defense.
Sometimes the economics of resource competition favors shared defense. An example is the feeding territories of the white wagtail. The white wagtails feed on insects washed up by the river onto the bank, which acts as a renewing food supply. If any intruders harvested their territory then the prey would quickly become depleted, but sometimes territory owners tolerate a second bird, known as a satellite. The two sharers would then move out of phase with one another, resulting in decreased feeding rate but also increased defense, illustrating advantages of group living.
Ideal free distribution
One of the major models used to predict the distribution of competing individuals amongst resource patches is the ideal free distribution model. Within this model, resource patches can be of variable quality, and there is no limit to the number of individuals that can occupy and extract resources from a particular patch. Competition within a particular patch means that the benefit each individual receives from exploiting a patch decreases logarithmically with increasing number of competitors sharing that resource patch. The model predicts that individuals will initially flock to higher-quality patches until the costs of crowding bring the benefits of exploiting them in line with the benefits of being the only individual on the lesser-quality resource patch. After this point has been reached, individuals will alternate between exploiting the higher-quality patches and the lower-quality patches in such a way that the average benefit for all individuals in both patches is the same. This model is ideal in that individuals have complete information about the quality of a resource patch and the number of individuals currently exploiting it, and free in that individuals are freely able to choose which resource patch to exploit.
An experiment by Manfred Milinski in 1979 demonstrated that feeding behavior in three-spined sticklebacks follows an ideal free distribution. Six fish were placed in a tank, and food items were dropped into opposite ends of the tank at different rates. The rate of food deposition at one end was set at twice that of the other end, and the fish distributed themselves with four individuals at the faster-depositing end and two individuals at the slower-depositing end. In this way, the average feeding rate was the same for all of the fish in the tank.
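A rough numeric sketch of this logic in Python (the simple resource-splitting rule below is an assumption for illustration, simpler than the logarithmic benefit curve described above): each forager joins whichever patch currently offers a newcomer the higher per-capita intake.

```python
# Ideal free distribution sketch: foragers settle one at a time, each joining
# the patch that currently offers the higher per-capita intake.
def ideal_free_distribution(patch_rates, n_foragers):
    occupants = [0] * len(patch_rates)
    for _ in range(n_foragers):
        # per-capita intake a newcomer would receive in each patch
        gains = [rate / (n + 1) for rate, n in zip(patch_rates, occupants)]
        occupants[gains.index(max(gains))] += 1
    return occupants

# Two patches, one supplied with food twice as fast, six fish (as in the
# stickleback experiment): the predicted split is 4 to 2.
print(ideal_free_distribution([2.0, 1.0], 6))
```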
Mating strategies and tactics
As with any competition for resources, species across the animal kingdom may also engage in competition for mating. If one considers mates or potential mates as a resource, these sexual partners can be randomly distributed amongst resource pools within a given environment. Following the ideal free distribution model, suitors distribute themselves amongst the potential mates in an effort to maximize their chances or the number of potential matings. For the competitors, in most cases the males of a species, there are variations in both the strategies and tactics used to obtain matings. Strategies generally refer to genetically determined behaviors that can be described as conditional. Tactics refer to the subset of behaviors within a given genetic strategy. Thus it is not difficult for a great many variations in mating strategies to exist in a given environment or species.
In an experiment conducted by Anthony Arak, where playback of synthetic calls from male natterjack toads was used to manipulate the behavior of the males in a chorus, the difference between strategies and tactics is clear. While small and immature, male natterjack toads adopted a satellite tactic to parasitize larger males. Though large males on average still retained greater reproductive success, smaller males were able to intercept matings. When the large males of the chorus were removed, smaller males adopted a calling behavior, no longer having to compete against the loud calls of larger males. When the smaller males grew larger and their calls became more competitive, they started calling and competing directly for mates.
Mate choice by resources
In many sexually reproducing species, such as mammals, birds, and amphibians, females are able to bear offspring for a certain time period, during which the males are free to mate with other available females, and therefore can father many more offspring to pass on their genes. The fundamental difference between male and female reproduction mechanisms determines the different strategies each sex employs to maximize their reproductive success. For males, their reproductive success is limited by access to females, while females are limited by their access to resources. In this sense, females can be much choosier than males because they have to bet on the resources provided by the males to ensure reproductive success.
Resources usually include nest sites, food and protection. In some cases, the males provide all of them (e.g. sedge warblers). The females dwell in their chosen males’ territories for access to these resources. The males gain ownership to the territories through male-male competition that often involves physical aggression. Only the largest and strongest males manage to defend the best quality nest sites. Females choose males by inspecting the quality of different territories or by looking at some male traits that can indicate the quality of resources. One example of this is with the grayling butterfly (Hipparchia semele), where males engage in complex flight patterns to decide who defends a particular territory. The female grayling butterfly chooses a male based on the most optimal location for oviposition. Sometimes, males leave after mating. The only resource that a male provides is a nuptial gift, such as protection or food, as seen in Drosophila subobscura. The female can evaluate the quality of the protection or food provided by the male so as to decide whether to mate or not or how long she is willing to copulate.
Mate choice by genes
When males' only contribution to offspring is their sperm, females are particularly choosy. With this high level of female choice, sexual ornaments are seen in males, where the ornaments reflect the male's social status. Two hypotheses have been proposed to conceptualize the genetic benefits from female mate choice.
First, the good genes hypothesis suggests that female choice is for higher genetic quality and that this preference is favored because it increases fitness of the offspring. This includes Zahavi's handicap hypothesis and Hamilton and Zuk's host and parasite arms race. Zahavi's handicap hypothesis was proposed within the context of looking at elaborate male sexual displays. He suggested that females favor ornamented traits because they are handicaps and are indicators of the male's genetic quality. Since these ornamented traits are hazards, the male's survival must be indicative of his high genetic quality in other areas. In this way, the degree that a male expresses his sexual display indicates to the female his genetic quality. Zuk and Hamilton proposed a hypothesis after observing disease as a powerful selective pressure on a rabbit population. They suggested that sexual displays were indicators of resistance of disease on a genetic level.
Such 'choosiness' from the female individuals can be seen in wasp species too, especially among Polistes dominula wasps. The females tend to prefer males with smaller, more elliptically shaped spots than those with larger and more irregularly shaped spots. Those males would have reproductive superiority over males with irregular spots.
Fisher's hypothesis of runaway sexual selection suggests that female preference is genetically correlated with male traits and that the preference co-evolves with the evolution of that trait; thus the preference is under indirect selection. Fisher suggests that female preference began because the trait indicated the male's quality. The female preference spread, so that the females' offspring now benefited not only from the higher quality conferred by the specific trait but also from greater attractiveness to mates. Eventually, the trait only represents attractiveness to mates, and no longer represents increased survival.
An example of mate choice by genes is seen in the cichlid fish Tropheus moorii where males provide no parental care. An experiment found that a female T. moorii is more likely to choose a mate with the same color morph as her own. In another experiment, females have been shown to share preferences for the same males when given two to choose from, meaning some males get to reproduce more often than others.
The sensory bias hypothesis states that the preference for a trait evolves in a non-mating context, and is then exploited by one sex to obtain more mating opportunities. The competitive sex evolves traits that exploit a pre-existing bias that the choosy sex already possesses. This mechanism is thought to explain remarkable trait differences in closely related species because it produces a divergence in signaling systems, which leads to reproductive isolation.
Sensory bias has been demonstrated in guppies, freshwater fish from Trinidad and Tobago. In this mating system, female guppies prefer to mate with males with more orange body coloration. However, outside of a mating context, both sexes prefer animate orange objects, which suggests that preference originally evolved in another context, like foraging. Orange fruits are a rare treat that fall into streams where the guppies live. The ability to find these fruits quickly is an adaptive quality that has evolved outside of a mating context. Sometime after the affinity for orange objects arose, male guppies exploited this preference by incorporating large orange spots to attract females.
Another example of sensory exploitation is in the water mite Neumania papillator, an ambush predator that hunts copepods (small crustaceans) passing by in the water column. When hunting, N. papillator adopts a characteristic stance termed the 'net stance': their first four legs are held out into the water column, with their four hind legs resting on aquatic vegetation; this allows them to detect vibrational stimuli produced by swimming prey and use this to orient towards and clutch at prey. During courtship, males actively search for females; if a male finds a female, he slowly circles around the female whilst trembling his first and second legs near her. Male leg trembling causes females (who were in the 'net stance') to orient towards and often clutch the male. This did not damage the male or deter further courtship; the male then deposited spermatophores and began to vigorously fan and jerk his fourth pair of legs over the spermatophore, generating a current of water that passed over the spermatophores and towards the female. Sperm packet uptake by the female would sometimes follow. Heather Proctor hypothesised that the vibrations made by trembling male legs mimic the vibrations that females detect from swimming prey; this would trigger the female prey-detection responses, causing females to orient and then clutch at males, mediating courtship. If this was true and males were exploiting female predation responses, then hungry females should be more receptive to male trembling; Proctor found that unfed captive females did orient and clutch at males significantly more than fed captive females did, consistent with the sensory exploitation hypothesis.
Other examples for the sensory bias mechanism include traits in auklets, wolf spiders, and manakins. Further experimental work is required to reach a fuller understanding of the prevalence and mechanisms of sensory bias.
Sexual conflict, in some form or another, may very well be inherent in the ways most animals reproduce. Females invest more in offspring prior to mating, due to the differences in gametes in species that exhibit anisogamy, and often invest more in offspring after mating. This unequal investment leads, on one hand, to intense competition between males for mates and, on the other hand, to females choosing among males for better access to resources and good genes. Because of differences in mating goals, males and females may have very different preferred outcomes to mating.
Sexual conflict occurs whenever the preferred outcome of mating is different for the male and female. This difference, in theory, should lead to each sex evolving adaptations that bias the outcome of reproduction towards its own interests. This sexual competition leads to sexually antagonistic coevolution between males and females, resulting in what has been described as an evolutionary arms race between males and females.
Conflict over mating
Males’ reproductive successes are often limited by access to mates, whereas females’ reproductive successes are more often limited by access to resources. Thus, for a given sexual encounter, it benefits the male to mate, but benefits the female to be choosy and resist. For example, male small tortoiseshell butterflies compete to gain the best territory to mate. Another example of this conflict can be found in the Eastern carpenter bee, Xylocopa virginica. Males of this species are limited in reproduction primarily by access to mates, so they claim a territory and wait for a female to pass through. Big males are, therefore, more successful in mating because they claim territories near the female nesting sites that are more sought after. Smaller males, on the other hand, monopolize less competitive sites in foraging areas so that they may mate with reduced conflict. Another example of this is Sepsis cynipsea, where males of the species mount females to guard them from other males and remain on the female, attempting to copulate, until the female either shakes them off or consents to mating. Similarly, the neriid fly Telostylinus angusticollis demonstrates mate guarding by using their long limbs to hold onto the female as well as push other males away during copulation. Extreme manifestations of this conflict are seen throughout nature. For example, male Panorpa scorpionflies attempt to force copulation. Male scorpionflies usually acquire mates by presenting them with edible nuptial gifts in the form of salivary secretions or dead insects. However, some males attempt to force copulation by grabbing females with a specialized abdominal organ without offering a gift. Forced copulation is costly to the female as she does not receive the food from the male and has to search for food herself (costing time and energy), while it is beneficial for the male as he does not need to find a nuptial gift.
In other cases, however, it pays for the female to gain more matings and her social mate to prevent these so as to guard paternity. For example, in many socially monogamous birds, males follow females closely during their fertile periods and attempt to chase away any other males to prevent extra-pair matings. The female may attempt to sneak off to achieve these extra matings. In species where males are incapable of constant guarding, the social male may frequently copulate with the female so as to swamp rival males’ sperm.
Sexual conflict after mating has also been shown to occur in both males and females. Males employ a diverse array of tactics to increase their success in sperm competition. These can include removing other males' sperm from females, displacing other males' sperm by flushing out prior inseminations with large amounts of their own sperm, creating copulatory plugs in females' reproductive tracts to prevent future matings with other males, spraying females with anti-aphrodisiacs to discourage other males from mating with the female, and producing sterile parasperm to protect fertile eusperm in the female's reproductive tract. For example, the male spruce bud moth (Zeiraphera canadensis) secretes an accessory gland protein during mating that makes the female unattractive to other males and thus prevents her from copulating again. The Rocky Mountain parnassian also exhibits this type of sexual conflict when the male butterflies deposit a waxy genital plug onto the tip of the female's abdomen that physically prevents the female from mating again. Males can also prevent future mating by transferring an anti-aphrodisiac to the female during mating. This behavior is seen in butterfly species such as Heliconius melpomene, where males transfer a compound that causes the female to smell like a male butterfly and thus deters future potential mates. Furthermore, males may control the strategic allocation of sperm, producing more sperm when females are more promiscuous. All these methods are meant to ensure that females are more likely to produce offspring belonging to the male who uses the method.
Females also control the outcomes of matings, and there exists the possibility that females choose sperm (cryptic female choice). A dramatic example of this is the feral fowl Gallus gallus. In this species, females prefer to copulate with dominant males, but subordinate males can force matings. In these cases, the female is able to eject the subordinate male's sperm using cloacal contractions.
Parental care and family conflicts
Parental care is the investment a parent puts into their offspring—which includes protecting and feeding the young, preparing burrows or nests, and providing eggs with yolk. There is great variation in parental care in the animal kingdom. In some species, the parents may not care for their offspring at all, while in others the parents exhibit single-parental or even bi-parental care. As with other topics in behavioral ecology, interactions within a family involve conflicts. These conflicts can be broken down into three general types: sexual (male-female) conflict, parent-offspring conflict, and sibling conflict.
Types of parental care
There are many different patterns of parental care in the animal kingdom. The patterns can be explained by physiological constraints or ecological conditions, such as mating opportunities. In invertebrates, there is no parental care in most species because it is more favorable for parents to produce a large number of eggs whose fate is left to chance than to protect a few individual young. In other cases, parental care is indirect, manifested via actions taken before the offspring is produced but nonetheless essential for its survival; for example, female Lasioglossum figueresi sweat bees excavate a nest, construct brood cells, and stock the cells with pollen and nectar before they lay their eggs, so when the larvae hatch they are sheltered and fed, but the females die without ever interacting with their brood. In birds, biparental care is the most common, because reproductive success directly depends on the parents' ability to feed their chicks. Two parents can feed twice as many young, so it is more favorable for birds to have both parents delivering food. In mammals, female-only care is the most common. This is most likely because fertilization is internal and females carry the young through a prolonged gestation, which gives males the opportunity to desert. Females also feed the young through lactation after birth, so males are not required for feeding. Male parental care is only observed in species where males contribute to feeding or carrying of the young, such as in marmosets. Among fish, 79% of bony fish species show no parental care. When parental care does occur in fish, it is usually limited to selecting, preparing, and defending a nest, as seen in sockeye salmon, for example. Parental care in fish, where it exists, is provided primarily by males, as seen in gobies and redlip blennies. The cichlid fish V. moorii exhibits biparental care. In species with internal fertilization, the female is usually the one to take care of the young. In cases where fertilization is external, the male becomes the main caretaker.
Familial conflict is a result of trade-offs as a function of lifetime parental investment. Parental investment was defined by Robert Trivers in 1972 as "any investment by the parent in an individual offspring that increases the offspring's chance of surviving at the cost of the parent's ability to invest in other offspring". Parental investment includes behaviors like guarding and feeding. Each parent has a limited amount of parental investment over the course of its lifetime. Trade-offs between offspring quality and quantity within a brood, and between current and future broods, lead to conflict over how much parental investment to provide and in whom to invest. There are three major types of familial conflict: sexual, parent-offspring, and sibling-sibling conflict.
There is conflict among parents as to who should provide the care and how much care to provide. Each parent must decide whether to stay and care for the offspring or to desert them. This decision is best modeled by game-theoretic approaches to evolutionarily stable strategies (ESS), in which the best strategy for one parent depends on the strategy adopted by the other parent. Recent research has found response matching in parents deciding how much care to invest in their offspring. Studies found that parent great tits match their partner's increased care-giving efforts with increased provisioning rates of their own. This cued parental response is a type of behavioral negotiation between parents that leads to stabilized compensation. Sexual conflicts can give rise to antagonistic coevolution between the sexes as each tries to get the other to care more for the offspring. For example, in the waltzing fly Prochyliza xanthostoma, ejaculate feeding maximizes female reproductive success and minimizes the female's chance of mating multiply. Evidence suggests that the sperm evolved to prevent female waltzing flies from mating multiply in order to ensure the male's paternity.
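The logic of such ESS models can be illustrated with a minimal two-player "care or desert" game in the spirit of Maynard Smith's parental-care model. The sketch below is illustrative only: the offspring-survival and remating values are hypothetical, chosen solely to show how a stable strategy pair is identified by checking unilateral deviations.

```python
# Minimal sketch of a two-player parental care game (ESS check).
# All payoff parameters are hypothetical illustrations, not measured values.

from itertools import product

# Hypothetical offspring survival given the number of caring parents.
survival = {0: 0.2, 1: 0.7, 2: 0.9}
# Hypothetical chance that a deserting male gains an extra mating.
p_extra_mate = 0.5

def payoffs(male_deserts: bool, female_deserts: bool) -> tuple[float, float]:
    """Return (male fitness, female fitness) for one combination of strategies."""
    carers = 2 - int(male_deserts) - int(female_deserts)
    brood = survival[carers]
    male = brood + (p_extra_mate * survival[1] if male_deserts else 0.0)
    female = brood  # assume the female gains no extra matings by deserting
    return male, female

def is_stable(male_deserts: bool, female_deserts: bool) -> bool:
    """A strategy pair is stable if neither sex gains by unilaterally switching."""
    m, f = payoffs(male_deserts, female_deserts)
    m_switch, _ = payoffs(not male_deserts, female_deserts)
    _, f_switch = payoffs(male_deserts, not female_deserts)
    return m >= m_switch and f >= f_switch

for male_d, female_d in product([False, True], repeat=2):
    print(f"male deserts={male_d}, female deserts={female_d}, "
          f"payoffs={payoffs(male_d, female_d)}, stable={is_stable(male_d, female_d)}")
```

With these particular numbers the only stable pair is male desertion with female care; changing the assumed survival or remating values can shift the outcome to biparental care or mutual desertion, which is the point of the game-theoretic framing.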
According to Robert Trivers's theory on relatedness, each offspring is related to itself by 1, but only by 0.5 to its parents and siblings. Genetically, offspring are predisposed to behave in their own self-interest, while parents are predisposed to behave equally toward all their offspring, including both current and future ones. Offspring selfishly try to take more than their fair share of parental investment, while parents try to spread their parental investment equally among their present and future young. There are many examples of parent-offspring conflict in nature. One manifestation of this is asynchronous hatching in birds. A behavioral-ecology explanation for this is Lack's brood reduction hypothesis (named after David Lack). Lack's hypothesis posits an evolutionary and ecological explanation as to why birds lay a series of eggs with an asynchronous delay, leading to nestlings of mixed ages and weights. According to Lack, this brood behavior is an ecological insurance that allows the larger birds to survive in poor years and all birds to survive when food is plentiful. Sex-ratio conflict between the queen and her workers is also seen in social Hymenoptera. Because of haplodiploidy, the workers (offspring) prefer a 3:1 female-to-male sex allocation while the queen prefers a 1:1 sex ratio. Both the queen and the workers try to bias the sex ratio in their favor. In some species the workers gain control of the sex ratio, while in other species, like B. terrestris, the queen has a considerable amount of control over the colony sex ratio. Lastly, there has been recent evidence regarding genomic imprinting that is a result of parent-offspring conflict. Paternal genes in offspring demand more maternal resources than maternal genes in the same offspring and vice versa. This has been shown in imprinted genes like insulin-like growth factor-II.
Parent-offspring conflict resolution
Parents need an honest signal from their offspring that indicates their level of hunger or need, so that the parents can distribute resources accordingly. Offspring want more than their fair share of resources, so they exaggerate their signals to wheedle more parental investment. However, this conflict is countered by the cost of excessive begging. Not only does excessive begging attract predators, but it also retards chick growth if begging goes unrewarded. Thus, the cost of increased begging enforces offspring honesty.
Another resolution for parent-offspring conflict is that parental provisioning and offspring demand have actually coevolved, so that there is no obvious underlying conflict. Cross-fostering experiments in great tits (Parus major) have shown that offspring beg more when their biological mothers are more generous. Therefore, it seems that the willingness to invest in offspring is co-adapted to offspring demand.
The lifetime parental investment is the fixed amount of parental resources available for all of a parent's young, and an offspring wants as much of it as possible. Siblings in a brood often compete for parental resources by trying to gain more than their fair share of what their parents can offer. Nature provides numerous examples in which sibling rivalry escalates to such an extreme that one sibling tries to kill off broodmates to maximize parental investment (See Siblicide). In the Galápagos fur seal, the second pup of a female is usually born when the first pup is still suckling. This competition for the mother's milk is especially fierce during periods of food shortage such as an El Niño year, and this usually results in the older pup directly attacking and killing the younger one.
In some bird species, sibling rivalry is also abetted by the asynchronous hatching of eggs. In the blue-footed booby, for example, the first egg in a nest is hatched four days before the second one, resulting in the elder chick having a four-day head start in growth. When the elder chick falls 20-25% below its expected weight threshold, it attacks its younger sibling and drives it from the nest.
Sibling relatedness in a brood also influences the level of sibling-sibling conflict. In a study on passerine birds, it was found that chicks begged more loudly in species with higher levels of extra-pair paternity.
Some animals deceive other species into providing all parental care. These brood parasites selfishly exploit their host parents and host offspring. The common cuckoo is a well-known example of a brood parasite. Female cuckoos lay a single egg in the nest of the host species, and when the cuckoo chick hatches, it ejects all the host eggs and young. Other examples of brood parasites include honeyguides, cowbirds, and the large blue butterfly. Brood parasite offspring have many strategies to induce their host parents to invest parental care. Studies show that the common cuckoo uses vocal mimicry to reproduce the sound of multiple hungry host young to solicit more food. Other cuckoos use visual deception with their wings to exaggerate the begging display. False gapes from brood parasite offspring cause host parents to collect more food. Brood parasitism also occurs in Phengaris butterflies such as Phengaris rebeli and Phengaris arion, which differ from the cuckoo in that the butterflies do not oviposit directly in the nest of the host, an ant species Myrmica schencki. Rather, the butterfly larvae release chemicals that deceive the ants into believing that they are ant larvae, causing the ants to bring the butterfly larvae back to their own nests to feed them. Other examples of brood parasites are Polistes sulcifer, a paper wasp that has lost the ability to build its own nests, so females lay their eggs in the nest of a host species, Polistes dominula, and rely on the host workers to take care of their brood, as well as Bombus bohemicus, a bumblebee that relies on host workers of various other Bombus species. Similarly, some Leucospidae wasps exploit the brood cells and nests of the bee Eulaema meriana for shelter and food. Vespula austriaca is another wasp in which the females force the host workers to feed and take care of the brood. Bombus hyperboreus, an Arctic bee species, is also classified as a brood parasite in that it attacks and enslaves other species within its subgenus, Alpinobombus, to propagate its population.
Various types of mating systems include monogamy, polygyny, polyandry, promiscuity, and polygamy. Each is differentiated by the sexual behavior between mates, such as which males mate with certain females. An influential paper by Stephen Emlen and Lewis Oring (1977) argued that two main factors of animal behavior influence the diversity of mating systems: the relative accessibility that each sex has to mates, and the parental desertion by either sex.
Mating systems with no male parental care
In a system that does not have male parental care, resource dispersion, predation, and the effects of social living primarily influence female dispersion, which in turn influences male dispersion. Since males' primary concern is acquiring females, the males compete for females either directly or indirectly. In direct competition, the males are directly focused on the females. Blue-headed wrasse demonstrate the behavior in which females follow resources—such as good nest sites—and males follow the females. Conversely, in species where males compete indirectly, males anticipate the resources desired by females and then try to control or acquire those resources, which helps them achieve success with females. Grey-sided voles demonstrate indirect male competition for females: males were experimentally observed to home in on the sites with the best food in anticipation of females settling in these areas. Males of Euglossa imperialis, a non-social bee species, also demonstrate indirect competitive behavior by forming aggregations of territories, which can be considered leks, to defend fragrance-rich primary territories. These aggregations are largely facultative: the more suitable fragrance-rich sites there are, the more habitable territories there are, giving females of this species a large selection of males with whom to potentially mate. Leks and choruses are another phenomenon of male competition for females. Because lekking males often defend resource-poor territories, it is difficult to categorize them as indirect competitors; for example, male ghost moths display in leks to attract a female mate. It is also difficult to classify them as direct competitors, since they put a great deal of effort into defending their territories before females arrive and, upon female arrival, put on elaborate mating displays to attract females to their individual sites. These observations make it difficult to determine whether female or resource dispersion primarily influences male aggregation, especially given the apparent difficulty that males may have defending resources and females in such densely populated areas. Because the reason for male aggregation into leks is unclear, five hypotheses have been proposed, which attribute lekking to: hotspots, predation reduction, increased female attraction, hotshot males, or facilitation of female choice. With all of the mating behaviors discussed, the primary factors influencing differences within and between species are ecology, social conflict, and life-history differences.
In some other instances, neither direct nor indirect competition is seen. Instead, in species like the Edith's checkerspot butterfly, males' efforts are directed at acquisition of females and they exhibit indiscriminate mate location behavior, where, given the low cost of mistakes, they blindly attempt to mate both correctly with females and incorrectly with other objects.
Mating systems with male parental care
Monogamy is the mating system in 90% of birds, possibly because each male and female has a greater number of offspring if they share in raising a brood. In obligate monogamy, males feed females on the nest, or share in incubation and chick-feeding. In some species, males and females form lifelong pair bonds. Monogamy may also arise from limited opportunities for polygamy, due to strong competition among males for mates, females suffering from loss of male help, and female-female aggression.
In birds, polygyny occurs when males indirectly monopolize females by controlling resources. In species where males normally do not contribute much to parental care, females suffer relatively little or not at all. In other species, however, females suffer through the loss of male contribution, and the cost of having to share resources that the male controls, such as nest sites or food. In some cases, a polygynous male may control a high-quality territory, so that for the female the benefits of polygyny may outweigh the costs.
There also seems to be a "polyandry threshold" where males may do better by agreeing to share a female instead of maintaining a monogamous mating system. Situations that may lead to cooperation among males include food scarcity and intense competition for territories or females. For example, male lions sometimes form coalitions to gain control of a pride of females. In some populations of Galapagos hawks, groups of males cooperate to defend one breeding territory, sharing matings with the female and paternity of the offspring.
Female desertion and sex role reversal
In birds, desertion often happens when food is abundant, so the remaining partner is better able to raise the young unaided. Desertion also occurs if there is a good chance that a parent will gain another mate, which depends on environmental and population factors. Some birds, such as the phalaropes, have reversed sex roles, in which the female is larger and more brightly colored and competes for males to incubate her clutches. In jacanas, the female is larger than the male, and her territory may overlap the territories of up to four males.
Animals cooperate with each other to increase their own fitness. These altruistic, and sometimes spiteful, behaviors can be explained by Hamilton's rule, which states that a behavior is favored when rB - C > 0, where r is the relatedness between actor and recipient, B is the benefit to the recipient, and C is the cost to the actor.
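As a minimal illustration of how the rule is applied, the sketch below plugs hypothetical benefit and cost values into rB - C; the numbers are illustrative only.

```python
# Minimal sketch: applying Hamilton's rule rB - C > 0.
# The B and C values below are hypothetical, chosen only for illustration.

def favored_by_kin_selection(r: float, B: float, C: float) -> bool:
    """Return True if Hamilton's rule predicts the behavior is favored."""
    return r * B - C > 0

# Helping a full sibling (r = 0.5) is favored only if the benefit exceeds
# twice the cost; for a first cousin (r = 0.125) the benefit must exceed
# eight times the cost.
print(favored_by_kin_selection(r=0.5, B=3.0, C=1.0))    # True  (0.5*3 - 1 = 0.5)
print(favored_by_kin_selection(r=0.125, B=3.0, C=1.0))  # False (0.125*3 - 1 = -0.625)
```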
Kin selection refers to evolutionary strategies where an individual acts to favor the reproductive success of relatives, or kin, even if the action incurs some cost to the organism's own survival and ability to procreate. John Maynard Smith coined the term in 1964, although the concept had already been referred to by Charles Darwin, who suggested that helping relatives would be favored by group selection. Mathematical descriptions of kin selection were initially offered by R. A. Fisher in 1930 and J. B. S. Haldane in 1932 and 1955. W. D. Hamilton popularized the concept later, including the mathematical treatment by George Price in 1963 and 1964.
Kin selection predicts that individuals will incur personal costs in favor of one or more other individuals because this can maximize their genetic contribution to future generations. For example, an organism may be inclined to expend great time and energy in parental investment to rear offspring, since this future generation may be better suited for propagating genes that are highly shared between the parent and offspring. Ultimately, the initial actor performs apparently altruistic actions for kin to enhance its own reproductive fitness. In particular, organisms are hypothesized to act in favor of kin depending on their genetic relatedness. So, individuals are inclined to act altruistically toward siblings, grandparents, cousins, and other relatives, but to differing degrees.
Inclusive fitness describes the component of reproductive success in both a focal individual and its relatives. Importantly, the measure embodies the sum of direct and indirect fitness: the change in reproductive success that results from the actor's behavior. That is, it captures the effect an individual's behavior has both on its own ability to produce offspring and on its aid to descendant and non-descendant relatives in their reproductive efforts. Natural selection is predicted to push individuals to behave in ways that maximize their inclusive fitness. Studying inclusive fitness is often done using predictions from Hamilton's rule.
One possible method of kin selection is based on genetic cues that can be recognized phenotypically. Genetic recognition has been exemplified in organisms that are usually not thought of as social creatures: amoebae. Social amoebae form fruiting bodies when starved for food. These amoebae preferentially form slugs and fruiting bodies with members of their own clonally related lineage. The genetic cue comes from variable lag genes, which are involved in signaling and adhesion between cells.
Kin can also be recognized by a genetically determined odor, as studied in the primitively social sweat bee, Lasioglossum zephyrus. These bees can even recognize relatives they have never met and roughly determine relatedness. The Brazilian stingless bee Schwarziana quadripunctata uses a distinct combination of chemical hydrocarbons to recognize and locate kin. Each chemical odor, emitted from the organism's epicuticle, is unique and varies according to age, sex, location, and hierarchical position. Similarly, individuals of the stingless bee species Trigona fulviventris can distinguish kin from non-kin through recognition of a number of compounds, including hydrocarbons and fatty acids that are present in their wax and floral oils from plants used to construct their nests. In the species Osmia rufa, kin selection has also been associated with mate choice: females preferentially select males for mating to whom they are more genetically related.
There are two simple rules that animals follow to determine who is kin. These rules can be exploited, but exist because they are generally successful.
The first rule is ‘treat anyone in my home as kin.’ This rule is readily seen in the reed warbler, a bird species that only focuses on chicks in their own nest. If its own kin is placed outside of the nest, a parent bird ignores that chick. This rule can sometimes lead to odd results, especially if there is a parasitic bird that lays eggs in the reed warbler nest. For example, an adult cuckoo may sneak its egg into the nest. Once the cuckoo hatches, the reed warbler parent feeds the invading bird like its own child. Even with the risk for exploitation, the rule generally proves successful.
The second rule, termed 'imprinting' by Konrad Lorenz, states that those you grow up with are kin. Several species exhibit this behavior, including the Belding's ground squirrel. Experimentation with these squirrels showed that, regardless of true genetic relatedness, those that were reared together rarely fought. Further research suggests that some genetic recognition also occurs, as siblings that were raised apart were less aggressive toward one another than non-relatives reared apart.
Another way animals may recognize their kin is through the exchange of unique signals. While song is often considered a sexual trait directed by males at females, male-male singing also occurs. For example, male vinegar flies Zaprionus tuberculatus can recognize each other by song.
Cooperation is broadly defined as behavior that provides a benefit to another individual and that evolved specifically because of that benefit. This excludes behavior that has not been expressly selected for to provide a benefit to another individual, because there are many commensal and parasitic relationships where the behavior of one individual (which has evolved to benefit that individual and no others) is taken advantage of by other organisms. Stable cooperative behavior requires that it provide a benefit to both the actor and recipient, though the benefit to the actor can take many different forms.
Intraspecific cooperation occurs among members of the same species. Examples of intraspecific cooperation include cooperative breeding (such as in weeper capuchins) and cooperative foraging (such as in wolves). There are also forms of cooperative defense mechanisms, such as the "fighting swarm" behavior used by the stingless bee Tetragonula carbonaria. Much of this behavior occurs due to kin selection. Kin selection allows cooperative behavior to evolve where the actor receives no direct benefits from the cooperation.
Cooperation (without kin selection) must evolve to provide benefits to both the actor and recipient of the behavior. This includes reciprocity, where the recipient of the cooperative behavior repays the actor at a later time. This may occur in vampire bats but it is uncommon in non-human animals. Cooperation can occur willingly between individuals when both benefit directly as well. Cooperative breeding, where one individual cares for the offspring of another, occurs in several species, including wedge-capped capuchin monkeys.
Cooperative behavior may also be enforced, where failure to cooperate results in negative consequences. One of the best examples of this is worker policing, which occurs in social insect colonies.
The cooperative pulling paradigm is a popular experimental design used to assess if and under which conditions animals cooperate. It involves two or more animals pulling rewards towards themselves via an apparatus they cannot successfully operate alone.
Cooperation can occur between members of different species. For interspecific cooperation to be evolutionarily stable, it must benefit individuals in both species. Examples include pistol shrimp and goby fish, nitrogen-fixing microbes and legumes, and ants and aphids. In ants and aphids, aphids secrete a sugary liquid called honeydew, which ants eat. The ants provide protection to the aphids against predators and, in some instances, raise the aphid eggs and larvae inside the ant colony. This behavior is analogous to human domestication. Gobies of the genus Elacatinus also demonstrate cooperation by removing and feeding on ectoparasites of their clients. The wasp Polybia rejecta and the ant Azteca chartifex show cooperative behavior, protecting one another's nests from predators.
Hamilton's rule can also predict spiteful behaviors between non-relatives. A spiteful behavior is one that is harmful to both the actor and the recipient. Spiteful behavior is favored if the actor is less related to the recipient than to the average member of the population, making r negative, and if rB - C is still greater than zero. Spite can also be thought of as a type of altruism, because harming a non-relative, for example by taking its resources, can benefit a relative by giving it access to those resources. Furthermore, certain spiteful behaviors may impose harmful short-term consequences on the actor but also give long-term reproductive benefits. Many behaviors that are commonly thought of as spiteful are actually better explained as being selfish, that is, benefiting the actor and harming the recipient; true spiteful behaviors are rare in the animal kingdom.
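Making the sign conventions explicit helps here: in a spiteful act the recipient is harmed (B is negative) while the actor still pays a cost (C is positive), so the condition can only be met when relatedness is negative. A short formal restatement, using the same symbols as above:

```latex
% Spite under Hamilton's rule: the recipient is harmed (B < 0) at a cost to
% the actor (C > 0), so satisfying the rule requires negative relatedness.
\[
  rB - C > 0, \quad B < 0,\; C > 0
  \;\Longrightarrow\; r < -\frac{C}{|B|} < 0.
\]
```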
An example of spite is the sterile soldiers of the polyembryonic parasitoid wasp. A female wasp lays a male and a female egg in a caterpillar. The eggs divide asexually, creating many genetically identical male and female larvae. Sterile soldier wasps also develop and attack the relatively unrelated brother larvae so that the genetically identical sisters have more access to food.
Another example is bacteria that release bacteriocins. The bacterium that releases a bacteriocin may have to die to do so, but most of the harm falls on unrelated individuals, which are killed by the bacteriocin. This is because the ability to produce and release the bacteriocin is linked to an immunity to it. Therefore, close relatives of the releasing cell are less likely to die than non-relatives.
Many insect species of the order Hymenoptera (bees, ants, wasps) are eusocial. Within the nests or hives of social insects, individuals engage in specialized tasks to ensure the survival of the colony. Dramatic examples of these specializations include changes in body morphology or unique behaviors, such as the engorged bodies of the honeypot ant Myrmecocystus mexicanus or the waggle dance of honey bees and a wasp species, Vespula vulgaris.
In many, but not all social insects, reproduction is monopolized by the queen of the colony. Due to the effects of a haplodiploid mating system, in which unfertilized eggs become male drones and fertilized eggs become worker females, average relatedness values between sister workers can be higher than those seen in humans or other eutherian mammals. This has led to the suggestion that kin selection may be a driving force in the evolution of eusociality, as individuals could provide cooperative care that establishes a favorable benefit-to-cost ratio (rB - C > 0). However, not all social insects follow this rule. In the social wasp Polistes dominula, 35% of the nest mates are unrelated. In many other species, unrelated individuals help the queen only when no other options are present. In P. dominula, however, subordinates work for unrelated queens even when other options may be present. No other social insect submits to unrelated queens in this way. This seemingly unfavorable behavior parallels some vertebrate systems. It is thought that this unrelated assistance is evidence of altruism in P. dominula.
Numerous ecological factors determine the benefits and costs associated with cooperation in social organisms. One suggested benefit is a type of "life insurance" for individuals who participate in the care of the young. In this instance, individuals may have a greater likelihood of transmitting genes to the next generation when helping in a group compared to individual reproduction. Another suggested benefit is the possibility of "fortress defense", where soldier castes threaten or attack intruders, thus protecting related individuals inside the territory. Such behaviors are seen in the snapping shrimp Synalpheus regalis and gall-forming aphid Pemphigus spyrothecae. A third ecological factor that is posited to promote eusociality is the distribution of resources: when food is sparse and concentrated in patches, eusociality is favored. Evidence supporting this third factor comes from studies of naked mole-rats and Damaraland mole-rats, which have communities containing a single pair of reproductive individuals.
Although eusociality has been shown to offer many benefits to the colony, there is also potential for conflict. Examples include the sex-ratio conflict and worker policing seen in certain species of social Hymenoptera such as Dolichovespula media, Dolichovespula sylvestris, Dolichovespula norwegica and Vespula vulgaris. The queen and the worker wasps either indirectly kill the laying-workers' offspring by neglecting them or directly condemn them by cannibalizing and scavenging.
The sex-ratio conflict arises from a relatedness asymmetry, which is caused by the haplodiploid nature of Hymenoptera. For instance, workers are most closely related to each other because they share, on average, half of the genes they receive from the queen and all of their father's genes. Their total relatedness to each other is therefore 0.5 + (0.5 × 0.5) = 0.75. Thus, sisters are three-fourths related to each other. On the other hand, males arise from unfertilized eggs, meaning they inherit only half of the queen's genes and none from a father. As a result, a female is related to her brother by 0.25, because the 50% of her genes that come from her father have no chance of being shared with a brother. Her relatedness to her brother is therefore 0.5 × 0.5 = 0.25.
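The arithmetic in this paragraph can be written out as the standard relatedness calculation for a singly mated queen:

```latex
% Relatedness under haplodiploidy with a singly mated queen.
% Full sisters share all paternal alleles (half of each genome) and, on
% average, half of the maternal alleles (the other half of each genome).
\[
  r_{\text{sister--sister}}
    = \underbrace{\tfrac{1}{2}\cdot 1}_{\text{paternal half}}
    + \underbrace{\tfrac{1}{2}\cdot\tfrac{1}{2}}_{\text{maternal half}}
    = 0.75
\]
% A brother develops from an unfertilized egg, so a sister shares genes with
% him only through the queen.
\[
  r_{\text{sister--brother}} = \tfrac{1}{2}\cdot\tfrac{1}{2} = 0.25
\]
```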
According to Trivers and Hare's population-level sex-investment ratio theory, the ratio of relatedness between the sexes determines the sex-investment ratios. As a result, there is a tug-of-war between the queen and the workers: the queen prefers a 1:1 female-to-male investment ratio because she is equally related to her sons and daughters (r = 0.5 in each case), whereas the workers prefer a 3:1 ratio because they are related to each other by 0.75 and to their brothers by only 0.25. Allozyme data from a colony may indicate who wins this conflict.
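These relatedness values translate directly into the preferred investment ratios: under the Trivers-Hare argument, each party favors a female-to-male investment ratio equal to its ratio of relatedness to sisters (or daughters) versus brothers (or sons).

```latex
% Preferred female-to-male sex-investment ratios.
\[
  \text{workers: } \frac{r_{\text{sister}}}{r_{\text{brother}}}
    = \frac{0.75}{0.25} = 3:1,
  \qquad
  \text{queen: } \frac{r_{\text{daughter}}}{r_{\text{son}}}
    = \frac{0.5}{0.5} = 1:1.
\]
```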
Conflict can also arise between workers in colonies of social insects. In some species, worker females retain their ability to mate and lay eggs. The colony's queen is related to her sons by one half and to the sons of her worker daughters by one quarter. Workers, however, are related to their own sons by one half and to their brothers by one quarter. Thus, the queen and her worker daughters compete over reproduction to maximize their own reproductive fitness. Worker reproduction is limited by other workers, who in many polyandrous hymenopteran species are more closely related to the queen's sons than to the sons of their fellow workers. Workers police the egg-laying females by engaging in oophagy or directed acts of aggression.
The monogamy hypothesis
The monogamy hypothesis states that the presence of monogamy in insects is crucial for eusociality to occur. This follows from Hamilton's rule, rB - C > 0. Under a monogamous mating system, all of the offspring have high relatedness to each other, which means that helping a full sibling can be as beneficial as helping one's own offspring. If there were many fathers, the relatedness within the colony would be lowered.
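The effect of multiple mating on colony relatedness can be quantified with a standard calculation. Assuming a haplodiploid colony in which the queen mates with n unrelated males who father equal shares of the brood, the expected relatedness between two randomly chosen workers is:

```latex
% Average worker-worker relatedness when the queen mates with n unrelated,
% equally contributing males (haplodiploidy assumed).
\[
  \bar{r} = \frac{1}{n}\,(0.75) + \Bigl(1 - \frac{1}{n}\Bigr)(0.25)
          = 0.25 + \frac{0.5}{n}
\]
```

So relatedness is 0.75 under monogamy (n = 1) and falls towards 0.25 as the number of fathers grows, dropping below the relatedness of 0.5 a female would have to her own offspring.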
This monogamous mating system has been observed in insects such as termites, ants, bees and wasps. In termites, the queen commits to a single male when founding a nest. In ants, bees and wasps, queens have a functional equivalent of lifetime monogamy: the male may even die before the founding of the colony, and the queen can store and use the sperm from a single male throughout her lifetime, sometimes for up to 30 years.
In a comparative study of the mating systems of 267 hymenopteran species, the results were mapped onto a phylogeny, and monogamy was found to be the ancestral state in all the independent transitions to eusociality. This indicates that monogamy is the ancestral, and likely crucial, state for the development of eusociality. In species where queens mate with multiple males, multiple mating was found to have arisen in lineages where sterile castes had already evolved, so it is secondary. In these cases, multiple mating is likely to be advantageous for reasons other than those important at the origin of eusociality. The most likely reasons are that a diverse worker pool attained by multiple mating by the queen increases disease resistance and may facilitate a division of labor among workers.
Communication and signaling
Communication is varied at all scales of life, from interactions between microscopic organisms to those of large groups of people. Nevertheless, the signals used in communication abide by a fundamental property: they must be a quality of the sender that can transfer information to a receiver capable of interpreting the signal and modifying its behavior accordingly. Signals are distinct from cues in that evolution has selected for signaling between both parties, whereas cues are merely informative to the observer and may not have originally been used for the intended purpose. The natural world is replete with examples of signals: the luminescent flashes of fireflies, chemical signaling in red harvester ants, the prominent mating displays of birds such as the Guianan cock-of-the-rock, which gather in leks, the pheromones released by the corn earworm moth, the dancing patterns of the blue-footed booby, and the alarm sound Synoeca cyanea makes by rubbing its mandibles against its nest. Yet other examples are the grizzled skipper and Spodoptera littoralis, in which pheromones are released as a sexual recognition mechanism that drives evolution.
The nature of communication poses evolutionary concerns, such as the potential for deceit or manipulation on the part of the sender. In this situation, the receiver must be able to anticipate the interests of the sender and act appropriately to a given signal. Should any side gain advantage in the short term, evolution would select against the signal or the response. The conflict of interests between the sender and the receiver results in an evolutionarily stable state only if both sides can derive an overall benefit.
Although the potential benefits of deceit could be great in terms of mating success, there are several possibilities for how dishonesty is controlled, which include indices, handicaps, and common interests. Indices are reliable indicators of a desirable quality, such as overall health, fertility, or fighting ability of the organism. Handicaps, as the term suggests, place a restrictive cost on the organisms that own them, and thus lower quality competitors experience a greater relative cost compared to their higher quality counterparts. In the common interest situation, it is beneficial to both sender and receiver to communicate honestly such that the benefit of the interaction is maximized.
Signals are often honest, but there are exceptions. Prime examples of dishonest signals include the luminescent lure of the anglerfish, which is used to attract prey, or the mimicry of non-poisonous butterfly species, like the Batesian mimic Papilio polyxenes of the poisonous model Battus philenor. Although evolution should normally favor selection against the dishonest signal, in these cases it appears that the receiver would benefit more on average by accepting the signal.
- Autonomous foraging
- Behavioral plasticity
- Evolutionary models of food sharing
- Gene-centered view of evolution
- Human behavioral ecology
- Life history theory
- Marginal value theorem
- Phylogenetic comparative methods
- Somatic effort
- Maynard Smith, J. 1982. Evolution and the Theory of Games.
- Brown, Jerram (June 1964). "The evolution of diversity in avian territorial systems". The Wilson Bulletin. 76 (2): 160–169. JSTOR 4159278.
- Gill, Frank; Larry Wolf (1975). "Economics of feeding territoriality in the golden-winged sunbird". Ecology. 56 (2): 333–345. doi:10.2307/1934964. JSTOR 1934964.
- Davies, N. B.; A. I. Houston (Feb 1981). "Owners and satellites: the economics of territory defence in the pied wagtail, Motacilla alba". Journal of Animal Ecology. 50 (1): 157–180. doi:10.2307/4038. JSTOR 4038.
- Fretwell, Stephen D. (1972). Population in a Seasonal Environment. Princeton, NJ: Princeton University Press.
- Milinski, Manfred (1979). "An Evolutionarily Stable Feeding Strategy in Sticklebacks". Zeitschrift für Tierpsychologie. 51 (1): 36–40. doi:10.1111/j.1439-0310.1979.tb00669.x.
- Dominey, Wallace (1984). "Alternative Mating Tactics and Evolutionarily Stable Strategies". American Zoology. 24 (2): 385–396. doi:10.1093/icb/24.2.385.
- Arak, Anthony (1983). "Sexual selection by male-male competition in natterjack toad choruses". Nature. 306 (5940): 261–262. Bibcode:1983Natur.306..261A. doi:10.1038/306261a0.
- Nicholas B. Davies; John R. Krebs; Stuart A. West (2012). An Introduction to Behavioral Ecology. West Sussex, UK: Wiley-Blackwell. pp. 193–202. ISBN 978-1-4051-1416-5.
- Buchanan, K.L.; Catchpole, C.K. (2000). "Song as an indicator of male parental effort in the sedge warbler". Proceedings of the Royal Society. B. 267 (1441): 321–326. doi:10.1098/rspb.2000.1003. PMC 1690544. PMID 10722211.
- "Hipparchia semele (Grayling)". IUCN Red List of Threatened Species. IUCN. Retrieved 2017-11-14.old-form url
- Dussourd, D.E.; Harvis, C.A.; Meinwald, J.; Eisner, T. (1991). "Pheromonal advertisement of a nuptial gift by a male moth". Proceedings of the National Academy of Sciences of the United States of America. 88 (20): 9224–9227. Bibcode:1991PNAS...88.9224D. doi:10.1073/pnas.88.20.9224. PMC 52686. PMID 1924385.
- Steele, RH (1986). "Courtship feeding in Drosophila subobscura. 2. Courtship feeding by males influences female mate choice". Animal Behaviour. 34: 1099–1108. doi:10.1016/s0003-3472(86)80169-5.
- Ryan, Michael J.; Anne Keddy-Hector (March 1992). "Directional patterns of female mate choice and the role of sensory biases". The American Naturalist. 139: S4–S35. doi:10.1086/285303. JSTOR 2462426.
- Salzburger, Walter; Niederstätter, Harald; Brandstätter, Anita; Berger, Burkhard; Parson, Walther; Snoeks, Jos; Sturmbauer, Christian (2006). "Colour-assortative mating among populations of Tropheus moorii, a cichlid fish from Lake Tanganyika, East Africa". Proceedings of the Royal Society B: Biological Sciences. 273 (1584): 257–66. doi:10.1098/rspb.2005.3321. PMC 1560039. PMID 16543167.
- Steinwender, Bernd; Koblmüller, Stephan; Sefc, Kristina M. (2011). "Concordant female mate preferences in the cichlid fish Tropheus moorii". Hydrobiologia. 682 (1): 121–130. doi:10.1007/s10750-011-0766-5. PMC 3841713. PMID 24293682.
- Boughman, J. W. (2002). "How sensory drive can promote speciation". Trends in Ecology and Evolution. 17 (12): 571–577. doi:10.1016/S0169-5347(02)02595-8.
- Rodd, F. H.; Hughes, K. A.; Grether, G. F.; Baril, C. T. (2002). "A possible non-sexual origin of mate preference: are male guppies mimicking fruit?". Proceedings of the Royal Society B. 269 (1490): 475–481. doi:10.1098/rspb.2001.1891. PMC 1690917. PMID 11886639.
- Proctor, Heather C. (1991-10-01). "Courtship in the water mite Neumania papillator: males capitalize on female adaptations for predation". Animal Behaviour. 42 (4): 589–598. doi:10.1016/S0003-3472(05)80242-8.
- Proctor, Heather C. (1992-10-01). "Sensory exploitation and the evolution of male mating behaviour: a cladistic test using water mites (Acari: Parasitengona)". Animal Behaviour. 44 (4): 745–752. doi:10.1016/S0003-3472(05)80300-8.
- Proctor, H. C. (1992-01-01). "Effect of Food Deprivation on Mate Searching and Spermatophore Production in Male Water Mites (Acari: Unionicolidae)". Functional Ecology. 6 (6): 661–665. doi:10.2307/2389961. JSTOR 2389961.
- Alcock, John (2013-07-01). Animal Behaviour: An Evolutionary Approach (10th ed.). Sinauer. pp. 70–72. ISBN 9780878939664.
- Jones, I. L.; Hunter, F. M. (1998). "Heterospecific mating preferences for a feather ornament in least auklets". Behavioral Ecology. 9 (2): 187–192. doi:10.1093/beheco/9.2.187.
- McClintock, W. J.; Uetz, G. W. (1996). "Female choice and pre-existing bias: Visual cues during courtship in two Schizocosa wolf spiders". Animal Behaviour. 52: 167–181. doi:10.1006/anbe.1996.0162.
- Prum, R. O. (1996). "Phylogenetic tests of alternative intersexual selection mechanisms: Trait macroevolution in a polygynous clade". The American Naturalist. 149 (4): 688–692. doi:10.1086/286014. JSTOR 2463543.
- Fuller, R. C.; Houle, D.; Travis, J. (2005). "Sensory bias as an explanation for the evolution of mate preferences". American Naturalist. 166 (4): 437–446. doi:10.1086/444443. PMID 16224700.
- Parker, G. A. (2006). "Sexual conflict over mating and fertilization: An overview". Philosophical Transactions of the Royal Society B. 361 (1466): 235–59. doi:10.1098/rstb.2005.1785. PMC 1569603. PMID 16612884.
- Davies N, Krebs J, and West S. (2012). An Introduction to Behavioral Ecology, 4th Ed. Wiley-Blackwell; Oxford: page 209-220.
- Parker, G. (1979). "Sexual selection and sexual conflict." In: Sexual Selection and Reproductive Competition in Insects (eds. M.S. Blum and N.A. Blum). Academic Press, New York: pp123-166.
- Chapman, T.; et al. (2003). "Sexual Selection". Trends in Ecology and Evolution. 18: 41–47. doi:10.1016/s0169-5347(02)00004-6.
- Baker, R. R. (1972). "Territorial behaviour of the Nymphalid butterflies, Aglais urticae (L.) and Inachis io (L.)". Journal of Animal Ecology. 41 (2): 453–469. doi:10.2307/3480. JSTOR 3480.
- Skandalis, Dimitri A.; Tattersall, Glenn J.; Prager, Sean; Richards, Miriam H. (2009). "Body Size and Shape of the Large Carpenter Bee, Xylocopa virginica (L.) (Hymenoptera: Apidae)". Journal of the Kansas Entomological Society. 82 (1): 30–42. doi:10.2317/JKES711.05.1.
- Blanckenhorn, W.U.; Mühlhäuser, C.; Morf, C.; Reusch, T.; Reuter, M. (2000). "Female choice, female reluctance to mate and sexual selection on body size in the dung fly Sepsis cynipsea". Ethology. 106 (7): 577–593. doi:10.1046/j.1439-0310.2000.00573.x.
- Bonduriansky, Russell (2006). "Convergent evolution of sexual shape dimorphism in Diptera". Journal of Morphology. 267 (5): 602–611. doi:10.1002/jmor.10426. ISSN 1097-4687. PMID 16477603.
- Thornhill, R. (1980). "Rape in Panorpa scorpionflies and a general rape hypothesis". Animal Behaviour. 28: 52–59. doi:10.1016/s0003-3472(80)80007-8.
- Birkhead, T. and Moller, A. (1992). Sperm Competition in Birds: Evolutionary Causes and Consequences. Academic Press, London.
- Carroll, Allan L. (1994-12-01). "Interactions between body size and mating history influence the reproductive success of males of a tortricid moth, Zeiraphera canadensis". Canadian Journal of Zoology. 72 (12): 2124–2132. doi:10.1139/z94-284. ISSN 0008-4301.
- Shepard, Jon; Guppy, Crispin (2011). Butterflies of British Columbia: Including Western Alberta, Southern Yukon, the Alaska Panhandle, Washington, Northern Oregon, Northern Idaho, and Northwestern Montana. UBC Press. ISBN 9780774844376. Retrieved 13 November 2017.
- Schulz, Stefan; Estrada, Catalina; Yildizhan, Selma; Boppré, Michael; Gilbert, Lawrence E. (2008-01-01). "An antiaphrodisiac in Heliconius melpomene butterflies". Journal of Chemical Ecology. 34 (1): 82–93. doi:10.1007/s10886-007-9393-z. ISSN 0098-0331. PMID 18080165.
- Pizzari, T.; Birkhead, T. (2000). "Female feral fowl eject sperm of subdominant males". Nature. 405 (6788): 787–789. Bibcode:2000Natur.405..787P. doi:10.1038/35015558. PMID 10866198.
- Clutton-Brock, T.H. (1991). The Evolution of Parental Care. Princeton NJ: Princeton University Press.
- Wcislo, W. T.; Wille, A.; Orozco, E. (1993). "Nesting biology of tropical solitary and social sweat bees, Lasioglossum (Dialictus) figueresi Wcislo and L. (D.) aeneiventre (Friese) (Hymenoptera: Halictidae)". Insectes Sociaux. 40: 21–40. doi:10.1007/BF01338830.
- Daly, M. (1979). "Why Don't Male Mammals Lactate?". Journal of Theoretical Biology. 78 (3): 325–345. doi:10.1016/0022-5193(79)90334-5. PMID 513786.
- Gross, M.R.; R.C Sargent (1985). "The evolution of male and female parental care in fishes". American Zoologist. 25 (3): 807–822. doi:10.1093/icb/25.3.807.
- Foote, Chris J; Brown, Gayle S; Hawryshyn, Craig W (1 January 2004). "Female colour and male choice in sockeye salmon: implications for the phenotypic convergence of anadromous and nonanadromous morphs". Animal Behaviour. 67 (1): 69–83. doi:10.1016/j.anbehav.2003.02.004.
- Svensson; Magnhagen, C. (Jul 1998). "Parental behavior in relation to the occurrence of sneaking in the common goby". Animal Behaviour. 56 (1): 175–179. doi:10.1006/anbe.1998.0769. PMID 9710475.
- Sturmbauer, Christian; Corinna Fuchs; Georg Harb; Elisabeth Damm; Nina Duftner; Michaela Maderbacher; Martin Koch; Stephan Koblmüller (2008). "Abundance, Distribution, and Territory Areas of Rock-dwelling Lake Tanganyika Cichlid Fish Species". Hydrobiologia. 615 (1): 57–68. doi:10.1007/s10750-008-9557-z. Retrieved 30 September 2013.
- Johnstone, R.A.; Hinde, C.A. (2006). "Negotiation over offspring care--how should parents respond to each other's efforts?". Behavioral Ecology. 17 (5): 818–827. doi:10.1093/beheco/arl009.
- Bonduriansky, Russell; Wheeler, Jill; Rowe, Locke (2005-02-01). "Ejaculate feeding and female fitness in the sexually dimorphic fly Prochyliza xanthostoma (Diptera: Piophilidae)". Animal Behaviour. 69 (2): 489–497. doi:10.1016/j.anbehav.2004.03.018. ISSN 0003-3472.
- Amundsen, T.; Slagsvold, T. (1996). "Lack's Brood Reduction Hypothesis and Avian Hatching Asynchrony: What's Next?". Oikos. 76 (3): 613–620. doi:10.2307/3546359. JSTOR 3546359.
- Pijanowski, B. C. (1992). "A Revision of Lack's Brood Reduction Hypothesis". The American Naturalist. 139 (6): 1270–1292. doi:10.1086/285386.
- Trivers, Robert L.; Willard, Dan E. (1973). "Natural selection of parental ability to vary the sex ratio of offspring". Science. 179 (191): 90–92. Bibcode:1973Sci...179...90T. doi:10.1126/science.179.4068.90. PMID 4682135.
- Bourke, A.F.G. & F.L.W. Ratnieks (2001). "Kin-selected conflict in the bumble-bee Bombus terrestris (Hymenoptera: Apidae)". Proceedings of the Royal Society of London B. 268 (1465): 347–355. doi:10.1098/rspb.2000.1381. PMC 1088613. PMID 11270430.
- Haig, D.; Graham, C. (1991). "Genomic imprinting and the strange case of the insulin-like growth factor-II receptor". Cell. 64 (6): 1045–1046. doi:10.1016/0092-8674(91)90256-x. PMID 1848481.
- Kilner, R. M. (2001). "A Growth Cost of Begging in Captive Canary Chicks". Proceedings of the National Academy of Sciences of the United States of America. 98 (20): 11394–11398. Bibcode:2001PNAS...9811394K. doi:10.1073/pnas.191221798. PMC 58740. PMID 11572988.
- Kolliker, M.; Brinkhof, M.; Heeb, P.; Fitze, P.; Richner, H. (2000). "The Quantitative Genetic Basis of Offspring Solicitation and Parental Response in a Passerine Bird with Parental Care". Proceedings of the Royal Society B: Biological Sciences. 267 (1457): 2127–2132. doi:10.1098/rspb.2000.1259. PMC 1690782. PMID 11416919.
- Trillmitch, F.; Wolf, J.B.W. (2008). "Parent-offspring and sibling conflict in Galapagos fur seals and sea lions". Behavioral Ecology and Sociobiology. 62 (3): 363–375. doi:10.1007/s00265-007-0423-1.
- Drummond, H.; Chavelas, C.G. (1989). "Food shortage influences sibling aggression in the Blue-footed Booby". Animal Behaviour. 37: 806–819. doi:10.1016/0003-3472(89)90065-1.
- Briskie, James V.; Naugler, Christopher T.; Leech, Susan M. (1994). "Begging intensity of nestling birds varies with sibling relatedness". Proceedings of the Royal Society B: Biological Sciences. 258 (1351): 73–78. Bibcode:1994RSPSB.258...73B. doi:10.1098/rspb.1994.0144.
- Spottiswoode, C. N.; Stevens, M. (2010). "Visual modelling shows that avian host parents use multiple visual cues in rejecting parasitic eggs". Proceedings of the National Academy of Sciences of the United States of America. 107 (19): 8672–8676. Bibcode:2010PNAS..107.8672S. doi:10.1073/pnas.0910486107. PMC 2889299. PMID 20421497.
- Kilner, R.M.; Madden, Joah R.; Hauber, Mark E. (2004). "Brood parasitic cowbird nestlings use host young to procure resources". Science. 305 (5685): 877–879. Bibcode:2004Sci...305..877K. doi:10.1126/science.1098487. PMID 15297677.
- Thomas, J.A.; Settele, Josef (2004). "Butterfly mimics of ants". Nature. 432 (7015): 283–284. Bibcode:2004Natur.432..283T. doi:10.1038/432283a. PMID 15549080.
- Davies, N.B. (2011). "Cuckoo adaptations: trickery and tuning". Journal of Zoology. 281: 1–14. doi:10.1111/j.1469-7998.2011.00810.x.
- Tanaka, K. D.; Ueda, K. (2005). "Horsfield's hawk-cuckoo nestlings simulate multiple gapes for begging". Science. 308 (5722): 653. doi:10.1126/science.1109957. PMID 15860618.
- Akino, T; J. J. Knapp; J. A. Thomas; G. W. Elmes (1999). "Chemical mimicry and host specificity in the butterfly Maculinea rebeli, a social parasite of Myrmica ant colonies". Proceedings of the Royal Society B: Biological Sciences. 266 (1427): 1419–1426. doi:10.1098/rspb.1999.0796. PMC 1690087.
- Thomas, Jeremy; Karsten Schönrogge; Simona Bonelli; Francesca Barbero; Emilio Balletto (2010). "Corruption of ant acoustical signals by mimetic social parasites". Communicative and Integrative Biology. 3 (2): 169–171. doi:10.4161/cib.3.2.10603. PMC 2889977. PMID 20585513.
One of the realities of life is how so much of the world runs by mathematical rules. As one of the tools of mathematics, linear systems have multiple uses in the real world. Life is full of situations when the output of a system doubles if the input doubles, and the output cuts in half if the input does the same. That's what a linear system is, and any linear system can be described with a linear equation.
In the Kitchen
If you've ever doubled a favorite recipe, you've applied a linear equation. If one cake equals 1/2 cup of butter, 2 cups of flour, 3/4 tsp. of baking powder, three eggs and 1 cup of sugar and milk, then two cakes equal 1 cup of butter, 4 cups of flour, 1 1/2 tsp. of baking powder, six eggs and 2 cups of sugar and milk. To get twice the output, you put in twice the input. You might not have known you were using a linear equation, but that's exactly what you did.
Suppose a water district wants to know how much snowmelt runoff it can expect this year. The melt comes from a big valley, and every year the district measures the snowpack and the water supply. It gets 60 acre-feet of water from every 6 inches of snowpack. This year surveyors measure 6 feet and 4 inches of snow, or 76 inches. The district puts that into the linear expression (60 acre-feet/6 inches) * 76 inches, so water officials can expect 760 acre-feet of snowmelt this year.
Just for Fun
It's springtime and Irene wants to fill her swimming pool. She doesn't want to stand there all day, but she doesn't want to waste water over the edge of the pool, either. She sees that it takes 25 minutes to raise the pool level by 4 inches. She needs to fill the pool to a depth of 4 feet; she has 44 more inches to go. She figures out her linear equation: 44 inches * (25 minutes/4 inches) is 275 minutes, so she knows she has four hours and 35 minutes more to wait.
Ralph has also noticed that it's springtime. The grass has been growing. It grew 2 inches in two weeks. He doesn't like the grass to be taller than 2 1/2 inches, but he doesn't like to cut it shorter than 1 3/4 inches. How often does he need to cut the lawn? He just puts that calculation in his linear expression, where (14 days/2 inches) * 3/4 inch tells him he needs to cut his lawn every 5 1/4 days. He just ignores the 1/4 and figures he'll cut the lawn every five days.
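All of these calculations follow the same pattern: take a rate from the measurements, then multiply it by the new input. Here is a minimal Python sketch of that pattern, using the numbers from the examples above (the helper function is purely illustrative):

```python
def linear_output(rate, amount):
    """The whole linear model in one line: output = rate * input."""
    return rate * amount

# Snowmelt: 60 acre-feet per 6 inches of snowpack, 76 inches measured this year.
print(linear_output(60 / 6, 76))    # 760.0 acre-feet

# Irene's pool: 25 minutes per 4 inches, 44 inches still to fill.
print(linear_output(25 / 4, 44))    # 275.0 minutes

# Ralph's lawn: 14 days per 2 inches of growth, 3/4 inch of allowed growth.
print(linear_output(14 / 2, 0.75))  # 5.25 days
```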
It's not hard to see other similar situations. If you want to buy beer for the big party and you've got $60 in your pocket, a linear equation tells you how much you can afford. Whether you need to bring in enough wood for the fire to burn overnight, calculate your paycheck, figure out how much paint you need to redo the upstairs bedrooms or buy enough gas to make it to and from your Aunt Sylvia's, linear equations provide the answers. Linear systems are, literally, everywhere.
Where They Aren't
One of the paradoxes is that just about every linear system is also a nonlinear system. Thinking you can make one giant cake by quadrupling a recipe will probably not work. If there's a really heavy snowfall year and snow gets pushed up against the walls of the valley, the water company's estimate of available water will be off. After the pool is full and starts washing over the edge, the water won't get any deeper. So most linear systems have a "linear regime" (a region over which the linear rules apply) and a "nonlinear regime" (where they don't). As long as you're in the linear regime, the linear equations hold true.
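A toy sketch of the pool example makes the boundary concrete: the depth follows the linear rule only until the pool is full, after which extra input changes nothing. The starting depth, fill rate, and 4-foot target below are taken from Irene's example; the function itself is just an illustration:

```python
def pool_depth_inches(minutes, start_depth=4, max_depth=48, rate=4 / 25):
    """Depth of Irene's pool after `minutes` of filling, clamped at overflow."""
    depth = start_depth + rate * minutes   # linear regime: depth = start + rate * time
    return min(depth, max_depth)           # nonlinear regime: water spills over the edge

print(pool_depth_inches(275))   # 48.0 -> exactly full; the linear rule still held
print(pool_depth_inches(400))   # 48.0 -> past full; the linear rule no longer applies
```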
May 9, 2013
In September 2009, after decades of speculation, evidence of water on the surface of the Moon was discovered for the first time. Chandrayaan-1, a lunar probe launched by India's space agency, had created a detailed map of the minerals that make up the Moon's surface, and analysts determined that, in several places, the characteristics of lunar rocks indicated that they bore as much as 600 million metric tonnes of water.
In the years since, we’ve seen further evidence of water both on the surface and within the interior of the Moon, locked within the pore space of rocks and perhaps even frozen in ice sheets. All this has gotten space exploration enthusiasts pretty excited, as the presence of frozen water could someday make permanent human habitation of the Moon much more feasible.
For planetary scientists, though, it’s raised a knotty question: How did water arrive on the Moon in the first place?
A new paper published today in Science suggests that, unlikely as it may seem, the Moon’s water originated from the same source as the water that comes out of the faucet when you open a tap. Just as many scientists believe the Earth’s entire supply of water was initially delivered via water-bearing meteorites that traveled from the asteroid belt billions of years ago, a new analysis of lunar volcanic rocks brought back during the Apollo missions indicates the Moon’s water has its roots in these same meteorites. But there’s a twist: Before reaching the Moon, this lunar water was first on Earth.
The research team, led by Alberto Saal of Brown University, analyzed the isotopic composition of hydrogen found in water within tiny bubbles of volcanic glass (rapidly cooled lava) as well as melt inclusions (blobs of melted material trapped in slowly cooling magma that later solidified) in the Apollo-era rocks. Specifically, they looked at the ratio of deuterium isotopes ("heavy" hydrogen atoms that contain an added neutron) to normal hydrogen atoms.
Previous research has shown that in water this ratio changes depending on where in the solar system the water molecules initially formed: water that originated closer to the Sun has less deuterium than water that formed farther away. The water locked in the lunar glass and melt inclusions was found to have deuterium levels similar to those found in a class of meteorites called carbonaceous chondrites, which scientists believe to be the most unaltered remnants of the nebula from which the solar system formed. Carbonaceous chondrites that fall to Earth originate in the asteroid belt between Mars and Jupiter.
Higher deuterium levels would have suggested that water was first brought on to the Moon by comets—as many scientists have hypothesized—because comets largely come from the Kuiper belt and Oort Cloud, remote regions far beyond Neptune where deuterium is more plentiful. But if the water in these samples represents lunar water as a whole, the findings indicate that the water came from a much closer source—in fact, the same source as the water on Earth.
The simplest explanation for this similarity would be a scenario in which, when a massive collision between a young Earth and a Mars-sized proto-planet formed the Moon some 4.5 billion years ago, some of the liquid water on our planet was somehow preserved from vaporization and transferred along with the solid material that would become the Moon.
Our current understanding of massive impacts, though, doesn’t allow for this possibility: The heat we believe would be generated by such an enormous collision would theoretically vaporize all lunar water and send it off into space in a gaseous form. But there are a few other scenarios that might explain how water was transferred from our proto-Earth to the Moon in other forms.
One possibility, the researchers speculate, is that the early Moon borrowed a bit of Earth's high-temperature atmosphere the instant it formed, so any water that had been locked in the chemical composition of Earth rocks pre-impact would have vaporized along with the rock into this shared atmosphere after impact; this vapor would have then coalesced into a solid lunar blob, binding the water into the chemical composition of lunar material. Another possibility is that the rocky chunk of Earth that was kicked off to form the Moon retained the water molecules locked inside its chemical composition, and these were later released by radioactive heating inside the Moon's interior.
Evidence from recent lunar missions suggests that lunar rocks—not just craters at the poles—indeed contain substantial amounts of water, and this new analysis suggests that water originally came from Earth. So the findings will force scientists to rethink models of how the Moon could have formed, given that it clearly didn’t dry out completely.
April 19, 2013
Last year, to celebrate the 42nd Earth Day, we took a look at 10 of the most surprising, disheartening, and exciting things we'd learned about our home planet in the previous year—a list that included discoveries about the role pesticides play in bee colony collapses, the various environmental stresses faced by the world's oceans, and the millions of unknown species that are still out in the environment, waiting to be found.
This year, in time for Earth Day on Monday, we’ve done it again, putting together another list of 10 notable discoveries made by scientists since Earth Day 2012—a list that ranges from specific topics (a species of plant, a group of catfish) to broad (the core of planet Earth), and from the alarming (the consequences of climate change) to the awe-inspiring (Earth’s place in the universe).
1. Trash is accumulating everywhere, even in Antarctica. As we've explored the most remote stretches of the planet, we've consistently left behind a trail of one thing in particular: garbage. Even in Antarctica, a February study found, abandoned field huts and piles of trash are mounting. Meanwhile, in the fall, a new research expedition went to study the Great Pacific Garbage Patch, counting nearly 70,000 pieces of garbage over the course of a month at sea.
2. Climate change could erode the ozone layer. Until recently, atmospheric scientists viewed climate change and the disintegration of the ozone layer as entirely distinct problems. Then, in July, Harvard researcher Jim Anderson (who won a Smithsonian Ingenuity Award for his work) led a team that published the troubling finding that the two might be linked. Some warm summer storms, they discovered, can pull moisture up into the stratosphere, an atmospheric layer 6 miles up. Through a chain of chemical reactions, this moisture can lead to the disintegration of ozone, which is crucial for protecting us from ultraviolet (UV) radiation. Climate change, unfortunately, is projected to cause more of these sorts of storms.
3. This flower lives on exactly two cliffs in Spain. In September, Spanish scientists told us about one of the most astounding survival stories in the plant kingdom: Borderea chouardii, an extremely rare flowering plant that is found on only two adjacent cliffs in the Pyrenees. The species is believed to be a relic of the Tertiary Period, which ended more than 2 million years ago, and relies on several different local ant species to spread pollen between its two local populations.
4. Some catfish have learned to kill pigeons. In December, a group of French scientists revealed a phenomenon they'd carefully been observing over the previous year: a group of catfish in southwestern France had learned how to lunge onto shore, briefly stranding themselves to snatch pigeons at the water's edge, then swim back into the water to consume their prey. With more than 2,000,000 YouTube views so far, this is clearly one of the year's most widely enjoyed scientific discoveries.
5. Fracking for natural gas can trigger moderate earthquakes. Scientists have known for a while that whenever oil and gas are extracted from the ground at a large scale, seismic activity can be induced. Over the past few years, evidence has mounted that injecting water, sand and chemicals into bedrock to cause gas and oil to flow upward—a practice commonly known as fracking—can cause earthquakes by lubricating pre-existing faults in the ground. Initially, scientists found correlations between fracking sites and the number of small earthquakes in particular areas. Then, in March, other researchers found evidence that a medium-sized 2011 earthquake in Oklahoma (which registered a 5.7 on the moment magnitude scale) was likely caused by injecting wastewater into wells to extract oil.
6. Our planet's inner core is more complicated than we thought. Despite decades of research, new data on the iron and nickel ball 3,100 miles beneath our feet continue to upset our assumptions about just how the earth's core operates. A paper published last May showed that iron in the outer parts of the inner core is losing heat much more quickly than previously estimated, suggesting that it might hold more radioactive energy than we'd assumed, or that novel and unknown chemical interactions are occurring. Ideas for directly probing the core are widely regarded as pipe dreams, so our only option remains studying it from afar, largely by monitoring seismic waves.
7. The world's most intense natural color comes from an African fruit. When a team of researchers looked closely at the blue berries of Pollia condensata, a wild plant that grows in East Africa, they found something unexpected: the fruit uses an uncommon structural coloration method to produce the most intense natural color ever measured. Instead of pigments, the fruit's brilliant blue results from nanoscale-size cellulose strands layered in twisting shapes, which interact with each other to scatter light in all directions.
8. Climate change will let ships cruise across the North Pole. Climate change is sure to create countless problems for many people around the world, but one specific group is likely to see a significant benefit from it: international shipping companies. A study published last month found that rising temperatures make it probable that during summertime, reinforced ice-breaking ships will be able to sail directly across the North Pole—an area currently covered by up to 65 feet of ice—by the year 2040. This dramatic shift will shorten shipping routes from North America and Europe to Asia.
9. One bacterial species conducts electricity. In October, a group of Danish researchers revealed that the seafloor mud of Aarhus' harbor was coursing with electricity due to an unlikely source: multicellular bacteria that behave like tiny electrical cables. The organisms, the team found, built structures that traveled several centimeters down into the sediment and conducted measurable levels of electricity. The researchers speculate that this seemingly strange behavior is a byproduct of the way the bacteria harvest energy from the nutrients buried in the soil.
10. Our Earth isn't alone. Okay, this one might not technically be a discovery about Earth, but over the past year we have learned a tremendous amount about what our Earth isn't: the only habitable planet in the visible universe. The pace of exoplanet detection has accelerated rapidly, with a total of 866 planets in other solar systems discovered so far. As our methods have become more refined, we've been able to detect smaller and smaller planets, and just yesterday, scientists finally discovered a pair of distant planets in the habitable zone of their star that are relatively close in size to Earth, making it more likely than ever that we might have spied an alien planet that actually supports life.
April 9, 2013
Ever since the collective “YOU” became Time Magazine’s Person of the Year in 2006, campaigns to get our attention have increasingly sought out our digital selves. You can name a Budweiser Clydesdale. You can pick Lays’ new potato chip flavor. And it’s not just retail that wants your online opinions: You can vote for who will win photography contests. You can play the futures market on who will win elected offices. And with enough signatures, you can get the White House to read your petitions.
Many science endeavors rely on such crowdsourcing. With a simple app, you can let researchers know the exact date that your lilacs or dogwoods bloom, helping them to track how seasonal cycles are shifting as a result of climate change. You can join the search for ever-larger prime numbers. You can even help scientists scan radio waves in space to search for intelligent life outside of Earth. These more traditional crowdsourcing efforts allow users to brainstorm ideas and process data from computers at home.
But now, a few projects are allowing us to put our virtual selves beyond Earth’s atmosphere through recently launched space missions. Who said that rovers, space probes, a handful of astronauts and pigs were the only ones in space? No longer are we just bystanders watching spacecraft launch and cooing over images returned of other planets and stars. Now, we can direct cameras, help run experiments, even send our avatars–of sorts–to inhabit nearby planetary bodies or return to us in a time capsule.
Here are a few examples:
Asteroid Chimney Rock: On April 10 (tomorrow), the Japan Aerospace Exploration Agency will open a campaign that offers visitors to its site the chance to send their names and brief messages to the near-Earth asteroid (162173) 1999 JU3. Called the "Let's meet with Le Petit Prince! Million Campaign 2," the effort aims to get people's names onto the Hayabusa2 mission, which will likely launch in 2014 to study the asteroid. When Hayabusa2 lands on the asteroid, the names submitted–embedded in a plaque of sorts on the spacecraft–will stand as a testament to the idea that humans (or at least their robotic representatives) were there.
The campaign is reminiscent of how NASA got more than 1.2 million people to submit their names and signatures, which were then etched on two dime-sized microchips and affixed to the Mars Curiosity rover. Sure, it’s a bit gimmicky–what useful function is brought by having people’s names out in space? But the idea of “tagging” a planet or an asteroid–preserving a bit of yourself on what will over decades become space junk–has powerful pull. It is why Chimney Rock, with its etchings from early explorers and pioneers, is the historical marker it is today, and why gladiators scored their names into the Colosseum before they fought to the death. For mission leaders hoping to get the public enthusiastic about space, nothing’s more exciting than a bit of digital graffiti.
Interplanetary time capsules: A key goal of Hayabusa2 is to return a sample from the asteroid in 2020. Mission creators saw this as a perfect way to get the public to fill a time capsule. Those seeking to participate are encouraged to send to mission coordinators their thoughts and dreams for the future along with their hopes and expectations for recovery from natural disasters, the latter likely a way to get people to express their feelings on the 2011 Tohoku earthquake and tsunami that devastated Japan's east coast. Names, messages, and illustrations will be loaded onto a microchip that will not only touch down on the asteroid's surface, but will also be part of the probe sent back to Earth with asteroid dust.
But why stop at a mere 6-year time capsule? The European Space Agency, UNESCO, and other partners are blending crowdsourcing with space technology to create the KEO mission–so named because the letters represent common sounds across all of Earth's languages–which will bundle thoughts and images of anyone who seeks to participate and will launch this bundle in a probe that will only return to Earth in 50,000 years.
Project operators write on KEO’s website: “Each one of us have 4 uncensored pages at our disposal: an identical space of equality and freedom of expression where we can voice our aspirations and our revolts, where we can reveal our deepest fears and our strongest beliefs, where we can relate our lives to our faraway great grandchildren, thus allowing them to witness our times.” That’s 4 pages for every person who chooses to participate.
On board will be photographs detailing Earth's cultural richness, human blood encased in a diamond, and a durable DVD of humanity's crowdsourced thoughts. The idea is to launch the time capsule from an Ariane 5 rocket into an orbit more than 2,000 kilometers above Earth, hopefully sometime in 2014. "50,000 years ago, Man created art, thus showing his capacity for symbolic abstraction," the website notes. And in another 50,000 years: "Will Earth still give life? Will human beings still be recognizable as such?" Another logical question: will whatever's left on Earth know what's coming back to them, and will they be able to retrieve it?
Hayabusa2 and KEO will join capsules already launched into space on Pioneer 10 and 11 and Voyager 1 and 2. But the contents of these earlier capsules were picked by a handful of people; here, we get to choose what represents us in space, and will get to reflect (in theory) on the thoughts bound in time upon their return.
You, the mission controller and scientist: Short of going to Mars yourself, you can do the next best thing–tell an instrument currently observing Mars where to look. On NASA's Mars Reconnaissance Orbiter is the University of Arizona's High Resolution Imaging Science Experiment (HiRISE), a camera designed to image Mars in great detail. Dubbed "the people's camera," HiRISE allows you (yes, you!) to pick its next targets by filling out a form specifying your "HiWishes."
A recently launched nanosatellite is giving the crowdsourced winners of a screaming contest the chance to test whether screams can be heard in space. Launched in February, the nanosatellite's smartphone-powered brain will broadcast the screams–no word yet on results. But you may find just listening to the yelling therapeutic!
March 20, 2013
Update: Since the press release announcing Voyager 1's exiting the solar system, NASA has clarified that the final indicator of this event—a change in the direction of the magnetic field surrounding the craft—has still not been observed. As was first observed in December 2012, Voyager 1 is in a new outermost region of the solar system called "the magnetic highway," not true interstellar space. This post has been edited to reflect the clarification.
Since the dawn of the Space Age, our manned missions and unmanned probes have reached the Moon, asteroids and other planets. But only now do we have confirmation that a human-made object has reached a new milestone: The Voyager 1 space probe is at the furthermost edge of the solar system.
According to a paper recently accepted for publication by the journal Geophysical Research Letters, data transmitted by the probe—which is now more than 11 billion miles away from the Sun—reveal that it has exited the heliosphere. The heliosphere (also called the heliosheath) is the region of space influenced by the solar wind and is commonly accepted as the outer border of the solar system. Thirty-five years, 6 months and 15 days after its launch, the spacecraft will soon enter the second phase of its mission—studying the interstellar medium that exists between our galaxy's star systems.
Bill Webber of New Mexico State and F.B. McDonald of the University of Maryland (who has passed away since the paper was written) came to the conclusion after analyzing radiation data transmitted by Voyager 1 last August 25. The probe’s sensors detected that the levels of radiation from cosmic rays that had come from the Sun dropped to less than 1 percent of what they’d been previously, while radiation from galactic cosmic rays (which originate from beyond the solar system) doubled in intensity.
Although there is no exact boundary that defines the edge of the solar system, the point at which the Sun’s cosmic rays and galactic cosmic rays meet indicates the edge of the region dominated by our Sun’s solar wind, and thus the outside border of our star’s system. Webber says that the sudden change in radiation indicates Voyager 1 passed this point.
“Within just a few days, the heliospheric intensity of trapped radiation decreased, and the cosmic ray intensity went up as you would expect if it exited the heliosphere,” he said in a press release issued by the American Geophysical Union today. He also noted that it’s possible the probe hasn’t reached true interstellar space, but rather a separate, not-yet-understood region that lies in between our solar system and the interstellar medium.
Since its launch in 1977, the spacecraft has conducted a grand tour of the solar system, passing by and photographing Jupiter and Saturn and providing us with some of the first-ever close-ups of the gas giants. Voyager 2, a twin probe, visited Jupiter, Saturn, Uranus and Neptune, and is still firmly within the solar system for now, 9.4 billion miles away from the Sun.
In 2005, Voyager 1 entered the heliosheath (the region in which the solar wind begins to slow down due to encountering the interstellar medium), and last October, researchers reported that it may have left the heliosphere altogether. Soon afterward, though, scientists cautioned that it may not have exited the heliosphere’s outer boundary, because a shift in the direction of the magnetic field had not yet been detected.
Despite the announcement alongside the new paper, this may still be the case—Voyager 1 may have finally exited the heliosphere, but not yet entered interstellar space per se. According to NASA, “A change in the direction of the magnetic field is the last critical indicator of reaching interstellar space and that change of direction has not yet been observed.” Thus, the probe is in an unexpected region in between the heliosphere and interstellar space, previously referred to as a magnetic highway.
Either way, though, it's still in the starting stages of its journey, set to spend millennia—yes, millennia—traveling through the interstellar medium, though it will probably not be able to record or send back data after around 2025.
After an estimated 40,000 years, it will come relatively close (within a light year) to another star—and at that point, could serve as something of a time capsule. Voyager 1 carries a Golden Record designed to present a virtual snapshot of humankind to other life forms; it contains everything from images of DNA and the Taj Mahal to recordings of whale sounds and Chuck Berry's "Johnny B. Goode."
As Timothy Ferris wrote in Smithsonian last May when he reflected on the 35th anniversary of the Voyager mission, “The Voyagers will wander forever among the stars, mute as ghost ships but with stories to tell…Whether they will ever be found, or by whom, is utterly unknown.”
February 26, 2013
Picture a telescope orbiting in space, and your mind probably flies to the Hubble Space Telescope. At roughly 43 feet long and weighing 25,000 pounds, its footprint is the size of a small house and it’s just a little shy of the weight of a subway car. But not all satellite telescopes are behemoths–one launched yesterday from India, designed and developed by the Space Flight Laboratory of the University of Toronto Institute for Aerospace Studies, is roughly the size of a cooler you’d bring to a picnic.
The telescope is part of the Bright Target Explorer (BRITE) mission, an effort designed to observe stars and record changes in their brightness over time. Launched into orbit above the masking effects of our atmosphere, the telescope and its simultaneously launched twin will focus on the brightest stars–such as those in well-known constellations like Orion and the Big Dipper–looking for pulsations and reverberations in brightness that indicate spots on a star, a planet or another celestial object crossing its orbit, or flickering energy intensities within the star itself. These flickers, called “starquakes,” give clues to the composition and internal structure of stars.
BRITE's telescopes are nanosatellites, meaning that they weigh less than ten kilograms. At seven kilograms (about as heavy as a large bowling ball) and measuring 20 centimeters on each side, they are the smallest telescopes in orbit. The cubic satellites did not require a dedicated rocket to get there; they hitched a ride on India's Polar Satellite Launch Vehicle. Future launches of similar twin nanosatellites will help BRITE become a satellite constellation that scans the sky for different wavelengths of light pulsing from stars.
Nanosatellites, part of a recent trend to conduct space-based science at low cost and with fast results, “can be developed quickly, by a small team and at a cost that is within reach of many universities, small companies and other organizations,” said Cordell Grant, manager of satellite systems for the Space Flight Laboratory, in a statement. “A nano-satellite can take anywhere from six months to a few years to develop and test,” he added. In contrast, Hubble took more than 12 years to design and construct before it launched with space shuttle Discovery in 1990.
But nanosatellites aren’t the only kind of small satellites out there. Here are some other tiny orbiters:
First launched on the last flight of Endeavour, sprites–also called femtosatellites–are about the size of a postage stamp. Developed by Cornell University scientists, these satellites are in interplanetary space collecting data about chemistry, radiation and particle impacts. Lead engineer Mason Peck, now a chief technologist at NASA, told the Cornell University Chronicle that "Their small size allows them to travel like space dust." He added, "Blown by solar winds, they can 'sail' to distant locations without fuel."
The grapefruit-sized CubeSat, a type of picosatellite, measures 10 centimeters on each side. "I got a 4-inch beanie baby box and tacked on some solar cells to see how many would fit on the surface," Bob Twiggs, the satellite's lead designer, told Space.com. "I had enough voltage for what I needed so I decided that would be the size." Developed in 1999 with the help of Jordi Puig-Suari of California Polytechnic State University, along with students at Stanford University while Twiggs was a professor there, CubeSats are now the go-to small satellite. They appeal to universities–at roughly $65,000 to $80,000 a pop, they can fit within research budgets, allowing students the opportunity to design and build a research satellite.
Some, like GeneSat-1, provide life support for bacteria and are aimed at helping scientists learn more about how spaceflight affects the human body. Another, SwissCube-1, examines nightglow in Earth's atmosphere. Launched alongside BRITE, the STRaND-1–a string of 3 CubeSats stacked together–is the first smartphone-powered satellite ever launched into space. The Android phone that serves as the device's brain will run apps that will photograph its orbit, monitor the Earth's magnetic field, and–perhaps most exciting–will allow people to upload videos of themselves screaming to test whether sounds broadcast in space can be heard by the satellite playing them. Other CubeSats in development will help researchers understand space weather, phenomena that could short out the other satellites that orbit Earth.
It's interesting to remember that the first satellite–Sputnik-1, launched in 1957–was a 23-inch diameter sphere. These nano-, pico-, and femto-satellites harken back to those roots. But their size, cost, and ability to be developed quickly may make them the most useful satellites of the future. Hopefully they won't lead to oodles more space junk!
An X-ray telescope (XRT) is a telescope that is designed to observe remote objects in the X-ray spectrum. In order to get above the Earth's atmosphere, which is opaque to X-rays, X-ray telescopes must be mounted on high altitude rockets or artificial satellites.
X-ray telescopes can use a variety of different designs to image X-rays. The most common methods used in X-ray telescopes are grazing incidence mirrors and coded apertures. The limitations of X-ray optics result in much narrower fields of view than visible or UV telescopes.
The utilization of X-ray mirrors for extrasolar X-ray astronomy simultaneously requires:
- the ability to determine the arrival location of an X-ray photon in two dimensions, and
- a reasonable detection efficiency.
The mirrors can be made of ceramic or metal foil. The most commonly used grazing angle incidence materials for X-ray mirrors are gold and iridium. The critical reflection angle is energy dependent. For gold at 1 keV, the critical reflection angle is 3.72 degrees.
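As a rough illustration of why these mirrors must sit nearly edge-on to the incoming beam: for a given coating, the critical grazing angle falls off roughly in inverse proportion to photon energy. The Python sketch below assumes that simple 1/E scaling and the gold figure quoted above; it is an illustration only, not a real optical-design calculation:

```python
GOLD_THETA_C_AT_1KEV = 3.72  # degrees, the value for gold at 1 keV quoted above

def approx_critical_angle_deg(energy_kev):
    """Very rough critical grazing angle for a gold surface, assuming a 1/E scaling."""
    return GOLD_THETA_C_AT_1KEV / energy_kev

for energy in (1, 2, 8):
    print(f"{energy} keV: ~{approx_critical_angle_deg(energy):.2f} degrees")
# ~3.72, ~1.86, ~0.46 degrees: the higher the energy, the shallower the grazing angle must be
```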
A limit for this technology in the early 2000s with Chandra and XMM-Newton X-ray observatories was about 15 kilo-electronvolt (keV) light. Using new multi-layered coatings, computer aided manufacturing, and other techniques, the X-ray mirror for the NuSTAR telescope pushed this up to 79 keV light. To reflect at this level, glass layers were multi-coated with tungsten (W)/silicon (Si) or platinum (Pt)/silicon carbide (SiC).
Some X-ray telescopes use coded aperture imaging. This technique uses a flat aperture grille in front of the detector, which weighs much less than any kind of focusing X-ray lens, but requires considerably more post-processing to produce an image.
- a position-sensitive proportional counter (PSD) and
- a channel multiplier array (CMA).
Hard X-ray telescope
On board OSO 7 was a hard X-ray telescope. Its effective energy range: 7–550 keV, field of view (FOV) 6.5°, effective area ~64 cm2.
The Filin telescope, carried aboard Salyut 4, consisted of four gas flow proportional counters, three of which had a total detection surface of 450 cm2 in the energy range 2–10 keV, and one of which had an effective surface of 37 cm2 for the range 0.2–2 keV. The FOV was limited by a slit collimator to 3° x 10° FWHM. The instrumentation included optical sensors mounted on the outside of the station together with the X-ray detectors. The power supply and measurement units were inside the station. Ground-based calibration of the detectors occurred along with in-flight operation in three modes: inertial orientation, orbital orientation, and survey. Data were collected in 4 energy channels: 2–3.1 keV, 3.1–5.9 keV, 5.9–9.6 keV, and 2–9.6 keV in the larger detectors. The smaller detector had discriminator levels set at 0.2 keV, 0.55 keV, and 0.95 keV.
The hard X-ray and low-energy gamma-ray SIGMA telescope covered the energy range 35–1300 keV, with an effective area of 800 cm2 and a maximum sensitivity field of view of ~5° × 5°. The maximum angular resolution was 15 arcmin. The energy resolution was 8% at 511 keV. Its imaging capabilities were derived from the association of a coded mask and a position sensitive detector based on the Anger camera principle.
ART-P X-ray telescope
The ART-P X-ray telescope covered the energy range 4 to 60 keV for imaging and 4 to 100 keV for spectroscopy and timing. There were four identical modules of the ART-P telescope, each consisting of a position sensitive multi-wire proportional counter (MWPC) together with a URA coded mask. Each module had an effective area of approximately 600 cm2, producing a FOV of 1.8° x 1.8°. The angular resolution was 5 arcmin; temporal and energy resolutions were 3.9 ms and 22% at 6 keV, respectively. The instrument achieved a sensitivity of 0.001 of the Crab nebula source (= 1 "mCrab") in an eight-hour exposure. The maximum time resolution was 4 ms.
Focusing X-ray telescope
The Broad Band X-ray Telescope (BBXRT) was flown on the Space Shuttle Columbia (STS-35) as part of the ASTRO-1 payload. BBXRT was the first focusing X-ray telescope operating over a broad energy range 0.3–12 keV with a moderate energy resolution (90 eV at 1 keV and 150 eV at 6 keV). The two co-aligned telescopes each carried a segmented Si(Li) solid-state spectrometer (detectors A and B) composed of five pixels. The total FOV was 17.4 arcmin in diameter, with a central-pixel FOV of 4 arcmin in diameter. The total area was 765 cm2 at 1.5 keV and 300 cm2 at 7 keV.
XRT on the Swift MIDEX mission
The XRT on the Swift MIDEX mission (0.2–10 keV energy range) uses a Wolter I telescope to focus X-rays onto a thermoelectrically cooled CCD. It was designed to measure the fluxes, spectra, and lightcurves of Gamma-ray bursts (GRBs) and afterglows over a wide dynamic range covering more than 7 orders of magnitude in flux. The XRT can pinpoint GRBs to 5-arcsec accuracy within 10 seconds of target acquisition for a typical GRB and can study the X-ray counterparts of GRBs beginning 20–70 seconds from burst discovery and continuing for days to weeks.
The overall telescope length is 4.67 m with a focal length of 3,500 mm (3.5 m) and a diameter of 0.51 m. The primary structural element is an aluminum optical bench interface flange at the front of the telescope that supports the forward and aft telescope tubes, the mirror module, the electron deflector, and the internal alignment monitor optics and camera, plus mounting points to the Swift observatory.
The 508 mm diameter telescope tube is made of graphite fiber/cyanate ester in two sections. The outer graphite fiber layup is designed to minimize the longitudinal coefficient of thermal expansion, whereas the inner composite tube is lined internally with an aluminum foil vapor barrier to guard against outgassing of water vapor or epoxy contaminants into the telescope interior. The telescope has a forward tube which encloses the mirrors and supports the door assembly and star trackers, and an aft tube which supports the focal plane camera and internal optical baffles.
The mirror module consists of 12 nested Wolter I grazing incidence mirrors held in place by front and rear spiders. The passively heated mirrors are gold-coated, electroformed nickel shells 600 mm long with diameters ranging from 191 to 300 mm.
The X-ray imager has an effective area of >120 cm2 at 1.15 keV, a field of view of 23.6 x 23.6 arcmin, and angular resolution (θ) of 18 arcsec at half-power diameter (HPD). The detection sensitivity is 2 × 10⁻¹⁴ erg cm⁻² s⁻¹ in 10⁴ s. The mirror point spread function (PSF) has a 15 arcsec HPD at the best on-axis focus (at 1.5 keV). The mirror is slightly defocused in the XRT to provide a more uniform PSF for the entire field of view, hence the instrument PSF of θ = 18 arcsec.
Normal incidence X-ray telescope
History of X-ray telescopes
The first X-ray telescope employing Wolter Type I grazing-incidence optics was employed in a rocket-borne experiment in 1965 to obtain X-ray images of the sun (R. Giacconi et al., ApJ 142, 1274 (1965)).
The Einstein Observatory (1978–1981), also known as HEAO-2, was the first orbiting X-ray observatory with a Wolter Type I telescope (R. Giacconi et al., ApJ 230, 540 (1979)). It obtained high-resolution X-ray images in the energy range from 0.1 to 4 keV of stars of all types, supernova remnants, galaxies, and clusters of galaxies. HEAO-1 (1977-1979) and HEAO-3 (1979-1981) were others in that series. Another large project was ROSAT (active from 1990-1999), which was a heavy X-ray space observatory with focusing X-ray optics.
The Chandra X-Ray Observatory is among the recent satellite observatories launched by NASA and by the space agencies of Europe, Japan, and Russia. Chandra has operated for more than 10 years in a high elliptical orbit, returning thousands of 0.5 arc-second images and high-resolution spectra of all kinds of astronomical objects in the energy range from 0.5 to 8.0 keV. Many of the spectacular images from Chandra can be seen on the NASA/Goddard website.
NuSTAR, launched in June 2012, is one of the latest X-ray space telescopes. It observes high-energy X-rays (3–79 keV) at high resolution and is sensitive to the 68 and 78 keV emission from the decay of 44Ti in supernovae.
Gravity and Extreme Magnetism (GEMS) would have measured X-ray polarization but was cancelled in 2012.
- "Mirror Laboratory".
- NuStar: Instrumentation: Optics
- Spiga, D.; Raimondi, L. (2014). "Predicting the angular resolution of x-ray mirrors". SPIE Newsroom. doi:10.1117/2.1201401.005233.
- Hoff HA (Aug 1983). "Exosat — the new extrasolar x-ray observatory". J Brit Interplan Soc (Space Chronicle). 36 (8): 363–7. Bibcode:1983JBIS...36..363H.
- Mandrou P, Jourdain E. et al. (1993). "Overview of two-year observations with SIGMA on board GRANAT". Astronomy and Astrophysics Supplement Series (97). Bibcode:1993A&AS...97....1M.
- Revnivtsev MG, Sunyaev RA, Gilfanov MR, Churazov EM, Goldwurm A, Paul J, Mandrou P, Roques JP (2004). "A hard X-ray sky survey with the SIGMA telescope of the GRANAT observatory". Astron Lett. 30 (8): 527–33. arXiv:astro-ph/0403481. Bibcode:2004AstL...30..527R. doi:10.1134/1.1784494.
- "International Astrophysical Observatory "GRANAT"". IKI RAN. Retrieved 2007-12-05.
- "GRANAT". NASA HEASARC. Retrieved 2007-12-05.
- Molkov, S.V.; Grebenev, S.A.; Pavlinsky, M.N.; Sunyaev, R.A. (March 1999). "GRANAT/ART-P Observations of GX3+1: Type I X-Ray Burst and Persistent Emission". arXiv:astro-ph/9903089v1 [astro-ph].
- Burrows DN, Hill JE, Nousek JA, Kennea JA, Wells A, Osborne JP, Abbey AF, Beardmore A, Mukerjee K, Short ADT, Chincarini G, Campana S, Citterio O, Moretti A, Pagani C, Tagliaferri G, Giommi P, Capalbi M, Tamburelli F, Angelini L, Cusumano G, Bräuninger HW, Burkert W, Hartner GD (Oct 2005). "The Swift X-ray Telescope". Space Sci Rev. 120 (3-4): 165–95. arXiv:astro-ph/0508071. Bibcode:2005SSRv..120..165B. doi:10.1007/s11214-005-5097-2.
- Hoover, R. B.; Walker II, A. B. C.; Lindblom, J. F.; Allen, M. J.; O'Neal, R. H.; DeForest, C. E. (1992). Solar observations with the multispectral solar telescope array. In Hoover, Richard B. "Multilayer and Grazing Incidence X-Ray/EUV Optics". Proc. SPIE 1546: 175. doi:10.1117/12.51232.
- Kamijo N, Suzuki Y, Awaji M, et al. (May 2002). "Hard X-ray microbeam experiments with a sputtered-sliced Fresnel zone plate and its applications". J Synchrotron Radiat 9 (Pt 3): 182–6. doi:10.1107/S090904950200376X. PMID 11972376.
- Scientific applications of soft x-ray microscopy
What is Huber formula?
Huber's formula: V = H × S0.5, where S0.5 is the cross-sectional area at the mid-point of the log. Newton's formula (assuming D is the basal diameter, d is the top diameter, and d0.5 is the mid-section diameter): V = H × (D² + 4 × d0.5² + d²) × (π / 24)
How do you calculate the volume of a tree?
The volume of wood is the length of the pole times the cross-sectional area. A circle (the cross-section) has an area equal to π times the square of the radius. The radius is the circumference divided by 2π, so the cross-sectional area is the circumference squared divided by 4π. For a 20-foot pole with a 5-foot circumference, the volume of wood in cubic feet is therefore 20 × 5² ÷ (4π), or about 39.8 cubic feet.
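The 20 and 5 in that expression appear to assume a 20-foot pole with a 5-foot circumference; a quick Python check of the arithmetic (the dimensions are inferred for illustration, not stated in the original):

```python
import math

length_ft = 20        # assumed pole length
circumference_ft = 5  # assumed circumference

# cross-sectional area = pi * r^2 with r = C / (2*pi), which simplifies to C^2 / (4*pi)
volume_cu_ft = length_ft * circumference_ft ** 2 / (4 * math.pi)
print(round(volume_cu_ft, 2))  # about 39.79 cubic feet
```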
What is the formula for determining volume of a log?
Volume of Wood The cubic feet, or volume, of a cylindrical log is given by the volume of a cylinder V = πr²h. A log with a radius of 2 feet and a height of 10 feet would have a volume of about 125.66 cubic feet (or ft³).
What is the volume of this cylinder?
The formula for the volume of a cylinder is V = Bh, or V = πr²h. If the radius of the cylinder is 8 cm and the height is 15 cm, substitute 8 for r and 15 for h in the formula V = πr²h: V = π(8²)(15) = 960π ≈ 3,016 cm³.
How do you use Smalian formula?
A cubic volume formula used in log scaling, expressed as cubic volume = [(B + b)/2] L, where B = the cross-sectional area at the large end of the log, b = the cross-sectional area at the small end of the log, and L = log length. To estimate the cubic foot volume of logs, Smalian’s formula was used.
What is Prismoidal formula?
1) Prismoidal formula: here A1 and A2 are the areas at the two ends, Am is the area of the mid-section parallel to the ends, and L is the length between the ends. From mensuration, the volume of a prismoid whose end faces lie in parallel planes is V = (L/6) × (A1 + A2 + 4Am). This is known as the prismoidal formula.
What is bole volume?
Bole Volume Parameter (b0): calculates the amount of taper to the top of the first 16-foot log in a tree, i.e., the diameter at the top of that log expressed as a percentage of DBH (a value between 60 and 100).
What is Smalian formula?
[′smȯl·yəns ‚fȯr·myə·lə] (forestry) A cubic volume formula used in log scaling, expressed as cubic volume = [(B + b)/2] L, where B = the cross-sectional area at the large end of the log, b = the cross-sectional area at the small end of the log, and L = log length.
How big is the Huber’s volume in feet?
Huber's Volume: Example
Small end diameter = 6 in
Midpoint diameter = 8 in
Large end diameter = 9 in
Length = 16 ft
Huber's cubic volume = B1/2 × L
B1/2 = 0.005454 × (8 × 8) = 0.349 sq ft
Volume = 0.349 × 16 = 5.585 cu ft
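A short Python sketch (not part of the original text) that reproduces the Huber example above and, for comparison, applies Smalian's and Newton's formulas to the same log. The constant 0.005454 is π / (4 × 144), which converts a diameter in inches into a cross-sectional area in square feet:

```python
def area_sq_ft(diameter_in):
    """Cross-sectional area in square feet for a diameter given in inches."""
    return 0.005454 * diameter_in ** 2

def huber_volume(mid_d, length_ft):
    return area_sq_ft(mid_d) * length_ft

def smalian_volume(large_d, small_d, length_ft):
    return (area_sq_ft(large_d) + area_sq_ft(small_d)) / 2 * length_ft

def newton_volume(large_d, mid_d, small_d, length_ft):
    return (area_sq_ft(large_d) + 4 * area_sq_ft(mid_d) + area_sq_ft(small_d)) / 6 * length_ft

print(huber_volume(8, 16))          # about 5.58 cu ft, matching the worked example
print(smalian_volume(9, 6, 16))     # about 5.10 cu ft
print(newton_volume(9, 8, 6, 16))   # about 5.42 cu ft
```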
How is the Huber formula used to measure logs?
The Huber formula assumes that the average cross section area is at the midpoint of the log, but this is not always true. It is intermediate in accuracy but has limited use due to the impracticality of measuring diameter inside bark at log midlength.
How to calculate the volume of a log?
Calculate the volume of a log in cubic metres using the Huber formula. The Huber diameter is measured at the mid-section, but it can be estimated by adding the small-end and large-end diameters together and dividing by 2: (diameter at small end + diameter at large end) / 2 = DIB at mid-section.
Stoichiometry crash course: meaning of coefficients in a balanced equation, molar ratios, mole-mole calculations, mass-mass calculations, other stoichiometric calculations
The CC Academy videos provide easy, 101, crash course tutorials to give you step by step Chemistry help for your chemistry homework problems and experiments.
Check out our best lessons:
– Solution Stoichiometry Tutorial: How to use Molarity
– Quantum Numbers
– Rutherford’s Gold Foil Experiment, Explained
– Covalent Bonding Tutorial: Covalent vs. Ionic bonds
– Metallic Bonding and Metallic Properties Explained: Electron Sea Model
– Effective Nuclear Charge, Shielding, and Periodic Properties
– Electron Configuration Tutorial + How to Derive Configurations from Periodic Table
– Orbitals, the Basics: Atomic Orbital Tutorial — probability, shapes, energy
– Metric Prefix Conversions Tutorial
– Gas Law Practice Problems: Boyle’s Law, Charles Law, Gay Lussac’s, Combined Gas Law
—More on Stoichiometry | Wikipedia—
“Stoichiometry…is the calculation of relative quantities of reactants and products in chemical reactions.
Stoichiometry is founded on the law of conservation of mass where the total mass of the reactants equals the total mass of the products leading to the insight that the relations among quantities of reactants and products typically form a ratio of positive integers. This means that if the amounts of the separate reactants are known, then the amount of the product can be calculated. Conversely, if one reactant has a known quantity and the quantity of product can be empirically determined, then the amount of the other reactants can also be calculated….
CH4 + 2 O2 → CO2 + 2 H2O
Here, one molecule of methane reacts with two molecules of oxygen gas to yield one molecule of carbon dioxide and two molecules of water. Stoichiometry measures these quantitative relationships, and is used to determine the amount of products/reactants that are produced/needed in a given reaction. Describing the quantitative relationships among substances as they participate in chemical reactions is known as reaction stoichiometry. In the example above, reaction stoichiometry measures the relationship between the methane and oxygen as they react to form carbon dioxide and water.
Because of the well known relationship of moles to atomic weights, the ratios that are arrived at by stoichiometry can be used to determine quantities by weight in a reaction described by a balanced equation. This is called composition stoichiometry.
Gas stoichiometry deals with reactions involving gases, where the gases are at a known temperature, pressure, and volume and can be assumed to be ideal gases. For gases, the volume ratio is ideally the same by the ideal gas law, but the mass ratio of a single reaction has to be calculated from the molecular masses of the reactants and products. In practice, due to the existence of isotopes, molar masses are used instead when calculating the mass ratio.”
Wikipedia contributors. “Stoichiometry.” Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 27 May. 2016. Web. 27 May. 2016. |
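To make the mole-ratio and mass-mass bookkeeping described in the excerpt concrete, here is a small Python sketch using the methane combustion reaction mentioned above. The molar masses and function name are illustrative values chosen for this example, not taken from the video or the excerpt.

# Mass-mass calculation for CH4 + 2 O2 -> CO2 + 2 H2O (illustrative values)
MOLAR_MASS = {"CH4": 16.04, "O2": 32.00, "CO2": 44.01, "H2O": 18.02}  # g/mol

def grams_of_product(grams_reactant, reactant, product, mole_ratio):
    # grams of reactant -> moles of reactant
    moles_reactant = grams_reactant / MOLAR_MASS[reactant]
    # apply the mole ratio from the balanced equation (coeff. product / coeff. reactant)
    moles_product = moles_reactant * mole_ratio
    # moles of product -> grams of product
    return moles_product * MOLAR_MASS[product]

# How many grams of CO2 form from 10.0 g of CH4? (1:1 mole ratio)
print(round(grams_of_product(10.0, "CH4", "CO2", 1), 1))   # ~27.4 g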
COMP151 - Project 2 Prep Lab
This lab is all about gearing up for your second project by starting to put together the basic pieces of synthesized music. You’ve been given a file that contains functions for generating five types of sound waves: silence, sine, white noise, square, and triangle. Each function lets you specify duration in seconds. The sine, square, and triangle functions also let you set amplitude and frequency. The white noise function lets you specify amplitude and duration. In short, you can use them to generate notes and sounds for a given time duration. From there you can begin to build up full voices and songs by combining and blending them with other sounds. In this lab you’ll learn to use these wave forms, add another wave form, the sawtooth wave, to the library, and start building musical and rhythmic phrases.
- Don’t forget that each chapter ends with a Programming Summary section that provides a quick dictionary of all of the important function and encoding names introduced thus far. When you run into problems, also check the Common Bugs and Debugging Tips scattered throughout the chapters. If you’re stuck on an error, try looking at these items.
- As always switch the driver (person typing the code) and navigator (person watching, helping spot typos, etc.) on every new problem or every half hour, whichever comes first.
- Do all of the problems in a single python file. Put everyone’s name in a comment at the top of the file. Label the start of each problem with a comment.
Take a few moments to generate, play, and explore a few different sounds with each of the available waves. Get used to their sounds, their differences, and be certain you know how to use them.
In the code for square waves and triangle waves you see an expression like a = (a+1)%b. This is used to track the number of samples that have been produced in order to properly “turn around” in the middle of the cycle. It’s an alternative approach to what you see in the book and worth a closer look. First things first, remember that the % operator computes the remainder. For example, 5 % 2 is 1 because 5 has a remainder of 1 when divided by 2. Now, let’s just say that a starts with a value of 0 and b is 4. Write down what happens if you do a = (a+1)%b seven times. Then, describe how this kind of pattern works in the context of the triangle and square wave functions, i.e., how doing this lets you properly track cycle positions and turn-around points.
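If it helps to see the pattern before writing it down, this tiny standalone snippet runs the update seven times with the starting values described above:

a, b = 0, 4
for step in range(7):
    a = (a + 1) % b        # wraps back to 0 on every b-th update
    print(step + 1, a)     # a takes the values 1, 2, 3, 0, 1, 2, 3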
Write a function that takes frequency, amplitude, and duration (in secs) as its inputs and returns a sawtooth wave with those properties. It should work just like the sine, square, and triangle functions but produce a different wave form. A sawtooth wave begins its cycle at half its maximum amplitude and then steadily decreases down to the negative of half the maximum amplitude by the end of the cycle. In other words, it starts at a positive value and steadily decreases a full amplitude into the negative by the end of its cycle. When the cycle repeats it starts back right at the top. This sudden jump from minimum back up to maximum, followed by a steady decline to the minimum again, creates the characteristic sawtooth pattern.
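The exact signature and sample format depend on the course’s sound library, which isn’t reproduced here, so the sketch below is only one possible shape for the answer: it assumes a 44,100 Hz sample rate and returns a plain list of float samples, following the cycle description above (start at +amplitude/2, fall linearly to -amplitude/2, then jump back to the top). Adapt it to whatever representation the provided wave functions use.

SAMPLE_RATE = 44100   # assumed samples per second; match the course library's rate

def sawtooth(frequency, amplitude, duration):
    # Descending sawtooth: each cycle starts at +amplitude/2 and falls
    # linearly to -amplitude/2, then jumps back to the top.
    samples = []
    samples_per_cycle = SAMPLE_RATE / frequency
    for i in range(int(duration * SAMPLE_RATE)):
        position = (i % samples_per_cycle) / samples_per_cycle   # 0.0 -> 1.0 within a cycle
        samples.append(amplitude / 2 - amplitude * position)
    return samples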
Music can be broken down into beats. The speed, or tempo, of music is often measured in beats per minute (bpm), with slower tempos around 70 bpm, moderate tempos around 110 bpm, and fast tempos around 140 bpm. We want to use our wave generators to produce beats of sound, so we need to know how to go from bpm to seconds. First, figure out the duration, in seconds, of one beat of music if the tempo is 120 bpm and again if the tempo is 70 bpm. Now, write a python function named secondsPerBeat that takes in a bpm value and returns the number of seconds per beat. Be certain your function returns a floating point number and not an integer. Check your function against the values you computed for 120 bpm and 70 bpm.
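For reference, one way to write the conversion is simply 60 seconds per minute divided by beats per minute:

def secondsPerBeat(bpm):
    # 60 seconds in a minute, divided by the number of beats in a minute
    return 60.0 / bpm

print(secondsPerBeat(120))   # 0.5 seconds per beat
print(secondsPerBeat(70))    # ~0.857 seconds per beat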
We’ll typically want to construct a single beat out of a combination of silence and sound. For example, a drum-like sound might come from repeating a pattern of half a beat of noise followed by half a beat of silence. More rhythmic patterns often come from putting sound in the later part of a beat rather than the beginning. Finally, we’ll need to combine beats into full phrases of music. We could think in terms of concatenating sound, i.e. just gluing short sounds together end to end, but it will be more efficient if we instead think in terms of copying sound into silence. As a bonus, we can use the general copy function shown in program 106 to get this done! Now, copy program 106 from the text into your program, but modify it so that the starting location for the copy is given as a floating point number of seconds rather than a sample number. For example, passing 1.5 in for start means start copying 1.5 seconds into the target sound.
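Program 106 isn’t reproduced here, so the sketch below is only a stand-in that shows the key change: converting the floating-point start time in seconds into a sample index. It assumes sounds are plain lists of samples at an assumed 44,100 Hz rate; your version should modify the book’s actual copy function instead.

SAMPLE_RATE = 44100   # assumed; use the rate the course library uses

def copy_into(source, target, start_seconds):
    # Copy the samples of `source` into `target`, starting `start_seconds`
    # seconds into `target` rather than at a raw sample number.
    start_sample = int(start_seconds * SAMPLE_RATE)   # seconds -> sample index
    for i, sample in enumerate(source):
        if start_sample + i < len(target):            # stay inside the target sound
            target[start_sample + i] = sample
    return target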
Use your new copy function to construct the 16 beat percussion pattern described below at a tempo of 120 bpm. Our “instrument” generates a quarter of a beat of white noise each time we play it. Our rhythmic pattern begins by repeating the following 4 beat pattern three times: play the instrument once at the start of the first beat and again halfway through the first beat, and then for the next three beats play it once at the halfway point of the beat. Like this:
Finally, we end by starting that same pattern but leaving out the final three sounds, i.e. play the instrument on the first beat and half-beat then wait for three beats of silence. Like this:
- Play around! Make new percussive beats and/or start making melodic phrases using the other wave forms at different frequencies. Just pick a tempo and work by mixing sound and silence in half, quarter, or even sixteenth beat increments. Create at least four beats of sound and save that to a wav file. Maybe go for something like the starting bass line from “Another One Bites the Dust” by Queen using triangle waves? Don’t worry about musicality or something that sounds good if music isn’t your thing. Just make something pattern oriented that includes a mix of sound and silence and you’re good to go.
When you’re done, print and turn in your code and email your wav file from number 6 to the instructor. |
A circle is easy to make:
Draw a curve that is "radius" away from a central point.
All points are the same distance from the center.
You Can Draw It Yourself
Put a pin in a board, put a loop of string around it, and insert a pencil into the loop. Keep the string stretched and draw the circle!
Radius, Diameter and Circumference
The Radius is the distance from the center outwards.
The Diameter goes straight across the circle, through the center.
The Circumference is the distance once around the circle.
And here is the really cool thing:
When we divide the circumference by the diameter we get 3.141592654...
which is the number π (Pi)
So when the diameter is 1, the circumference is 3.141592654...
We can say:
Circumference = π × Diameter
Example: You walk around a circle which has a diameter of 100 m. How far have you walked?
Distance walked = Circumference = π × 100m
= 314m (to the nearest m)
Also note that the Diameter is twice the Radius:
Diameter = 2 × Radius
And so this is also true:
Circumference = 2 × π × Radius
The length of the words may help you remember:
- Radius is the shortest word and shortest measure
- Diameter is longer
- Circumference is the longest
The circle is a plane shape (two dimensional), so:
The area of a circle is π times the radius squared, which is written:
A = π r²
To help you remember think "Pie Are Squared" (even though pies are usually round):
Or, using the Diameter:
A = (π/4) × D²
Example: What is the area of a circle with radius of 1.2 m? Area = π × r² = π × 1.2² = π × 1.44 ≈ 4.52 m²
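A quick numeric check of both worked examples (the function names below are just for illustration):

import math

def circumference(diameter):
    return math.pi * diameter

def area(radius):
    return math.pi * radius ** 2

print(round(circumference(100)))   # walking around a 100 m diameter circle: ~314 m
print(round(area(1.2), 2))         # radius 1.2 m: ~4.52 square metres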
Area Compared to a Square
A circle has about 80% of the area of a similar-width square.
The actual value is (π/4) = 0.785398... = 78.5398...%
Because people have studied circles for thousands of years special names have come about.
Nobody wants to say "that line that starts at one side of the circle, goes through the center and ends on the other side" when a word like "Diameter" will do.
So here are the most common special names:
A line that goes from one point to another on the circle's circumference is called a Chord.
If that line passes through the center it is called a Diameter.
A line that "just touches" the circle as it passes by is called a Tangent.
And a part of the circumference is called an Arc.
There are two main "slices" of a circle.
The "pizza" slice is called a Sector.
And the slice made by a chord is called a Segment.
The Quadrant and Semicircle are two special types of Sector:
Quarter of a circle is called a Quadrant.
Half a circle is called a Semicircle.
Inside and Outside
A circle has an inside and an outside (of course!). But it also has an "on", because we could be right on the circle.
Example: "A" is outside the circle, "B" is inside the circle and "C" is on the circle. |
Nicole Messier, CATE Instructional Designer
April 15th, 2022
WHAT?
Authentic assessments involve the application of knowledge and skills in real-world situations, scenarios, or problems. Authentic assessments create a student-centered learning experience by providing students opportunities to problem-solve, inquire, and create new knowledge and meaning.
Elements of Authentic Assessments
There are several elements to consider that make an assessment more “authentic” (Ashford-Rowe, 2014; Grant, 2021; Wilson-Mah, 2019), including:
- Accuracy and validity – The accuracy of the assessment refers to how closely it resembles a real-world situation, problem, disciplinary norm, or field of study. The assessment validity refers to the alignment of grading criteria to the learning objectives, transferable skills (e.g., communication, critical thinking, etc.), workforce readiness skills, and disciplinary norms and practices.
- Demonstration of learning – The outcomes of an assessment should allow students to demonstrate learning in ways that reflect their field of study, for example, a performance or a product that is authentic to their future career. Or the assessment should allow for student choice based on interests and skills; for example, one group of students decides to create a podcast to demonstrate their learning in general education coursework.
- Transfer of knowledge – The assessment should provide the transfer of knowledge from theory to practice and from one task or experience to another. For example, students might write a blog post about a scientific principle demonstrated in current events, replacing a traditional essay or paper on that principle.
- Metacognition – The process of reflecting on learning should be purposefully planned for students to make connections to prior knowledge, experiences, and different subject areas. For example, metacognition can be encouraged in authentic assessments by asking students to evaluate their progress, self-assess their product or performance, and reflect on their thought processes and learning experiences during the authentic assessment.
- Collaboration – The assessments should provide opportunities for interaction that are aligned to the real-world situation. For example, if the task is typically completed by a team in the field, then the assessment should be completed collaboratively by a group.
- Flexibility – The assessment should provide flexibility in the timeline and due dates for meeting project benchmarks and deliverables to align with real-world tasks. For example, if the task would take a few weeks to complete while working full time then the timeline in the course should reflect this timing to ensure authenticity and manageability.
- Environment and tools – The environment and tools used to provide the assessment should be like the environments and tools in the students’ field of study or aligned with a real-world situation. For example, students taking a graphic design course utilizing software that is used in their field to create typography, logos, etc., or medical students practicing authentic tasks in a simulation room to mirror a hospital room.
Authentic assessments can also be referred to as alternative assessments or performance-based assessments. All of these assessments are considered “alternatives” to traditional high-stakes tests or research papers, and are based on the constructivist theory where students actively construct new meaning and knowledge.
Also, it is important to understand that authentic assessments can be used to assess students both formatively (during instruction) and summatively (when the instruction is over). Want to learn more about formative assessments or summative assessments? Please visit the Assessment & Grading Practices teaching guides in the Resources section of the CATE website.
Types of Authentic Assessments
Authentic assessments can be designed using different teaching methods like inquiry-based learning, project-based learning, problem-based learning, scenario-based learning, or design-based learning. Select each of the headings below to learn about how these teaching methods can support your design of authentic assessments.
Inquiry-based learning involves the process of research and experimentation with complex questions and problems. Inquiry-based learning is structured around phases similar to the scientific method where students develop questions, experiment, and evaluate.
Elements of Inquiry-based Learning
- Identifying a problem or question.
- Making predictions or formulating hypotheses.
- Active construction of new knowledge through testing, research, and experimentation.
- Communication and discussion of results and new knowledge.
- Evaluation of process, data interpretation, and self-reflection.
The focus of inquiry-based learning is scientific thinking and reasoning. The process students use to discover new information can vary based on the type of inquiry process you select to use in the course.
One example of an inquiry process is the 5E model:
- Engagement Phase – connections are made to past and present learning.
- Exploration Phase – students engage in testing, research, or experimentation.
- Explanation Phase – students communicate and demonstrate their learning.
- Elaboration Phase – instructor extends students’ learning with new activities.
- Evaluation Phase – students self-assess and reflect on learning.
Inquiry-based learning can be designed for science courses such as natural sciences, social science, or health science courses. Grading of inquiry-based learning could be centered around the metacognition and critical thinking documented during the inquiry process as well as the deliverables submitted during each phase of the inquiry process.
Example – Inquiry-based Learning
An instructor decides to use inquiry-based learning during lab work in a physics course. Instead of providing students with step-by-step instructions on how to complete the lab, students are allowed to decide what data to collect, how to collect it, and how to analyze it to explain the physics principle or phenomenon. The instructor notices that student interactions increase as students voice their opinions and facilitate decision-making with their group (Nutt, 2020). Please see the Additional Resources section for more information on this example.
Please note that in some cases, inquiry-based learning is used as an umbrella term that encompasses numerous forms of inquiry learning like problem-based, scenario-based, and design-based learning. In this teaching guide, inquiry-based learning is modeled after research aligned with the scientific method and experimentation.
Problem-based learning involves a dilemma or problem that needs to be solved. The problem-based learning experience is structured around the research process and the discovery of solutions.
Elements of Problem-based Learning
- Application of learning to real-world situations – the context of the problem.
- Alignment of learning objectives – the purpose behind the problem.
- Creates new knowledge while retrieving previous experiences and knowledge – the investigation of solutions to the problem.
- Communication of findings and/or collaboration with peers – the discussion or defense of solutions to the problem.
- Feedback and metacognition – how the problem improved student learning.
The focus of problem-based learning is typically on the research journey to solve real-world problems. This research journey involves an examination of previous knowledge, collection of new information, analysis, and determination of possible solutions. Grading of this type of problem-based learning could center around the documentation of the research process and the critical thinking used to determine solutions based on research.
Problem-based learning can also be designed for major coursework (e.g., a patient problem in medical training). Students might be directed to determine one solution to the proposed problem and then students present their solutions and receive peer and instructor feedback on their presentation of the problem and solution. Grading of this type of problem-based learning could center around students’ ability to present the problem and defend the solution with research-based evidence.
Example – Problem-based Learning
An instructor decides to use problem-based learning in a teacher education course. The instructor creates several student personas with different learning problems. Students work in small groups during class to discuss the student persona and brainstorm ideas on the student persona’s learning problem based on prior knowledge. Students decide roles and the steps to complete the assessment. During the next class session, each small group explains their student persona’s diagnosed learning problem and describes examples of differentiation and scaffolding to adapt instruction to improve the student persona’s learning. Students receive feedback from their peers as well as the instructor.
Scenario-based learning involves a real-world scenario that prompts student learning. Scenario-based learning provides students opportunities to draw on previous experience and knowledge to complete authentic tasks.
Elements of Scenario-based Learning
- Realistic scenarios
- Contextualize learning from theory to application
- Incorporates retrieval of previous experience and knowledge
- Completion of authentic tasks to address the scenario
- Authentic tasks show alignment to learning objectives and workforce readiness
The focus of scenario-based learning is the application of learning in real-world scenarios through authentic tasks to demonstrate learning objectives, workforce readiness, and transferable skills (e.g., communication, critical thinking, etc.). Grading of scenario-based learning could be centered around the demonstration of learning objectives and workforce readiness through authentic tasks.
Scenario-based learning can be designed for major coursework in undergraduate and graduate programs, as well as undergraduate general education coursework. In major coursework, students can develop workforce readiness while demonstrating proficiency in learning objectives during the scenario-based learning. In undergraduate general education coursework, scenario-based learning can provide an understanding of the assessment’s importance which can improve student engagement and motivation, as well as support student development of transferable skills.
Example – Scenario-based Learning
An instructor decides to use scenario-based learning in a general education writing course. The instructor designs scenarios for students to understand audience-centered writing. An example of a writing scenario could involve a historical event or person, where students write a letter providing advice to a historical person or take on the role of a historical person to suggest ways to address the historical event. Another example of a writing scenario could involve a human resource problem at a company, where students are asked to create a memo or policy to address the problem. These scenarios provide students with a real-world context for a specific audience and purpose for each formative assessment (Golden, 2018).
Project-based learning involves student interest, choice, and autonomy to create a student-centered experience. Project-based learning can be completed individually or collaboratively. If project-based learning is completed collaboratively, then a group of students works together to demonstrate the application of their collective knowledge and experiences.
Stages of Project-based Learning
- Project planning – the student or group determines how they will demonstrate the learning objectives through a selected format (product or performance).
- Project starts – the student or group researches topics aligned to the learning objectives and analyzes the research collected, or practices skills and prepares for the performance.
- Formative feedback – the student or group receives formative feedback on the project as well as self-assess their progress.
- Completion of the project – the student or group adjusts the project based on feedback and completes the product or performance preparation.
- Presentation – the student or group presents the product or performance to the class (synchronously or asynchronously).
- Reflection – the student or group reflects on learning and experience for metacognition and provides the instructor with feedback on the process.
- Assessment of the project – the student or group receives feedback from the instructor and/or peers and receives a grade on the project.
The focus of project-based learning is the application and assimilation of knowledge that is demonstrated in a product or performance. Students select the product or performance in project-based learning based on their interests and skills. The final product or performance is used as the summative assessment to confirm student outcomes and the project plan will have a timeline for submitting deliverables for formative feedback.
Project-based learning can be designed for major coursework in undergraduate and graduate programs, as well as undergraduate general education coursework. Allowing for student choice on how students demonstrate learning can help motivate and engage students in undergraduate general education coursework. In major coursework, students can demonstrate their proficiency in the learning objectives, professionalism, and transferable skills (e.g., communication, critical thinking, etc.) during the project.
Example – Project-based Learning
An instructor decides to create a summative authentic assessment using project-based learning in a social sciences course. The instructor provides a list of societal issues aligned with the learning objectives that students will select from, or students have the option of submitting a different societal issue with an explanation of how it aligns with the learning objectives. Next, students will select the product or performance to demonstrate their learning. Students will then create a project plan and submit their plan to receive feedback from the instructor. Students adapt their project plan based on instructor feedback, begin research on the societal issue, and complete the product or performance to demonstrate their learning. Lastly, students present their product or performance asynchronously using a video recording tool like VoiceThread for feedback and grading.
Design-based learning (or design thinking) involves creativity, critical thinking, and brainstorming to solve human-centered problems. Design-based learning provides opportunities to collaboratively engage with peers to innovate and determine solutions. The process students use to ideate can vary based on the type of design process you select to use in the course.
One example of design-based learning
- Empathize – students focus on human-centered experiences and learn about their audience.
- Define – students define personas (e.g., who will benefit from the innovation, who will be the end user of the product or service, or who might be the customers to attract), goals, and objectives.
- Ideate – students brainstorm without judgment of ideas.
- Prototype – students develop an outline, sketch, flowchart, model, role-play, etc.
- Test – students implement the prototype and receive feedback (self, peer, and instructor).
- Reflect and redesign – students reflect on their learning process and refine or redesign the prototype.
The focus of design-based learning is to foster students’ ideation, curiosity, openness to new ideas, and comfort with ambiguity. Design-based learning can be implemented in major coursework in design fields like industrial design, environmental design, architecture, graphic design, and engineering, as well as in human-centered fields like law, psychology, anthropology, and business.
Example – Design-based Learning
An engineering or architectural instructor decides to incorporate design-based learning activities into scheduled class time. Each design-based learning activity begins with a class discussion of a human-focused problem and personas (people who are impacted by the problem). For example, the instructor shows a picture of a public building and asks students to identify personas who might find the building inaccessible. Students spend time empathizing and defining the personas and goals of their redesign of the entrance. Next, students begin the ideation nonverbally using an asynchronous interactive board (Padlet, Jamboard, Trello, etc.) during class and then continue to ideate over the next few weeks. In a subsequent class, the instructor guides students through a discussion to determine the top ideas for solving the problem. Each group selects one idea to design and test. Students submit the prototype and reflection on the process for feedback and grading.
WHY?
Impact of Authentic Assessments
Authentic assessments have the potential to improve student self-efficacy (belief in own capacity), performance, and learning.
- Self-efficacy and confidence – in a review of research completed on fifteen studies of project-based learning, 90% of the students reported improved confidence and were optimistic that they could implement project-based learning in future careers (Indrawan, 2019).
- Higher grades – In a general education writing course, students who participated in scenario-based learning showed consistently higher averages (one to two letter grades higher) than students who did not receive scenario-based learning (Golden, 2018).
- Engagement and retention – authentic assessments have shown improved student engagement and learner retention through participation in authentic assessments.
- Direct evidence – authentic assessments provide direct evidence of students’ learning and skills for instructors and students to better understand the learning taking place and plan the next steps for instruction and learning.
- Student diversity – authentic assessments allow students to demonstrate their unique abilities, lived experiences, interests, and social identities.
- Real-world artifacts – authentic assessments provide students with authentic tasks that can be utilized in professional portfolios, resumes, or interviews.
Workforce Readiness and Graduate Attributes
Authentic assessments’ impact has also been viewed through the lens of workforce readiness and graduate attributes. For example, in a project-based learning experience, 78% of students reported that the experience prepared them to be workforce ready because of the real-world practice they received through the authentic assessment (Indrawan, 2019).
Several graduate attributes have been identified as outcomes of authentic assessment participation (Foss, 2021; Indrawan, 2019; Karunanayaka, 2021; Elliott-Kingston, 2018; Murphy, 2017; Rowan, 2012), including:
- Open-mindedness – students who participate in authentic assessments learn to be receptive to the diversity of ideas and multiple perspectives.
- Comfort with ambiguity – students who participate in authentic assessments learn to live with uncomfortableness as they construct new knowledge and meaning.
- Ability to engage in an iterative process – authentic assessments provide students with opportunities to ideate, evaluate, and reflect on ideas and learning. Students develop effective problem-solving skills through this iterative process that includes idea incubation.
- Creativity – authentic assessments positively reinforce students’ creativity through the inquiry process.
- Learn to fail – authentic assessments provide formative feedback to help students build resiliency and strengthen their self-efficacy even when faced with failure.
- Take risks – authentic assessments encourage student risk-taking, and the instructor provides a safe and supportive learning environment for taking risks.
- Search for multiple answers – students learn how to brainstorm ideas and develop numerous solutions to address problems.
- Internally motivated – authentic assessments support students’ internal motivation by providing opportunities for student choice based on their interests and future careers. Students develop metacognition and self-regulation skills as they reflect on their motivations, interests, and learning.
- Take ownership of their learning – authentic assessments foster student ownership and autonomy. Students develop scholarship and a commitment to life-long learning through participation in authentic assessments.
- Leadership – authentic assessments foster leadership, professionalism, and decision-making skills as students self-direct their learning and performance.
- Citizenship and empathy – in many cases, authentic assessments ask students to reflect on an audience, end-user, or global community when solving a problem or designing a product. These experiences help to foster citizenship and empathy.
HOW?
Considerations for Authentic Assessments
There are several variables that you should consider as you begin to design an authentic assessment:
- The education and experience level of students – consider how you will support students who may not have the professional skills yet to complete the authentic tasks (see the Student Success during Authentic Assessments in the HOW section of this guide).
- The subjectivity of authenticity – consider how you will ensure that the designed assessment is authentic to the students. Please note that authenticity is subjective in nature; this means that what one person views as authentic might not be regarded the same by another (see the Elements of Authentic Assessments in the WHAT section of this guide for ways to make your assessment more authentic). Will you provide students with an opportunity to give you feedback to improve authenticity? Will you engage with practitioners in the field to ensure the authenticity of scenarios, problems, or prompts?
- Complexity – consider how you will ensure that the assessment’s level of complexity is aligned to the learning objectives, course outcomes, and real-world situation, problem, or field of study.
- Instructor’s role – consider how you will interact with students during the authentic assessment (see the Student Success during Authentic Assessments in the HOW section of this guide). How will you ensure that your role supports the education and experience level of your students? Will you provide guidance, facilitation, or direct instruction during the authentic assessment?
- Student ownership and choice – consider what level of student responsibility and choice that will be present in the authentic assessment. Will students have minimal responsibility if you are using direct instruction, or will the students have higher levels of responsibility if you are guiding student-directed inquiry? Will students have the opportunity to choose how they will demonstrate their learning with a final product or performance?
- Formative feedback – consider how students will receive formative feedback during the authentic assessment. Who will provide the formative feedback (instructor, TA, peers, or self)?
- Manageability – consider the manageability of the authentic assessment regarding class size and course modality.
- In large class sizes consider incorporating authentic assessments through partner or group work to reduce grading and feedback time as well as encourage communication and collaboration skills of students.
- In online courses consider incorporating asynchronous peer review to provide opportunities for student interaction and feedback.
- Alignment of assessments and instruction – consider how you will utilize authentic learning instruction to support student achievement in authentic assessments. For example, if using design-based learning during a group assignment then consider utilizing design thinking during your lectures and activities.
Authentic Assessment Products or Performances
There are numerous types of products and performances to choose from when designing an authentic assessment. This is not an all-encompassing list of authentic products or performances, but more of a starting point for ideas. Instructors should also consider allowing students or groups to brainstorm ideas for products or performances and self-select a format.
Writing for an Actual Audience
- Action plan
- Analysis – Gap, SWOT, Comparative
- Article for a professional publisher
- Blog article
- Business report
- Children’s story
- Executive summary
- External document
- Fact sheet
- Fictional short story
- Historical fiction
- Internal document for communication – memo
- Letter to…
- Literary analysis
- Media review
- Outline for meeting, training, or presentation
- Pamphlet or brochure
- Podcast narrative
- Presentation slides and speaker notes
- Research paper
- Short story
- Song lyrics
- Script for presentation, skit, or role playing
Performances
- Conference presentation
- Dance performance
- Music performance
- Oral report
- Panel discussion
- Play performance
- Poetry performance
- Recorded interview
- Role playing
- Routine – exercise, cheer, aerobic, gymnastics
- Teaching a skill
- Video presentation
Design of Products
- Drawings or sketches
- Physical model
- Project plan
Creation of Products
- Animation video
- Assessment tool – checklist, rubric
- Dance choreography
- Data display – spreadsheet
- Musical piece
- Visuals – chart, graph, Venn diagram
- Peer review
- Work samples
GETTING STARTED
The following steps will support you as you develop an authentic assessment:
- 1) The first step is to utilize backward design principles by aligning the authentic assessments to the course learning objectives, disciplinary norms, practices, and transferable or workforce readiness skills.
- a) What should students know and be able to do?
- b) What are your learning objectives and course outcomes?
- c) Are there disciplinary norms or practices that should be incorporated into the authentic assessment?
- d) Are there transferable skills or workforce readiness skills that should be incorporated into the authentic assessment?
- 2) The second step is to determine the goals of this authentic assessment.
- a) Will the authentic assessment allow students to demonstrate proficiency in the learning objectives as well as develop self-regulation and metacognition skills?
- b) Will the authentic assessment have opportunities for practice and feedback?
- c) Will the authentic assessment collect valid and reliable data to confirm student outcomes?
- 3) The third step is to develop the authentic assessment by determining:
- a) Authenticity – What elements of the assessment will make it authentic (see Elements of Authentic Assessments in the WHAT section of this guide)?
- b) Format – Will the format be a product or performance? Will the format be student-selected or instructor-selected?
- c) Students’ and instructor’s role – What will be the level of responsibility for student ownership of learning? What forms of guidance and authentic learning will you provide for student support?
- d) Timeline and Progress – What will be the timeline for the authentic assessment? How will progress be monitored by the students and instructor?
- e) Deliverables – What items or elements of the authentic assessment will be graded?
- f) Feedback – What will be the frequency of feedback? Who will provide the feedback? Will there be an opportunity for students to provide feedback to the instructor on their experience?
- g) Grading – What are the grading criteria for this authentic assessment? How will these criteria be explained so that students understand the expectations?
- 4) The fourth step is to review data collected from the authentic assessment and reflect on the implementation of the authentic assessment to inform continuous improvements for equitable student outcomes.
Want to learn more about assessments? Please visit the other Assessment & Grading Practices teaching guides and the Resources Section of the CATE website to review resources and more. Would you like support in designing an authentic assessment? Consider scheduling an online or in-person instructional design consultation.
Student Success during Authentic Assessments
A well-planned and communicated authentic assessment will help improve student performance and student satisfaction during the authentic assessment.
Communication of Authentic Assessments
Consider providing an overview of the authentic assessment that demonstrates alignment to the course and learning objectives, as well as possible disciplinary norms and practices. This overview can also help explain how students’ participation in the authentic assessment will provide them with the opportunity to practice transferable and workforce readiness skills. Additionally, this information can help create buy-in, improving student motivation and engagement during the authentic assessment.
Consider creating a timeline of the authentic assessment that includes the following information:
- Start date for authentic assessment
- Due dates for the submission of deliverables
- Dates for formative feedback and progress monitoring
- The final due date for authentic assessment product or performance
- Date for summative feedback and grade
Consider providing a detailed list of the required deliverables for the authentic assessment. For example, if utilizing project-based learning then the deliverables might include:
- The project plan
- Draft(s) of the project with formative feedback
- Completed project
- Presentation of project
- Reflection on process
- Self-assessment of final project and presentation
Expectations and Grading for Authentic Assessments
Defining grading criteria is one way to support students’ understanding of expectations during the authentic assessment. Grading criteria refer to what students will do (performance) and what instructors will measure and score. Once you have determined what students will submit for grading (the deliverables) then you can communicate expectations for each deliverable by listing the grading criteria and total points for each criterion.
For example, if utilizing project-based learning then one deliverable might be the project plan. The project plan might be worth 50 points and the grading criteria and total points for each criterion might include:
- Project question or problem – 10 points
- Proposed materials or research – 15 points
- Proposed product or performance – 10 points
- Proposed process of design – 15 points
You might consider taking the grading criteria for a deliverable and expanding on the information by utilizing a rubric. Rubrics can help you describe the varying levels of performance for each grading criterion.
For example, you can describe the criterion: project question or problem (worth ten points) in three levels of performance.
- Proficiency – project question or problem is fully developed and demonstrates a clear alignment to the learning objectives (ten points).
- Developing – project question or problem is adequately developed and demonstrates alignment to the learning objectives (seven points).
- Needs revision – project question or problem isn’t developed enough to support the project and/or is not aligned to the learning objectives. Please revise and resubmit (six or fewer points).
The description of the performance levels will help students understand what the expectations are for each component of the authentic assessment. You can develop a rubric with one, two, three, or more levels of performance. The criterion performance levels can be displayed in Blackboard by utilizing the rubric tool. Want to learn more about rubrics and assessment tools in Blackboard? Please visit the Blackboard Assessments & Grading page in the EdTech section of the CATE website.
Facilitation and Guidance during Authentic Assessments
Consider the varying levels of student responsibility and instructor facilitation that can be offered during an authentic assessment, examples include:
- Direct instruction – the instructor provides the question or problem, materials, process, or design, as well as directs the analysis and facilitates the drawing of conclusions. This type of instruction provides the most structure, scaffolding (support), and guidance during the authentic assessment.
- Structured authentic assessment – the instructor provides the question or problem, materials, process, or design, but the students direct the analysis with support from the instructor and draw conclusions based on their analysis. This type of instruction allows for students to create new meaning or knowledge while being guided through a structured authentic assessment.
- Guided authentic assessment – the instructor provides the question or problem, and materials and the students determine the process or design, as well as direct the analysis and draw conclusions. This type of instruction allows for student autonomy with an instructor-selected focus on a specific question or problem.
- Student-directed authentic assessment – the instructor provides the learning objectives or course outcomes, and then the students determine the question or problem, materials, process or design, analysis, and conclusions. This type of authentic assessment provides the least amount of structure but can still contain scaffolding and guidance from the instructor through reminders and feedback.
Consider how you will encourage students’ ability to self-direct their learning while providing them with appropriate levels of support and guidance to ensure their success in the authentic assessment.
There are several ways to provide support and guidance to students during an authentic assessment, including:
- Class discussion – add time for authentic assessment discussions around progress, challenges, and achievements.
- Peer review – provide opportunities for students to review their peers’ work and provide feedback.
- Calendar – add the authentic assessment timeline to your course calendar, so that students have due dates and progress monitoring dates.
- Announcements – create reminders using the announcements tool in Blackboard to support student progress monitoring as well as provide students with resources.
- Online office hours – designate specific online office hours for students to drop in to ask questions and get support.
- Resources – provide students with resources, including preferred databases, exemplar authentic assessments, and UIC academic support services.
Citing this Guide
Messier, N. (2022). “Authentic Assessments.“ Center for the Advancement of Teaching Excellence at the University of Illinois Chicago. Retrieved [today’s date] from https://teaching.uic.edu/resources/teaching-guides/assessment-grading-practices/authentic-assessments/
ADDITIONAL RESOURCES
Articles, Websites, and Videos
- Selkin, P. (2020). Video – Alternative Assessment Strategy for a Physics Final (6:01 minutes)
- University of Liverpool (n.d.). Authentic Assessment – including authentic case studies
- UNSW. (n.d.). Assessing Authentically
- Ashford-Rowe, K. (n.d.). Authentic Assessment Matters.
- Nutt, D. (2020). Inquiry-based labs give physics students experimental edge.
- Art of Mathematics. (n.d.). Inquiry-based Learning Guides
- Lesley University. (n.d.). Empowering students: The 5E model explained.
Problem-based and Scenario-based learning
- WPI. (n.d.). PBL in Higher Education
- WPI. (2018). Transforming Higher Education Through Project-based Learning
- Cult of Pedagogy (2016). Project Based Learning: Start Here
- Buck Institute for Education (n.d.). my PBL works.
Design-based learning (Design Thinking)
REFERENCES
Ashford-Rowe, K., Herrington, J., Brown, C. (2014). Establishing the critical elements that determine authentic assessment. Assessment & Evaluation in Higher Education. 39. 10.1080/02602938.2013.819566.
Berglund, J., Candefjord, S., Gil, J. (2020). Scaffolding activities for project-based learning. 10.13140/RG.2.2.34702.92487.
Eddy, P., Lawrence, A. (2012). Wikis as platforms for authentic assessment. Innovative higher education. 38. 10.1007/s10755-012-9239-7.
Elliott-Kingston, C., Doyle, O.P.E., Hunter, A. (2018). Benefits of scenario-based learning in university education. Acta Horticulturae. DOI:10.17660/ActaHortic.2016.1126.13
Foss, M., Liu, Y. (2021). Developing creativity through project-based learning.
Golden, P. (2018). Conceptualized writing: Promoting audience-centered writing through scenario-based learning. International Journal for the Scholarship of Teaching and Learning. Volume 12, Number 1, Article 6.
Grant, K., Fedoruk, L., Nowell, L. (2021). Conversations and reflections on authentic assessment. Imagining SoTL. 1. 146-162. 10.29173/isotl532.
Gulikers, J.T.M., Bastiaens, T.J., Kirschner, P.A. (2004). A five-dimensional framework for authentic assessment. ETR&D 52, 67 (2004).
Indrawan, E., Jalinus, N., Syahril, S. (2019). Review project-based learning. International Journal of Science and Research (IJSR). 8. 1014 – 1018.
Karunanayaka, S., Naidu, S. (2021). Impacts of Authentic Assessment on the Development of Graduate Attributes. Distance Education. 42. 10.1080/01587919.2021.1920206.
Lane, J. (2019). Inquiry-based learning. Penn State University. Schreyer Institute for Teaching Excellence.
Lippmann M. (2022) Inquiry-based learning in psychology. In: Zumbach J., Bernstein D., Narciss S., Marsico G. (eds) International handbook of psychology learning and teaching. Springer https://doi.org/10.1007/978-3-030-26248-8_59-2
Murphy, V., Fox, J., Freeman, S., Hughes, N. (2017). “Keeping It Real”: A review of the benefits, challenges and steps towards implementing authentic assessment. The All Ireland Journal of Teaching and Learning in Higher Education (AISHE-J). 9.
Nundy S., Kakar A., Bhutta Z.A. (2022.) The why and how of problem-based learning? How to practice academic medicine and publish from developing countries? Springer, Singapore.
Nutt, D. (2020). Inquiry-based labs give physics students experimental edge.
Rowan, B. (2014). Academic portfolios, holistic learning, and student success in higher education. US-China Education Review A. 4. 637-645.
Sutadji, E., Susilo, H., Wibawa, A.P., Jabari, N.A.M., Rohmad, S.N. (2021). Authentic assessment implementation in natural and social science. Educ. Sci. 2021, 11, 534.
Thomsen, B.C. & Renaud, C., Savory, S., Romans, E.J., Mitrofanov, O., Rio, M., Day, S., Kenyon, A., Mitchell, J. (2010). Introducing scenario-based learning: Experiences from an undergraduate electronic and electrical engineering course. 953 – 958. 10.1109/EDUCON.2010.5492474.
Wilson-Mah, R. (2019). A study of authentic assessment in an internship course. |
The Foucault pendulum (English: foo-KOH; French pronunciation: [fuˈko]), or Foucault's pendulum, named after the French physicist Léon Foucault, is a simple device conceived as an experiment to demonstrate the rotation of the Earth. Though it had been known that the Earth rotates, the introduction of the Foucault pendulum in 1851 was the first simple proof of the rotation. Today, Foucault pendulums are popular displays in science museums and universities.
Original Foucault pendulum
The first public exhibition of a Foucault pendulum took place in February 1851 in the Meridian of the Paris Observatory. A few weeks later, Foucault made his most famous pendulum when he suspended a 28-kg brass-coated lead bob with a 67-m-long wire from the dome of the Panthéon, Paris. The plane of the pendulum's swing rotated clockwise approximately 11.3° per hour, making a full circle in approximately 31.8 hours. The original bob used in 1851 at the Panthéon was moved in 1855 to the Conservatoire des Arts et Métiers in Paris. A second temporary installation was made for the 50th anniversary in 1902.
During museum reconstruction in the 1990s, the original pendulum was temporarily displayed at the Panthéon (1995), but was later returned to the Musée des Arts et Métiers before it reopened in 2000. On April 6, 2010, the cable suspending the bob in the Musée des Arts et Métiers snapped, causing irreparable damage to the pendulum and to the marble flooring of the museum. An exact copy of the original pendulum had been swinging permanently since 1995 under the dome of the Panthéon, Paris until 2014 when it was taken down during repair work to the building. The pendulum has since been reinstalled.
Explanation of mechanics
At either the North Pole or South Pole, the plane of oscillation of a pendulum remains fixed relative to the distant masses of the universe while Earth rotates underneath it, taking one sidereal day to complete a rotation. So, relative to Earth, the plane of oscillation of a pendulum at the North Pole undergoes a full clockwise rotation during one day; a pendulum at the South Pole rotates counterclockwise.
When a Foucault pendulum is suspended at the equator, the plane of oscillation remains fixed relative to Earth. At other latitudes, the plane of oscillation precesses relative to Earth, but slower than at the pole; the angular speed, ω (measured in clockwise degrees per sidereal day), is proportional to the sine of the latitude, φ:
ω = 360° × sin(φ) per sidereal day,
where latitudes north and south of the equator are defined as positive and negative, respectively. For example, a Foucault pendulum at 30° south latitude, viewed from above by an earthbound observer, rotates counterclockwise 360° in two days.
To demonstrate rotation directly rather than indirectly via the swinging pendulum, Foucault used a gyroscope in an 1852 experiment. The inner gimbal of the Foucault gyroscope was balanced on knife-edge bearings on the outer gimbal, and the outer gimbal was suspended by a fine, torsion-free thread in such a manner that the lower pivot point carried almost no weight. The gyroscope was spun to 9,000–12,000 revolutions per minute with an arrangement of gears before being placed into position, which gave it enough spin to stay balanced and allow 10 minutes of experimentation. The instrument could be observed either with a microscope viewing a tenth of a degree scale or by a long pointer. At least three more copies of the Foucault gyroscope were made in convenient travelling and demonstration boxes, and copies survive in the UK, France, and the USA.
A Foucault pendulum requires care to set up because imprecise construction can cause additional veering which masks the terrestrial effect. The initial launch of the pendulum is critical; the traditional way to do this is to use a flame to burn through a thread which temporarily holds the bob in its starting position, thus avoiding unwanted sideways motion (see a detail of the launch at the 50th anniversary in 1902).
Air resistance damps the oscillation, so some Foucault pendulums in museums incorporate an electromagnetic or other drive to keep the bob swinging; others are restarted regularly, sometimes with a launching ceremony as an added attraction.
A 'pendulum day' is the time needed for the plane of a freely suspended Foucault pendulum to complete an apparent rotation about the local vertical. This is one sidereal day divided by the sine of the latitude.
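As a quick numeric check of the sine-of-latitude rule, here is a small standalone Python sketch (the sidereal-day constant is approximate):

import math

SIDEREAL_DAY_HOURS = 23.934   # approximate length of one sidereal day in hours

def pendulum_day_hours(latitude_degrees):
    # Hours for the swing plane to complete one apparent rotation at this latitude
    return SIDEREAL_DAY_HOURS / math.sin(math.radians(latitude_degrees))

print(round(pendulum_day_hours(90.0), 1))    # at a pole: one sidereal day, ~23.9 h
print(round(pendulum_day_hours(48.85), 1))   # Paris: ~31.8 h, matching the figure above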
Precession as a form of parallel transport
From the perspective of an inertial frame moving in tandem with Earth, but not sharing its rotation, the suspension point of the pendulum traces out a circular path during one sidereal day.
At the latitude of Paris, 48 degrees 51 minutes north, a full precession cycle takes just under 32 hours, so after one sidereal day, when the Earth is back in the same orientation as one sidereal day before, the oscillation plane has turned by just over 270 degrees. If the plane of swing was north-south at the outset, it is east-west one sidereal day later.
This also implies that there has been exchange of momentum; the Earth and the pendulum bob have exchanged momentum. The Earth is so much more massive than the pendulum bob that the Earth's change of momentum is unnoticeable. Nonetheless, since the pendulum bob's plane of swing has shifted, the conservation laws imply that an exchange must have occurred.
Rather than tracking the change of momentum, the precession of the oscillation plane can efficiently be described as a case of parallel transport. For that, it can be demonstrated, by composing the infinitesimal rotations, that the precession rate is proportional to the projection of the angular velocity of Earth onto the normal direction to Earth, which implies that the trace of the plane of oscillation will undergo parallel transport. After 24 hours, the difference between initial and final orientations of the trace in the Earth frame is α = −2πsin(φ), which corresponds to the value given by the Gauss–Bonnet theorem. α is also called the holonomy or geometric phase of the pendulum. When analyzing earthbound motions, the Earth frame is not an inertial frame, but rotates about the local vertical at an effective rate of 2π sin(φ) radians per day. A simple method employing parallel transport within cones tangent to the Earth's surface can be used to describe the rotation angle of the swing plane of Foucault's pendulum.
From the perspective of an Earth-bound coordinate system with its x-axis pointing east and its y-axis pointing north, the precession of the pendulum is described by the Coriolis force. Consider a planar pendulum with natural frequency ω in the small angle approximation. There are two forces acting on the pendulum bob: the restoring force provided by gravity and the wire, and the Coriolis force. The Coriolis force at latitude φ is horizontal in the small angle approximation and is given by
Fc,x = 2mΩ sin(φ) (dy/dt), Fc,y = −2mΩ sin(φ) (dx/dt),
where Ω is the rotational frequency of Earth, Fc,x is the component of the Coriolis force in the x-direction and Fc,y is the component of the Coriolis force in the y-direction.
The restoring force, in the small-angle approximation, is given by
Fg,x = −mω² x, Fg,y = −mω² y.
Using Newton's laws of motion this leads to the system of equations
d²x/dt² = −ω² x + 2Ω sin(φ) (dy/dt),
d²y/dt² = −ω² y − 2Ω sin(φ) (dx/dt).
Switching to complex coordinates z = x + iy, the equations read
d²z/dt² + 2iΩ sin(φ) (dz/dt) + ω² z = 0.
To first order in Ω/ω this equation has the solution
z(t) = e^(−iΩ sin(φ) t) (c1 e^(iωt) + c2 e^(−iωt)).
If time is measured in days, then Ωt = 2π and the pendulum rotates by an angle of −2π sin(φ) during one day.
Related physical systems
Many physical systems precess in a similar manner to a Foucault pendulum. As early as 1836, the Scottish mathematician Edward Sang contrived and explained the precession of a spinning top. In 1851, Charles Wheatstone described an apparatus that consists of a vibrating spring mounted on top of a disk so that it makes a fixed angle with the disk. The spring is struck so that it oscillates in a plane. When the disk is turned, the plane of oscillation changes just like that of a Foucault pendulum at a latitude equal to that fixed angle.
Similarly, consider a nonspinning, perfectly balanced bicycle wheel mounted on a disk so that its axis of rotation makes an angle α with the disk. When the disk undergoes a full clockwise revolution, the bicycle wheel will not return to its original position, but will have undergone a net rotation of 2π sin(α).
Foucault-like precession is observed in a virtual system wherein a massless particle is constrained to remain on a rotating plane that is inclined with respect to the axis of rotation.
The spin of a relativistic particle moving in a circular orbit precesses in a manner similar to the swing plane of a Foucault pendulum. The relativistic velocity space in Minkowski spacetime can be treated as a sphere S3 in 4-dimensional Euclidean space with imaginary radius and imaginary timelike coordinate. Parallel transport of polarization vectors along such a sphere gives rise to Thomas precession, which is analogous to the rotation of the swing plane of a Foucault pendulum due to parallel transport along a sphere S2 in 3-dimensional Euclidean space.
Foucault pendulums around the world
Numerous Foucault pendulums are installed around the world, mainly at universities, science museums, and planetariums. The United Nations headquarters in New York City has one, while the largest Foucault pendulum in the world, Principia, is housed at the Oregon Convention Center.
The experiment has also been carried out at the South Pole, where it was assumed that the rotation of the earth would have maximum effect. The South Pole Pendulum Project (as discussed in The New York Times and excerpted from Seven Tales of the Pendulum) was constructed and tested by adventurous experimenters John Bird, Jennifer McCallum, Michael Town, and Alan Baker at the Amundsen–Scott South Pole Station. Their measurement is probably the closest ever made to one of the earth's poles. The pendulum was erected in a six-story staircase of a new station that was under construction near the pole. Conditions were challenging; the altitude was about 3,300 m (atmospheric pressure only about 65% that at sea level) and the temperature in the unheated staircase was about −68 °C (−90 °F). The pendulum had a length of 33 m and the bob weighed 25 kg. The new station offered an ideal venue for the Foucault pendulum; its height ensured an accurate result, no moving air could disturb it, and low air pressure reduced air resistance. The researchers confirmed about 24 hours as the rotation period of the plane of oscillation.
- Oprea, John (1995). "Geometry and the Foucault Pendulum". Amer. Math. Monthly. 102: 515–522. doi:10.2307/2974765.
- "The Pendulum of Foucault of the Panthéon. Ceremony of inauguration by M. Chaumié, minister of the state education, burnt the wire of balancing, to start the pendulum. 1902". Paris en images.
- Kissell, Joe (November 8, 2004). "Foucault's Pendulum: Low-tech proof of Earth's rotation". Interesting thing of the day. Retrieved March 21, 2012.
- Thiolay, Boris (April 28, 2010). "Le pendule de Foucault perd la boule" (in French). L'Express.
- "Foucault's pendulum is sent crashing to Earth". Times Higher Education. 13 May 2010. Retrieved March 21, 2012.
- "Foucault Pendulum". Smithsonian Encyclopedia. Retrieved September 2, 2013.
- "Pendulum day". Glossary of Meteorology. American Meteorological Society.
- Daliga, K.; Przyborski, M.; Szulwic, J. "Foucault's Pendulum. Uncomplicated Tool In The Study Of Geodesy And Cartography". library.iated.org. Retrieved 2015-11-02.
- W. B. Somerville, "The Description of Foucault's Pendulum", Q. J. R. Astron. Soc. 13, 40 (1972).
- J. B. Hart, R. E. Miller and R. L. Mills, "A simple geometric model for visualizing the motion of a Foucault pendulum", Am. J. Phys. 55, 67–70 (1987). doi:10.1119/1.14972
- Charles Wheatstone Wikisource: "Note relating to M. Foucault's new mechanical proof of the Rotation of the Earth", pp. 65–68.
- Bharadhwaj, Praveen (2014). "Foucault precession manifested in a simple system". arXiv: [physics.pop-ph].
- M. I. Krivoruchenko, "Rotation of the swing plane of Foucault's pendulum and Thomas spin precession: Two faces of one coin", Phys. Usp. 52, 821–829 (2009).
- "Geometric Phases in Physics", eds. Frank Wilczek and Alfred Shapere (World Scientific, Singapore, 1989).
- L. Mangiarotti, G. Sardanashvily, Gauge Mechanics (World Scientific, Singapore, 1998)
- Johnson, George (September 24, 2002). "Here They Are, Science's 10 Most Beautiful Experiments". The New York Times. Retrieved September 20, 2012.
- Baker, G. P. (2011). Seven Tales of the Pendulum. Oxford University Press. p. 388. ISBN 978-0-19-958951-7.
- Arnold, V.I. (1989). Mathematical Methods of Classical Mechanics. Springer. p. 123. ISBN 0-387-96890-3.
- Ciureanu, I. A.; Condurache, D. (2015). "A Short Vector Solution of the Foucault Pendulum Problem". World Journal of Mechanics. 5: 7–19.
- Marion, Jerry B.; Thornton, Stephen T. (1995). Classical dynamics of particles and systems (4th ed.). Brooks Cole. pp. 398–401. ISBN 0-03-097302-3.
- Persson, Anders O. (2005). "The Coriolis Effect: Four centuries of conflict between common sense and mathematics, Part I: A history to 1885" (PDF). History of Meteorology. 2.
- Julian Rubin, "The Invention of the Foucault Pendulum", Following the Path of Discovery, 2007, retrieved 2007-10-31. Directions for repeating Foucault's experiment, on amateur science site.
- Wolfe, Joe, "A derivation of the precession of the Foucault pendulum".
- "The Foucault Pendulum", derivation of the precession in polar coordinates.
- "The Foucault Pendulum" By Joe Wolfe, with film clip and animations.
- "Foucault's Pendulum" by Jens-Peer Kuska with Jeff Bryant, Wolfram Demonstrations Project: a computer model of the pendulum allowing manipulation of pendulum frequency, Earth rotation frequency, latitude, and time.
- "Webcam Kirchhoff-Institut für Physik, Universität Heidelberg".
- California academy of sciences, CA Foucault pendulum explanation, in friendly format
- Foucault pendulum model Exposition including a tabletop device that shows the Foucault effect in seconds.
- Foucault, M. L., Physical demonstration of the rotation of the Earth by means of the pendulum, Franklin Institute, 2000, retrieved 2007-10-31. Translation of his paper on Foucault pendulum.
- Tobin, William "The Life and Science of Léon Foucault".
- Bowley, Roger (2010). "Foucault's Pendulum". Sixty Symbols. Brady Haran for University of Nottingham.
- Foucault-inga Párizsban Foucault's Pendulum in Paris – video of the operating Foucault's Pendulum in the Panthéon (in Hungarian).
- Pendolo nel Salone The Foucault Pendulum inside Palazzo della Ragione in Padova, Italy
- Daliga, K., Przyborski, M., & Szulwic, J. Foucault's Pendulum. Uncomplicated Tool in the Study of Geodesy and Cartography, EDULEARN15 Proceedings - 7th International Conference on Education and New Learning Technologies, Barcelona, Spain, ISBN 978-84-606-8243-1, 2015 |
Critical race theory (CRT) is an interdisciplinary academic field devoted to analysing how social and political laws and media shape (and are shaped by) social conceptions of race and ethnicity. CRT also considers racism to be systemic in various laws and rules, and not only based on individuals' prejudices. The word critical in the name is an academic reference to critical thinking, critical theory, and scholarly criticism, rather than criticizing or blaming individuals.
CRT is also used in sociology to explain social, political, and legal structures and power distribution as through a "lens" focusing on the concept of race, and experiences of racism. For example, the CRT conceptual framework examines racial bias in laws and legal institutions, such as highly disparate rates of incarceration among racial groups in the United States. A key CRT concept is intersectionality—the way in which different forms of inequality and identity are affected by interconnections of race, class, gender, and disability. Scholars of CRT view race as a social construct with no biological basis. One tenet of CRT is that racism and disparate racial outcomes are the result of complex, changing, and often subtle social and institutional dynamics, rather than explicit and intentional prejudices of individuals. CRT scholars argue that the social and legal construction of race advances the interests of white people at the expense of people of color, and that the liberal notion of U.S. law as "neutral" plays a significant role in maintaining a racially unjust social order, where formally color-blind laws continue to have racially discriminatory outcomes.
CRT began in the United States in the post–civil rights era, as 1960s landmark civil rights laws were being eroded and schools were being re-segregated. With racial inequalities persisting even after civil rights legislation and color-blind laws were enacted, CRT scholars in the 1970s and 1980s began reworking and expanding critical legal studies (CLS) theories on class, economic structure, and the law to examine the role of US law in perpetuating racism. CRT, a framework of analysis grounded in critical theory, originated in the mid-1970s in the writings of several American legal scholars, including Derrick Bell, Alan Freeman, Kimberlé Crenshaw, Richard Delgado, Cheryl Harris, Charles R. Lawrence III, Mari Matsuda, and Patricia J. Williams. CRT draws from the work of thinkers such as Antonio Gramsci, Sojourner Truth, Frederick Douglass, and W. E. B. Du Bois, as well as the Black Power, Chicano, and radical feminist movements from the 1960s and 1970s.
Academic critics of CRT argue it is based on storytelling instead of evidence and reason, rejects truth and merit, and undervalues liberalism. Since 2020, conservative US lawmakers have sought to ban or restrict the instruction of CRT education in primary and secondary schools, as well as relevant training inside federal agencies. Advocates of such bans argue that CRT is false, anti-American, villainizes white people, promotes radical leftism, and indoctrinates children. Advocates of bans on CRT have been accused of misrepresenting its tenets, and of having the goal to broadly silence discussions of racism, equality, social justice, and the history of race.
In his introduction to the comprehensive 1995 publication of critical race theory's key writings, Cornel West described CRT as "an intellectual movement that is both particular to our postmodern (and conservative) times and part of a long tradition of human resistance and liberation." Law professor Roy L. Brooks defined critical race theory in 1994 as "a collection of critical stances against the existing legal order from a race-based point of view".
Gloria Ladson-Billings, who—along with co-author William Tate—had introduced CRT to the field of education in 1995, described it in 2015 as an "interdisciplinary approach that seeks to understand and combat race inequity in society." Ladson-Billings wrote in 1998 that CRT "first emerged as a counterlegal scholarship to the positivist and liberal legal discourse of civil rights."
In 2017, University of Alabama School of Law professor Richard Delgado, a co-founder of critical race theory, and legal writer Jean Stefancic defined CRT as "a collection of activists and scholars interested in studying and transforming the relationship among race, racism, and power". In 2021, Khiara Bridges, a law professor and author of the textbook Critical Race Theory: A Primer, defined critical race theory as an "intellectual movement", a "body of scholarship", and an "analytical toolset for interrogating the relationship between law and racial inequality."
The 2021 Encyclopaedia Britannica described CRT as an "intellectual and social movement and loosely organized framework of legal analysis based on the premise that race is not a natural, biologically grounded feature of physically distinct subgroups of human beings but a socially constructed (culturally invented) category that is used to oppress and exploit people of colour."
Scholars of CRT say that race is not "biologically grounded and natural"; rather, it is a socially constructed category used to oppress and exploit people of color; and that racism is not an aberration, but a normalized feature of American society. According to CRT, negative stereotypes assigned to members of minority groups benefit white people and increase racial oppression. Individuals can belong to a number of different identity groups. The concept of intersectionality—one of CRT's main concepts—was introduced by legal scholar Kimberlé Crenshaw.
Derrick Albert Bell Jr. (1930 – 2011), an American lawyer, professor, and civil rights activist, wrote that racial equality is "impossible and illusory" and that racism in the US is permanent. According to Bell, civil-rights legislation will not on its own bring about progress in race relations; alleged improvements or advantages to people of color "tend to serve the interests of dominant white groups", in what Bell called "interest convergence". These changes do not typically affect—and at times even reinforce—racial hierarchies. This is representative of the shift in the 1970s, in Bell's re-assessment of his earlier desegregation work as a civil rights lawyer. He was responding to the Supreme Court's decisions that had resulted in the re-segregation of schools.
The concept of standpoint theory became particularly relevant to CRT when it was expanded to include a black feminist standpoint by Patricia Hill Collins. First introduced by feminist sociologists in the 1980s, standpoint theory holds that people in marginalized groups, who share similar experiences, can bring a collective wisdom and a unique voice to discussions on decreasing oppression. In this view, insights into racism can be uncovered by examining the nature of the US legal system through the perspective of the everyday lived experiences of people of color.
According to Encyclopedia Britannica, tenets of CRT have spread beyond academia, and are used to deepen understanding of socio-economic issues such as "poverty, police brutality, and voting rights violations", that are affected by the ways in which race and racism are "understood and misunderstood" in the United States.
Richard Delgado and Jean Stefancic published an annotated bibliography of CRT references in 1993, listing works of legal scholarship that addressed one or more of the following themes: "critique of liberalism"; "storytelling/counterstorytelling and 'naming one's own reality'"; "revisionist interpretations of American civil rights law and progress"; "a greater understanding of the underpinnings of race and racism"; "structural determinism"; "race, sex, class, and their intersections"; "essentialism and anti-essentialism"; "cultural nationalism/separatism"; "legal institutions, critical pedagogy, and minorities in the bar"; and "criticism and self-criticism". When Gloria Ladson-Billings introduced CRT into education in 1995, she cautioned that its application required a "thorough analysis of the legal literature upon which it is based".
Critique of liberalism
First and foremost to CRT legal scholars in 1993 was their "discontent" with the way in which liberalism addressed race issues in the US. They critiqued "liberal jurisprudence", including affirmative action, color-blindness, role modeling, and the merit principle. Specifically, they claimed that the liberal concept of value-neutral law contributed to maintenance of the US's racially unjust social order.
An example questioning foundational liberal conceptions of Enlightenment values, such as rationalism and progress, is Rennard Strickland's 1986 Kansas Law Review article, "Genocide-at-Law: An Historic and Contemporary View of the Native American Experience". In it, he "introduced Native American traditions and world-views" into law school curriculum, challenging the entrenchment at that time of the "contemporary ideas of progress and enlightenment". He wrote that US laws that "permeate" the everyday lives of Native Americans were in "most cases carried out with scrupulous legality" but still resulted in what he called "cultural genocide".
In 1993, David Theo Goldberg described how countries that adopt classical liberalism's concepts of "individualism, equality, and freedom"—such as the United States and European countries—conceal structural racism in their cultures and languages, citing terms such as "Third World" and "primitive".
In 1988, Kimberlé Williams Crenshaw traced the origins of the New Right's use of the concept of color-blindness from 1970s neoconservative think tanks to the Ronald Reagan administration in the 1980s. She described how prominent figures such as neoconservative scholars Thomas Sowell and William Bradford Reynolds, who served as Assistant Attorney General for the Civil Rights Division from 1981 to 1988, called for "strictly color-blind policies". Sowell and Reynolds, like many conservatives at that time, believed that the goal of equality of the races had already been achieved, and therefore the race-specific civil rights movement was a "threat to democracy". The color-blindness logic used in "reverse discrimination" arguments in the post-civil rights period is informed by a particular viewpoint on "equality of opportunity", as adopted by Sowell, in which the state's role is limited to providing a "level playing field", not to promoting equal distribution of resources.
Crenshaw claimed that "equality of opportunity" in antidiscrimination law can have both an expansive and a restrictive aspect. Crenshaw wrote that formally color-blind laws continue to have racially discriminatory outcomes. According to her, this use of formal color-blindness rhetoric in claims of reverse discrimination, as in the 1978 Supreme Court ruling on Bakke, was a response to the way in which the courts had aggressively imposed affirmative action and busing during the Civil Rights era, even on those who were hostile to those issues. In 1990, legal scholar Duncan Kennedy described the dominant approach to affirmative action in legal academia as "colorblind meritocratic fundamentalism". He called for a postmodern "race consciousness" approach that included "political and cultural relations" while avoiding "racialism" and "essentialism".
Sociologist Eduardo Bonilla-Silva describes this newer, subtle form of racism as "color-blind racism", which uses frameworks of abstract liberalism to decontextualize race, naturalize outcomes such as segregation in neighborhoods, attribute certain cultural practices to race, and cause "minimization of racism".
In his influential 1984 article, Delgado challenged the liberal concept of meritocracy in civil rights scholarship, questioning why the top articles in the most well-established journals were all written by white men.
Storytelling/counterstorytelling and "naming one's own reality"
One of the prime tenets of liberal jurisprudence is that people can create appealing narratives to think and talk about greater levels of justice. Delgado and Stefancic call this the empathic fallacy—the belief that it is possible to "control our consciousness" by using language alone to overcome bigotry and narrow-mindedness. They examine how people of color, considered outsiders in mainstream US culture, are portrayed in media and law through stereotypes and stock characters that have been adapted over time to shield the dominant culture from discomfort and guilt. For example, slaves in the 18th-century Southern States were depicted as childlike and docile; Harriet Beecher Stowe adapted this stereotype through her character Uncle Tom, depicting him as a "gentle, long-suffering", pious Christian.
Following the American Civil War, the African-American woman was depicted as a wise, care-giving "Mammy" figure. During the Reconstruction period, African-American men were stereotyped as "brutish and bestial", a danger to white women and children. This was exemplified in Thomas Dixon Jr.'s novels, used as the basis for the epic film The Birth of a Nation, which celebrated the Ku Klux Klan and lynching. During the Harlem Renaissance, African-Americans were depicted as "musically talented" and "entertaining". Following World War II, when many Black veterans joined the nascent civil rights movement, African Americans were portrayed as "cocky [and] street-smart", the "unreasonable, opportunistic" militant, the "safe, comforting, cardigan-wearing" TV sitcom character, and the "super-stud" of blaxploitation films.
The empathic fallacy informs the "time-warp aspect of racism", where the dominant culture can see racism only through the hindsight of a past era or distant land, such as South Africa. Through centuries of stereotypes, racism has become normalized; it is a "part of the dominant narrative we use to interpret experience". Delgado and Stefancic argue that speech alone is an ineffective tool to counter racism, since the system of free expression tends to favor the interests of powerful elites and to assign responsibility for racist stereotypes to the "marketplace of ideas". In the decades following the passage of civil rights laws, acts of racism had become less overt and more covert—invisible to, and underestimated by, most of the dominant culture.
Since racism makes people feel uncomfortable, the empathic fallacy helps the dominant culture to mistakenly believe that it no longer exists, and that dominant images, portrayals, stock characters, and stereotypes—which usually portray minorities in a negative light—provide them with a true image of race in America. Based on these narratives, the dominant group has no need to feel guilty or to make an effort to overcome racism, as it feels "right, customary, and inoffensive to those engaged in it", while self-described liberals who uphold freedom of expression can feel virtuous while maintaining their own superior position.
Bryan Brayboy has emphasized the epistemic importance of storytelling in Indigenous-American communities as superseding that of theory, and has proposed a Tribal Critical Race Theory (TribCrit).
A related tenet, sometimes called the voice-of-color thesis, is the view that members of racial minority groups have a unique authority and ability to speak about racism. This is seen as undermining dominant narratives relating to racial inequality, such as legal neutrality and personal responsibility or bootstrapping, through valuable first-hand accounts of the experience of racism.
Revisionist interpretations of American civil rights law and progress
Interest convergence is a concept introduced by Derrick Bell in his 1980 Harvard Law Review article, "Brown v. Board of Education and the Interest-Convergence Dilemma". In this article, Bell described how he re-assessed the impact of the hundreds of NAACP LDF de-segregation cases he won from 1960 to 1966, and how he began to believe that in spite of his sincerity at the time, anti-discrimination law had not resulted in improving Black children's access to quality education. He listed and described how Supreme Court cases had gutted civil rights legislation, which had resulted in African-American students continuing to attend all-black schools that lacked adequate funding and resources. In examining these Supreme Court cases, Bell concluded that the only civil-rights legislation that was passed coincided with the self-interest of white people, which Bell termed interest convergence.
One of the best-known examples of interest convergence is the way in which American geopolitics during the Cold War in the aftermath of World War II was a critical factor in the passage of civil rights legislation by both Republicans and Democrats. Bell described this in numerous articles, including the aforementioned, and it was supported by the research and publications of legal scholar Mary L. Dudziak. In her journal articles and her 2000 book Cold War Civil Rights—based on newly released documents—Dudziak provided detailed evidence that it was in the interest of the United States to quell the negative international press about treatment of African-Americans at a time when the majority of the populations of newly decolonized countries, which the US was trying to attract to Western-style democracy, were not white. The US sought to promote liberal values throughout Africa, Asia, and Latin America to prevent the Soviet Union from spreading communism. Dudziak described how the international press widely circulated stories of segregation and violence against African-Americans.
The Moore's Ford lynchings, in which a World War II veteran was lynched, received particularly wide coverage in the news. American allies followed stories of American racism through the international press, and the Soviets used stories of racism against Black Americans as a vital part of their propaganda. Dudziak performed extensive archival research in the US Department of State and Department of Justice and concluded that US government support for civil-rights legislation "was motivated in part by the concern that racial discrimination harmed the United States' foreign relations". When the National Guard was called in to prevent nine African-American students from integrating the Little Rock Central High School, the international press covered the story extensively. The then-Secretary of State told President Dwight Eisenhower that the Little Rock situation was "ruining" American foreign policy, particularly in Asia and Africa. The US's ambassador to the United Nations told President Eisenhower that as two-thirds of the world's population was not white, he was witnessing their negative reactions to American racial discrimination. He suspected that the US "lost several votes on the Chinese communist item because of Little Rock."
Intersectionality refers to the examination of race, sex, class, national origin, and sexual orientation, and of how their intersections play out in various settings—for example, how the needs of a Latina differ from those of a Black male, and whose needs are promoted. These intersections provide a more holistic picture for evaluating different groups of people. Intersectionality is a response to identity politics insofar as identity politics does not take into account the different intersections of people's identities.
Essentialism vs. anti-essentialism
Delgado and Stefancic write, "Scholars who write about these issues are concerned with the appropriate unit for analysis: Is the black community one, or many, communities? Do middle- and working-class African-Americans have different interests and needs? Do all oppressed peoples have something in common?" This is a look at the ways that oppressed groups may share in their oppression but also have different needs and values that need to be analyzed differently. It is a question of whether, and how, groups can be essentialized.
From an essentialist perspective, one's identity consists of an internal "essence" that is static and unchanging from birth, whereas a non-essentialist position holds that "the subject has no fixed or permanent identity." Racial essentialism diverges into biological and cultural essentialism, where subordinated groups may endorse one over the other. "Cultural and biological forms of racial essentialism share the idea that differences between racial groups are determined by a fixed and uniform essence that resides within and defines all members of each racial group. However, they differ in their understanding of the nature of this essence." Subordinated communities may be more likely to endorse cultural essentialism as it provides a basis of positive distinction for establishing a cumulative resistance as a means to assert their identities and advocacy of rights, whereas biological essentialism may be unlikely to resonate with marginalized groups as historically, dominant groups have used genetics and biology in justifying racism and oppression.
Essentialism is the idea of a singular, shared experience among a specific group of people. Anti-essentialism, on the other hand, holds that various other factors can affect a person's being and their overall life experience. The race of an individual is viewed more as a social construct that does not necessarily dictate the outcome of their life circumstances. Race is viewed as "a social and historical construction, rather than an inherent, fixed, essential biological characteristic." Anti-essentialism "forces a destabilization in the very concept of race itself…" The results of this destabilization vary with the analytic focus, falling into two general categories: "... consequences for the analytic concepts of racial identity or racial subjectivity."
Structural determinism, and race, sex, class, and their intersections
This refers to the exploration of how "the structure of legal thought or culture influences its content" in a way that determines social outcomes. Delgado and Stefancic cited "empathic fallacy" as one example of structural determinism—the "idea that our system, by reason of its structure and vocabulary, cannot redress certain types of wrong." They interrogate the absence of terms such as intersectionality, anti-essentialism, and jury nullification in standard legal reference research tools in law libraries.
Legal institutions, critical pedagogy, and minorities in the bar
Camara Phyllis Jones defines institutionalized racism as "differential access to the goods, services, and opportunities of society by race. Institutionalized racism is normative, sometimes legalized and often manifests as inherited disadvantage. It is structural, having been absorbed into our institutions of custom, practice, and law, so there need not be an identifiable offender. Indeed, institutionalized racism is often evident as inaction in the face of need, manifesting itself both in material conditions and in access to power. With regard to the former, examples include differential access to quality education, sound housing, gainful employment, appropriate medical facilities, and a clean environment."
The black–white binary is a paradigm identified by legal scholars through which racial issues and histories are typically articulated within a racial binary between black and white Americans. The binary largely governs how race has been portrayed and addressed throughout US history. Critical race theorists Richard Delgado and Jean Stefancic argue that anti-discrimination law has blindspots for non-black minorities due to its language being confined within the black–white binary.
Applications and adaptations
Scholars of critical race theory have focused, with some particularity, on the issues of hate crime and hate speech. In response to the opinion of the US Supreme Court in the hate speech case of R.A.V. v. City of St. Paul (1992), in which the Court struck down an anti-bias ordinance as applied to a teenager who had burned a cross, Mari Matsuda and Charles Lawrence argued that the Court had paid insufficient attention to the history of racist speech and the actual injury produced by such speech.
Critical race theorists have also argued in favor of affirmative action. They propose that so-called merit standards for hiring and educational admissions are not race-neutral and that such standards are part of the rhetoric of neutrality through which whites justify their disproportionate share of resources and social benefits.
In his 2009 article "Will the Real CRT Please Stand Up: The Dangers of Philosophical Contributions to CRT", Curry distinguished between the original CRT key writings and what is being done in the name of CRT by a "growing number of white feminists". The new CRT movement "favors narratives that inculcate the ideals of a post-racial humanity and racial amelioration between compassionate (Black and White) philosophical thinkers dedicated to solving America's race problem." They are interested in discourse (i.e., how individuals speak about race) and the theories of white Continental philosophers, over and against the structural and institutional accounts of white supremacy which were at the heart of the realist analysis of racism introduced in Derrick Bell's early works, and articulated through such African-American thinkers as W. E. B. Du Bois, Paul Robeson, and Judge Robert L. Carter.
Although the terminology critical race theory began in its application to laws, the subject emerges from the broader frame of critical theory in how it analyzes power structures in society despite whatever laws may be in effect. In the 1998 article, "Critical Race Theory: Past, Present, and Future", Delgado and Stefancic trace the origins of CRT to the early writings of Derrick Albert Bell Jr. including his 1976 Yale Law Journal article, "Serving Two Masters" and his 1980 Harvard Law Review article entitled "Brown v. Board of Education and the Interest-Convergence Dilemma".
In the 1970s, as a professor at Harvard Law School, Bell began to critique, question and re-assess the civil rights cases he had litigated in the 1960s to desegregate schools following the Brown v. Board of Education decision. This re-assessment became the "cornerstone of critical race theory". Delgado and Stefancic, who together wrote Critical Race Theory: An Introduction in 2001, described Bell's "interest convergence" as a "means of understanding Western racial history". The focus on desegregation after the 1954 Supreme Court decision in Brown—declaring school segregation unconstitutional—left "civil-rights lawyers compromised between their clients' interests and the law". The concern of many Black parents—for their children's access to better education—was being eclipsed by the interests of litigators who wanted a "breakthrough" in their "pursuit of racial balance in schools". In 1995, Cornel West said that Bell was "virtually the lone dissenter" writing in leading law reviews who challenged basic assumptions about how the law treated people of color.
In his Harvard Law Review articles, Bell cites the 1964 Hudson v. Leake County School Board case which the NAACP Legal Defense and Educational Fund (NAACP LDF) won, mandating that the all-white school board comply with desegregation. At that time it was seen as a success. By the 1970s, White parents were removing their children from the desegregated schools and enrolling them in segregation academies. Bell came to believe that he had been mistaken in 1964 when, as a young lawyer working for the LDF, he had convinced Winson Hudson, who was the head of the newly formed local NAACP chapter in Harmony, Mississippi, to fight the all-White Leake County School Board to desegregate schools. She and the other Black parents had initially sought LDF assistance to fight the board's closure of their school—one of the historic Rosenwald Schools for Black children. Bell explained to Hudson, that—following Brown—the LDF could not fight to keep a segregated Black school open; they would have to fight for desegregation. In 1964, Bell and the NAACP had believed that resources for desegregated schools would be increased and Black children would access higher quality education, since White parents would insist on better quality schools; by the 1970s, Black children were again attending segregated schools and the quality of education had deteriorated.
Bell began to work for the NAACP LDF shortly after the Montgomery bus boycott and the ensuing 1956 Supreme Court ruling following Browder v. Gayle that the Alabama and Montgomery bus segregation laws were unconstitutional. From 1960 to 1966 Bell successfully litigated 300 civil rights cases in Mississippi. Bell was inspired by Thurgood Marshall, who had been one of the two leaders of a decades-long legal campaign starting in the 1930s, in which they filed hundreds of lawsuits to reverse the "separate but equal" doctrine announced by the Supreme Court's decision in Plessy v. Ferguson (1896). The Court ruled that racial segregation laws enacted by the states were not in violation of the United States Constitution as long as the facilities for each race were equal in quality. The Plessy decision provided the legal mandate at the federal level to enforce Jim Crow laws that had been introduced by white Southern Democrats starting in the 1870s for racial segregation in all public facilities, including public schools. The Court's 1954 Brown decision—which held that the "separate but equal" doctrine is unconstitutional in the context of public schools and educational facilities—severely weakened Plessy. The Supreme Court concept of constitutional colorblindness in regards to case evaluation began with Plessy. Before Plessy, the Court considered color as a determining factor in many landmark cases, which reinforced Jim Crow laws. Bell's 1960s civil rights work built on Justice Marshall's groundwork begun in the 1930s. It was a time when the legal branch of the civil rights movement was launching thousands of civil rights cases. It was a period of idealism for the civil rights movement.
At Harvard, Bell developed new courses that studied American law through a racial lens. He compiled his own course materials which were published in 1970 under the title Race, Racism, and American Law. He became Harvard Law School's first Black tenured professor in 1971.
During the 1970s, the courts were using legislation to enforce affirmative action programs and busing—where the courts mandated busing to achieve racial integration in school districts that rejected desegregation. In response, in the 1970s, neoconservative think tanks—hostile to these two issues in particular—developed a color-blind rhetoric to oppose them, claiming they represented reverse discrimination. In 1978, when Bakke won the landmark Supreme Court case Regents of the University of California v. Bakke by using the argument of reverse racism, Bell's skepticism that racism would end increased. Justice Lewis F. Powell Jr. held that the "guarantee of equal protection cannot mean one thing when applied to one individual and something else when applied to a person of another color." In a 1979 article, Bell asked if there were any groups of the White population that would be willing to suffer any disadvantage that might result from the implementation of a policy to rectify harms to Black people resulting from slavery, segregation, or discrimination.
Bell resigned in 1980 because of what he viewed as the university's discriminatory practices, became the dean at University of Oregon School of Law and later returned to Harvard as a visiting professor.
While he was absent from Harvard, his supporters organized protests against Harvard's lack of racial diversity in the curriculum, in the student body and in the faculty. The university had rejected student requests, saying no sufficiently qualified black instructor existed. Legal scholar Randall Kennedy writes that some students had "felt affronted" by Harvard's choice to employ an "archetypal white liberal... in a way that precludes the development of black leadership".
One of these students was Kimberlé Crenshaw, who had chosen Harvard in order to study under Bell; she was introduced to his work at Cornell. Crenshaw organized the student-led initiative to offer an alternative course on race and law in 1981—based on Bell's course and textbook—where students brought in visiting professors, such as Charles Lawrence, Linda Greene, Neil Gotanda, and Richard Delgado, to teach chapter-by-chapter from Race, Racism, and American Law.
Critical race theory emerged as an intellectual movement with the organization of this boycott; CRT scholars included graduate law students and professors.
Alan Freeman was a founding member of the Critical Legal Studies (CLS) movement that hosted forums in the 1980s. CLS legal scholars challenged claims to the alleged value-neutral position of the law. They criticized the legal system's role in generating and legitimizing oppressive social structures which contributed to maintaining an unjust and oppressive class system. Delgado and Stefancic cite the work of Alan Freeman in the 1970s as formative to critical race theory. In his 1978 Minnesota Law Review article Freeman reinterpreted, through a critical legal studies perspective, how the Supreme Court oversaw civil rights legislation from 1953 to 1969 under the Warren Court. He criticized the narrow interpretation of the law which denied relief for victims of racial discrimination. In his article, Freeman describes two perspectives on the concept of racial discrimination: that of victim or perpetrator. Racial discrimination to the victim includes both objective conditions and the "consciousness associated with those objective conditions". To the perpetrator, racial discrimination consists only of actions without consideration of the objective conditions experienced by the victims, such as the "lack of jobs, lack of money, lack of housing". Only those individuals who could prove they were victims of discrimination were deserving of remedies. By the late 1980s, Freeman, Bell, and other CRT scholars left the CLS movement claiming it was too narrowly focused on class and economic structures while neglecting the role of race and race relations in American law.
Emergence as a movement
In 1989, Kimberlé Crenshaw, Neil Gotanda, and Stephanie Phillips organized a workshop at the University of Wisconsin-Madison entitled "New Developments in Critical Race Theory". The organizers coined the term "Critical Race Theory" to signify an "intersection of critical theory and race, racism and the law."
Afterward, legal scholars began publishing a higher volume of works employing critical race theory, including more than "300 leading law review articles" and books. In 1990, Duncan Kennedy published his article on affirmative action in legal academia in the Duke Law Journal, and Anthony E. Cook published his article "Beyond Critical Legal Studies" in the Harvard Law Review. In 1991, Patricia Williams published The Alchemy of Race and Rights, while Derrick Bell published Faces at the Bottom of the Well in 1992. Cheryl I. Harris published her 1993 Harvard Law Review article "Whiteness as Property" in which she described how passing led to benefits akin to owning property. In 1995, two dozen legal scholars contributed to a major compilation of key writings on CRT.
By the early 1990s, key concepts and features of CRT had emerged. Bell had introduced his concept of "interest convergence" in his 1980 Harvard Law Review article. He developed the concept of racial realism in a 1992 series of essays and his book Faces at the Bottom of the Well: The Permanence of Racism. He said that Black people needed to accept that the civil rights era legislation would not on its own bring about progress in race relations; anti-Black racism in the US was a "permanent fixture" of American society; and equality was "impossible and illusory" in the US. Crenshaw introduced the term intersectionality in 1989.
In 1995, pedagogical theorists Gloria Ladson-Billings and William F. Tate began applying the critical race theory framework in the field of education. In their 1995 article Ladson-Billings and Tate described the role of the social construction of white norms and interests in education. They sought to better understand inequities in schooling. Scholars have since expanded work to explore issues including school segregation in the US; relations between race, gender, and academic achievement; pedagogy; and research methodologies.
As of 2002, over 20 American law schools and at least three non-American law schools offered critical race theory courses or classes. Critical race theory is also applied in the fields of education, political science, women's studies, ethnic studies, communication, sociology, and American studies. Other movements developed that apply critical race theory to specific groups. These include the Latino-critical (LatCrit), queer-critical, and Asian-critical movements. These continued to engage with the main body of critical theory research, over time developing independent priorities and research methods.
CRT has also been taught internationally, including in the United Kingdom (UK) and Australia. According to educational researcher Mike Cole, the main proponents of CRT in the UK include David Gillborn, John Preston, and Namita Chakrabarty.
CRT scholars draw on the work of Antonio Gramsci, Sojourner Truth, Frederick Douglass, and W. E. B. Du Bois. Bell shared Paul Robeson's belief that "Black self-reliance and African cultural continuity should form the epistemic basis of Blacks' worldview." Their writing is also informed by 1960s and 1970s movements such as Black Power, Chicano, and radical feminism. Critical race theory shares many intellectual commitments with critical theory, critical legal studies, feminist jurisprudence, and postcolonial theory. University of Connecticut philosopher Lewis Gordon, who has focused on postcolonial phenomenology and on race and racism, wrote that CRT is notable for its use of postmodern poststructural scholarship, including an emphasis on "subaltern" or "marginalized" communities and the "use of alternative methodology in the expression of theoretical work, most notably their use of 'narratives' and other literary techniques".
Standpoint theory, which has been adopted by some CRT scholars, emerged from the women's movement of the 1970s. The main focus of feminist standpoint theory is epistemology—the study of how knowledge is produced. The term was coined by Sandra Harding, an American feminist theorist, and developed by Dorothy Smith in her 1989 publication, The Everyday World as Problematic: A Feminist Sociology. Smith wrote that by studying how women socially construct their own everyday life experiences, sociologists could ask new questions. Patricia Hill Collins introduced the black feminist standpoint—a collective wisdom of those who have similar perspectives in society—which sought to heighten awareness of these marginalized groups and provide ways to improve their position in society.
Critical race theory draws on the priorities and perspectives of both critical legal studies (CLS) and conventional civil rights scholarship, while also sharply contesting both of these fields. UC Davis School of Law legal scholar Angela P. Harris describes critical race theory as sharing "a commitment to a vision of liberation from racism through right reason" with the civil rights tradition. It deconstructs some premises and arguments of legal theory while simultaneously holding that legally constructed rights are incredibly important. CRT scholars disagreed with the CLS anti-legal-rights stance and did not wish to "abandon the notions of law" completely; they acknowledged that some legislation and reforms had helped people of color. As described by Derrick Bell, critical race theory in Harris's view is committed to "radical critique of the law (which is normatively deconstructionist) and... radical emancipation by the law (which is normatively reconstructionist)".
University of Edinburgh philosophy professor Tommy J. Curry says that by 2009, the CRT perspective on race as a social construct was accepted by "many race scholars" as a "commonsense view" that race is not "biologically grounded and natural." Social construct is a term from social constructivism, whose roots can be traced to the early science wars, instigated in part by Thomas Kuhn's 1962 The Structure of Scientific Revolutions. Ian Hacking, a Canadian philosopher specializing in the philosophy of science, describes how social construction has spread through the social sciences. He cites the social construction of race as an example, asking how race could be "constructed" better.
According to the Encyclopaedia Britannica, aspects of CRT have been criticized by "legal scholars and jurists from across the political spectrum." Criticism of CRT has focused on its emphasis on storytelling, its critique of the merit principle and of objective truth, and its thesis of the voice of color. Critics say it contains a "postmodernist-inspired skepticism of objectivity and truth", and has a tendency to interpret "any racial inequity or imbalance [...] as proof of institutional racism and as grounds for directly imposing racially equitable outcomes in those realms", according to Britannica. Proponents of CRT have also been accused of treating even well-meaning criticism of CRT as evidence of latent racism.
In a 1997 book, law professors Daniel A. Farber and Suzanna Sherry criticized CRT for basing its claims on personal narrative and for its lack of testable hypotheses and measurable data. CRT scholars including Crenshaw, Delgado, and Stefancic responded that such critiques represent dominant modes within social science which tend to exclude people of color. Delgado and Stefancic wrote that "In these realms [social science and politics], truth is a social construct created to suit the purposes of the dominant group." Farber and Sherry have also argued that anti-meritocratic tenets in critical race theory, critical feminism, and critical legal studies may unintentionally lead to antisemitic and anti-Asian implications. They write that the success of Jews and Asians within what critical race theorists posit to be a structurally unfair system may lend itself to allegations of cheating and advantage-taking. In response, Delgado and Stefancic write that there is a difference between criticizing an unfair system and criticizing individuals who perform well inside that system.
Critical race theory has stirred controversy in the United States for promoting the use of narrative in legal studies, advocating "legal instrumentalism" as opposed to ideal-driven uses of the law, and encouraging legal scholars to promote racial equity.
Before 1993, the term "critical race theory" was not part of public discourse. In the spring of that year, conservatives launched a campaign led by Clint Bolick to portray Lani Guinier—then-President Bill Clinton's nominee for Assistant Attorney General for Civil Rights—as a radical because of her connection to CRT. Within months, Clinton had withdrawn the nomination, describing the effort to stop Guinier's appointment as "a campaign of right-wing distortion and vilification". This was part of a wider conservative strategy to shift the Supreme Court in their favor.
Amy E. Ansell writes that the logic of legal instrumentalism reached wide public reception in the O. J. Simpson murder case when attorney Johnnie Cochran "enacted a sort of applied CRT", selecting an African-American jury and urging them to acquit Simpson in spite of the evidence against him—a form of jury nullification. Legal scholar Jeffrey Rosen calls this the "most striking example" of CRT's influence on the US legal system. Law professor Margaret M. Russell responded to Rosen's assertion in the Michigan Law Review, saying that Cochran's "dramatic" and "controversial" courtroom "style and strategic sense" in the Simpson case resulted from his decades of experience as an attorney; it was not significantly influenced by CRT writings.
In 2010, a Mexican-American studies program in Tucson, Arizona, was halted because of a state law forbidding public schools from offering race-conscious education in the form of "advocat[ing] ethnic solidarity instead of the treatment of pupils as individuals". Certain books, including a primer on CRT, were banned from the curriculum. Matt de la Peña's young-adult novel Mexican WhiteBoy was banned for "containing 'critical race theory'" according to state officials. The ban on ethnic-studies programs was later deemed unconstitutional on the grounds that the state showed discriminatory intent: "Both enactment and enforcement were motivated by racial animus", federal Judge A. Wallace Tashima ruled.
Following the 2020 protests over the murders of Ahmaud Arbery and George Floyd as well as the killing of Breonna Taylor, school districts began to introduce additional curricula and create diversity, equity, and inclusion (DEI) positions to address "disparities stemming from race, economics, disabilities and other factors." These measures were met with criticism from conservatives, particularly those in the Republican Party. Critics have described these objections as part of a cycle of backlash against what they view as progress toward racial equality and equity. Outspoken critics of critical race theory include former U.S. president Donald Trump, conservative activist Christopher Rufo, various Republican officials, and conservative commentators on Fox News and right-wing talk radio shows. Movements have arisen from the controversy; in particular, the No Left Turn in Education movement, which has been described as one of the largest groups targeting school boards regarding critical race theory. In response to the unfounded assertion that CRT was being taught in public schools, dozens of states have introduced bills that limit what schools can teach regarding race, American history, politics, and gender.
Within critical race theory, various sub-groupings focus on issues and nuances unique to particular ethno-racial and/or marginalized communities. This includes the intersection of race with disability, ethnicity, gender, sexuality, class, or religion. Examples include disability critical race studies (DisCrit), critical race feminism (CRF), Jewish Critical Race Theory (HebCrit), Black Critical Race Theory (Black Crit), Latino critical race studies (LatCrit), Asian American critical race studies (AsianCrit), South Asian American critical race studies (DesiCrit), Quantitative Critical Race Theory (QuantCrit), and American Indian critical race studies (sometimes called TribalCrit). CRT methodologies have also been applied to the study of white immigrant groups. CRT has spurred some scholars to call for a second wave of whiteness studies, which is now a small offshoot known as Second Wave Whiteness (SWW). Critical race theory has also begun to spawn research that looks at understandings of race outside the United States.
Disability critical race theory
Latino critical race theory
Latino critical race theory (LatCRT or LatCrit) is a research framework that outlines the social construction of race as central to how people of color are constrained and oppressed in society. Race scholars developed LatCRT as a critical response to the "problem of the color line" first explained by W. E. B. Du Bois. While CRT focuses on the Black–White paradigm, LatCRT has moved to consider other racial groups, mainly Chicana/Chicanos, as well as Latinos/as, Asians, Native Americans/First Nations, and women of color.
In Critical Race Counterstories along the Chicana/Chicano Educational Pipeline, Tara J. Yosso discusses how the constraints on people of color can be defined. Looking at the experiences of Chicana/o students, she identifies five guiding tenets: the intercentricity of race and racism, the challenge to dominant ideology, the commitment to social justice, the centrality of experiential knowledge, and the interdisciplinary perspective.
LatCRT's main focus is to advocate social justice for those living in marginalized communities (specifically Chicana/os), who are constrained by structural arrangements that disadvantage people of color—arrangements in which social institutions function as instruments of dispossession, disenfranchisement, and discrimination against minority groups. In an attempt to give voice to those who are victimized, LatCRT has created two common themes:
First, CRT proposes that white supremacy and racial power are maintained over time, a process in which the law plays a central role. Different racial groups lack the voice to speak in this civil society, and, as such, CRT has introduced a new critical form of expression, called the voice of color. The voice of color consists of narratives and storytelling monologues used as devices for conveying personal racial experiences. These are also used to counter metanarratives that continue to maintain racial inequality. The experiences of the oppressed are therefore important aspects for developing a LatCRT analytical approach; on this view, not since the era of slavery has an institution so fundamentally shaped the life opportunities of those who bear the label of criminal.
Secondly, LatCRT work has investigated the possibility of transforming the relationship between law enforcement and racial power, as well as pursuing a project of achieving racial emancipation and anti-subordination more broadly. Its body of research is distinct from general critical race theory in that it emphasizes immigration theory and policy, language rights, and accent- and national-origin-based forms of discrimination. CRT values the experiential knowledge of people of color and draws explicitly on these lived experiences as data, presenting research findings through storytelling, chronicles, scenarios, narratives, and parables.
Asian critical race theory
Asian critical race theory looks at the influence of race and racism on Asian Americans and their experiences in the US education system. Like Latino critical race theory, Asian critical race theory is distinct from the main body of CRT in its emphasis on immigration theory and policy.
Tribal critical race theory
Tribal critical race theory (TribalCrit) grew out of CRT, which itself developed in response to the critical legal studies movement of the 1970s; TribalCrit treats stories and oral traditions as legitimate sources of data and theory. Its tenets hold that colonization is endemic to society; that white supremacy, imperialism, and a desire for material gain underpin US policies toward Indigenous peoples; that Indigenous peoples occupy a liminal space that accounts for the political and racialized natures of their identities; that tribal sovereignty, tribal autonomy, self-determination, and self-identification are aspirations of Indigenous peoples; that the concepts of culture, knowledge, and power take on new meaning when examined through a Native lens; that the problematic goal of assimilation lies at the heart of both governmental and educational policies aimed at Indigenous peoples; and that understanding the lived realities of Indigenous peoples depends on comprehending tribal philosophies, beliefs, traditions, and visions for the future.
Critical philosophy of race
The Critical Philosophy of Race (CPR) is inspired by both Critical Legal Studies and Critical Race Theory's use of interdisciplinary scholarship. Both CLS and CRT explore the covert nature of mainstream use of "apparently neutral concepts, such as merit or freedom."
- Wallace-Wells, Benjamin (June 18, 2021). "How a Conservative Activist Invented the Conflict Over Critical Race Theory". The New Yorker. Retrieved June 19, 2021.
- Meckler, Laura; Dawsey, Josh (June 21, 2021). "Republicans, spurred by an unlikely figure, see political promise in critical race theory". The Washington Post. Vol. 144. ISSN 0190-8286. Retrieved June 19, 2021.
- Iati, Marisa (May 29, 2021). "What is critical race theory, and why do Republicans want to ban it in schools?". The Washington Post.
Rather than encouraging white people to feel guilty, Thomas said critical race theorists aim to shift focus away from individual people's bad actions and toward how systems uphold racial disparities.
- Kahn, Chris (July 15, 2021). "Many Americans embrace falsehoods about critical race theory". Reuters. Retrieved January 22, 2022.
- Christian, Michelle; Seamster, Louise; Ray, Victor (November 2019). "New Directions in Critical Race Theory and Sociology: Racism, White Supremacy, and Resistance". American Behavioral Scientist. 63 (13): 1731–1740. doi:10.1177/0002764219842623. S2CID 151160318.
- Yosso, Tara; Solórzano, Daniel G (2005). "Conceptualizing a critical race theory in sociology". In Romero, Mary (ed.). The Blackwell Companion to Social Inequalities.
- Borter, Gabriella (September 22, 2021). "Explainer: What 'critical race theory' means and why it's igniting debate". Reuters. Retrieved January 22, 2022.
- Gillborn 2015, p. 278.
- Curry 2009a, p. 166.
- Gillborn, David; Ladson-Billings, Gloria (2020). "Critical Race Theory". In Paul Atkinson; et al. (eds.). SAGE Research Methods Foundations. Theoretical Foundations of Qualitative Research. SAGE Publications. doi:10.4135/9781526421036764633. ISBN 978-1-5264-2103-6. S2CID 240846071.
- Bridges 2019.
- Ruparelia 2019, pp. 77–89.
- Milner, Richard (March 2013). "Analyzing Poverty, Learning, and Teaching Through a Critical Race Theory Lens". Review of Research in Education. 37 (1): 1–53. doi:10.3102/0091732X12459720. JSTOR 24641956. S2CID 146634183.
- Crenshaw 1991; Crenshaw 1989.
- Ansell 2008, pp. 344–345.
- Crenshaw 2019, pp. 52–84.
- "Critical race theory". Encyclopaedia Britannica. September 21, 2021. Archived from the original on November 22, 2021.
- Ansell 2008, pp. 344–345; Bridges 2019, p. 7; Crenshaw et al. 1995, p. xiii.
- Ansell 2008, p. 344; Cole 2007, pp. 112–113: "CRT was a reaction to Critical Legal Studies (CLS) ... CRT was a response to CLS, criticizing the latter for its undue emphasis on class and economic structure, and insisting that 'race' is a more critical identity."
- Bridges 2021, 2:06.
- Crenshaw et al. 1995, p. xxvii. "Indeed, the organizers coined the term 'Critical Race Theory' to make it clear that our work locates itself in intersection of critical theory and race, racism and the law."
- Ansell 2008, p. 344.
- Cabrera 2018, p. 213.
- Wallace-Wells, Benjamin (June 18, 2021). "How a Conservative Activist Invented the Conflict Over Critical Race Theory". The New Yorker. OCLC 909782404. Archived from the original on June 18, 2021.
- Caroline Kelly (September 5, 2020). "Trump bars 'propaganda' training sessions on race in latest overture to his base". CNN.
- Duhaney, Patrina (March 8, 2022). "Why does critical race theory make people so uncomfortable?". The Conversation. Retrieved March 15, 2022.
- Bump, Philip (June 15, 2021). "Analysis | The Scholar Strategy: How 'critical race theory' alarms could convert racial anxiety into political energy". The Washington Post. Archived from the original on June 22, 2021.
- Harris 2021.
- West 1995, p. xi.
- Brooks 1994, p. 85.
- Ladson-Billings & Tate 1995.
- Gillborn 2015; Ladson-Billings 1998.
- Ladson-Billings 1998, p. 7.
- Cabrera 2018, p. 211; Delgado & Stefancic 2017, p. 3.
- "Examine critical race theory (CRT)". Encyclopaedia Britannica. Video with transcript. Archived from the original on November 24, 2021.
- Bell 1992.
- McCristal-Culp 1992, p. 1149.
- Hancock 2016, p. 192; Crenshaw 1989.
- Cesario 2008, pp. 201–212; Bell 1980.
- Harnois 2010; Collins 2009.
- Delgado & Stefancic 1993.
- Kennedy 1995; Kennedy 1990.
- Delgado & Stefancic 1993, p. 462.
- Delgado & Stefancic 1993; Strickland 1997.
- Goldberg, David Theo (1993). Racist Culture: Philosophy and the Politics of Meaning. Blackwell. ISBN 978-0-631-18078-4.
- Crenshaw 1988, p. 103.
- Crenshaw 1988, pp. 104–105.
- Crenshaw 1988, p. 104.
- Crenshaw 1988, p. 106.
- Kennedy 1990, p. 705.
- Bonilla-Silva 2020; Bonilla-Silva 2010, p. 26.
- Alcoff 2021.
- Alcoff 2021; Delgado 1984.
- Delgado & Stefancic 1992, p. 1276.
- Delgado & Stefancic 1992, p. 1261.
- Delgado & Stefancic 1992, pp. 1262–1263.
- Delgado & Stefancic 1992, pp. 1263–1264.
- Delgado & Stefancic 1992, pp. 1264–1265.
- Delgado & Stefancic 1992, p. 1266.
- Delgado & Stefancic 1992, pp. 1266–1267.
- Delgado & Stefancic 1992, p. 1278.
- Delgado & Stefancic 1992, p. 1279.
- Delgado & Stefancic 1992, pp. 1284–1285.
- Delgado & Stefancic 1992, pp. 1286–1287.
- Delgado & Stefancic 1992, p. 1282.
- Delgado & Stefancic 1992, p. 1288.
- Brayboy, Bryan McKinley Jones (December 2005). "Toward a Tribal Critical Race Theory in Education". The Urban Review. 37 (5): 425–446. doi:10.1007/s11256-005-0018-y. S2CID 145515195.
- Leonardo 2013, pp. 603–604; Ansell 2008, p. 345.
- Bell 1980.
- Wright & Cobb 2021.
- Shih, David (April 19, 2017). "A Theory To Better Understand Diversity, And Who Really Benefits". Code Switch. NPR. Retrieved October 20, 2021.
- Ogbonnaya-Ogburu, Ihudiya Finda; Smith, Angela D.R.; To, Alexandra; Toyama, Kentaro (2020). "Critical Race Theory for HCI". Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. pp. 1–16. doi:10.1145/3313831.3376392. ISBN 978-1-4503-6708-0. S2CID 218483077.
Those with power rarely concede it without interest convergence. Racism benefits some groups, and those groups are reluctant to move against it. They will take or allow anti-racist actions most often when it also confers their benefits. In the US context, the forward movement for civil rights has typically only occurred when it is materially in the interest of the White majority.
- Bell 1989, p. [page needed]; Dudziak 2000, p. [page needed].
- Dudziak 2000; Ioffe 2017.
- Dudziak 2000.
- Delgado & Stefancic 2017, pp. 25–26; Dudziak 1988.
- Dudziak 1997.
- Ioffe 2017.
- Delgado & Stefancic 2012, pp. 51–55.
- Crenshaw 1991.
- Delgado & Stefancic 2017, pp. 63–66.
- Zilliacus, Harriet; Paulsrud, BethAnne; Holm, Gunilla (April 3, 2017). "Essentializing vs. non-essentializing students' cultural identities: curricular discourses in Finland and Sweden". Journal of Multicultural Discourses. 12 (2): 166–180. doi:10.1080/17447143.2017.1311335. S2CID 49215486.
- Soylu Yalcinkaya, Nur; Estrada-Villalta, Sara; Adams, Glenn (2017). "The (Biological or Cultural) Essence of Essentialism: Implications for Policy Support among Dominant and Subordinated Groups". Frontiers in Psychology. 8: 900. doi:10.3389/fpsyg.2017.00900. PMC 5447748. PMID 28611723.
- Van Wagenen, Aimee (2007). "The Promise and Impossibility of Representing Anti-Essentialism: Reading Bulworth Through Critical Race Theory". Race, Gender & Class. 14 (1/2): 157–177. JSTOR 41675202. ProQuest 218827114.
- "Race and Racial Identity". National Museum of African American History and Culture. Retrieved December 1, 2022.
- Delgado & Stefancic 2012, pp. 26, 155.
- Delgado & Stefancic 2001, p. 26.
- Delgado & Stefancic 2001, p. 27.
- Jones 2002, pp. 9–10.
- Perea, Juan (1997). "The Black/White Binary Paradigm of Race: The 'Normal Science' of American Racial Thought". California Law Review, la Raza Journal. 85 (5): 1213–1258. doi:10.2307/3481059. JSTOR 3481059.
- Delgado & Stefancic 2017, p. 76.
- Matsuda, Mari J.; Lawrence, Charles R. (1993). "Epilogue: Burning Crosses and the R. A. V. Case". Words That Wound: Critical Race Theory, Assaultive Speech, And The First Amendment (1st ed.). Westview Press. pp. 133–136. ISBN 978-0-429-50294-1.
- Delgado 1995.
- Kennedy 1990.
- Williams 1991.
- Curry 2009b, p. 1.
- Curry 2009b, p. 2.
- Curry 2011, p. [page needed].
- Curry 2009b, p. [page needed].
- Delgado & Stefancic 1998a, p. 467; Delgado & Stefancic 2001, p. 30; Bell 1976.
- Delgado & Stefancic 1998a; Bell 1980.
- Friedman, Jonathan (November 8, 2021). Educational Gag Orders: Legislative Restrictions on the Freedom to Read, Learn, and Teach (Report). New York: PEN America. Archived from the original on November 9, 2021.
- Delgado & Stefancic 2001.
- Delgado & Stefancic 1998a, p. 467.
- Jackson, Lauren Michele (July 7, 2021). "The Void That Critical Race Theory Was Created to Fill". The New Yorker. Retrieved November 8, 2021.
- Bell 1976; Bell 1980.
- Cobb, Jelani (September 13, 2021). "The Man Behind Critical Race Theory". The New Yorker. Retrieved November 14, 2021.
- Wright & Cobb 2021; Bell 1976; Bell 1980.
- "Montgomery Bus Boycott". Civil Rights Movement Archive.
- Groves, Harry E. (1951). "Separate but Equal—The Doctrine of Plessy v. Ferguson". Phylon. 12 (1): 66–72. doi:10.2307/272323. JSTOR 272323.
- Schauer, Frederick (1997). "Generality and Equality". Law and Philosophy. 16 (3): 279–97. doi:10.2307/3504874. JSTOR 3504874.
- Gotanda 1991.
- Bell 1970.
- Bell 1979a.
- Crenshaw et al. 1995, pp. xix–xx.
- Buras, Kristen L. (2014). "From Carter G. Woodson to Critical Race Curriculum Studies". In Dixson, Adrienne D. (ed.). Researching Race in Education: Policy, Practice, and Qualitative Research. Charlotte, N.C.: Information Age Publishing. pp. 49–50. ISBN 978-1-6239-6678-2.
When Bell departed from Harvard to lead the University of Oregon School of Law, Harvard's law students of color demanded that another faculty member of color be hired to replace him.
- Crenshaw et al. 1995, p. xx: "The liberal white Harvard administration responded to student protests, demonstrations, rallies, and sit-ins—including a takeover of the Dean's office—by asserting that there were no qualified black scholars who merited Harvard's interest."
- Kennedy, Randall L. (June 1989). "Racial Critiques of Legal Academia". Harvard Law Review. 102 (8): 1745–1819. doi:10.2307/1341357. JSTOR 1341357.
- Cook et al. 2021, c.14:36.
- Cook et al. 2021.
- Gottesman, Isaac (2016). "Critical Race Theory and Legal Studies". The Critical Turn in Education: From Marxist Critique to Poststructuralist Feminism to Critical Theories of Race. London: Taylor & Francis. p. 123. ISBN 978-1-3176-7095-7.
- Delgado & Stefancic 2001, p. 30.
- Freeman, Alan David (January 1, 1978). "Legitimizing Racial Discrimination through Antidiscrimination law: A Critical Review of Supreme Court Doctrine". Minnesota Law Review. 62 (73).
- Yosso 2005, p. 71.
- Ladson-Billings, Gloria (2021). Critical Race Theory in Education: A Scholar's Journey. Teachers College Press. ISBN 978-0-8077-6583-8.
- Kennedy 1990; Kennedy 1995.
- Cook, Anthony E. (1990). "Beyond Critical Legal Studies: The Reconstructive Theology of Dr. Martin Luther King, Jr". Harvard Law Review. 103 (5): 985–1044. doi:10.2307/1341453. JSTOR 1341453.
- Harris 1993.
- Warren, James (September 5, 1993). "'Whiteness as Property'". Chicago Tribune.
- Crenshaw et al. 1995, p. xiii.
- Gillborn 2015; Crenshaw 1991.
- Curry 2008, pp. 35–36; Ladson-Billings 1998, pp. 7–24; Ladson-Billings & Tate 1995.
- Donnor, Jamel; Ladson-Billings, Gloria (2017). "Critical Race Theory and the Postracial Imaginary". In Denzin, Norman; Lincoln, Yvonna (eds.). The SAGE handbook of qualitative research (5th ed.). Thousand Oaks, California: Sage Publications. p. 366. ISBN 978-1-4833-4980-0.
- Harris 2002, p. 1216: "Over twenty American law schools offer courses in Critical Race Theory or include Critical Race Theory as a central part of other courses. Critical Race Theory is a formal course in a number of universities in the United States and in at least three foreign law schools."
- Delgado & Stefancic 2017, pp. 7–8.
- "Critical Race Theory". Centre for Research in Race and Education; University of Birmingham. Retrieved June 25, 2021.
- Quinn, Karl (November 6, 2020). "Are all white people racist? Why Critical Race Theory has us rattled". The Sydney Morning Herald. Retrieved June 26, 2021.
- Cole, Mike (2009). "Critical Race Theory comes to the UK: A Marxist response". Ethnicities. 9 (2): 246–269. doi:10.1177/1468796809103462. S2CID 144325161.
- Curry 2011, p. 4.
- Gordon 1999.
- Borland, Elizabeth. "Standpoint theory". Encyclopaedia Britannica. Retrieved November 22, 2021.
- Macionis, John J.; Gerber, Linda M. (2011). Sociology (7th Canadian ed.). Toronto: Pearson Prentice Hall. p. 12. ISBN 978-0-13-800270-1.
- Harris 1994, pp. 741–743.
- Crenshaw et al. 1995, p. xxiv: "To the emerging race crits, rights discourse held a social and transformative value in the context of racial subordination that transcended the narrower question of whether reliance on rights alone could bring about any determinate results"; Harris 1994, p. [page needed].
- Bell 1995, p. 899.
- Mallon 2007.
- Hacking 2003.
- Delgado & Stefancic 2017, p. 102.
- Cabrera 2018, p. 213; Farber & Sherry 1997a.
- Cabrera 2018, p. 213.
- Hernández-Truyol, Berta E.; Harris, Angela P.; Valdes, Francisco (2006). "Beyond the First Decade: A Forward-Looking History of LatCrit Theory, Community and Praxis". Berkeley la Raza Law Journal. SSRN 2666047.
- Farber, Daniel A.; Sherry, Suzanna (May 1995). "Is the Radical Critique of Merit Anti-Semitic?". California Law Review. 83 (3): 853. doi:10.2307/3480866. hdl:1803/6607. JSTOR 3480866.
Therefore, the authors suggest, the radical critique of merit has the wholly unintended consequence of being anti-Semitic and possibly racist.
- Farber & Sherry 1997a.
- Delgado & Stefancic 2017, pp. 103–104.
- Ansell 2008, pp. 345–346.
- Holmes 1997.
- Harris 2021; Locin & Tackett 1993.
- Apple, R. W. (June 5, 1993). "THE GUINIER BATTLE; President Blames Himself for Furor Over Nominee". The New York Times.
- Totenberg, Nina (July 5, 2022). "The Supreme Court is the most conservative in 90 years". NPR. Retrieved June 11, 2023.
- Kruzel, John (May 4, 2022). "Conservative court strategy bears fruit as Roe faces peril". The Hill. Retrieved June 11, 2023.
- Hurley, Lawrence; Chung, Andrew; Hurley, Lawrence (July 1, 2022). "Explainer: How the conservative Supreme Court is reshaping U.S. law". Reuters. Retrieved June 11, 2023.
- Rhodes, Christopher. "The Federalist Society: Architects of the American dystopia". www.aljazeera.com. Retrieved June 11, 2023.
- Ansell 2008, p. 346.
- Rosen 1996.
- Russell 1997, Note 67, p. 791.
- Gillborn, David (2014). "Racism as Policy: A Critical Race Analysis of Education Reforms in the United States and England". The Educational Forum. 78 (1): 30–31. doi:10.1080/00131725.2014.850982. S2CID 144670114.
- Winerip, Michael (March 19, 2012). "Racial Lens Used to Cull Curriculum in Arizona". The New York Times. Archived from the original on July 8, 2017.
- Depenbrock, Julie (August 22, 2017). "Federal Judge Finds Racism Behind Arizona Law Banning Ethnic Studies". All Things Considered. NPR. Archived from the original on July 6, 2019.
- Carr (2022).
- Wilson (2021).
- Dawsey & Stein (2020); Lang (2020); Waxman (2021); Education Week (2021).
- Gross (2022).
- Rubin, Daniel Ian (July 3, 2020). "Hebcrit: a new dimension of critical race theory". Social Identities. 26 (4): 499–514. doi:10.1080/13504630.2020.1773778. S2CID 219923352.
- Yosso 2005, p. 72; Delgado & Stefancic 1998b.
- Yosso 2005, p. 72.
- Harpalani 2013.
- Castillo, Wendy; Gillborn, David (March 9, 2022). "How to "QuantCrit:" Practices and Questions for Education Data Researchers and Users".
- Myslinska 2014a, pp. 559–660.
- Jupp, Berry & Lensmire 2016.
- Myslinska 2014b.
- See e.g., Levin 2008.
- Annamma, Connor & Ferri 2012.
- Treviño, Harris & Wallace 2008.
- Yosso 2006, p. 7.
- Yosso 2005.
- Delgado & Stefancic 2001, p. 6.
- Yosso 2006.
- Iftikar, Jon S.; Museus, Samuel D. (November 26, 2018). "On the utility of Asian critical (AsianCrit) theory in the field of education". International Journal of Qualitative Studies in Education. 31 (10): 935–949. doi:10.1080/09518398.2018.1522008. S2CID 149949621.
- Alcoff, Linda (2021). "Critical Philosophy of Race". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy (Fall 2021 ed.).
- Ansell, Amy (2008). "Critical Race Theory". In Schaefer, Richard T. (ed.). Encyclopedia of Race, Ethnicity, and Society, Volume 1. SAGE Publications. pp. 344–346. doi:10.4135/9781412963879.n138. ISBN 978-1-4129-2694-2.
- Annamma, Subini Ancy; Connor, David; Ferri, Beth (2012). "Dis/ability critical race studies (DisCrit): theorizing at the intersections of race and dis/ability". Race Ethnicity and Education. 16 (1): 1–31. doi:10.1080/13613324.2012.730511. S2CID 145739550.
- Bell, Derrick A (1970). Race, Racism, and American Law. Cambridge, Mass.: Harvard Law School. OCLC 22681096.
- Bell, Derrick A. (March 1976). "Serving Two Masters: Integration Ideals and Client Interests in School Desegregation Litigation". The Yale Law Journal. 85 (4): 470–516. doi:10.2307/795339. JSTOR 795339. Reprinted in Crenshaw et al. (1995).
- Bell, Derrick A. (1979a). "Bakke, Minority Admissions, and the Usual Price of Racial Remedies". California Law Review. 67 (1): 3–19. doi:10.2307/3480087. JSTOR 3480087.
- Bell, Derrick A Jr. (1980). "Brown v. Board of Education and the Interest-Convergence Dilemma". Harvard Law Review. 93 (3): 518–533. doi:10.2307/1340546. JSTOR 1340546. Reprinted in Crenshaw et al. (1995).
- Bell, Derrick (1989) [first published 1973]. Race, Racism, and American Law (2nd ed.). Aspen Publishers. ISBN 978-0-7355-7574-5.
- Bell, Derrick (1992). Faces at the bottom of the well: the permanence of racism. New York: Basic Books. ISBN 978-0-465-06817-3. OCLC 25410809.
- Bell, Derrick A. (1995). "Who's Afraid of Critical Race Theory?". University of Illinois Law Review. 1995 (4): 893–.
- Bonilla-Silva, Eduardo (2010). Racism without racists : color-blind racism and the persistence of racial inequality in the United States (3rd ed.). Lanham, Md.: Rowman & Littlefield. ISBN 978-1-44-220218-4.
- Bonilla-Silva, Eduardo (July 31, 2020). "Color-Blind Racism in Pandemic Times". Sociology of Race and Ethnicity. 8 (3): 343–354. doi:10.1177/2332649220941024.
- Bridges, Khiara M. (2019). Critical Race Theory: A Primer. St. Paul, Minn.: Foundation Press. ISBN 978-1-6832-8443-7. OCLC 1054004570.
- Bridges, Khiara M. (September 2, 2021). Khiara M. Bridges Explains Critical Race Theory (video). International Association for Political Science Students. Event occurs at 15:43. Archived from the original on December 13, 2021. Retrieved November 27, 2021 – via YouTube.
- Brooks, Roy (1994). "Critical Race Theory: A Proposed Structure and Application to Federal Pleading". Harvard BlackLetter Law Journal. 11: 85–.
- Cabrera, Nolan L. (2018). "Where is the Racial Theory in Critical Race Theory?: A constructive criticism of the Crits". The Review of Higher Education. 42 (1): 209–233. doi:10.1353/rhe.2018.0038. S2CID 149791522.
- Carbado, Devon W.; Gulati, Mitu; Valdes, Francisco; McCristal-Culp, Jerome; Harris, Angela P. (May 2003). "The Law and Economics of Critical Race Theory" (PDF). The Yale Law Journal. 112 (7): 1757. doi:10.2307/3657500. JSTOR 3657500. Archived from the original (PDF) on October 5, 2016.
- Carr, Nicole (June 16, 2022). "White Parents Rallied to Chase a Black Educator Out of Town. Then, They Followed Her to the Next One". ProPublica. Retrieved June 17, 2022.
- Cesario, Anne Marie (2008). "Brown v. Board of Education". In Schaefer, Richard T. (ed.). Encyclopedia of Race, Ethnicity, and Society. Vol. 1. SAGE Publications. pp. 210–212. doi:10.4135/9781412963879.n138. ISBN 978-1-4129-2694-2.
- Cole, Mike (2007). Marxism and Educational Theory: Origins and Issues. Taylor & Francis. ISBN 978-0-203-39732-9.
- Collins, Patricia Hill (2009) [first published 1990]. Black feminist thought: knowledge, consciousness, and the politics of empowerment (1st ed.). New York: Routledge. ISBN 978-0-415-96472-2.
- Cook, Anthony; Hosang, Daniel M.; Ladson-Billings, Gloria; Peller, Gary; Williams, Robert A. (September 2, 2021). "The Insurgent Origins of Critical Race Theory" (Podcast). Intersectionality Matters!. No. 39. Retrieved November 9, 2021.
- Crenshaw, Kimberlé (1988). "Race, Reform and Retrenchment: Transformation and Legitimation in Anti-Discrimination Law". Harvard Law Review. 101 (7): 1331–1387. doi:10.2307/1341398. JSTOR 1341398. Reprinted in Crenshaw et al. (1995, pp. 103–127).
- Crenshaw, Kimberlé (1989). "Demarginalizing the intersection of race and sex: a black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics". University of Chicago Legal Forum. 1989 (1): 139–167.
- Crenshaw, Kimberlé (July 1991). "Mapping the margins: Intersectionality, identity politics, and violence against women of color". Stanford Law Review. 43 (6): 1241–1299. CiteSeerX 10.1.1.695.5934. doi:10.2307/1229039. JSTOR 1229039. S2CID 24661090. Reprinted in Crenshaw et al. (1995, pp. 357–384).
- Crenshaw, Kimberlé Williams (2019). "Unmasking Colorblindness in the Law: Lessons from the Formation of Critical Race Theory". Seeing Race Again: Countering Colorblindness across the Disciplines. University of California Press. pp. 52–84. doi:10.1525/9780520972148-004. ISBN 978-0-520-97214-8. JSTOR j.ctvcwp0hd. S2CID 243191319.
- Crenshaw, Kimberlé; Gotanda, Neil; Peller, Gary; Thomas, Kendall, eds. (1995). Critical Race Theory: The Key Writings that Formed the Movement. New York: The New Press. ISBN 978-1-56584-271-7.
- Curry, Tommy J. (2008). "Saved by the Bell: Derrick Bell's Racial Realism as Pedagogy". Philosophical Studies in Education. 39: 35–46. ERIC EJ1071987.
- Curry, Tommy (2009a). "Critical Race Theory". In Greene, Helen Taylor; Gabbidon, Shaun L. (eds.). Encyclopedia of Race and Crime. SAGE Publications. pp. 166–169. ISBN 978-1-4129-5085-5.
- Curry, Tommy J. (2009b). "Will the Real Crt Please Stand Up? The Dangers of Philosophical Contributions to Crt". Crit: A Critical Legal Studies Journal. 2 (1): 1–47.
- Curry, Tommy J. (2011). "Shut Your Mouth When You're Talking to Me: Silencing the Idealist School of Critical Race Theory Through a Culturalogic Turn in Jurisprudence". Georgetown Law Journal of Modern Critical Race Studies. 1 (3): 1–38. SSRN 983923.
- Dawsey, Josh; Stein, Jeff (September 5, 2020). "White House directs federal agencies to cancel race-related training sessions it calls 'un-American propaganda'". The Washington Post. ISSN 0190-8286. Archived from the original on September 11, 2020.
- Delgado, Richard (1984). "The Imperial Scholar: Reflections on a Review of Civil Rights Literature". University of Pennsylvania Law Review. 132 (3): 561–578. doi:10.2307/3311882. JSTOR 3311882. S2CID 141979963.
- Delgado, Richard (1995). "Rodrigo's Tenth Chronicle: Merit and Affirmative Action". Georgetown Law Journal. 83 (4): 1711–1748. SSRN 2094599.
- Delgado, Richard; Stefancic, Jean (1992). "Images of the Outsider in American Law and Culture: Can Free Expression Remedy Systemic Social Ills?" (PDF). Cornell Law Review. 77 (6): 1258–1297. CiteSeerX 10.1.1.946.7275. SSRN 2095316.
- Delgado, Richard; Stefancic, Jean (1993). "Critical Race Theory: An Annotated Bibliography". Virginia Law Review. 79 (2): 461–516. doi:10.2307/1073418. JSTOR 1073418.
- Delgado, Richard; Stefancic, Jean (1998a). "Critical Race Theory: Past, Present, and Future". Current Legal Problems. 51 (1): 467–491. doi:10.1093/clp/51.1.467.
- Delgado, Richard; Stefancic, Jean (1998b). The Latino/a Condition: A Critical Reader. New York University Press. ISBN 978-0-8147-1894-0.
- Delgado, Richard; Stefancic, Jean (2001). Critical race theory : an introduction (1st ed.). New York University Press. ISBN 0-8147-1930-9.
- Delgado, Richard; Stefancic, Jean (2012). Critical Race Theory: An Introduction. Critical America (2nd ed.). New York University Press. ISBN 978-0-8147-2136-0.
- Delgado, Richard; Stefancic, Jean (2017). Critical race theory : an introduction (3rd ed.). New York University Press. ISBN 978-1-4798-0276-0.
- Delgado Bernal, Dolores (February 2002). "Critical Race Theory, Latino Critical Theory, and Critical Raced-Gendered Epistemologies: Recognizing Students of Color as Holders and Creators of Knowledge". Qualitative Inquiry. 8 (1): 105–126. doi:10.1177/107780040200800107. S2CID 146643087.
- Dudziak, Mary L. (November 1988). "Desegregation as a Cold War Imperative". Stanford Law Review. 41 (1): 61–120. doi:10.2307/1228836. JSTOR 1228836.
- Dudziak, Mary L. (September 1997). "The Little Rock Crisis and Foreign Affairs: Race, Resistance and the Image of American Democracy". Southern California Law Review. 70 (6): 1641–1716. SSRN 45950.
- Dudziak, Mary L (2000). Cold War civil rights: race and the image of American democracy. Princeton, N.J.: Princeton University Press. ISBN 978-0-691-01661-0.
- "Map: Where Critical Race Theory Is Under Attack". Education Week. June 11, 2021. ISSN 0277-4232. Retrieved January 22, 2022.
- Farber, Daniel A.; Sherry, Suzanna (1997a). Beyond All Reason: The Radical Assault on Truth in American Law. Oxford University Press. pp. 5, 9–11, 58, 118–119, 127. ISBN 978-0-19-535543-7.
- Gates, Henry Louis Jr. (1996). "Critical Race Theory and Freedom of Speech". In Menand, Louis (ed.). The Future of Academic Freedom. University of Chicago Press. pp. 119–159. ISBN 978-0-226-52004-9.
- Gillborn, David (2015). "Intersectionality, Critical Race Theory, and the Primacy of Racism: Race, Class, Gender, and Disability in Education". Qualitative Inquiry. 21 (3): 277–287. doi:10.1177/1077800414557827. S2CID 147260539.
- Gordon, Lewis R. (Spring 1999). "A Short History of the 'Critical' in Critical Race Theory". American Philosophy Association Newsletter. 98 (2). Archived from the original on May 2, 2003.
- Gotanda, Neil (1991). "A Critique of 'Our Constitution Is Color-Blind'". Stanford Law Review. 44 (1): 1–68. doi:10.2307/1228940. JSTOR 1228940. Reprinted in Crenshaw et al. (1995, pp. 257–275)
- Gross, Terry (February 3, 2022). "From slavery to socialism, new legislation restricts what teachers can discuss". Fresh Air. NPR. Retrieved May 17, 2022.
- Hacking, Ian (2003). The social construction of what?. Cambridge, Mass.: Harvard University Press. ISBN 978-0-674-00412-2.
- Hancock, Ange-Marie (2016). Intersectionality: an intellectual history. New York: Oxford University Press. ISBN 978-0-19-937037-5.
- Harpalani, Vinay (August 12, 2013). "DesiCrit: Theorizing the Racial Ambiguity of South Asian Americans" (PDF). New York University Annual Survey of American Law. 69 (1): 77–183. SSRN 2308892. Archived from the original (PDF) on June 22, 2021.
- Harnois, Catherine E. (2010). "Race, Gender, and the Black Women's Standpoint". Sociological Forum. 25 (1): 68–85. doi:10.1111/j.1573-7861.2009.01157.x.
- Harris, Adam (May 7, 2021). "The GOP's 'Critical Race Theory' Obsession". The Atlantic. Archived from the original on May 26, 2021.
- Harris, Angela P. (July 1994). "Foreword: The Jurisprudence of Reconstruction". California Law Review. 82 (4): 741–785. doi:10.2307/3480931. JSTOR 3480931.
- Harris, Cheryl I. (June 1993). "Whiteness as Property". Harvard Law Review. 106 (8): 1707–1791. doi:10.2307/1341787. JSTOR 1341787. Reprinted in Crenshaw et al. (1995, pp. 276–292)
- Harris, Cheryl (2002). "Critical Race Studies: An Introduction". UCLA Law Review. 49 (5): 1215–.
- Holmes, Steven A. (November 16, 1997). "Political Right's Point Man on Race". The New York Times. p. 24.
- Ioffe, Julia (October 21, 2017). "The History of Russian Involvement in America's Race Wars". The Atlantic.
- Jones, Camara Phyllis (2002). "Confronting Institutionalized Racism". Phylon. 50 (1/2): 7–22. doi:10.2307/4149999. JSTOR 4149999. S2CID 158126244.
- Jupp, James C.; Berry, Theodorea Regina; Lensmire, Timothy J. (December 2016). "Second-Wave White Teacher Identity Studies: A Review of White Teacher Identity Literatures From 2004 Through 2014". Review of Educational Research. 86 (4): 1151–1191. doi:10.3102/0034654316629798. S2CID 147354763.
- Kang, Jerry; Banaji, Mahzarin R. (2006). "Fair Measures: A Behavioral Realist Revision of Affirmative Action". California Law Review. 94 (4): 1063–1118. doi:10.15779/Z38370Q. SSRN 873907.
- Kennedy, Duncan (September 1990). "A Cultural Pluralist Case for Affirmative Action in Legal Academia". Duke Law Journal. 1990 (4): 705–757. doi:10.2307/1372722. JSTOR 1372722.
- Kennedy, Duncan (1995). "A Cultural Pluralist Case for Affirmative Action in Legal Academia". In Crenshaw, Kimberlé; Gotanda, Neil; Peller, Gary; Thomas, Kendall (eds.). Critical Race Theory: The Key Writings that Formed the Movement. New York: The New Press. ISBN 978-1-56584-271-7.
- Komlos, John (2021). "Covert Racism in Economics". FinanzArchiv: Public Finance Analysis. pp. 83–115. Retrieved May 29, 2022.
- Ladson-Billings, Gloria (January 1998). "Just what is critical race theory and what's it doing in a nice field like education?". International Journal of Qualitative Studies in Education. 11 (1): 7–24. doi:10.1080/095183998236863. S2CID 53628887.
- Ladson-Billings, Gloria; Tate, William F. IV (1995). "Toward a Critical Race Theory of Education". Teachers College Record. 97 (1): 47–68. doi:10.1177/016146819509700104. S2CID 246702897.
- Lang, Cady (September 29, 2020). "President Trump Has Attacked Critical Race Theory. Here's What to Know About the Intellectual Movement". Time. Archived from the original on January 16, 2021.
- Leonardo, Zeus (2013). "The story of schooling: critical race theory and the educational racial contract". Discourse: Studies in the Cultural Politics of Education. 34 (4): 599–610. doi:10.1080/01596306.2013.822624. S2CID 144840673. Reprinted in: Gillborn, D.; Gulson, K. N.; Leonardo, Z., eds. (2016). The Edge of Race : Critical examinations of education and race/racism. Routledge. ISBN 978-1-138-18910-2.
- Levin, Mark (2008). "The Wajin's Whiteness: Law and Race Privilege in Japan". Hōritsu Jihō. 80 (2): 80–91. SSRN 1551462.
- Locin, Mitchell; Tackett, Michael (June 4, 1993). "Clinton dumps nominee". Chicago Tribune. p. 11. Archived from the original on November 26, 2018.
- Mallon, Ron (January 2007). "A Field Guide to Social Construction". Philosophy Compass. 2 (1): 93–108. doi:10.1111/j.1747-9991.2006.00051.x.
- Matsuda, Mari (1987). "Looking to the Bottom: Critical Legal Studies and Reparations". Harvard Civil Rights-Civil Liberties Law Review. 22 (2): 323–. hdl:10125/65944.
- McCristal-Culp, Jerome (1992). "Diversity, Multiculturalism, And Affirmative Action: Duke, The Nas, And Apartheid" (PDF). DePaul Law Review. 41 (114): 32.
- Meyer, Theodoric; Severns, Maggie; McGraw, Meridith (June 23, 2021). "'The Tea Party to the 10th power': Trumpworld bets big on critical race theory". POLITICO. Retrieved June 23, 2021.
- Myslinska, Dagmar (2014a). "Contemporary First-Generation European-Americans: The Unbearable 'Whiteness' of Being". Tulane Law Review. 88 (3): 559–625. SSRN 2222267.
- Myslinska, Dagmar (2014b). "Racist Racism: Complicating Whiteness Through the Privilege and Discrimination of Westerners in Japan". UMKC Law Review. 83 (1): 1–55. SSRN 2399984.
- Rosen, Jeffrey (December 9, 1996). "The Bloods and the Crits". The New Republic.
- Ruparelia, Rakhi (2019). "The Invisibility of Whiteness in the White Feminist Imagination". In Kirkland, Ewan (ed.). Shades of Whiteness. Leiden and Boston: Brill Publishers. pp. 77–89. doi:10.1163/9781848883833_008. ISBN 978-1-84888-383-3. S2CID 201575540.
- Russell, Margaret M. (1997). "Beyond 'Sellouts' and 'Race Cards': Black Attorneys and the Straitjacket of Legal Practice". Michigan Law Review. 95 (4): 766–794. doi:10.2307/1290046. JSTOR 1290046.
- Strickland, Rennard (1997). "The Genocidal Premise in Native American Law and Policy: Exorcising Aboriginal Ghosts". Journal of Gender, Race and Justice. 1: 325–.
- Treviño, A. Javier; Harris, Michelle A.; Wallace, Derron (March 2008). "What's so critical about critical race theory?". Contemporary Justice Review. 11 (1): 7–10. doi:10.1080/10282580701850330. S2CID 145399733.
- Waxman, Olivia (June 24, 2021). "'Critical Race Theory Is Simply the Latest Bogeyman.' Inside the Fight Over What Kids Learn About America's History". Time.
- West, Cornel (1995). "Foreword". In Crenshaw, Kimberlé; Gotanda, Neil; Peller, Gary (eds.). Critical Race Theory: The Key Writings that Formed the Movement. The New Press. pp. xi–xii. ISBN 978-1-56584-271-7.
- Williams, Patricia J. (1991). The Alchemy of Race and Rights: Diary of a Law Professor. Cambridge, Mass.: Harvard University Press. ISBN 978-0-674-01470-1.
- Wilson, Reid (June 22, 2021). "GOP sees critical race theory battle as potent midterm weapon". The Hill. Retrieved June 17, 2022.
- Wright, Kai; Cobb, Jelani (October 11, 2021). "The True Story of Critical Race Theory" (Podcast). The United States of Anxiety. WNYC Studios. Retrieved November 14, 2021.
- Yosso, Tara J. (March 2005). "Whose culture has capital? A critical race theory discussion of community cultural wealth" (PDF). Race Ethnicity and Education. 8 (1): 69–91. doi:10.1080/1361332052000341006. S2CID 34658106.
- Yosso, Tara J. (2006). Critical Race Counterstories along the Chicana/Chicano Educational Pipeline. Teaching/Learning Social Justice. New York: Routledge. ISBN 978-0-415-95195-1.
- Delgado, Richard, ed. (1995). Critical Race Theory: The Cutting Edge. Philadelphia: Temple University Press. ISBN 978-1-5663-9347-8.
- Dixson, Adrienne D.; Rousseau, Celia K., eds. (2006). Critical Race Theory in Education: All God's Children Got a Song. New York: Routledge. ISBN 978-0-415-95292-7.
- Epstein, Kitty Kelly (2006). A Different View of Urban Schools: Civil Rights, Critical Race Theory, and Unexplored Realities. Peter Lang. ISBN 978-0-8204-7879-1.
- Fortin, Jacey (November 8, 2021). "Critical Race Theory: A Brief History". The New York Times.
- Gillborn, David; Dixson, Adrienne D.; Ladson-Billings, Gloria; Parker, Laurence; Rollock, Nicola; Warmington, Paul, eds. (2018). Critical Race Theory in Education (1st ed.). Routledge. ISBN 978-1-138-84827-6.
- Goldberg, David Theo (May 2, 2021). "The War on Critical Race Theory". Boston Review.
- Taylor, Edward (Spring 1998). "A Primer on Critical Race Theory: Who are the critical race theorists and what are they saying?". Journal of Blacks in Higher Education (19): 122–124. doi:10.2307/2998940. JSTOR 2998940. |
We are learning more about how airborne dust affects the climate thanks to NASA’s Earth Surface Mineral Dust Source Investigation (EMIT) project, which is measuring the presence of important minerals in the planet’s dust-producing deserts. However, EMIT has also proven it is capable of detecting methane, a powerful greenhouse gas.
What Are These “Super-Emitters”?
The science team has discovered more than 50 “super-emitters” across Central Asia, the Middle East, and the Southwestern United States in the data EMIT has gathered since it was deployed on the International Space Station in July.
Super-emitters are buildings, pieces of machinery, and other infrastructure that emit methane at high rates, usually in the fossil fuel, waste management, or agricultural industries.
The Impact Of Methane Pollution
Controlling methane emissions is key to limiting global warming. According to NASA Administrator Bill Nelson, this new capability will not only help researchers better pinpoint where methane leaks are occurring but also offer guidance on how to address them promptly.
“For years, NASA’s more than two dozen satellites and in-space instruments, including the International Space Station, have been crucial in identifying changes to the Earth’s climate. In order to quantify this powerful greenhouse gas and halt it at the source, EMIT is proving to be an essential tool in our toolkit.”
Methane absorbs infrared light in a distinctive pattern known as a spectral fingerprint, which EMIT’s imaging spectrometer can identify with great accuracy and precision. Carbon dioxide can also be measured using the device.
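The sketch below illustrates, in a purely conceptual way, how a known absorption "fingerprint" can be matched against a measured spectrum. It is not EMIT's actual retrieval algorithm; the wavelength band, line shape, and noise level are invented solely for illustration.

```python
# Conceptual sketch only: matching a known absorption signature ("spectral
# fingerprint") against a measured spectrum. This is NOT EMIT's actual retrieval;
# the wavelength band, line width, and noise level are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(2.1, 2.4, 200)  # micrometres (illustrative band)

# Hypothetical unit-strength absorption signature: a narrow dip near 2.3 um.
signature = -np.exp(-0.5 * ((wavelengths - 2.3) / 0.005) ** 2)

def measured_radiance(plume_strength):
    """Smooth background + scaled absorption dip + sensor noise (all synthetic)."""
    background = 1.0 - 0.1 * (wavelengths - 2.1)
    noise = rng.normal(0.0, 0.002, wavelengths.size)
    return background + plume_strength * signature + noise

def matched_filter_score(spectrum):
    """Remove a linear background, then project the residual onto the signature."""
    baseline = np.polyval(np.polyfit(wavelengths, spectrum, 1), wavelengths)
    residual = spectrum - baseline
    return float(residual @ signature) / float(signature @ signature)

print(matched_filter_score(measured_radiance(0.0)))   # close to 0: no plume
print(matched_filter_score(measured_radiance(0.05)))  # close to 0.05: plume detected
```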
Significance Of EMIT
The new observations result from EMIT’s ability to scan expanses of the Earth’s surface dozens of miles wide while resolving areas as tiny as a soccer field, as well as from the wide coverage of the globe provided by the space station’s orbit.
According to David Thompson, senior research scientist at NASA’s Jet Propulsion Laboratory in Southern California, which oversees the mission,
“These results are exceptional and they demonstrate the value of pairing global-scale perspective with the resolution required to identify methane point sources, down to the facility scale.”
Thompson is also EMIT's instrument scientist. The instrument's capabilities are expected to raise the bar for efforts to locate methane sources and to reduce emissions from human activity.
Why Is Methane More Harmful Than Carbon Dioxide?
Methane accounts for a smaller share of human-caused greenhouse gas emissions than carbon dioxide, but it is estimated to be about 80 times more effective, tonne for tonne, at trapping heat in the atmosphere over the 20 years following its release.
Furthermore, whereas carbon dioxide persists in the atmosphere for hundreds of years, methane persists for only about a decade. Cutting methane emissions therefore produces an atmospheric response on a similarly short timescale, slowing warming in the near term.
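As a rough illustration of the comparison above, the short sketch below converts a methane emission into a 20-year CO2-equivalent using the roughly 80x figure quoted in this article; the factor and the example tonnages are illustrative assumptions, not measured values.

```python
# Back-of-the-envelope CO2-equivalent calculation; the GWP-20 factor of ~80 is
# taken from the figure quoted above and the tonnages are purely illustrative.
GWP20_METHANE = 80  # tonne-for-tonne warming effect relative to CO2 over 20 years

def co2_equivalent_20yr(methane_tonnes: float) -> float:
    """Convert a methane emission into its approximate 20-year CO2-equivalent."""
    return methane_tonnes * GWP20_METHANE

print(co2_equivalent_20yr(1.0))   # 80.0   -> one tonne of methane ~ 80 tonnes CO2e
print(co2_equivalent_20yr(50.0))  # 4000.0 -> fifty tonnes of methane ~ 4,000 tonnes CO2e
```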
How Can We Reduce Methane?
Pinpointing methane point sources is a crucial part of that process. Operators of the buildings, machinery, and infrastructure that generate the gas can act swiftly to reduce emissions once they know where the major emitters are.
Scientists verified the accuracy of the imaging spectrometer's mineral data while making the methane observations with EMIT. Over the course of its mission, EMIT will measure surface minerals in dry regions of Africa, Asia, North and South America, and Australia.
Researchers will be able to better comprehend the effects of airborne dust particles on the surface and atmosphere of Earth as a result of the data.
Kate Calvin, NASA’s principal scientist and senior climate advisor, said, “We have been anxious to see how EMIT’s mineral data could improve climate modeling.” “This new methane-detecting capability presents a unique opportunity to quantify and monitor greenhouse gasses that contribute to climate change,” the study authors write.
Methane Plume Detection
In order to evaluate the imaging spectrometer’s capabilities, researchers can search for methane in the mission’s study area, which overlaps with known methane hotspots throughout the globe.
The EMIT methane effort is led by Andrew Thorpe, a research technologist at JPL. "Some of the plumes EMIT detected are among the largest ever seen – unlike anything that has ever been observed from space," he said. "What we've discovered in a short period of time has already surpassed our expectations."
For instance, the sensor located a plume in the Permian Basin, southeast of Carlsbad, New Mexico, that was roughly 2 miles (3.3 kilometers) long. The Permian, one of the world’s biggest oil fields, extends across portions of southern New Mexico and western Texas.
East of Hazar, a Caspian Sea port city in Turkmenistan, EMIT researchers identified 12 plumes from oil and gas infrastructure. Some of the plumes, blown toward the west, stretch more than 20 miles (32 kilometers).
Are There Any Other Methane Plumes?
The scientists also located a methane plume from a sizable waste-processing complex that was at least 3 miles (4.8 kilometers) long and located south of Tehran, Iran. Decomposition produces methane, and landfills can be a significant source of this gas.
Scientists estimate that the Permian plume is flowing at a rate of about 40,300 pounds (18,300 kilograms) per hour, the Turkmenistan sources at a combined 111,000 pounds (50,400 kilograms) per hour, and the Iranian site at 18,700 pounds (8,500 kilograms) per hour.
The combined flow rate of the Turkmenistan sources is comparable to that of the 2015 Aliso Canyon gas leak in the Los Angeles region, which at times exceeded 110,000 pounds (50,000 kilograms) per hour and ranks among the largest methane releases in US history.
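The pound-to-kilogram figures above can be checked with a quick conversion; the annualized totals in the sketch below are hypothetical extrapolations that assume each rate is sustained, which the article does not claim.

```python
# Sanity check of the flow-rate conversions quoted above. The "per year" figures
# assume a constant rate and are illustrative only, not reported values.
LB_TO_KG = 0.453592

sources = {
    "Permian plume": 40_300,                   # lb per hour
    "Turkmenistan plumes (combined)": 111_000,
    "Iran waste-site plume": 18_700,
}

for name, lb_per_hr in sources.items():
    kg_per_hr = lb_per_hr * LB_TO_KG
    kt_per_yr = kg_per_hr * 24 * 365 / 1e6     # kilotonnes per year if sustained
    print(f"{name}: {kg_per_hr:,.0f} kg/hr (~{kt_per_yr:,.0f} kt/yr if sustained)")
```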
EMIT has the potential to find hundreds of super-emitters, some already identified through airborne, space-based, or ground-based measurements and others previously unknown.
EMIT’s lead investigator at JPL, Robert Green, predicted that as it continues to study the earth, it “will notice regions in which no one thought to search for greenhouse-gas emitters previously, and it will find plumes that no one expects.”
EMIT is the first of a new class of spaceborne imaging spectrometers to study Earth. One example is the Carbon Plume Mapper (CPM), an instrument in development at JPL that is designed to detect methane and carbon dioxide. JPL is collaborating with Carbon Mapper, a nonprofit organization, and other partners to launch two satellites carrying CPM in late 2023.
In Earth's atmosphere, carbon dioxide is a trace gas that plays an integral part in the greenhouse effect, carbon cycle, photosynthesis and oceanic carbon cycle. It is one of several greenhouse gases in the atmosphere of Earth. The current global average concentration of CO2 in the atmosphere is 421 ppm as of May 2022 (0.04%). This is an increase of 50% since the start of the Industrial Revolution, up from 280 ppm during the 10,000 years prior to the mid-18th century. The increase is due to human activity. Burning fossil fuels is the main cause of these increased CO2 concentrations and also the main cause of climate change. Other large anthropogenic sources include cement production, deforestation, and biomass burning.
While transparent to visible light, carbon dioxide is a greenhouse gas, absorbing and emitting infrared radiation at its two infrared-active vibrational frequencies. CO2 absorbs and emits infrared radiation at wavelengths of 4.26 μm (2,347 cm−1) (asymmetric stretching vibrational mode) and 14.99 μm (667 cm−1) (bending vibrational mode). It plays a significant role in influencing Earth's surface temperature through the greenhouse effect. Light emission from the Earth's surface is most intense in the infrared region between 200 and 2500 cm−1, as opposed to light emission from the much hotter Sun which is most intense in the visible region. Absorption of infrared light at the vibrational frequencies of atmospheric CO2 traps energy near the surface, warming the surface and the lower atmosphere. Less energy reaches the upper atmosphere, which is therefore cooler because of this absorption.
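The wavelength/wavenumber pairs quoted above follow from the standard conversion ν (cm−1) = 10,000 / λ (μm); the short check below simply reproduces the two quoted values.

```python
# Wavelength in micrometres -> wavenumber in cm^-1: nu = 10,000 / lambda_um.
def um_to_wavenumber(wavelength_um: float) -> float:
    return 1e4 / wavelength_um

print(round(um_to_wavenumber(4.26)))   # ~2347 cm^-1 (asymmetric stretching mode)
print(round(um_to_wavenumber(14.99)))  # ~667 cm^-1  (bending mode)
```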
Increases in atmospheric concentrations of CO2 and other long-lived greenhouse gases such as methane, nitrous oxide and ozone increase the absorption and emission of infrared radiation by the atmosphere, causing the observed rise in average global temperature and ocean acidification. Another direct effect is the CO2 fertilization effect. These changes cause a range of indirect effects of climate change on the physical environment, ecosystems and human societies. Carbon dioxide exerts a larger overall warming influence than all of the other greenhouse gases combined. It has an atmospheric lifetime that increases with the cumulative amount of fossil carbon extracted and burned, due to the imbalance that this activity has imposed on Earth's fast carbon cycle. This means that some fraction (a projected 20–35%) of the fossil carbon transferred thus far will persist in the atmosphere as elevated CO2 levels for many thousands of years after these carbon transfer activities begin to subside. The carbon cycle is a biogeochemical cycle in which carbon is exchanged between the Earth's oceans, soil, rocks and the biosphere. Plants and other photoautotrophs use solar energy to produce carbohydrate from atmospheric carbon dioxide and water by photosynthesis. Almost all other organisms depend on carbohydrate derived from photosynthesis as their primary source of energy and carbon compounds.
The present atmospheric concentration of CO2 is the highest in 14 million years. Concentrations of CO2 in the atmosphere were as high as 4,000 ppm during the Cambrian period about 500 million years ago, and as low as 180 ppm during the Quaternary glaciation of the last two million years. Reconstructed temperature records for the last 420 million years indicate that atmospheric CO2 concentrations peaked at approximately 2,000 ppm during the Devonian period (400 Ma) and again in the Triassic period (220–200 Ma), and were about four times current levels during the Jurassic period (201–145 Ma).
Current concentration and future trends
Since the start of the Industrial Revolution, atmospheric CO2 concentrations have been increasing, causing global warming and ocean acidification. As of May 2022, the average monthly level of CO2 in Earth's atmosphere reached 421 parts per million by volume (ppm). "Parts per million" refers to the number of carbon dioxide molecules per million molecules of dry air. Previously, the value was 280 ppm during the 10,000 years up to the mid-18th century.
It was pointed out in 2021 that "the current rates of increase of the concentration of the major greenhouse gases (carbon dioxide, methane and nitrous oxide) are unprecedented over at least the last 800,000 years" (p. 515).
Annual and regional fluctuations
Atmospheric CO2 concentrations fluctuate slightly with the seasons, falling during the Northern Hemisphere spring and summer as plants consume the gas and rising during northern autumn and winter as plants go dormant or die and decay. The level drops by about 6 or 7 ppm (about 50 Gt) from May to September during the Northern Hemisphere's growing season, and then goes up by about 8 or 9 ppm. The Northern Hemisphere dominates the annual cycle of CO2 concentration because it has much greater land area and plant biomass than the Southern Hemisphere. Concentrations reach a peak in May as the Northern Hemisphere spring greenup begins, and decline to a minimum in October, near the end of the growing season.
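As a rough consistency check of the quoted seasonal swing, one can use the commonly cited approximation of about 7.8 Gt of CO2 per ppm; that conversion factor is an assumption here, not a figure given in this article.

```python
# Convert a seasonal drawdown in ppm to gigatonnes of CO2.
# The ~7.8 Gt CO2 per ppm factor is a common approximation (an assumption here).
GT_CO2_PER_PPM = 7.8

def ppm_to_gt_co2(delta_ppm: float) -> float:
    return delta_ppm * GT_CO2_PER_PPM

print(ppm_to_gt_co2(6.5))  # ~51 Gt CO2 for a 6-7 ppm seasonal drop, matching "about 50 Gt"
```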
Concentrations also vary on a regional basis, most strongly near the ground with much smaller variations aloft. In urban areas concentrations are generally higher and indoors they can reach 10 times background levels.
Measurements and predictions made in the recent past
- Estimates in 2001 found that the current carbon dioxide concentration in the atmosphere may be the highest in the last 20 million years. This figure has since been revised downward; the latest estimate (from 2013) is 14 million years. Most recently, IPCC AR6 (see for example figure 2.34) reports similar levels 3–3.3 million years ago, in the mid-Pliocene warm period. AR6 reports this period as a good proxy for likely climate outcomes with current levels of CO2.
- Data from 2009 found that the global mean CO2 concentration was rising at a rate of approximately 2 ppm/year and accelerating.
- The daily average concentration of atmospheric CO2 at Mauna Loa Observatory first exceeded 400 ppm on 10 May 2013 although this concentration had already been reached in the Arctic in June 2012. Data from 2013 showed that the concentration of carbon dioxide in the atmosphere is this high "for the first time in 55 years of measurement—and probably more than 3 million years of Earth history."
- As of 2018, CO2 concentrations were measured to be 410 ppm.
The concentration of carbon dioxide in the atmosphere is expressed as parts per million by volume (abbreviated as ppmv or just ppm). To convert from the usual ppmv units to ppm by mass, multiply by the ratio of the molar mass of CO2 to that of air, i.e., by about 1.52 (44.01 divided by 28.96).
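A one-line implementation of that conversion, using only the molar masses given above:

```python
# ppm by volume -> ppm by mass, using the molar-mass ratio quoted above.
M_CO2, M_AIR = 44.01, 28.96  # g/mol

def ppmv_to_ppm_mass(ppmv: float) -> float:
    return ppmv * (M_CO2 / M_AIR)

print(round(M_CO2 / M_AIR, 2))       # 1.52, the conversion factor quoted above
print(round(ppmv_to_ppm_mass(421)))  # ~640 ppm by mass for 421 ppmv
```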
The first reproducibly accurate measurements of atmospheric CO2 were from flask sample measurements made by Dave Keeling at Caltech in the 1950s. Measurements at Mauna Loa have been ongoing since 1958. Additionally, measurements are also made at many other sites around the world. Many measurement sites are part of larger global networks. Global network data are often made publicly available.
There are several surface measurement (including flask and continuous in situ) networks, including NOAA/ESRL, WDCGG, and RAMCES. The NOAA/ESRL Baseline Observatory Network and the Scripps Institution of Oceanography network data are hosted at the CDIAC at ORNL. The World Data Centre for Greenhouse Gases (WDCGG), part of GAW, data are hosted by the JMA. The Réseau Atmosphérique de Mesure des Composés à Effet de Serre database (RAMCES) is part of IPSL.
From these measurements, further products are made which integrate data from the various sources. These products also address issues such as data discontinuity and sparseness. GLOBALVIEW-CO2 is one of these products.
Ongoing ground-based total column measurements began more recently. Column measurements typically refer to an averaged column amount denoted XCO2, rather than a surface only measurement. These measurements are made by the TCCON. These data are also hosted on the CDIAC, and made publicly available according to the data use policy.
Space-based measurements of carbon dioxide are also a recent addition to atmospheric XCO2 measurements. SCIAMACHY aboard ESA's ENVISAT made global column XCO2 measurements from 2002 to 2012. AIRS aboard NASA's Aqua satellite, launched in 2002 shortly after ENVISAT, also makes global XCO2 measurements. More recent satellites have significantly improved the data density and precision of global measurements, with newer missions offering higher spectral and spatial resolutions. JAXA's GOSAT, launched in 2009, was the first dedicated greenhouse-gas monitoring satellite to successfully achieve orbit; NASA's OCO-2, launched in 2014, was the second. Various other satellite missions to measure atmospheric XCO2 are planned.
Analytical methods to investigate sources of CO2
- The burning of long-buried fossil fuels releases CO2 containing carbon with isotopic ratios different from those of living plants, enabling a distinction between natural and human-caused contributions to the CO2 concentration (a simple mixing sketch follows this list).
- There are higher atmospheric CO2 concentrations in the Northern Hemisphere, where most of the world's population lives (and emissions originate from), compared to the southern hemisphere. This difference has increased as anthropogenic emissions have increased.
- Atmospheric O2 levels are decreasing as oxygen reacts with the carbon in fossil fuels to form CO2.
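To illustrate the isotopic argument in the first bullet, the sketch below mixes two carbon reservoirs with different δ13C signatures; the δ13C values used (about −6.5‰ for background air and about −28‰ for fossil-fuel carbon) are rough, illustrative assumptions rather than figures taken from this article.

```python
# Two-end-member mixing: adding isotopically light fossil carbon makes the
# atmosphere's delta-13C more negative. The delta values below are rough,
# illustrative assumptions (per mil), not measurements from this article.
D13C_BACKGROUND = -6.5  # assumed background atmospheric CO2
D13C_FOSSIL = -28.0     # assumed fossil-fuel-derived CO2

def mixed_delta13c(fraction_fossil: float) -> float:
    """delta-13C of air containing the given fraction of fossil-derived CO2."""
    return (1 - fraction_fossil) * D13C_BACKGROUND + fraction_fossil * D13C_FOSSIL

for f in (0.0, 0.1, 0.2):
    print(f, round(mixed_delta13c(f), 2))  # -6.5, -8.65, -10.8
```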
Causes of the current increase
Anthropogenic CO2 emissions
While CO2 absorption and release are always happening as a result of natural processes, the recent rise in CO2 levels in the atmosphere is known to be mainly due to human (anthropogenic) activity. Anthropogenic carbon emissions exceed the amount that can be taken up or balanced out by natural sinks. Thus carbon dioxide has gradually accumulated in the atmosphere and, as of May 2022, its concentration is 50% above pre-industrial levels.
The extraction and burning of fossil fuels, releasing carbon that has been underground for many millions of years, has increased the atmospheric concentration of CO2. As of 2019, the extraction and burning of geologic fossil carbon by humans releases over 30 gigatonnes of CO2 (9 billion tonnes of carbon) each year. This larger disruption to the natural balance is responsible for the recent growth in the atmospheric CO2 concentration. Currently, about half of the carbon dioxide released from the burning of fossil fuels is not absorbed by vegetation and the oceans and remains in the atmosphere.
Burning fossil fuels such as coal, petroleum, and natural gas is the leading cause of increased anthropogenic CO2; deforestation is the second major cause. In 2010, 9.14 gigatonnes of carbon (GtC, equivalent to 33.5 gigatonnes of CO2 or about 4.3 ppm in Earth's atmosphere) were released from fossil fuels and cement production worldwide, compared to 6.15 GtC in 1990. In addition, land use change contributed 0.87 GtC in 2010, compared to 1.45 GtC in 1990. In the period 1751 to 1900, about 12 GtC were released as CO2 to the atmosphere from burning of fossil fuels, whereas from 1901 to 2013 the figure was about 380 GtC.
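The carbon-to-CO2 and carbon-to-ppm conversions used in the figures above can be reproduced with two standard factors; the molar masses are exact enough, while the ~2.13 GtC per ppm figure is a commonly used approximation treated here as an assumption.

```python
# Reproducing the conversions behind "9.14 GtC ~= 33.5 Gt CO2 ~= 4.3 ppm".
M_CO2, M_C = 44.01, 12.011  # g/mol
GTC_PER_PPM = 2.13          # commonly used approximation (an assumption here)

def gtc_to_gt_co2(gtc: float) -> float:
    return gtc * M_CO2 / M_C

def gtc_to_ppm(gtc: float) -> float:
    return gtc / GTC_PER_PPM

print(round(gtc_to_gt_co2(9.14), 1))  # ~33.5 Gt CO2
print(round(gtc_to_ppm(9.14), 1))     # ~4.3 ppm
```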
The International Energy Agency estimates that the top 1% of emitters globally each had carbon footprints of over 50 tonnes of CO2 in 2021, more than 1,000 times greater than those of the bottom 1% of emitters. The global average energy-related carbon footprint is around 4.7 tonnes of CO2 per person.
Roles in natural processes on Earth
Earth's natural greenhouse effect makes life as we know it possible and carbon dioxide plays a significant role in providing for the relatively high temperature on Earth. The greenhouse effect is a process by which thermal radiation from a planetary atmosphere warms the planet's surface beyond the temperature it would have in the absence of its atmosphere. Without the greenhouse effect, the Earth's average surface temperature would be about −18 °C (−0.4 °F) compared to Earth's actual average surface temperature of approximately 14 °C (57.2 °F).
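The roughly −18 °C figure quoted above is the standard effective-temperature estimate for an Earth without a greenhouse effect; the sketch below reproduces it from the Stefan-Boltzmann law, using commonly quoted approximate values for the solar constant and planetary albedo (assumptions, not values taken from this article).

```python
# Effective (no-greenhouse) temperature of Earth from the Stefan-Boltzmann law.
# Solar constant and albedo are standard approximate values, assumed here.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0             # solar constant at Earth, W m^-2
ALBEDO = 0.30           # Earth's planetary albedo (approx.)

def effective_temperature_k() -> float:
    """Equilibrium temperature of a planet absorbing S0 * (1 - albedo) / 4."""
    return (S0 * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25

t_eff = effective_temperature_k()
print(round(t_eff, 1), "K =", round(t_eff - 273.15, 1), "deg C")  # ~255 K, ~-18 deg C
```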
Water vapor is responsible for most (about 36–70%) of the total greenhouse effect, and its role as a greenhouse gas depends on temperature. On Earth, carbon dioxide is the most relevant greenhouse gas under direct anthropogenic influence. Carbon dioxide is often mentioned in the context of its increased influence as a greenhouse gas since the pre-industrial (1750) era. In 2013, the increase in CO2 was estimated to be responsible for 1.82 W m−2 of the 2.63 W m−2 change in radiative forcing on Earth (about 70%).
The concept of atmospheric CO2 increasing ground temperature was first published by Svante Arrhenius in 1896. The increased radiative forcing due to increased CO2 in the Earth's atmosphere is based on the physical properties of CO2 and the non-saturated absorption windows where CO2 absorbs outgoing long-wave energy. The increased forcing drives further changes in Earth's energy balance and, over the longer term, in Earth's climate.
Atmospheric carbon dioxide plays an integral role in the Earth's carbon cycle whereby CO2 is removed from the atmosphere by some natural processes such as photosynthesis and deposition of carbonates, to form limestones for example, and added back to the atmosphere by other natural processes such as respiration and the acid dissolution of carbonate deposits. There are two broad carbon cycles on Earth: the fast carbon cycle and the slow carbon cycle. The fast carbon cycle refers to movements of carbon between the environment and living things in the biosphere whereas the slow carbon cycle involves the movement of carbon between the atmosphere, oceans, soil, rocks, and volcanism. Both cycles are intrinsically interconnected and atmospheric CO2 facilitates the linkage.
Natural sources of atmospheric CO2 include volcanic outgassing, the combustion of organic matter, wildfires and the respiration processes of living aerobic organisms. Man-made sources of CO2 include the burning of fossil fuels for heating, power generation and transport, as well as some industrial processes such as cement making. It is also produced by various microorganisms from fermentation and cellular respiration. Plants, algae and cyanobacteria convert carbon dioxide to carbohydrates by a process called photosynthesis. They gain the energy needed for this reaction from absorption of sunlight by chlorophyll and other pigments. Oxygen, produced as a by-product of photosynthesis, is released into the atmosphere and subsequently used for respiration by heterotrophic organisms and other plants, forming a cycle with carbon.
Most sources of CO2 emissions are natural, and are balanced to various degrees by similar CO2 sinks. For example, the decay of organic material in forests, grasslands, and other land vegetation - including forest fires - results in the release of about 436 gigatonnes of CO2 (containing 119 gigatonnes carbon) every year, while CO2 uptake by new growth on land counteracts these releases, absorbing 451 Gt (123 Gt C). Although much CO2 in the early atmosphere of the young Earth was produced by volcanic activity, modern volcanic activity releases only 130 to 230 megatonnes of CO2 each year. Natural sources are more or less balanced by natural sinks, in the form of chemical and biological processes which remove CO2 from the atmosphere.
Overall, there is a large natural flux of atmospheric CO2 into and out of the biosphere, both on land and in the oceans. In the pre-industrial era, each of these fluxes were in balance to such a degree that little net CO2 flowed between the land and ocean reservoirs of carbon, and little change resulted in the atmospheric concentration. From the human pre-industrial era to 1940, the terrestrial biosphere represented a net source of atmospheric CO2 (driven largely by land-use changes), but subsequently switched to a net sink with growing fossil carbon emissions. In 2012, about 57% of human-emitted CO2, mostly from the burning of fossil carbon, was taken up by land and ocean sinks.
The ratio of the increase in atmospheric CO2 to emitted CO2 is known as the airborne fraction. This ratio varies in the short-term and is typically about 45% over longer (5-year) periods. Estimated carbon in global terrestrial vegetation increased from approximately 740 gigatonnes in 1910 to 780 gigatonnes in 1990.
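The airborne fraction is simply the observed annual rise in atmospheric CO2 (converted to carbon) divided by that year's total emissions. The sketch below illustrates the arithmetic with made-up but plausible inputs, not figures quoted in the surrounding text.

# Minimal sketch of the airborne-fraction calculation (inputs are illustrative).
GTC_PER_PPM = 2.13   # approximate GtC equivalent of 1 ppm of atmospheric CO2

def airborne_fraction(ppm_rise, fossil_gtc, land_use_gtc):
    atmospheric_increase_gtc = ppm_rise * GTC_PER_PPM
    total_emissions_gtc = fossil_gtc + land_use_gtc
    return atmospheric_increase_gtc / total_emissions_gtc

# A 2.1 ppm annual rise against ~10 GtC of total emissions gives roughly 0.45.
print(round(airborne_fraction(2.1, 9.1, 0.9), 2))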
Carbon dioxide in the Earth's atmosphere is essential to life and to most of the planetary biosphere. The average rate of energy capture by photosynthesis globally is approximately 130 terawatts, which is about six times larger than the current power consumption of human civilization. Photosynthetic organisms also convert around 100–115 billion metric tonnes of carbon into biomass per year.
Photosynthetic organisms are photoautotrophs, which means that they are able to synthesize food directly from CO2 and water using energy from light. However, not all organisms that use light as a source of energy carry out photosynthesis, since photoheterotrophs use organic compounds, rather than CO2, as a source of carbon. In plants, algae and cyanobacteria, photosynthesis releases oxygen. This is called oxygenic photosynthesis. Although there are some differences between oxygenic photosynthesis in plants, algae, and cyanobacteria, the overall process is quite similar in these organisms. Some types of bacteria, however, carry out anoxygenic photosynthesis, which consumes CO2 but does not release oxygen.
Carbon dioxide is converted into sugars in a process called carbon fixation. Carbon fixation is an endothermic redox reaction, so photosynthesis needs to supply both the source of energy to drive this process and the electrons needed to convert CO2 into a carbohydrate. This addition of the electrons is a reduction reaction. In general outline and in effect, photosynthesis is the opposite of cellular respiration, in which glucose and other compounds are oxidized to produce CO2 and water, and to release exothermic chemical energy to drive the organism's metabolism. The two processes take place through a different sequence of chemical reactions, however, and in different cellular compartments.
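In net terms, oxygenic photosynthesis and cellular respiration can be summarized by the same overall equation run in opposite directions. The simplified stoichiometry below is standard textbook chemistry rather than something stated explicitly in the text above:

Photosynthesis: 6 CO2 + 6 H2O + light energy → C6H12O6 + 6 O2
Respiration:    C6H12O6 + 6 O2 → 6 CO2 + 6 H2O + released chemical energy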
Oceanic carbon cycle
The Earth's oceans contain a large amount of CO2 in the form of bicarbonate and carbonate ions—much more than the amount in the atmosphere. The bicarbonate is produced in reactions between rock, water, and carbon dioxide. One example is the dissolution of calcium carbonate:
CaCO3 + CO2 + H2O ⇌ Ca2+ + 2 HCO3−
Reactions like this tend to buffer changes in atmospheric CO2. Since the right side of the reaction produces an acidic compound, adding CO2 on the left side decreases the pH of seawater, a process which has been termed ocean acidification (pH of the ocean becomes more acidic although the pH value remains in the alkaline range). Reactions between CO2 and non-carbonate rocks also add bicarbonate to the seas. This can later undergo the reverse of the above reaction to form carbonate rocks, releasing half of the bicarbonate as CO2. Over hundreds of millions of years, this has produced huge quantities of carbonate rocks.
From 1850 until 2022, the ocean has absorbed 26% of total anthropogenic emissions. However, the rate at which the ocean will take up CO2 in the future is less certain. Even if equilibrium is reached, including dissolution of carbonate minerals, the increased concentration of bicarbonate and decreased or unchanged concentration of carbonate ion will give rise to a higher concentration of un-ionized carbonic acid and dissolved CO2. This higher concentration in the seas, along with higher temperatures, would mean a higher equilibrium concentration of CO2 in the air.
Carbon moves between the atmosphere, vegetation (dead and alive), the soil, the surface layer of the ocean, and the deep ocean.
Effects of current increase
Temperature rise on land
The globally averaged combined land and ocean surface temperature shows a warming of 1.09 °C (range: 0.95 to 1.20 °C) from 1850–1900 to 2011–2020, based on multiple independently produced datasets. The trend since the 1970s is faster than in any other 50-year period over at least the last 2,000 years. Most of the observed warming occurred in two periods: around 1900 to around 1940, and around 1970 onwards; the cooling/plateau from 1940 to 1970 has been mostly attributed to sulphate aerosols. Some of the temperature variations over this time period may also be due to ocean circulation patterns.
Temperature rise in oceans
It is clear that the ocean is warming as a result of climate change, and this rate of warming is increasing. In 2022 the global ocean was the warmest ever recorded by humans, as determined by the ocean heat content, which exceeded the previous 2021 maximum. The steady rise in ocean temperatures is an unavoidable result of the Earth's energy imbalance, which is primarily caused by rising levels of greenhouse gases. Between pre-industrial times and the 2011–2020 decade, the ocean's surface has warmed by between 0.68 and 1.01 °C. The upper ocean (above 700 m) is warming the fastest, but the warming trend is widespread. The majority of ocean heat gain occurs in the Southern Ocean. For example, between the 1950s and the 1980s, the temperature of the Antarctic Southern Ocean rose by 0.17 °C (0.31 °F), nearly twice the rate of the global ocean.
Ocean acidification is the decrease in the pH of the Earth's ocean. Between 1950 and 2020, the average pH of the ocean surface fell from approximately 8.15 to 8.05. Carbon dioxide emissions from human activities are the primary cause of ocean acidification, with atmospheric carbon dioxide (CO2) levels exceeding 410 ppm (in 2020). CO2 from the atmosphere is absorbed by the oceans. This produces carbonic acid (H2CO3), which dissociates into a bicarbonate ion (HCO−3) and a hydrogen ion (H+). The presence of free hydrogen ions (H+) lowers the pH of the ocean, increasing acidity (this does not mean that seawater is acidic yet; it is still alkaline, with a pH higher than 8). Marine calcifying organisms, such as mollusks and corals, are especially vulnerable because they rely on calcium carbonate to build shells and skeletons. A change in pH of 0.1 represents a 26% increase in hydrogen ion concentration in the world's oceans (the pH scale is logarithmic, so a change of one pH unit is equivalent to a tenfold change in hydrogen ion concentration).

Sea-surface pH and carbonate saturation states vary depending on ocean depth and location. Colder and higher-latitude waters are capable of absorbing more CO2. This can cause acidity to rise, lowering the pH and carbonate saturation levels in these areas. Other factors that influence the atmosphere-ocean CO2 exchange, and thus local ocean acidification, include ocean currents and upwelling zones, proximity to large continental rivers, sea ice coverage, and atmospheric exchange with nitrogen and sulfur from fossil fuel burning and agriculture.
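Because pH is the negative base-10 logarithm of hydrogen ion activity, the percentage change in H+ concentration for a given pH drop follows directly. The short Python sketch below simply reproduces that arithmetic for the 8.15 → 8.05 change quoted above.

# pH is -log10([H+]), so a pH drop multiplies [H+] by 10**(pH_drop).
def hydrogen_ion_increase_percent(ph_before, ph_after):
    factor = 10 ** (ph_before - ph_after)
    return (factor - 1) * 100

print(round(hydrogen_ion_increase_percent(8.15, 8.05)))   # 26 — a 0.1 pH drop gives ~26% more H+
print(round(hydrogen_ion_increase_percent(8.15, 7.15)))   # 900 — one pH unit is a tenfold change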
CO2 fertilization effect
The CO2 fertilization effect or carbon fertilization effect causes an increased rate of photosynthesis while limiting leaf transpiration in plants. Both processes result from increased levels of atmospheric carbon dioxide (CO2). The carbon fertilization effect varies depending on plant species, air and soil temperature, and availability of water and nutrients. Net primary productivity (NPP) might respond positively to the carbon fertilization effect. However, evidence shows that enhanced rates of photosynthesis in plants due to CO2 fertilization do not directly enhance all plant growth, and thus carbon storage. The carbon fertilization effect has been reported to be the cause of 44% of the gross primary productivity (GPP) increase since the 2000s. Earth System Models, Land System Models and Dynamic Global Vegetation Models are used to investigate and interpret vegetation trends related to increasing levels of atmospheric CO2. However, the ecosystem processes associated with the CO2 fertilization effect remain uncertain and are therefore challenging to model.
Terrestrial ecosystems have reduced atmospheric CO2 concentrations and have partially mitigated climate change effects. The response by plants to the carbon fertilization effect is unlikely to significantly reduce atmospheric CO2 concentration over the next century due to the increasing anthropogenic influences on atmospheric CO2. Earth's vegetated lands have shown significant greening since the early 1980s, largely due to rising levels of atmospheric CO2. Theory predicts the tropics to have the largest uptake due to the carbon fertilization effect, but this has not been observed. The amount of CO2 uptake from CO2 fertilization also depends on how forests respond to climate change, and whether they are protected from deforestation.
Other direct effects
CO2 emissions have also led to the stratosphere contracting by 400 meters since 1980, which could affect satellite operations, GPS systems and radio communications.
Approaches for reducing CO2 concentrations
Carbon dioxide has unique long-term effects on climate change that are nearly "irreversible" for a thousand years after emissions stop (zero further emissions). The greenhouse gases methane and nitrous oxide do not persist over time in the same way as carbon dioxide. Even if human carbon dioxide emissions were to completely cease, atmospheric temperatures are not expected to decrease significantly in the short term. This is because the air temperature is determined by a balance between heating, due to greenhouse gases, and cooling due to heat transfer to the ocean. If emissions were to stop, CO2 levels and the heating effect would slowly decrease, but simultaneously the cooling due to heat transfer would diminish (because sea temperatures would get closer to the air temperature), with the result that the air temperature would decrease only slowly. Sea temperatures would continue to rise, causing thermal expansion and some sea level rise. Lowering global temperatures more rapidly would require carbon sequestration or geoengineering.
Various techniques have been proposed for removing excess carbon dioxide from the atmosphere.
Concentrations in the geologic past
Carbon dioxide is believed to have played an important role in regulating Earth's temperature throughout its 4.5 billion-year history. Scientists have found evidence of liquid water early in Earth's history, indicating a warm world even though the Sun's output is believed to have been only 70% of what it is today. Higher carbon dioxide concentrations in the early Earth's atmosphere might help explain this faint young sun paradox. When Earth first formed, Earth's atmosphere may have contained more greenhouse gases and CO2 concentrations may have been higher, with estimated partial pressure as large as 1,000 kPa (10 bar), because there was no bacterial photosynthesis to reduce the gas to carbon compounds and oxygen. Methane, a very active greenhouse gas, may have been more prevalent as well.
Carbon dioxide concentrations have shown several cycles of variation, from about 180 parts per million during the deep glaciations of the Pleistocene to 280 parts per million during the interglacial periods. Carbon dioxide concentrations have varied widely over the Earth's 4.54 billion-year history. It is believed to have been present in Earth's first atmosphere, shortly after Earth's formation. The second atmosphere, consisting largely of nitrogen and CO2, was produced by outgassing from volcanism, supplemented by gases produced during the late heavy bombardment of Earth by huge asteroids. A major part of the carbon dioxide emissions were soon dissolved in water and incorporated in carbonate sediments.
The production of free oxygen by cyanobacterial photosynthesis eventually led to the oxygen catastrophe that ended Earth's second atmosphere and brought about the Earth's third atmosphere (the modern atmosphere) 2.4 billion years before the present. Carbon dioxide concentrations dropped from 4,000 parts per million during the Cambrian period about 500 million years ago to as low as 180 parts per million during the Quaternary glaciation of the last two million years.
Drivers of ancient-Earth CO2 concentration
On long timescales, atmospheric CO2 concentration is determined by the balance among geochemical processes including organic carbon burial in sediments, silicate rock weathering, and volcanic degassing. The net effect of slight imbalances in the carbon cycle over tens to hundreds of millions of years has been to reduce atmospheric CO2. On a timescale of billions of years, this downward trend appears bound to continue indefinitely, as occasional massive releases of buried carbon due to volcanism will become less frequent (as the Earth's mantle cools and its internal radioactive heat is progressively exhausted). The rates of these processes are extremely slow; hence they are of no relevance to the atmospheric CO2 concentration over the next hundreds or thousands of years.
Photosynthesis in the geologic past
Over the course of Earth's geologic history, CO2 concentrations have played a role in biological evolution. The first photosynthetic organisms probably evolved early in the evolutionary history of life and most likely used reducing agents such as hydrogen or hydrogen sulfide as sources of electrons, rather than water. Cyanobacteria appeared later, and the excess oxygen they produced contributed to the oxygen catastrophe, which made the evolution of complex life possible. In recent geologic times, low CO2 concentrations below 600 parts per million might have been the stimulus that favored the evolution of C4 plants, which increased greatly in abundance between 7 and 5 million years ago over plants that use the less efficient C3 metabolic pathway. At current atmospheric pressures, photosynthesis shuts down when atmospheric CO2 concentrations fall below about 150 to 200 ppm, although some microbes can extract carbon from the air at much lower concentrations.
Measuring ancient-Earth CO2 concentration
The most direct method for measuring atmospheric carbon dioxide concentrations for periods before instrumental sampling is to measure bubbles of air (fluid or gas inclusions) trapped in the Antarctic or Greenland ice sheets. The most widely accepted of such studies come from a variety of Antarctic cores and indicate that atmospheric CO2 concentrations were about 260–280 ppmv immediately before industrial emissions began and did not vary much from this level during the preceding 10,000 years. The longest ice core record comes from East Antarctica, where ice has been sampled to an age of 800,000 years. During this time, the atmospheric carbon dioxide concentration has varied between 180 and 210 ppm during ice ages, increasing to 280–300 ppm during warmer interglacials. The beginning of human agriculture during the current Holocene epoch may have been strongly connected to the atmospheric CO2 increase after the last ice age ended, a fertilization effect raising plant biomass growth and reducing stomatal conductance requirements for CO2 intake, consequently reducing transpiration water losses and increasing water usage efficiency.
Various proxy measurements have been used to attempt to determine atmospheric carbon dioxide concentrations millions of years in the past. These include boron and carbon isotope ratios in certain types of marine sediments, and the number of stomata observed on fossil plant leaves.
Phytane is a type of diterpenoid alkane. It is a breakdown product of chlorophyll and is now used to estimate ancient CO2 levels. Phytane provides a continuous record of CO2 concentrations, and it can also bridge a gap of over 500 million years in the CO2 record.
600 to 400 Ma
There is evidence for high CO2 concentrations of over 3,000 ppm between 200 and 150 million years ago, and of over 6,000 ppm between 600 and 400 million years ago.
60 to 5 Ma
In more recent times, atmospheric CO2 concentration continued to fall after about 60 million years ago. About 34 million years ago, the time of the Eocene–Oligocene extinction event and when the Antarctic ice sheet started to take its current form, CO2 was about 760 ppm, and there is geochemical evidence that concentrations were less than 300 ppm by about 20 million years ago. Decreasing CO2 concentration, with a tipping point of 600 ppm, was the primary agent forcing Antarctic glaciation. Low CO2 concentrations may have been the stimulus that favored the evolution of C4 plants, which increased greatly in abundance between 7 and 5 million years ago.
- Showstack, Randy (2013). "Carbon dioxide tops 400 ppm at Mauna Loa, Hawaii". Eos, Transactions American Geophysical Union. 94 (21): 192. Bibcode:2013EOSTr..94Q.192S. doi:10.1002/2013eo210004. ISSN 0096-3941.
- Montaigne, Fen. "Son of Climate Science Pioneer Ponders A Sobering Milestone". Yale Environment 360. Yale School of Forestry & Environmental Studies. Archived from the original on 8 June 2013. Retrieved 14 May 2013.
- "Carbon dioxide now more than 50% higher than pre-industrial levels | National Oceanic and Atmospheric Administration". www.noaa.gov. 3 June 2022. Archived from the original on 5 June 2022. Retrieved 14 June 2022.
- Eggleton, Tony (2013). A Short Introduction to Climate Change. Cambridge University Press. p. 52. ISBN 9781107618763. Archived from the original on 14 March 2023. Retrieved 14 March 2023.
- "The NOAA Annual Greenhouse Gas Index (AGGI) – An Introduction". NOAA Global Monitoring Laboratory/Earth System Research Laboratories. Archived from the original on 27 November 2020. Retrieved 18 December 2020.
- Etheridge, D.M.; L.P. Steele; R.L. Langenfelds; R.J. Francey; J.-M. Barnola; V.I. Morgan (1996). "Natural and anthropogenic changes in atmospheric CO2 over the last 1000 years from air in Antarctic ice and firn". Journal of Geophysical Research. 101 (D2): 4115–28. Bibcode:1996JGR...101.4115E. doi:10.1029/95JD03410. ISSN 0148-0227. S2CID 19674607.
- IPCC (2022) Summary for policy makers Archived 12 March 2023 at the Wayback Machine in Climate Change 2022: Mitigation of Climate Change. Contribution of Working Group III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change Archived 2 August 2022 at the Wayback Machine, Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA
- Petty, G.W. (2004). "A First Course in Atmospheric Radiation". Eos Transactions. 85 (36): 229–51. Bibcode:2004EOSTr..85..341P. doi:10.1029/2004EO360007.
- Atkins P, de Paula J (2006). Atkins' Physical Chemistry (8th ed.). W. H. Freeman. p. 462. ISBN 978-0-7167-8759-4.
- "Carbon Dioxide Absorbs and Re-emits Infrared Radiation". UCAR Center for Science Education. 2012. Archived from the original on 21 September 2017. Retrieved 9 September 2017.
- Archer D (15 March 2005). "How long will global warming last?". RealClimate. Archived from the original on 4 March 2021. Retrieved 5 March 2021.
- Archer D (2009). "Atmospheric lifetime of fossil fuel carbon dioxide". Annual Review of Earth and Planetary Sciences. 37 (1): 117–34. Bibcode:2009AREPS..37..117A. doi:10.1146/annurev.earth.031208.100206. hdl:2268/12933. Archived from the original on 24 February 2021. Retrieved 7 March 2021.
- Joos F, Roth R, Fuglestvedt JS, Peters GP, Enting IG, Von Bloh W, et al. (2013). "Carbon dioxide and climate impulse response functions for the computation of greenhouse gas metrics: A multi-model analysis". Atmospheric Chemistry and Physics. 13 (5): 2793–2825. doi:10.5194/acpd-12-19799-2012. Archived from the original on 22 July 2020. Retrieved 7 March 2021.
- "Figure 8.SM.4" (PDF). Intergovernmental Panel on Climate Change Fifth Assessment Report. p. 8SM-16. Archived (PDF) from the original on 24 March 2021. Retrieved 7 March 2021.
- Zhang, Yi Ge; et al. (28 October 2013). "A 40-million-year history of atmospheric CO2". Philosophical Transactions of the Royal Society A. 371 (2001): 20130096. Bibcode:2013RSPTA.37130096Z. doi:10.1098/rsta.2013.0096. PMID 24043869.
- "Climate and CO2 in the Atmosphere". Archived from the original on 6 October 2018. Retrieved 10 October 2007.
- Berner RA, Kothavala Z (2001). "GEOCARB III: A revised model of atmospheric CO2 over Phanerozoic Time" (PDF). American Journal of Science. 301 (2): 182–204. Bibcode:2001AmJS..301..182B. CiteSeerX 10.1.1.393.582. doi:10.2475/ajs.301.2.182. Archived (PDF) from the original on 4 September 2011. Retrieved 15 February 2008.
- Friedlingstein, Pierre; O'Sullivan, Michael; Jones, Matthew W.; Andrew, Robbie M.; Gregor, Luke; Hauck, Judith; Le Quéré, Corinne; Luijkx, Ingrid T.; Olsen, Are; Peters, Glen P.; Peters, Wouter; Pongratz, Julia; Schwingshackl, Clemens; Sitch, Stephen; Canadell, Josep G. (11 November 2022). "Global Carbon Budget 2022". Earth System Science Data. 14 (11): 4811–4900. Bibcode:2022ESSD...14.4811F. doi:10.5194/essd-14-4811-2022. This article incorporates text from this source, which is available under the CC BY 4.0 license.
- Change, NASA Global Climate. "Carbon Dioxide Concentration | NASA Global Climate Change". Climate Change: Vital Signs of the Planet. Archived from the original on 17 April 2022. Retrieved 17 December 2022.
- "Conversion Tables". Carbon Dioxide Information Analysis Center. Oak Ridge National Laboratory. 18 July 2020. Archived from the original on 27 September 2017. Retrieved 18 July 2020. Alt URL Archived 23 February 2016 at the Wayback Machine
- Eyring, V., N.P. Gillett, K.M. Achuta Rao, R. Barimalala, M. Barreiro Parrillo, N. Bellouin, C. Cassou, P.J. Durack, Y. Kosaka, S. McGregor, S. Min, O. Morgenstern, and Y. Sun, 2021: Chapter 3: Human Influence on the Climate System Archived 7 March 2023 at the Wayback Machine. In Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change Archived 9 August 2021 at the Wayback Machine [Masson-Delmotte, V., P. Zhai, A. Pirani, S.L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M.I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T.K. Maycock, T. Waterfield, O. Yelekçi, R. Yu, and B. Zhou (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, pp. 423–552, doi:10.1017/9781009157896.005.
- Rasmussen, Carl Edward. "Atmospheric Carbon Dioxide Growth Rate". Archived from the original on 14 March 2023. Retrieved 14 March 2023.
- "Frequently Asked Questions". Carbon Dioxide Information Analysis Center (CDIAC). Archived from the original on 17 August 2011. Retrieved 13 June 2007.
- George K, Ziska LH, Bunce JA, Quebedeaux B (2007). "Elevated atmospheric CO2 concentration and temperature across an urban–rural transect". Atmospheric Environment. 41 (35): 7654–7665. Bibcode:2007AtmEn..41.7654G. doi:10.1016/j.atmosenv.2007.08.018. Archived from the original on 15 October 2019. Retrieved 12 September 2019.
- "IPCC: Climate Change 2001: The Scientific Basis" (PDF). Archived (PDF) from the original on 29 August 2022. Retrieved 14 March 2023.
- Tans, Pieter. "Trends in Carbon Dioxide". NOAA/ESRL. Archived from the original on 25 January 2013. Retrieved 11 December 2009.
- "Carbon Budget 2009 Highlights". globalcarbonproject.org. Archived from the original on 16 December 2011. Retrieved 2 November 2012.
- "Carbon dioxide passes symbolic mark". BBC. 10 May 2013. Archived from the original on 23 May 2019. Retrieved 10 May 2013.
- "Up-to-date weekly average CO2 at Mauna Loa". NOAA. Archived from the original on 24 May 2019. Retrieved 1 June 2019.
- "Greenhouse gas levels pass symbolic 400ppm CO2 milestone". The Guardian. Associated Press. 1 June 2012. Archived from the original on 22 January 2014. Retrieved 11 May 2013.
- Kunzig, Robert (9 May 2013). "Climate Milestone: Earth's CO2 Level Passes 400 ppm". National Geographic. Archived from the original on 15 December 2013. Retrieved 12 May 2013.
- "Trends in Atmospheric Carbon Dioxide". Earth System Research Laboratory. NOAA. Archived from the original on 25 January 2013. Retrieved 14 March 2023.
- "The Early Keeling Curve | Scripps CO2 Program". scrippsco2.ucsd.edu. Archived from the original on 8 October 2022. Retrieved 14 March 2023.
- "NOAA CCGG page Retrieved 2 March 2016". Archived from the original on 11 August 2011. Retrieved 14 March 2023.
- WDCGG webpage Archived 6 April 2016 at the Wayback Machine Retrieved 2 March 2016
- RAMCES webpage. Retrieved 2 March 2016
- "CDIAC CO2 page Retrieved 9 February 2016". Archived from the original on 13 August 2011. Retrieved 14 March 2023.
- "GLOBALVIEW-CO2 information page. Retrieved 9 February 2016". Archived from the original on 31 January 2020. Retrieved 14 March 2023.
- "TCCON data use policy webpage Retrieved 9 February 2016". Archived from the original on 17 October 2020. Retrieved 14 March 2023.
- e.g. Gosh, Prosenjit; Brand, Willi A. (2003). "Stable isotope ratio mass spectrometry in global climate change research" (PDF). International Journal of Mass Spectrometry. 228 (1): 1–33. Bibcode:2003IJMSp.228....1G. CiteSeerX 10.1.1.173.2083. doi:10.1016/S1387-3806(03)00289-6. Archived (PDF) from the original on 11 August 2017. Retrieved 2 July 2012.
Global change issues have become significant due to the sustained rise in atmospheric trace gas concentrations (CO2, N2O, CH4) over recent years, attributable to the increased per capita energy consumption of a growing global population.
- Keeling, Charles D.; Piper, Stephen C.; Whorf, Timothy P.; Keeling, Ralph F. (2011). "Evolution of natural and anthropogenic fluxes of atmospheric CO2 from 1957 to 2003". Tellus B. 63 (1): 1–22. Bibcode:2011TellB..63....1K. doi:10.1111/j.1600-0889.2010.00507.x. ISSN 0280-6509.
- Bender, Michael L.; Ho, David T.; Hendricks, Melissa B.; Mika, Robert; Battle, Mark O.; Tans, Pieter P.; Conway, Thomas J.; Sturtevant, Blake; Cassar, Nicolas (2005). "Atmospheric O2/N2changes, 1993–2002: Implications for the partitioning of fossil fuel CO2sequestration". Global Biogeochemical Cycles. 19 (4): n/a. Bibcode:2005GBioC..19.4017B. doi:10.1029/2004GB002410. ISSN 0886-6236.
- Evans, Simon (5 October 2021). "Analysis: Which countries are historically responsible for climate change? / Historical responsibility for climate change is at the heart of debates over climate justice". CarbonBrief.org. Carbon Brief. Archived from the original on 26 October 2021.
Source: Carbon Brief analysis of figures from the Global Carbon Project, CDIAC, Our World in Data, Carbon Monitor, Houghton and Nassikas (2017) and Hansis et al (2015).
- Ballantyne, A.P.; Alden, C.B.; Miller, J.B.; Tans, P.P.; White, J.W.C. (2012). "Increase in observed net carbon dioxide uptake by land and oceans during the past 50 years". Nature. 488 (7409): 70–72. Bibcode:2012Natur.488...70B. doi:10.1038/nature11299. ISSN 0028-0836. PMID 22859203. S2CID 4335259.
- Friedlingstein, P., Jones, M., O'Sullivan, M., Andrew, R., Hauck, J., Peters, G., Peters, W., Pongratz, J., Sitch, S., Le Quéré, C. and 66 others (2019) "Global carbon budget 2019". Earth System Science Data, 11(4): 1783–1838. doi:10.5194/essd-11-1783-2019. Material was copied from this source, which is available under a Creative Commons Attribution 4.0 International License.
- Dlugokencky, E. (5 February 2016). "Annual Mean Carbon Dioxide Data". Earth System Research Laboratory. NOAA. Archived from the original on 14 March 2023. Retrieved 12 February 2016.
- A.P. Ballantyne; C.B. Alden; J.B. Miller; P.P. Tans; J.W. C. White (2012). "Increase in observed net carbon dioxide uptake by land and oceans during the past 50 years". Nature. 488 (7409): 70–72. Bibcode:2012Natur.488...70B. doi:10.1038/nature11299. PMID 22859203. S2CID 4335259.
- "Global carbon budget 2010 (summary)". Tyndall Centre for Climate Change Research. Archived from the original on 23 July 2012.
- Calculated from file global.1751_2013.csv in Archived 22 October 2011 at the Wayback Machine from the Carbon Dioxide Information Analysis Center.
- IEA (2023), The world’s top 1% of emitters produce over 1000 times more CO2 than the bottom 1%, IEA, Paris https://www.iea.org/commentaries/the-world-s-top-1-of-emitters-produce-over-1000-times-more-co2-than-the-bottom-1 , License: CC BY 4.0
- "Annex II Glossary". Intergovernmental Panel on Climate Change. Archived from the original on 3 November 2018. Retrieved 15 October 2010.
- A concise description of the greenhouse effect is given in the Intergovernmental Panel on Climate Change Fourth Assessment Report, "What is the Greenhouse Effect?" FAQ 1.3 – AR4 WGI Chapter 1: Historical Overview of Climate Change Science Archived 30 November 2018 at the Wayback Machine, IPCC Fourth Assessment Report, Chapter 1, p. 115: "To balance the absorbed incoming [solar] energy, the Earth must, on average, radiate the same amount of energy back to space. Because the Earth is much colder than the Sun, it radiates at much longer wavelengths, primarily in the infrared part of the spectrum (see Figure 1). Much of this thermal radiation emitted by the land and ocean is absorbed by the atmosphere, including clouds, and reradiated back to Earth. This is called the greenhouse effect."
Stephen H. Schneider, in Geosphere-biosphere Interactions and Climate, Lennart O. Bengtsson and Claus U. Hammer, eds., Cambridge University Press, 2001, ISBN 0-521-78238-4, pp. 90–91.
E. Claussen, V.A. Cochran, and D.P. Davis, Climate Change: Science, Strategies, & Solutions, University of Michigan, 2001. p. 373.
A. Allaby and M. Allaby, A Dictionary of Earth Sciences, Oxford University Press, 1999, ISBN 0-19-280079-5, p. 244.
- Vaclav Smil (2003). The Earth's Biosphere: Evolution, Dynamics, and Change. MIT Press. p. 107. ISBN 978-0-262-69298-4. Archived from the original on 14 March 2023. Retrieved 14 March 2023.
- "Solar Radiation and the Earth's Energy Balance". The Climate System – EESC 2100 Spring 2007. Columbia University. Archived from the original on 4 November 2004. Retrieved 15 October 2010.
- Le Treut H, Somerville R, Cubasch U, Ding Y, Mauritzen C, Mokssit A, Peterson T, Prather M (2007). "Historical Overview of Climate Change Science" (PDF). In Solomon S, Qin D, Manning M, Chen Z, Marquis M, Averyt KB, Tignor M, Miller HL (eds.). Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, UK and New York, NY: Cambridge University Press. p. 97. Archived from the original (PDF) on 26 November 2018. Retrieved 25 March 2014.
- "The Elusive Absolute Surface Air Temperature (SAT)". Goddard Institute for Space Studies. NOAA. Archived from the original on 5 September 2015. Retrieved 14 March 2023.
- "IPCC Fifth Assessment Report – Chapter 8: Anthropogenic and Natural Radiative Forcing" (PDF). Archived (PDF) from the original on 22 October 2018. Retrieved 14 March 2023.
- Arrhenius, Svante (1896). "On the influence of carbonic acid in the air upon the temperature of the ground" (PDF). Philosophical Magazine and Journal of Science: 237–76. Archived (PDF) from the original on 18 November 2020. Retrieved 14 March 2023.
- Riebeek, Holli (16 June 2011). "The Carbon Cycle". Earth Observatory. NASA. Archived from the original on 5 March 2016. Retrieved 5 April 2018.
- Kayler, Z.; Janowiak, M.; Swanston, C. (2017). "The Global Carbon Cycle". Considering Forest and Grassland Carbon in Land Management (PDF). pp. 3–9. Archived (PDF) from the original on 7 July 2022. Retrieved 14 March 2023.
- Gerlach, T.M. (4 June 1991). "Present-day CO2 emissions from volcanoes". Eos, Transactions, American Geophysical Union. 72 (23): 249, 254–55. Bibcode:1991EOSTr..72..249.. doi:10.1029/90EO10192.
- Cappelluti, G.; Bösch, H.; Monks, P.S. (2009). Use of remote sensing techniques for the detection and monitoring of GHG emissions from the Scottish land use sector. Scottish Government. ISBN 978-0-7559-7738-3. Archived from the original on 8 June 2011. Retrieved 28 January 2011.
- Junling Huang; Michael B. McElroy (2012). "The Contemporary and Historical Budget of Atmospheric CO2" (PDF). Canadian Journal of Physics. 90 (8): 707–16. Bibcode:2012CaJPh..90..707H. doi:10.1139/p2012-033. Archived (PDF) from the original on 3 August 2017. Retrieved 14 March 2023.
- Canadell JG, Le Quéré C, Raupach MR, et al. (November 2007). "Contributions to accelerating atmospheric CO2 growth from economic activity, carbon intensity, and efficiency of natural sinks". Proc. Natl. Acad. Sci. U.S.A. 104 (47): 18866–70. Bibcode:2007PNAS..10418866C. doi:10.1073/pnas.0702737104. PMC 2141868. PMID 17962418.
- Post WM, King AW, Wullschleger SD, Hoffman FM (June 1997). "Historical Variations in Terrestrial Biospheric Carbon Storage". DOE Research Summary. 34 (1): 99–109. Bibcode:1997GBioC..11...99P. doi:10.1029/96GB03942. Archived from the original on 28 July 2011. Retrieved 28 May 2011.
- Nealson KH, Conrad PG (December 1999). "Life: past, present and future". Philos. Trans. R. Soc. Lond. B Biol. Sci. 354 (1392): 1923–39. doi:10.1098/rstb.1999.0532. PMC 1692713. PMID 10670014.
- Whitmarsh J, Govindjee (1999). "The photosynthetic process". In Singhal GS; Renger G; Sopory SK; Irrgang KD; Govindjee (eds.). Concepts in photobiology: photosynthesis and photomorphogenesis. Boston: Kluwer Academic Publishers. pp. 11–51. ISBN 978-0-7923-5519-9. Archived from the original on 14 August 2010. Retrieved 20 March 2014.
100 × 10^15 grams of carbon per year are fixed by photosynthetic organisms, which is equivalent to 4 × 10^18 kJ/yr = 4 × 10^21 J/yr of free energy stored as reduced carbon; (4 × 10^21 J/yr) / (31,556,900 sec/yr) = 1.27 × 10^14 W; (1.27 × 10^14 W) / (10^12 W per TW) = 127 TW.
- Steger U, Achterberg W, Blok K, Bode H, Frenz W, Gather C, Hanekamp G, Imboden D, Jahnke M, Kost M, Kurz R, Nutzinger HG, Ziesemer T (2005). Sustainable development and innovation in the energy sector. Berlin: Springer. p. 32. ISBN 978-3-540-23103-5. Archived from the original on 14 March 2023. Retrieved 14 March 2023.
The average global rate of photosynthesis is 130 TW (1 TW = 1 terawatt = 10^12 watt).
- "World Consumption of Primary Energy by Energy Type and Selected Country Groups, 1980–2004". Energy Information Administration. 31 July 2006. Archived from the original (XLS) on 9 November 2006. Retrieved 2007-01-20.
- Field CB, Behrenfeld MJ, Randerson JT, Falkowski P (July 1998). "Primary production of the biosphere: integrating terrestrial and oceanic components". Science. 281 (5374): 237–40. Bibcode:1998Sci...281..237F. doi:10.1126/science.281.5374.237. PMID 9657713. Archived from the original on 25 September 2018. Retrieved 14 March 2023.
- "Photosynthesis". McGraw-Hill Encyclopedia of Science & Technology. Vol. 13. New York: McGraw-Hill. 2007. ISBN 978-0-07-144143-8.
- Bryant DA, Frigaard NU (November 2006). "Prokaryotic photosynthesis and phototrophy illuminated". Trends Microbiol. 14 (11): 488–96. doi:10.1016/j.tim.2006.09.001. PMID 16997562.
- Susan Solomon; Gian-Kasper Plattner; Reto Knutti; Pierre Friedlingstein (February 2009). "Irreversible climate change due to carbon dioxide emissions". Proc. Natl. Acad. Sci. USA. 106 (6): 1704–09. Bibcode:2009PNAS..106.1704S. doi:10.1073/pnas.0812721106. PMC 2632717. PMID 19179281.
- Archer, David; Eby, Michael; Brovkin, Victor; Ridgwell, Andy; Cao, Long; Mikolajewicz, Uwe; Caldeira, Ken; Matsumoto, Katsumi; Munhoven, Guy; Montenegro, Alvaro; Tokos, Kathy (2009). "Atmospheric Lifetime of Fossil Fuel Carbon Dioxide". Annual Review of Earth and Planetary Sciences. 37 (1): 117–34. Bibcode:2009AREPS..37..117A. doi:10.1146/annurev.earth.031208.100206. hdl:2268/12933. ISSN 0084-6597. Archived from the original on 14 March 2023. Retrieved 14 March 2023.
- Keeling, Charles D. (5 August 1997). "Climate change and carbon dioxide: An introduction". Proceedings of the National Academy of Sciences. 94 (16): 8273–8274. Bibcode:1997PNAS...94.8273K. doi:10.1073/pnas.94.16.8273. ISSN 0027-8424. PMC 33714. PMID 11607732.
- "By 2500 earth could be alien to humans". Scienmag: Latest Science and Health News. 14 October 2021. Archived from the original on 18 October 2021. Retrieved 18 October 2021.
- Lyon, Christopher; Saupe, Erin E.; Smith, Christopher J.; Hill, Daniel J.; Beckerman, Andrew P.; Stringer, Lindsay C.; Marchant, Robert; McKay, James; Burke, Ariane; O’Higgins, Paul; Dunhill, Alexander M.; Allen, Bethany J.; Riel-Salvatore, Julien; Aze, Tracy (2021). "Climate change research and action must look beyond 2100". Global Change Biology. 28 (2): 349–361. doi:10.1111/gcb.15871. ISSN 1365-2486. PMID 34558764. S2CID 237616583.
- IPCC (2021). "Summary for Policymakers" (PDF). The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change. ISBN 978-92-9169-158-6.
- "IPCC AR5 Chapter 2 page 193" (PDF). Archived (PDF) from the original on 21 November 2016. Retrieved 28 January 2016.
- Houghton, ed. (2001). "Climate Change 2001: Working Group I: The Scientific Basis – Chapter 12: Detection of Climate Change and Attribution of Causes". IPCC. Archived from the original on 11 July 2007. Retrieved 13 July 2007.
- "Ch 6. Changes in the Climate System". Advancing the Science of Climate Change. 2010. doi:10.17226/12782. ISBN 978-0-309-14588-6.
- Swanson, K.L.; Sugihara, G.; Tsonis, A.A. (22 September 2009). "Long-term natural variability and 20th century climate change". Proc. Natl. Acad. Sci. U.S.A. 106 (38): 16120–3. Bibcode:2009PNAS..10616120S. doi:10.1073/pnas.0908699106. PMC 2752544. PMID 19805268.
- "Summary for Policymakers". The Ocean and Cryosphere in a Changing Climate (PDF). 2019. pp. 3–36. doi:10.1017/9781009157964.001. ISBN 978-1-00-915796-4. Archived (PDF) from the original on 29 March 2023. Retrieved 26 March 2023.
- Cheng, Lijing; Abraham, John; Trenberth, Kevin E.; Fasullo, John; Boyer, Tim; Mann, Michael E.; Zhu, Jiang; Wang, Fan; Locarnini, Ricardo; Li, Yuanlong; Zhang, Bin; Yu, Fujiang; Wan, Liying; Chen, Xingrong; Feng, Licheng (2023). "Another Year of Record Heat for the Oceans". Advances in Atmospheric Sciences. 40 (6): 963–974. doi:10.1007/s00376-023-2385-2. ISSN 0256-1530. PMC 9832248. PMID 36643611. Text was copied from this source, which is available under a Creative Commons Attribution 4.0 International License
- Fox-Kemper, B., H.T. Hewitt, C. Xiao, G. Aðalgeirsdóttir, S.S. Drijfhout, T.L. Edwards, N.R. Golledge, M. Hemer, R.E. Kopp, G. Krinner, A. Mix, D. Notz, S. Nowicki, I.S. Nurhati, L. Ruiz, J.-B. Sallée, A.B.A. Slangen, and Y. Yu, 2021: Chapter 9: Ocean, Cryosphere and Sea Level Change Archived 2022-10-24 at the Wayback Machine. In Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change Archived 2021-08-09 at the Wayback Machine [Masson-Delmotte, V., P. Zhai, A. Pirani, S.L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M.I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T.K. Maycock, T. Waterfield, O. Yelekçi, R. Yu, and B. Zhou (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, pp. 1211–1362
- Gille, Sarah T. (15 February 2002). "Warming of the Southern Ocean Since the 1950s". Science. 295 (5558): 1275–1277. Bibcode:2002Sci...295.1275G. doi:10.1126/science.1065863. PMID 11847337. S2CID 31434936.
- Terhaar, Jens; Frölicher, Thomas L.; Joos, Fortunat (2023). "Ocean acidification in emission-driven temperature stabilization scenarios: the role of TCRE and non-CO2 greenhouse gases". Environmental Research Letters. 18 (2): 024033. Bibcode:2023ERL....18b4033T. doi:10.1088/1748-9326/acaf91. ISSN 1748-9326. S2CID 255431338.
- Ocean acidification due to increasing atmospheric carbon dioxide (PDF). 2005. ISBN 0-85403-617-2.
- Jiang, Li-Qing; Carter, Brendan R.; Feely, Richard A.; Lauvset, Siv K.; Olsen, Are (2019). "Surface ocean pH and buffer capacity: past, present and future". Scientific Reports. 9 (1): 18624. Bibcode:2019NatSR...918624J. doi:10.1038/s41598-019-55039-4. PMC 6901524. PMID 31819102. Text was copied from this source, which is available under a Creative Commons Attribution 4.0 International License Archived 16 October 2017 at the Wayback Machine
- Zhang, Y.; Yamamoto‐Kawai, M.; Williams, W.J. (16 February 2020). "Two Decades of Ocean Acidification in the Surface Waters of the Beaufort Gyre, Arctic Ocean: Effects of Sea Ice Melt and Retreat From 1997–2016". Geophysical Research Letters. 47 (3). doi:10.1029/2019GL086421. S2CID 214271838.
- Beaupré-Laperrière, Alexis; Mucci, Alfonso; Thomas, Helmuth (31 July 2020). "The recent state and variability of the carbonate system of the Canadian Arctic Archipelago and adjacent basins in the context of ocean acidification". Biogeosciences. 17 (14): 3923–3942. Bibcode:2020BGeo...17.3923B. doi:10.5194/bg-17-3923-2020. S2CID 221369828.
- Ueyama M, Ichii K, Kobayashi H, Kumagai TO, Beringer J, Merbold L, et al. (17 July 2020). "Inferring CO2 fertilization effect based on global monitoring land-atmosphere exchange with a theoretical model". Environmental Research Letters. 15 (8): 084009. Bibcode:2020ERL....15h4009U. doi:10.1088/1748-9326/ab79e5. ISSN 1748-9326.
- Tharammal T, Bala G, Narayanappa D, Nemani R (April 2019). "Potential roles of CO2 fertilization, nitrogen deposition, climate change, and land use and land cover change on the global terrestrial carbon uptake in the twenty-first century". Climate Dynamics. 52 (7–8): 4393–4406. Bibcode:2019ClDy...52.4393T. doi:10.1007/s00382-018-4388-8. ISSN 0930-7575. S2CID 134286531.
- Hararuk O, Campbell EM, Antos JA, Parish R (December 2018). "Tree rings provide no evidence of a CO2 fertilization effect in old-growth subalpine forests of western Canada". Global Change Biology. 25 (4): 1222–1234. Bibcode:2019GCBio..25.1222H. doi:10.1111/gcb.14561. PMID 30588740.
- Cartwright J (16 August 2013). "How does carbon fertilization affect crop yield?". environmentalresearchweb. Environmental Research Letters. Archived from the original on 27 June 2018. Retrieved 3 October 2016.
- Smith WK, Reed SC, Cleveland CC, Ballantyne AP, Anderegg WR, Wieder WR, et al. (March 2016). "Large divergence of satellite and Earth system model estimates of global terrestrial CO2 fertilization". Nature Climate Change. 6 (3): 306–310. Bibcode:2016NatCC...6..306K. doi:10.1038/nclimate2879. ISSN 1758-678X.
- Chen C, Riley WJ, Prentice IC, Keenan TF (March 2022). "CO2 fertilization of terrestrial photosynthesis inferred from site to global scales". Proceedings of the National Academy of Sciences of the United States of America. 119 (10): e2115627119. doi:10.1073/pnas.2115627119. PMC 8915860. PMID 35238668.
- Bastos A, Ciais P, Chevallier F, Rödenbeck C, Ballantyne AP, Maignan F, Yin Y, Fernández-Martínez M, Friedlingstein P, Peñuelas J, Piao SL (7 October 2019). "Contrasting effects of CO2 fertilization, land-use change and warming on seasonal amplitude of Northern Hemisphere CO2 exchange". Atmospheric Chemistry and Physics. 19 (19): 12361–12375. Bibcode:2019ACP....1912361B. doi:10.5194/acp-19-12361-2019. ISSN 1680-7324.
- Li Q, Lu X, Wang Y, Huang X, Cox PM, Luo Y (November 2018). "Leaf Area Index identified as a major source of variability in modelled CO2 fertilization". Biogeosciences. 15 (22): 6909–6925. doi:10.5194/bg-2018-213.
- Albani M, Medvigy D, Hurtt GC, Moorcroft PR (December 2006). "The contributions of land-use change, CO2 fertilization, and climate variability to the Eastern US carbon sink: Partitioning of the Eastern US Carbon Sink". Global Change Biology. 12 (12): 2370–2390. doi:10.1111/j.1365-2486.2006.01254.x. S2CID 2861520.
- Wang S, Zhang Y, Ju W, Chen JM, Ciais P, Cescatti A, et al. (December 2020). "Recent global decline of CO2 fertilization effects on vegetation photosynthesis". Science. 370 (6522): 1295–1300. Bibcode:2020Sci...370.1295W. doi:10.1126/science.abb7772. hdl:10067/1754050151162165141. PMID 33303610. S2CID 228084631.
- Sugden AM (11 December 2020). Funk M (ed.). "A decline in the carbon fertilization effect". Science. 370 (6522): 1286.5–1287. Bibcode:2020Sci...370S1286S. doi:10.1126/science.370.6522.1286-e. S2CID 230526366.
- Kirschbaum MU (January 2011). "Does enhanced photosynthesis enhance growth? Lessons learned from CO2 enrichment studies". Plant Physiology. 155 (1): 117–24. doi:10.1104/pp.110.166819. PMC 3075783. PMID 21088226.
- "Global Green Up Slows Warming". earthobservatory.nasa.gov. 18 February 2020. Retrieved 27 December 2020.
- Tabor A (8 February 2019). "Human Activity in China and India Dominates the Greening of Earth". NASA. Retrieved 27 December 2020.
- Zhu Z, Piao S, Myneni RB, Huang M, Zeng Z, Canadell JG, et al. (1 August 2016). "Greening of the Earth and its drivers". Nature Climate Change. 6 (8): 791–795. Bibcode:2016NatCC...6..791Z. doi:10.1038/nclimate3004. S2CID 7980894.
- Hille K (25 April 2016). "Carbon Dioxide Fertilization Greening Earth, Study Finds". NASA. Retrieved 27 December 2020.
- "If you're looking for good news about climate change, this is about the best there is right now". Washington Post. Retrieved 11 November 2016.
- Schimel D, Stephens BB, Fisher JB (January 2015). "Effect of increasing CO2 on the terrestrial carbon cycle". Proceedings of the National Academy of Sciences of the United States of America. 112 (2): 436–41. Bibcode:2015PNAS..112..436S. doi:10.1073/pnas.1407302112. PMC 4299228. PMID 25548156.
- Pisoft, Petr (25 May 2021). "Stratospheric contraction caused by increasing greenhouse gases". Environmental Research Letters. 16 (6): 064038. Bibcode:2021ERL....16f4038P. doi:10.1088/1748-9326/abfe2b.
- "Effects of climate change". Met Office. Retrieved 23 April 2023.
- Käse, Laura; Geuer, Jana K. (2018). "Phytoplankton Responses to Marine Climate Change – an Introduction". YOUMARES 8 – Oceans Across Boundaries: Learning from each other. pp. 55–71. doi:10.1007/978-3-319-93284-2_5. ISBN 978-3-319-93283-5. S2CID 134263396.
- Cheng, Lijing; Abraham, John; Hausfather, Zeke; Trenberth, Kevin E. (11 January 2019). "How fast are the oceans warming?". Science. 363 (6423): 128–129. Bibcode:2019Sci...363..128C. doi:10.1126/science.aav7619. PMID 30630919. S2CID 57825894.
- Doney, Scott C.; Busch, D. Shallin; Cooley, Sarah R.; Kroeker, Kristy J. (17 October 2020). "The Impacts of Ocean Acidification on Marine Ecosystems and Reliant Human Communities". Annual Review of Environment and Resources. 45 (1): 83–112. doi:10.1146/annurev-environ-012320-083019. Text was copied from this source, which is available under a Creative Commons Attribution 4.0 International License Archived 2017-10-16 at the Wayback Machine
- IPCC, 2021: Annex VII: Glossary [Matthews, J.B.R., V. Möller, R. van Diemen, J.S. Fuglestvedt, V. Masson-Delmotte, C. Méndez, S. Semenov, A. Reisinger (eds.)]. In Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change [Masson-Delmotte, V., P. Zhai, A. Pirani, S.L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M.I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T.K. Maycock, T. Waterfield, O. Yelekçi, R. Yu, and B. Zhou (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, pp. 2215–2256, doi:10.1017/9781009157896.022.
- Geden, Oliver (May 2016). "An actionable climate target". Nature Geoscience. 9 (5): 340–342. Bibcode:2016NatGe...9..340G. doi:10.1038/ngeo2699. ISSN 1752-0908. Archived from the original on 25 May 2021. Retrieved 7 March 2021.
- Schenuit, Felix; Colvin, Rebecca; Fridahl, Mathias; McMullin, Barry; Reisinger, Andy; Sanchez, Daniel L.; Smith, Stephen M.; Torvanger, Asbjørn; Wreford, Anita; Geden, Oliver (4 March 2021). "Carbon Dioxide Removal Policy in the Making: Assessing Developments in 9 OECD Cases". Frontiers in Climate. 3: 638805. doi:10.3389/fclim.2021.638805. ISSN 2624-9553.
- IPCC (2022). Shukla, P.R.; Skea, J.; Slade, R.; Al Khourdajie, A.; et al. (eds.). Climate Change 2022: Mitigation of Climate Change (PDF). Contribution of Working Group III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, UK and New York, NY, USA: Cambridge University Press (In Press). doi:10.1017/9781009157926. ISBN 9781009157926.
- Walker, James C.G. (June 1985). "Carbon dioxide on the early earth" (PDF). Origins of Life and Evolution of the Biosphere. 16 (2): 117–27. Bibcode:1985OrLi...16..117W. doi:10.1007/BF01809466. hdl:2027.42/43349. PMID 11542014. S2CID 206804461. Archived (PDF) from the original on 14 September 2012. Retrieved 30 January 2010.
- Pavlov, Alexander A.; Kasting, James F.; Brown, Lisa L.; Rages, Kathy A.; Freedman, Richard (May 2000). "Greenhouse warming by CH4 in the atmosphere of early Earth". Journal of Geophysical Research. 105 (E5): 11981–90. Bibcode:2000JGR...10511981P. doi:10.1029/1999JE001134. PMID 11543544.
- Zahnle, K.; Schaefer, L.; Fegley, B. (2010). "Earth's Earliest Atmospheres". Cold Spring Harbor Perspectives in Biology. 2 (10): a004895. doi:10.1101/cshperspect.a004895. PMC 2944365. PMID 20573713.
- Olson JM (May 2006). "Photosynthesis in the Archean era". Photosynth. Res. 88 (2): 109–17. doi:10.1007/s11120-006-9040-5. PMID 16453059. S2CID 20364747.
- Buick R (August 2008). "When did oxygenic photosynthesis evolve?". Philos. Trans. R. Soc. Lond. B Biol. Sci. 363 (1504): 2731–43. doi:10.1098/rstb.2008.0041. PMC 2606769. PMID 18468984.
- Osborne, C.P.; Beerling, D.J. (2006). "Nature's green revolution: the remarkable evolutionary rise of C4 plants". Philosophical Transactions of the Royal Society B: Biological Sciences. 361 (1465): 173–94. doi:10.1098/rstb.2005.1737. PMC 1626541. PMID 16553316.
- Lovelock, J. E. (1972). "Gaia as seen through the atmosphere". Atmospheric Environment. 6 (8): 579–580. Bibcode:1972AtmEn...6..579L. doi:10.1016/0004-6981(72)90076-5. Archived from the original on 3 November 2011. Retrieved 22 March 2014.
- Li, K.-F. (30 May 2009). "Atmospheric pressure as a natural climate regulator for a terrestrial planet with a biosphere". Proceedings of the National Academy of Sciences. 106 (24): 9576–9579. Bibcode:2009PNAS..106.9576L. doi:10.1073/pnas.0809436106. PMC 2701016. PMID 19487662. Archived from the original on 12 February 2013. Retrieved 22 March 2014.
- Etheridge, D.M.; Steele, L.P.; Langenfelds, R.L.; Francey, R.J.; Barnola, JM; Morgan, VI (June 1998). "Historical CO2 record derived from a spline fit (20-year cutoff) of the Law Dome DE08 and DE08-2 ice cores". Carbon Dioxide Information Analysis Center. Oak Ridge National Laboratory. Archived from the original on 5 March 2012. Retrieved 12 June 2007.
- Amos, J. (4 September 2006). "Deep ice tells long climate story". BBC News. Archived from the original on 23 January 2013. Retrieved 28 April 2010.
- Hileman B. (November 2005). "Ice Core Record Extended: Analyses of trapped air show current CO2 at highest level in 650,000 years". Chemical & Engineering News. 83 (48): 7. doi:10.1021/cen-v083n048.p007. ISSN 0009-2347. Archived from the original on 15 May 2019. Retrieved 28 January 2010.
- Vostok Ice Core Data Archived 27 February 2015 at the Wayback Machine, ncdc.noaa.gov Archived 22 April 2021 at the Wayback Machine
- Richerson P.J.; Boyd R.; Bettinger R.L. (July 2001). "Was Agriculture Impossible During The Pleistocene But Mandatory During The Holocene?" (PDF). American Antiquity. 66 (3): 387–411. doi:10.2307/2694241. JSTOR 2694241. S2CID 163474968. Archived from the original (PDF) on 12 January 2006.
- Witkowski, Caitlyn (28 November 2018). "Molecular fossils from phytoplankton reveal secular Pco2 trend over the Phanerozoic". Science Advances. 2 (11): eaat4556. Bibcode:2018SciA....4.4556W. doi:10.1126/sciadv.aat4556. PMC 6261654. PMID 30498776.
- "New CO2 data helps unlock the secrets of Antarctic formation". Physorg.com. 13 September 2009. Archived from the original on 15 July 2011. Retrieved 28 January 2010.
- Pagani, Mark; Huber, Matthew; Liu, Zhonghui; Bohaty, Steven M.; Henderiks, Jorijntje; Sijp, Willem; Krishnan, Srinath; Deconto, Robert M. (2 December 2011). "Drop in carbon dioxide levels led to polar ice sheet, study finds". Science. 334 (6060): 1261–4. Bibcode:2011Sci...334.1261P. doi:10.1126/science.1203909. PMID 22144622. S2CID 206533232. Archived from the original on 22 May 2013. Retrieved 14 May 2013.
- Current global map of carbon dioxide concentrations.
- Global Carbon Dioxide Circulation (NASA; 13 December 2016)
- Video (03:10) – A Year in the Life of Earth's CO2 (NASA; 17 November 2014)
Excel is a powerful tool that allows users to perform complex calculations and analyze data efficiently. However, when it comes to circular references, things can get tricky. Circular references occur when a formula refers back to the cell it is located in or to a series of cells that ultimately lead back to the original cell. This can create a never-ending loop of calculations, causing errors and confusion. In this blog post, we will delve into the definition of circular references in Excel and explore why they can be problematic.
- Circular references occur when a formula refers back to the cell it is located in or to a series of cells that ultimately lead back to the original cell.
- Circular references can create a never-ending loop of calculations, causing errors and confusion.
- Understanding circular references is important in order to identify and troubleshoot them effectively.
- Iterating circular references can save time and effort in complex calculations while achieving accurate results.
- Best practices for working with circular references include minimizing their use, documenting and labeling them, and continuously exploring and experimenting with them to enhance Excel skills.
Understanding Circular References
In Excel, circular references occur when a formula refers to its own cell or indirectly refers to itself through a series of formulas. This can create a loop in the calculations, leading to incorrect results or an infinite calculation loop.
A. How circular references work in Excel
In Excel, a circular reference is created when a cell contains a formula that refers to its own cell. For example, if cell A1 contains the formula "=A1+1", it creates a circular reference because it refers to itself.
Circular references can also occur indirectly, where one formula refers to another cell, which in turn refers back to the original formula. This creates a chain of formulas that eventually leads back to the original formula, forming a circular reference.
Excel allows circular references to be used, but it requires iterative calculations to solve them. Iterative calculations repeatedly recalculate the formulas until a certain condition is met, such as a specific number of iterations or a tolerance limit.
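To make the convergence idea concrete, the Python sketch below roughly emulates Excel-style iterative calculation for a single circular formula: it re-evaluates the formula until either the change between passes falls below a tolerance or an iteration cap is hit. The function and parameter names are mine, not Excel's, and the formula used is an arbitrary example.

# Rough emulation of iterative calculation for one circular formula.
def iterate(formula, start=0.0, max_iterations=100, max_change=0.001):
    value = start
    for _ in range(max_iterations):
        new_value = formula(value)
        if abs(new_value - value) < max_change:   # converged within the tolerance
            return new_value
        value = new_value
    return value   # give up after the iteration limit, as Excel does

# A circular formula such as A1 = 0.5*A1 + 10 settles at 20.
print(round(iterate(lambda previous: 0.5 * previous + 10), 2))   # ≈ 20.0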
B. Identifying circular references in a worksheet
Excel provides a feature to display a warning when a circular reference is detected in a worksheet. To enable this feature:
- Click on the File tab in the Excel ribbon.
- Choose Options to open the Excel Options dialog box.
- Select the Formulas category.
- Under the Error Checking section, check the box next to Circular references.
- Click OK to apply the changes.
Once enabled, Excel will display a warning message whenever a circular reference is detected in a worksheet. The message will provide information about the cell(s) causing the circular reference, allowing you to locate and resolve the issue.
The Benefits of Iterating Circular References
Iterating circular references in Excel can provide several benefits, both in terms of saving time and effort in complex calculations and achieving accurate results in iterative calculations. Let's explore these benefits in more detail:
Saving time and effort in complex calculations
The ability to iterate circular references can significantly streamline complex calculations in Excel. Instead of manually performing multiple iterations, Excel does the work for you, automatically updating the values until convergence is achieved.
This saves valuable time and effort, especially when dealing with large datasets or intricate formulas. By allowing Excel to handle the iterative process, you can focus on analyzing the results and making informed decisions based on the calculated values.
Achieving accurate results in iterative calculations
Iterative calculations often involve scenarios where a formula depends on its own output. This creates a circular reference that has to be resolved iteratively, with each pass refining the result until it converges.

By enabling circular references and iterations in Excel, you can ensure accurate results in such calculations. Excel's built-in iterative calculation feature repeatedly refines the values, helping you obtain a reliable final outcome.
In complex models and financial analyses, iterating circular references can be crucial for accurate forecasting, scenario analysis, and decision-making. It provides a reliable way to incorporate changing inputs and feedback mechanisms, leading to more informative and dependable results.
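As one concrete, hypothetical illustration of such a feedback mechanism: suppose a bonus pool is defined as 10% of net profit, while net profit is profit before bonus minus the bonus itself. A rough Python sketch of resolving that circularity iteratively, with made-up numbers:

```python
# Hypothetical circular relationship:
#   bonus      = 10% of net_profit
#   net_profit = profit_before_bonus - bonus
profit_before_bonus = 1_000_000.0
bonus_rate = 0.10

bonus = 0.0
for _ in range(100):                      # iteration cap, like Excel's setting
    net_profit = profit_before_bonus - bonus
    new_bonus = bonus_rate * net_profit
    if abs(new_bonus - bonus) < 0.01:     # "maximum change" tolerance
        bonus = new_bonus
        break
    bonus = new_bonus

print(f"bonus = {bonus:,.2f}, net profit = {profit_before_bonus - bonus:,.2f}")
# Closed-form check: bonus = rate * profit / (1 + rate) = 90,909.09
```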
In conclusion, the benefits of iterating circular references in Excel are twofold: saving time and effort in complex calculations, and achieving accurate results in iterative calculations. By leveraging these capabilities, you can enhance your analytical capabilities and make better-informed decisions based on reliable data.
Setting up and Enabling Circular References in Excel
Excel is a powerful tool that allows users to perform various calculations and analyses. One advanced feature that Excel offers is the ability to handle circular references, which occur when a formula depends on its own cell. In this chapter, we will explore how to set up and enable circular references in Excel.
A. Enabling iterative calculations in Excel settings
In order to enable circular references, you need to enable iterative calculations in Excel settings. Iterative calculations allow Excel to repeatedly recalculate a worksheet until a specific condition is met. Here's how you can enable iterative calculations:
- Step 1: Open Excel and go to "File" in the menu bar.
- Step 2: Select "Options" to open the Excel Options dialog box.
- Step 3: In the Excel Options dialog box, click on "Formulas" in the left sidebar.
- Step 4: Check the box next to "Enable iterative calculation."
- Step 5: Adjust the maximum number of iterations and the maximum change values as per your requirements.
- Step 6: Click "OK" to save the changes and close the Excel Options dialog box.
B. Creating circular references in formulas
Once iterative calculations are enabled, you can create circular references in formulas. Circular references can be useful in certain scenarios, such as when you need a formula to refer to its own cell. Here's how you can create circular references:
- Step 1: Select the cell where you want to enter the formula that will create the circular reference.
- Step 2: Type a formula that refers back to the cell it is entered in, creating the circular reference. For example, entering "=A1+B1" in cell A1 is circular because the formula refers to its own cell.
- Step 3: Press "Enter" to apply the formula and create the circular reference.
- Step 4: Excel will display a warning indicating that the formula contains a circular reference. Click "OK" to acknowledge the warning.
- Step 5: Excel will start the iterative calculation process and update the value of the cell based on the circular reference.
- Step 6: The iterative calculation will continue until the specified condition is met or the maximum number of iterations is reached.
By following these steps, you can set up and enable circular references in Excel. It is important to use circular references responsibly and ensure that they are necessary for your calculations. Understanding how to use circular references effectively can enhance your ability to perform complex calculations and analysis in Excel.
Managing and Troubleshooting Circular References
When working with complex spreadsheets in Excel, it's not uncommon to run into circular references – formulas that refer back to the cell they are located in or create a loop with other cells. These circular references can cause errors and inconsistencies in your calculations. In this chapter, we will explore how to effectively manage and troubleshoot circular references in Excel.
A. Tracing precedents and dependents to identify circular references
One of the first steps in managing circular references is identifying their presence in your spreadsheet. Excel provides a useful tool for tracing precedents and dependents, which allows you to visualize the relationships between cells and identify any circular references that may be present. Here's how you can use this tool:
- Select the cell: Start by selecting the cell that you suspect may have a circular reference.
- Trace Precedents: Click on the "Formulas" tab in the Excel ribbon and then select "Trace Precedents" under the "Formula Auditing" section. This will display arrows indicating the cells that directly contribute to the formula in the selected cell.
- Trace Dependents: Similarly, you can also use the "Trace Dependents" option to identify which cells depend on the selected cell.
- Review arrow paths: Examine the arrows and their paths to find any circular references. If you encounter a loop or a cell that refers back to itself, you have identified a circular reference.
B. Resolving circular references by adjusting formulas or logic
Once you have identified the circular references in your spreadsheet, you can take steps to resolve them. Here are some approaches you can try:
- Adjust formulas: Start by examining the formulas in the cells involved in the circular reference. Look for any inconsistencies or errors that may be causing the loop. Once identified, modify the formulas to remove the circular reference or restructure them to break the loop.
- Break the loop: In some cases, you may need to introduce additional cells or change the logic of your calculations to break the circular reference loop. This could involve creating intermediate calculations or finding alternative approaches to achieve the desired outcome.
- Use iterative calculations: Excel offers an option for iterative calculations, which can be helpful in certain scenarios with circular references. Enabling iterative calculations allows Excel to repeatedly recalculate the worksheet until a specific condition is met, helping to resolve circular references.
By adjusting formulas or logic, you can effectively resolve circular references in your Excel spreadsheets and ensure accurate calculations. However, it's important to carefully review and test the changes you make to ensure the integrity of your data.
Best Practices for Working with Circular References
When working with circular references in Excel, it is important to follow best practices to ensure efficient and accurate calculations. This chapter outlines some key strategies to minimize the use of circular references and effectively document them for future reference.
Minimizing the use of circular references when possible
- 1. Avoid unnecessary formulas: Before introducing a circular reference, assess if there are alternative approaches to achieve the desired result without resorting to circular calculations. Consider simplifying complex formulas or breaking them down into multiple parts.
- 2. Use iterative calculations sparingly: Iterative calculations can be resource-intensive and may cause performance issues or incorrect results. Only enable iterative calculations when absolutely necessary, and limit the number of iterations to minimize the impact on Excel's processing.
- 3. Optimize calculation order: Arrange your formulas and calculations so that values flow in one direction, from inputs to outputs. Excel recalculates cells based on their dependencies, so structuring your worksheet with a clear, one-way flow of data helps prevent circular references or reduce their impact.
Documenting and labeling circular references for future reference
- 1. Add comments: Annotate the cells or formulas involved in circular references with comments to provide context and explain their purpose. This can help future users, including yourself, understand the logic behind the circular calculations.
- 2. Use consistent naming conventions: Labeling cells or ranges involved in circular references with meaningful names can make it easier to identify and manage these references. Consider using descriptive names that reflect the purpose or use of the circular calculations.
- 3. Keep track of dependencies: Maintain a record or worksheet that documents the dependencies between cells to assist in troubleshooting or understanding circular references. Clearly indicate which cells rely on circular calculations and their interdependencies.
By following these best practices, you can minimize the reliance on circular references, optimize performance, and ensure that future users can comprehend and manage these calculations efficiently.
In conclusion, understanding and iterating circular references in Excel is crucial for effectively managing complex calculations and data analysis. By grasping the concept of circular references and learning how to work with them, users can unlock the full potential of Excel and improve their overall productivity. Furthermore, I encourage everyone to explore and experiment with circular references to enhance their Excel skills. Trying different scenarios and learning from each iteration can lead to valuable insights and innovative approaches to problem-solving.
Nested Decorators in Python
Python functions are first-class objects in the Python programming language. It means that a function can be assigned to a variable, return another function, and, most importantly, take another function as an argument. The concept of the Python decorator is based on these features of functions.
It is assumed that you have a basic understanding of Python decorators. If you aren't familiar with decorators, you can learn about them from our Python decorators tutorial. In this tutorial, we will learn about nested decorators, also called chaining of decorators.
Nested Decorators in Python
Everything in Python is an object, and each object has an associated class. Python decorators are used to modify a function's behavior without changing its source code. As the name suggests, a decorator decorates (wraps) something.
Nested decorators are as simple as normal decorators. Nesting means placing one thing inside another; therefore, nested decorators mean applying more than one decorator to a single function. Python provides this facility, which makes decorators useful as reusable building blocks that combine several behaviours.
How are Nested Decorators used?
A function can be decorated multiple times; nested decorators are also known as chaining of decorators. To create nested decorators, we first define the decorators we want to wrap the output with and then apply them to the function using the pie syntax (the @ sign). Let's understand the following syntax.
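The syntax referred to here is not shown in the text above, so the following is a minimal runnable sketch of what stacking two decorators with the pie syntax looks like; decor1 and decor2 are placeholder names:

```python
def decor1(func):                 # placeholder outer decorator
    def wrapper():
        return "<decor1>" + func() + "</decor1>"
    return wrapper

def decor2(func):                 # placeholder inner decorator
    def wrapper():
        return "<decor2>" + func() + "</decor2>"
    return wrapper

@decor1      # applied second (outermost)
@decor2      # applied first (closest to the function)
def greet():
    return "Hello"

# Stacking with @ is equivalent to: greet = decor1(decor2(greet))
print(greet())   # <decor1><decor2>Hello</decor2></decor1>
```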
As we can see in the above syntax, two decorators are applied to one function. They are applied from the bottom up, i.e. in reverse order of how they are written: the decorator closest to the function is applied first. Think of constructing a building, where we start from the ground and then build the floors on top.
Let's understand the following example.
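The example listing itself is missing from the text, so the following is a plausible reconstruction based on the description that follows (two decorators that wrap the returned string with lower() and upper()); the function and decorator names are assumptions:

```python
def lowercase_decorator(function):
    def wrapper():
        return function().lower()
    return wrapper

def uppercase_decorator(function):
    def wrapper():
        return function().upper()
    return wrapper

@lowercase_decorator      # applied last (outermost), so the final text is lowercase
@uppercase_decorator      # applied first (closest to the function)
def message():
    return "This is a Basic Program of Nested Decorator."

print(message())
```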
Output:
this is a basic program of nested decorator.
In the above code, we first define two decorator functions, which wrap the decorated function's output string using the string methods lower() and upper().
Example - 2:
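The listing for this example is also missing; below is a reconstruction that matches the output shown underneath, with one decorator printing rows of '$' and the other rows of '#' around the call. The names and the count of 14 symbols per row are assumptions read off the output:

```python
def dollar_decorator(func):
    def wrapper():
        print("$ " * 14)        # outer rows of $
        func()
        print("$ " * 14)
    return wrapper

def hash_decorator(func):
    def wrapper():
        print("# " * 14)        # inner rows of #
        func()
        print("# " * 14)
    return wrapper

@dollar_decorator
@hash_decorator
def display():
    print("Hello")

display()
```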
Output:
$ $ $ $ $ $ $ $ $ $ $ $ $ $
# # # # # # # # # # # # # #
Hello
# # # # # # # # # # # # # #
$ $ $ $ $ $ $ $ $ $ $ $ $ $
Nesting Parameterized Decorators
Now, let's implement the nested parameterized decorator where the method takes arguments. Here, we will create the two parameterized decorators - One will perform the multiplication of the two parameters, and the second will perform the divide. Let's see the following example.
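The listing is again missing, so here is a sketch consistent with the description above: two decorators applied to a method that takes two arguments, one printing their product and the other their quotient. All names and the sample numbers are assumptions:

```python
def multiply_decorator(func):
    def wrapper(a, b):
        print("Multiplication:", a * b)   # uses the two arguments passed in
        return func(a, b)
    return wrapper

def divide_decorator(func):
    def wrapper(a, b):
        print("Division:", a / b)
        return func(a, b)
    return wrapper

@multiply_decorator     # runs first when the function is called
@divide_decorator       # runs second, then the original function
def show(a, b):
    print("Arguments:", a, b)

show(10, 5)
# Multiplication: 50
# Division: 2.0
# Arguments: 10 5
```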
Let's understand what we have done in the above code -
The above code works the same way as the previous examples; decorators can be applied to methods that take arguments as well.
Decorators are one of the best features of the Python programming language. In this tutorial, we have covered the advanced concept of nested decorators, explaining both simple and parameterized examples.
Shards of the Planet Mercury May Be Hiding on Earth
New research explains how meteorites called aubrites may actually be shattered pieces of the planet closest to the sun from the early days of the solar system.
Mercury does not make sense. It is a bizarre hunk of rock with a composition that is unlike its neighboring rocky planets.
“It’s way too dense,” said David Rothery, a planetary scientist at the Open University in England.
Most of the planet, the closest to the sun, is taken up by its core. It lacks a thick mantle like Earth has, and no one is quite sure why. One possibility is that the planet used to be much bigger — perhaps twice its current bulk or more. Billions of years ago, this fledgling proto-Mercury, or super Mercury, could have been hit by a large object, stripping away its outer layers and leaving the remnant we see behind.
While a nice idea, there has never been direct evidence for it. But some researchers think they have found something. In work presented at the Lunar and Planetary Science Conference in Houston in March, Camille Cartier, a planetary scientist at the University of Lorraine in France, and colleagues said pieces of this proto-Mercury may be hiding in museums and other meteorite collections. Studying them could unlock the planet’s mysteries.
“We don’t have any samples of Mercury” at the moment, said Dr. Cartier. Gaining such specimens “would be a small revolution” in understanding the natural history of the solar system’s smallest planet.
According to the Meteoritical Society, nearly 70,000 meteorites have been gathered around the world from places as remote as the Sahara and Antarctica, finding their way into museums and other collections. Most are from asteroids ejected from the belt between Mars and Jupiter, while more than 500 come from the moon. More than 300 are from Mars.
Noticeably absent from these documented space rocks are confirmed meteorites from our solar system’s innermost planets, Venus and Mercury. It is typically hypothesized that it is difficult, although not impossible, for detritus closer to the sun and its gravity to make their way farther out into the solar system.
Among a small number of meteorite collections are a rare type of space rock called aubrites. Named after the village Aubres in France, where the first meteorite of this type was found in 1836, aubrites are pale in color and contain small amounts of metal. They are low in oxygen and seem to have formed in an ocean of magma. About 80 aubrite meteorites have been found on Earth.
For these reasons, they seem to match scientific models of conditions on the planet Mercury in earlier days of the solar system. “We have often said that aubrites are very good analogues for Mercury,” Dr. Cartier said.
But scientists have stopped short of saying they are actually pieces of Mercury. Klaus Keil, a scientist at the University of Hawai’i at Manoa who died in February, argued in 2010 that aubrites were more likely to have originated from other kinds of asteroids than something that was ejected from Mercury, with some scientists favoring a group of asteroids in the belt called E-type asteroids. Among his evidence were signs that aubrites had been blasted by the solar wind — something Mercury’s magnetic field should have protected against.
Dr. Cartier, however, has another idea. What if aubrites originally came from Mercury?
Following from the hypothesis that a sizable object collided with a younger Mercury, Dr. Cartier said a large amount of material would have been thrown into space, about a third of the planet’s mass. A small amount of that debris would have been pushed by the solar wind into what is now the asteroid belt, forming the E-type asteroids.
There, the asteroids would have remained for billions of years, occasionally smashing together and being continually blasted by the solar wind, explaining the solar wind fingerprint seen in aubrites. But eventually, she suggested, some pieces were pushed toward Earth and fell to our planet as aubritic meteorites.
Low levels of nickel and cobalt found in aubrites match what we would expect from the proto-Mercury, Dr. Cartier says, while data from NASA’s Messenger spacecraft that orbited Mercury from 2011 to 2015 supports similarities between Mercury’s composition and aubrites.
“I think aubrites are the shallowest portions of the mantle of a large proto-Mercury,” Dr. Cartier said. “This could resolve the origin of Mercury.”
If true, it would mean that we have had pieces of Mercury — albeit a much more ancient version of the planet — hiding in drawers and display cases for more than 150 years.
“It would be fantastic,” said Sara Russell, a meteorite expert at the Natural History Museum in London, who was not involved in Dr. Cartier’s work. The museum has 10 aubrites in its collection.
Other experts have reservations about the hypothesis.
Jean-Alix Barrat, a geochemist at the University of Western Brittany in France and one of the few aubrite experts in the world, does not think there is enough aubritic material in meteorite collections to work out whether their contents match with models of the super Mercury.
“The authors are a little bit optimistic,” he said. “The data they use is not sufficient to validate their conclusions.”
In response, Dr. Cartier said she removed possible contaminating rocks from her aubrite samples to get representative levels of nickel and cobalt, which she was “confident” are correct.
Jonti Horner, an expert in asteroid dynamics from the University of Southern Queensland in Australia, also was not sure whether material from Mercury could enter a stable orbit in the asteroid belt and hit Earth billions of years later. “It just doesn’t make sense to me from a dynamics point of view,” he said.
Christopher Spalding, an expert in planet formation at Princeton University and a co-author of Dr. Cartier’s study, says his modeling shows the solar wind can push material away from Mercury sufficiently to link it to E-type asteroids.
“The young sun was highly magnetic and spinning fast,” he said, turning the solar wind into a “whirlpool” that could send pieces of Mercury to the asteroid belt. Another possibility, yet to be modeled, is that the gravitational hefts of Venus and Earth scattered the material further out before some worked its way back to our planet.
Dr. Cartier’s proposal could be put to the test soon. A joint European-Japanese space mission called BepiColombo is currently on its way to orbit Mercury in December 2025. Dr. Cartier presented her idea to a group of BepiColombo scientists in early May.
“I was impressed by it,” said Dr. Rothery, a member of the BepiColombo science team. He said their mission could look for evidence of nickel in Mercury’s surface that would link the planet more conclusively to collected aubrites.
It will not be “straightforward,” he notes, given that Mercury’s surface today will only resemble what is left behind from the proto-Mercury. But he said the results would “help feed into the modeling.”
Willy Benz, an astrophysicist from the University of Bern in Switzerland who first proposed the idea of a proto-Mercury, says that if aubrites do come from Mercury, they will add to evidence of an active and violent early solar system.
“It will show that giant impacts are quite common,” he said, and that they “play an important role in shaping the architectures of planetary systems.”
Dr. Cartier is further testing her ideas by melting some aubrite samples under high pressure. If these experiments and the data from BepiColombo bolster her hypothesis, aubrites may suddenly be promoted from an oddity in our meteorite collections into some of the most remarkable meteorites ever collected — pieces of the solar system’s innermost world. |
Normal Force in Sliding Friction
by Ron Kurtus (revised 17 November 2016)
The normal force in sliding friction is the perpendicular force pushing the object to the surface on which it is sliding. It is an essential part of the standard sliding friction equation.
That force can be due to the weight of an object or that caused by an external push.
When the weight is on an incline, the normal force is reduced by the cosine of the incline angle.
Questions you may have include:
- What is the standard friction equation?
- When is weight the normal force?
- What are examples of external normal force?
This lesson will answer those questions. Useful tool: Units Conversion
Standard sliding friction equation
The normal force is seen in the standard sliding friction equation:
Fs = μsN
N = Fs/μs
- N is the normal or perpendicular force pushing the two objects together
- Fs is the sliding force of friction
- μs is the sliding coefficient of friction for the two surfaces (Greek letter "mu")
(See Standard Friction Equation for details.)
Static and kinetic coefficients
The sliding coefficient of friction can be static when the object is stationary or kinetic when the object is sliding over the other surface.
The coefficient of sliding friction in the static mode of motion (μss) is greater than the coefficient in the kinetic or moving mode (μks).
μss > μks
(See Coefficient of Sliding Friction for more information.)
Weight as normal force
The normal force N can be the weight of an object as caused by gravity. This would apply in situations where you slide a heavy object across the floor or some horizontal surface.
Since weight is the force pushing the objects together, the friction equation becomes:
Fs = μsW
where W is the weight of the object.
Thus if a box weighs 100 pounds and the coefficient of friction between it and the ground is 0.7, then the force required to push the box along the floor is 70 pounds.
Likewise, if a box weighing 500 newtons is placed on ice with a coefficient of friction of only 0.001, it would take only 0.5 newtons to move the box.
Weight on incline
If the weight is on an incline, the normal force will be reduced by the cosine of the incline angle. The equation is
N = W*cos(β)
- N is the normal force on the incline
- W is the weight
- β is the incline angle (Greek letter beta)
- cos(β) is the cosine of the angle β
- W*cos(β) is W times cos(β)
Thus, the friction equation is:
Fs = μsW*cos(β)
An illustration of the friction on a box on an incline is:
Normal force is weight times cosine of angle
(See Sliding Friction on an Inclined Surface for more information)
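As a quick numerical check of these equations, here is a small Python sketch. The flat-floor case reproduces the 100-pound example from earlier; the incline figures (500 N, coefficient 0.3, 30 degrees) are assumed values chosen only for illustration:

```python
import math

def sliding_friction(weight, mu_s, incline_deg=0.0):
    """Sliding friction force Fs = mu_s * N, where N = W * cos(beta)
    on an incline (beta = 0 gives the flat-surface case N = W)."""
    normal = weight * math.cos(math.radians(incline_deg))
    return mu_s * normal

# Flat floor: 100-pound box, coefficient 0.7 -> 70 pounds (as in the text)
print(sliding_friction(100, 0.7))                  # 70.0

# Incline: 500-newton box, coefficient 0.3, 30-degree slope (assumed values)
print(round(sliding_friction(500, 0.3, 30), 1))    # about 129.9 N
```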
External normal force
Examples of external normal forces include pushing a sanding block on an object and a pair of pliers.
Pushing object sideways
If you push a sanding block against a wooden desk you are sanding, the normal force is the force with which you push on the block. You move the sanding block in one direction, and the force of friction acts in the opposite direction.
Applying normal force on sanding block and wooden desk
Two normal forces
Sometimes, two normal forces are used to cause the friction.
One example is a pair of pliers, which applies a normal force on both sides of a piece of wood it is holding. Another example is the calipers on automobile disc brakes, which apply a force on both sides of the metal disc to slow down the car.
The normal force in the standard friction equation is the force pushing the two objects together, perpendicular to their surfaces. That force can be due to the weight of an object or that caused by an external push. When the weight is on an incline, the normal force is reduced by the cosine of the incline angle.
It all makes sense
Resources and references
Friction Resources - Extensive list
Friction Concepts - HyperPhysics
Friction Science and Technology (Mechanical Engineering Series) by Peter J. Blau; Marcel Dekker Pub. (1995) $89.95
Control of Machines with Friction (The International Series in Engineering and Computer Science) by Brian Armstrong-Hélouvry; Springer Pub. (1991) $179.00
Questions and comments
Do you have any questions, comments, or opinions on this subject? If so, send an email with your feedback. I will try to get back to you as soon as possible.
GCSE: Accounting & Finance
How to calculate 'break even'
- 1 There are three ways this can be done. All will give the same answer which is the number of products the business must make or sell to ‘break even’. This means they receive as much revenue as their costs.
- 2 A break even table will list the fixed cost, variable cost, total cost (fixed plus variable cost), revenue and profit or loss for each level of output. As profit or loss is the revenue minus the total cost, this can be calculated relatively easily, especially if you use a spreadsheet program.
- 3 A break even graph plots the total cost and revenue for all the levels of output. Where the total cost and revenue intersect is the break even point. This can be easily produced from the table using the chart wizard.
- 4 The break even formula gives you the break even output directly: fixed costs divided by (the price of one unit minus the variable cost per unit). A worked sketch of this calculation appears after this list.
- 5 The margin of safety is the number of items being produced, over and above the break even point.
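A minimal Python sketch of the break even formula and the margin of safety from points 4 and 5 above; all the cost, price and output figures are invented for illustration:

```python
def break_even_output(fixed_costs, price_per_unit, variable_cost_per_unit):
    """Break even output = fixed costs / (price per unit - variable cost per unit)."""
    contribution_per_unit = price_per_unit - variable_cost_per_unit
    return fixed_costs / contribution_per_unit

# Hypothetical figures
fixed_costs = 5000.0
price = 12.0
variable_cost = 7.0
planned_output = 1500

be = break_even_output(fixed_costs, price, variable_cost)   # 1000 units
margin_of_safety = planned_output - be                      # 500 units
print(be, margin_of_safety)
```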
What is cash flow?
- 1 Cash flow looks at the cash flowing through a business. It is not the same as the profit being made as businesses may be receiving goods on credit or giving credit to customers. This means that although a business may be profitable, it may still run out of cash. This could cause the business to go bankrupt.
- 2 A cash flow forecast predicts the flow of cash going through the business. A business may use it to see if there are any months when it will run out of cash (see the sketch after this list).
- 3 Knowing that it may run out of cash in any month means that a business can plan for this by possibly arranging a bank overdraft.
- 4 A bank overdraft is an agreement arranged with a bank whereby if the business runs out of cash, the bank will lend it money to keep it trading. This overdraft will normally be at a high rate of interest but is better for a business than running out of cash.
- 5 A business may also cover a period of negative cash flow by deferring payment to suppliers or getting payment early from customers.
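A minimal Python sketch of a month-by-month cash flow forecast as described above (closing balance = opening balance + receipts − payments, carried forward); every figure is invented for illustration, and a negative closing balance flags a month where an overdraft might be needed:

```python
# Monthly figures are made up for illustration.
months   = ["Jan", "Feb", "Mar", "Apr"]
receipts = [8000, 6000, 4000, 9000]
payments = [7000, 7500, 6500, 7000]

opening = 1000
for month, cash_in, cash_out in zip(months, receipts, payments):
    closing = opening + cash_in - cash_out
    warning = "  <-- possible shortfall, arrange an overdraft" if closing < 0 else ""
    print(f"{month}: opening {opening:>6}, closing {closing:>6}{warning}")
    opening = closing          # closing balance becomes next month's opening balance
```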
What could be a source of finance?
- 1 Many students go wrong when discussing sources of finance by not relating them to the size of the business or the reason they need it. A new business starting up has different needs to an existing business looking to expand.
- 2 Sources of finance available to sole traders and partnerships include the owner’s funds, borrowing from friends and relatives, bank borrowing or funds from venture capitalists that specialise in lending to new businesses.
- 3 A problem for sole traders and partnerships is unlimited liability. This means that the owner is responsible for all the debts of the business, not just the amount they have invested.
- 4 Private limited companies and public limited companies have limited liability. This means that investors in the businesses can only lose the amount they have invested. This makes it much easier for them to raise finance as people are more likely to lend to them knowing the maximum amount they can lose.
- 5 A benefit of selling shares compared with borrowing from the bank is that the money does not need to be repaid. Share holders will expect a share of the profits. With a loan, the amount borrowed has to be repaid with interest.
247 GCSE Accounting & Finance essays
Cash flow. A cash flow forecast is a document that predicts cash requirements in the future. It helps a business save money for things it may need in the future.
A business can improve their financial situation by borrowing money from a bank, cutting costs or increasing sales. Businesses use cash flow forecasting to anticipate months where they may have a shortfall and get ready for them by taking action before they happen. It may help the business if they identify areas where the business was weak or strong and change strategy to deal with any problems and maximise potential. The five parts of a cash flow forecast are: Receipts, payments, excess of receipts over payments, opening bank balance and closing bank balance. Cash Inflow: This section shows how much the business (in this case, a garden centre)
- Essay length: 758 words
Examples of start up costs are premises, machinery, equipment, fixtures and fittings and market research to start up the business. Running Costs- Running costs are paid everyday to run the business, examples of these are wages, bills, raw materials and insurance. Fixed Variable Rent £40 Each Box purchased each day: £5.50 License for Trade £20 Block of Ice £50 Delivery Charge (weekly) £14 Total £124 Total : £27.50 Overall Total: £151.50 Fixed Variable Rent £40 Each Box purchase £55 License of Trade £20 Block of Ice £50 Delivery Charge £14 Overall Total: £399 Task Two The following section I will be explaining the importance of costs, revenues and profits.
- Essay length: 784 words
(Source: www.brunswickis.co.uk) Budgetary control is a process of monitoring and analysing financial control within organisation. Budget "A budget is a plan, which is set out in numbers. It sets out figures that an organisation or company hopes to achieve in the future." (Source: THE TIMES 100) Budget is a financial plan, sets out financial targets and a plan expressed in money over a given period. Organisations prepare budgets for sales, production, costs, assets, liabilities and cash flow and prepare in advance then compared with actual performance. Managers are responsible for the controllable costs within their budget and they are require to take appropriate action if there are any mistakes.
- Essay length: 1263 words
Business Finance. There are a number of sources of finance, which businesses will need in order to start up a new business, make their business expand and buy materials required for their business.
You can Retain Ownership; this means instead of raising funds by selling a share in the property or the business to an investor you retain complete ownership. There is also Tax Advantage because interest expenses on your mortgage are tax deductible and are made with pre-tax money. Disadvantages The Disadvantages using this method are that the longer you take to return the money, the higher the interest rate. Another disadvantage is that if the mortgage is not paid back, debt collectors will repossess your belongings so that you can pay back the mortgage.
- Essay length: 2125 words
In my opinion cash flow refers to the difference between the cash flowing into the business for example through sales revenue and the cash flowing out of the business for example bills and wages. A cash flow statement is a Financial document, which shows the cash inflows and the cash outflows for our business over the past 12 months. It includes those months in which our business suffered a negative cash flow (where cash outflows were greater than cash inflows)
- Essay length: 1677 words
Many of the investors had to borrow money to buy stocks but they only had to have 10% equity and 90% margin to buy securities. Speculations on stocks stimulated further price rises and created an economic bubble. The P/E ratios in 1929 were far beyond historical norms. The high level of speculations increased anxiety of the investors, so when on October, 24 prices started falling, many investors decided to sell their shares. The leading Wall Street bankers tried to stabilize the situation on Friday, but could not find a proper solution.
- Essay length: 1225 words
If all the products together make enough contribution then the business will make a profit. Fixed & Variable Fixed costs are costs which do not vary. They are mostly indirect costs - Management salaries, telephone bills and office rent.
- Essay length: 348 words
Calculation based on the difference between 4,010 and 3,600, over 3,600. At the same time cost of sales fell. We can straight away tell the company's gross profit has also increased. Taking these into account, we are able to calculate the Return on capital employed (ROCE), which for 2002 is 12.1, 2003 figures are better but 2004 are even better (13.0), showing the company is making use of its assets. An increase of 0.9 % in ROCE can be significant, especially in comparison to the amount of money the company may have borrowed. Therefore the company needs to ask it self is the ROCE sufficient enough, if it is in need of extra funds by means of debt.
- Essay length: 1344 words
But the company's success, were based on artificial inflated profits, dubious accounting practices, and some say fraud. The firm's success turned out to have involved an elaborate scam. Enron lied about its profits and stands accused of a range of shady dealings, including concealing debts. The profits eventually did not show up in the company's accounts. As the depth of deception unfolded investors and creditors retreated, forcing the firm into bankruptcy in December For Enron employees and retirees themselves, the consequences were crystal clear from the day the company crumbled. To put it simple, they lost their savings.
- Essay length: 830 words
This makes them hard to deal with, notably in break-even analysis. Examples of semi-variables include maintenance expenditure and telephone bills. In the latter case, it is clear that although a doubling of customer demand would not necessarily double a firm's telephone calls or bills, it is reasonable to expect that they would increase. Therefore the telephone is neither a fixed nor a variable cost. It is important to classify costs because it helps with spending, it helps with budgets and help in producing break-even charts.
- Essay length: 1342 words
For this task I am going to compare three different types of loans which are provided by Halifax, NatWest and Alliance and Leicester. I have chosen to get a loan from these banks because they are reliable.
NatWest Personal Loan NatWest offer a selection of loan products tailored to meet our customers' varying needs. So, whether you're looking to buy a new car or conservatory, need a financial boost during your studies or require a helping hand as a graduate, they're there to help. Their rate for fixed-rate personal loans of £10,000 or more is just 7.4% APR typical. I have chosen to take up a loan from Alliance and Leicester because they have the lowest interest rate of the three, which means I won't have to pay as much interest as I would to NatWest or Halifax.
- Essay length: 833 words
This is called depreciation. Each accountants must work how much depreciation to allow each fixed assets. This can then be used in the balance sheet and profit and loss account. The balance sheet will show the book value the book value of assets. This is their original value minus depreciation.
- Essay length: 224 words
Cash Flow for The Sea View Hotel. Evaluate how using cash flow forecasts and financial recording systems can contribute to managing business finances at The Sea View Hotel:
They have an irregular inflow for the refurbishments (£12000), but they spent over the money given (£15000), so more money went out than money went in. Task 3 Describe how each of the following financial transactions/documents could be recorded in order to prevent fraud in The Sea View Hotel: * Order Form: The purpose of the order form is to order goods from a supplier. It is completed by the buyer who then sends it on the company selling the required goods.
- Essay length: 890 words
I have been asked to define what revenue is and how businesses can generate revenue.Delight Lollies can generate revenue in many ways.
Another way they could generate revenue is renting out their premises to other businesses. This helps the business gain more money as they will be renting out another asset which will help them bring more money in rather than just getting money from sales. Delight Lollies can generate revenue in many ways. One way in which they can do this is by increasing the number of sales. They can do this by the word of mouth so encouraging their customers to tell their friends about their business so more people will come to their shop.
- Essay length: 833 words
Subway Groningen doesn't have that because all the money will goes straight to the sole trader and he will have to pay subway LLC for the franchising of subway and the rest of the money will end up going into the business. As we look all the overheads from Groningen subway they have a lot of fixed cost to pay such as, rent and rates, heating and lighting, telephone and general expenses this is because they are running a business and is dealing with customers, now the subway LLC you can say is the head office of the subway industry, they more have office supplies and vehicles and the equipment which I would guess sells to the franchising companies.
- Essay length: 2232 words
Finance for a new business. Mischa and Claire will need money to get them started. There are two basic costs linked with starting a new business, capital costs and start-up working capital.
Bank Loans - this is where money is borrowed for a fixed period and repayments (including interest) are paid monthly. Normally, banks ask for some sort of personal security, such as the owner's property which they could claim if the business defaulted on the loan. The government has setup a Small Firms Loan Guarantee system to help businesses which cannot provide security. Often, banks will offer new businesses incentives to open an account and provide an adviser. Grants and Loans - there are several types of grants and special loans available from local, national and European governments.
- Essay length: 770 words
Due to the said factors, internationalization of accounting standards is considered as a significant and essential part of the rapidly globalization economy. Recording transactions Accurate records are essential. If documents are lost of the business, the business could forget to demand payment for some jobs that already are done or another problem could be the payment of bills. These problems must be avoided at all costs because it could lead to bankruptcy. Monitoring activity and controlling the business Sound record keeping allows managers to keep track of orders, sales and bills.
- Essay length: 5880 words
2. Cheque: A cheque is usually the best way for a business to make payments (to other businesses) and receive payments from others. A cheque will also provide a record of any transaction that takes place. 3. Recording money coming into a business and going out of a business: Henry will need to use a two column cash book that will record payments he makes and payments he receives. Task 2 Part 2 / Part 3 / Part 4 In this section I will carry out the various financials that my DJ business will have to carry out in order to operate or function effectively.
- Essay length: 997 words
Term Definition Overdraft When you owe bank money and your balance is in minuses. Bank loans When you borrow money from a bank and pay it back with interest. Mortgages including remortgage of own house When you borrow money to buy a house (like the bank buying you a house) and you paying it back with interest normally per month. Loans from friends and family When you borrow money from people you know so normally comes with minimal risk as you can probably pay them when it suits you.
- Essay length: 614 words
By doing this the payment is delayed between 1-3 months. Leasing: The renting of perhaps: equipments, machinery and premises. Leasing is paid by parts(instalments) over a period of time normally within 1-3 years. Sale and lease back: In this case the business sells one of its main buildings to a financial institution and then leases it back from them by paying a rent. Businesses sell one of their main buildings to a financial association. Thereafter it is leased back from them by paying rent.
- Essay length: 718 words
If the business is making loss, cash flow forecast could be used to identify the problem by looking at the various out flow of the business and observing why are they higher than the cash inflow. JJ Supermarket receives its revenue (cash inflow) from sales of stock, capitals owner etc. JJ Supermarket will also have outflow from the business because in order to run a business they will have to pay for things like purchase of stock, business rates, electricity, advertisings etc.
- Essay length: 680 words
The company would also be able to continue despite the death, resignation or bankruptcy of management and members. Alton Towers could become dependant and not be a part of MEG, although I don't think this is a good idea because they might not get as many customers because other MEG attractions would not be advertising them, also MEG pay for things in the park and Alton Towers may not be able to cover the costs alone. On the other hand Alton Towers could do things without having to make a decision with MEG.
- Essay length: 624 words
In this assignment I will illustrate in a report the financial state of Domestic Dog Homes by use of accounting ratios. The ratios I will be using are: -Solvency -Profitability -Performance
The Acid test ratio formulae show us how the business can pay its liabilities without selling stock. Acid test ratio formulae: Acid test ratio = Current assets - Stock / Current liabilities Domestic Dog Home Acid test ratio: 1.46 = 40275 - 16300 / 16367 Profitability Equation Gross Profit Percentage To work out the Gross profit percentage you have to divide the gross profit with the turnover and multiply it by 100. The Gross Profit Percentage shows how well the business is managing its spending on stock.
- Essay length: 640 words
is because it is winter time and so your business may not be busy at this period of time. It is very important to fix this cash flow problem because if we do not fix this cash flow problem your business will not be able to make daily transactions and will be soon in a liquidity crisis. So, to solve this problem there are many solutions which can be put into action. Firstly, the obvious method to decrease cash outflow would be to reduce expenditure to improve your cash flow. The steps you can take in order to reduce expenditure are asking for a longer credit from your suppliers, so that you have time to pay for your expenses until cash is available.
- Essay length: 715 words
Different types of Costs My warrens restaurant goods product for fixed cost and variable cost Fixed Cost Variable Cost Microwave Ovens and Microwave Grills- Quantity 4- price £2316.00 Food stock e.g. Flour, vegetables, ingredients £500 per stock Samsung Medium Duty Microwave CM1329 - Touch Control 1300W Stainless Steel quantity 2 - £918 Chair =Kentucky armchair Sizes: 550 mm- price £1470 Kitchen utility e.g. plates = LSA MIKA DINNER PLATE, SET OF 6, Glass = LSA YOLA TUMBLER SET OF 6, LSA ILYA CHAMPAGNE FLUTES SET OF 2 - price £895 This is the table for my fixed cost and variable
- Essay length: 3736 words |
How the Event Horizon Telescope Showed Us a Black Hole
On April 10, 2019, we were presented with the first-ever close-up image of a black hole by the Event Horizon Telescope (EHT). This remarkable technological achievement was made possible by the collective efforts of hundreds of astrophysicists, engineers, and computer scientists. They arranged for simultaneous observations of their target with multiple telescopes around the globe and correlated the data between the instruments, effectively creating a planet-sized telescope. The data was then processed to make the image we saw in the news.
But did we really “see” a black hole when we were shown “just” a digital image? And how is it possible to create an Earth-sized telescope?
Let me start by explaining why the EHT really needed an Earth-sized telescope. An abundance of dust exists between our telescopes and the observed black holes. This dust absorbs electromagnetic radiation of short wavelengths such as visible light (about 5.5 x 10^-7 m), infrared light (about 10^-6 m), and so on. However, radiation with wavelengths of about 1 millimeter (10^-3 m) and larger is not affected by the dust. The angular resolution of a telescope is proportional to the observed wavelength divided by the diameter of the telescope. A longer wavelength results in lower resolution, while a bigger telescope mirror ensures higher resolution. The EHT, therefore, had to observe at a wavelength of around 1 mm. (They observed at 1.3 mm.) However, this wavelength also implied that they needed a telescope similar in size to the diameter of our planet to resolve the black hole shadow. It is not practically possible to construct a mirror of such a size, but we can still achieve the required resolution using the interferometer technique. To explain it, we will use a series of analogies.
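As a rough back-of-the-envelope check of this resolution argument (not the EHT collaboration's actual analysis), assuming the textbook diffraction-limit formula θ ≈ 1.22 λ/D and taking Earth's diameter as the baseline:

```python
import math

wavelength = 1.3e-3          # observed wavelength, metres
baseline   = 1.27e7          # roughly Earth's diameter, metres

theta_rad = 1.22 * wavelength / baseline            # diffraction-limited resolution
theta_microarcsec = math.degrees(theta_rad) * 3600 * 1e6

print(f"{theta_microarcsec:.0f} microarcseconds")   # ~26 microarcseconds
```

This lands in the tens-of-microarcseconds range, which is why an Earth-sized baseline is needed to resolve a black hole shadow of comparable angular size.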
First analogy: Imagine a real telescope mirror equivalent to the size of planet Earth and then placing over it a black cloth with several holes. The cloth would limit the telescope’s capabilities and reduce its light-collecting area, but we still would have a mighty planet-sized telescope with high-resolution capabilities.
Second analogy: Imagine a handful of small mirrors. One can place them together tightly and construct a nice medium-sized telescope mirror. But one can also choose to scatter them across a larger area. Each small mirror represents a place where the fabric from the first analogy has a hole. Thus, if one finds a smart way of connecting the small mirrors and analyzes the data collected by each of them together, one may be able to reproduce the capabilities (in particular, the resolution) of the large mirror similar in size to the area across which the mirrors were scattered. Additionally, in moving the small pieces around, one would cover more and more of the surface of the large mirror and thus get closer and closer to its full capabilities.
This is a toy illustration of how an interferometer works. EHT simultaneously collects the data from multiple telescopes spread across our planet and then correlates and analyzes the data from them jointly. The involved telescopes also change their relative positions with respect to the target due to the Earth’s rotation covering larger parts of the Earth-sized mirror.
Over the history of astronomical observations, we have learned to employ and trust technology to help us study the sky. The first observations were done with unaided eyes only. Then optical telescopes magnified the image and increased the light-collecting area from the pupil size to the size of the lens (and later the mirror), so smaller and fainter objects became visible in detail. The films (and other receivers) afforded us much longer exposures than the human eye is capable of. The films and receivers also allowed us to look outside the range of the visible spectrum, which was extremely useful to the study of celestial objects. (As the product of evolution on our particular planet, our eyes are strategically designed to be sensitive to the radiation from the Sun with a complete disregard of whether it is a good frequency range for the study of the rest of the universe.) Interferometers are just the next step in the evolution of visual aids. Therefore, we indeed "saw" a black hole although we were shown "just" a post-processed digital image.
It is true that science-wise the image of M87’s black hole did not teach us anything unexpected. It looked exactly as predicted. But perhaps this is not a bad thing. When the Large Hadron Collider in CERN started operating, it had to rediscover all the previously discovered particles. Only then, could it be trusted to search for unknown particles and to probe new physics. The first EHT image was proof of the value of new technology, and it passed the test. Should the subsequently released image show something unexpected and new, we will be more inclined to dive into its physical implications rather than questioning what went wrong with the observation. (Such a discovery, which matches predictions so well, has also, hopefully, demonstrated to the world in this age of anti-science that experts likewise should be trusted.)
What is next for the EHT? The other long-anticipated, and I would argue, more exciting target, is our own black hole in the center of our Milky Way galaxy known as Sagittarius A* (Sgr A*)—the subject of my own research at the Institute. Sgr A* is the closest supermassive black hole to Earth. It is located 26,000 light years away and has a mass 4,000,000 times that of the Sun. In contrast, M87’s black hole is 2,000 times further away and is 1,600 times more massive, but the sizes of the shadows of the black holes are similar. The mass of Sgr A* was deduced from the orbits of the nearby stars, which were tracked for twenty-five years, and scientists concluded that the object around which they orbit is so massive and so small that it can be nothing but a black hole. (Professor Scott Tremaine wrote more on this subject in his article “The Odd Couple: Quasars and Black Holes” for the Institute Letter in 2015).
A puzzling side of the behavior of Sgr A* is its accretion, namely, the behavior of in-falling gas. Here I would like to point out that the black hole does not suck in any material. The material falls into it by itself. In the same way, Earth does not suck up the International Space Station, which closely orbits it. The station experiences friction with the outer layers of the planet's atmosphere, which slows it down causing its orbit to sink lower; in order to stay in space, it has to be re-boosted, i.e., moved to a higher orbit, regularly. The gas clouds orbiting the black hole also experience the same kind of friction, get heated, slow down, and move closer and closer to the black hole, until they fall in. They, so to say, accrete onto or feed the black hole. The gas clouds also radiate the excessive heat while spiraling down, thus producing the emission we call black hole radiation. (The Hawking radiation from the black holes is hopelessly overwhelmed by the radiation of the accreting gas.)
The amount of the hot gas (about ten million Kelvin), which is bound to Sgr A*, is well constrained by X-ray observations. If this gas fed the black hole in the usual way, we would see a few orders of magnitude more radiation than we actually observe. It was therefore concluded that it spirals into the black hole faster than it can radiate the heat, because the density of the gas is low, and thus the amount that is getting fed to the black hole can be larger than we would normally infer from the amount of observed radiation. The particular details of the process, however, are still uncertain. We still do not know whether there is a radial outflow from Sgr A*; whether it has jets; what the velocity of the gas flow around it and the direction of the flow are at the various radii; whether the flow forms a disk or not; how the density and temperature of the gas and the strength of magnetic fields change with the distance from the black hole; and how much of the gas, which is too cool to emit X-rays, is present near the black hole. The last area is the subject of my own studies.
There are several unresolved questions concerning the feeding of our supermassive black hole, which EHT observations will be able to help answer. For instance, we will learn about the presence or absence of Sgr A* jets and confirm the direction of the gas flow rotation and its inclination (it was recently claimed to be face-on). Overall, it would open a completely new chapter in studying black hole physics. All in all, it is a true privilege to live in such an exciting and dynamic time for this wonderful field.
No one can predict where the deeper understanding of fundamental laws that rule this world will lead us and what doors they will open, but it is always unexpected and exciting. It is worth remembering that the study of electricity was once considered a completely impractical endeavor, which would never have any useful applications. Now we tax it. |
The merger of two neutron stars that generated gravitational waves detected last year may have led to the birth of the lowest mass black hole ever found, say scientists who analysed data from NASA's Chandra X-ray Observatory.
The data was taken in the days, weeks, and months after the detection of gravitational waves by the Laser Interferometer Gravitational Wave Observatory (LIGO) and gamma rays by NASA's Fermi mission on August 17, 2017.
While nearly every telescope observed this source, known officially as GW170817, X-rays from Chandra are critical for understanding what happened after the two neutron stars collided.
From the LIGO data astronomers have a good estimate that the mass of the object resulting from the neutron star merger is about 2.7 times the mass of the Sun.
This puts it on a tightrope of identity, implying it is either the most massive neutron star ever found or the lowest mass black hole ever found. The previous record holders for the latter are no less than about four or five times the Sun's mass.
"While neutron stars and black holes are mysterious, we have studied many of them throughout the universe using telescopes like Chandra," said Dave Pooley of Trinity University in the US, who led the study.
"That means we have both data and theories on how we expect such objects to behave in X-rays," said Pooley.
If the neutron stars merged and formed a heavier neutron star, then astronomers would expect it to spin rapidly and generate a very strong magnetic field. This, in turn, would have created an expanding bubble of high-energy particles that would result in bright X-ray emission.
Instead, the Chandra data show levels of X-rays that are a factor of a few to several hundred times lower than expected for a rapidly spinning, merged neutron star and the associated bubble of high-energy particles, implying a black hole likely formed instead.
If confirmed, this result shows that a recipe for making a black hole can sometimes be complicated. In the case of GW170817, it would have required two supernova explosions that left behind two neutron stars in a sufficiently tight orbit for gravitational wave radiation to bring the neutron stars together.
"Astronomers have long suspected that neutron star mergers would form a black hole and produce bursts of radiation, but we lacked a strong case for it until now," said Pawan Kumar of the University of Texas at Austin in the US.
A Chandra observation two to three days after the event failed to detect a source, but subsequent observations 9, 15 and 16 days after the event, resulted in detections. The source went behind the Sun soon after, but further brightening was seen in Chandra observations about 110 days after the event, followed by comparable X-ray intensity after about 160 days.
Researchers interpreted the observed X-ray emission as being due entirely to the shock wave - akin to a sonic boom from a supersonic plane - from the merger smashing into surrounding gas. There is no sign of X-rays resulting from a neutron star.
The claims by Pooley's team can be tested by future X-ray and radio observations. If the remnant turns out to be a neutron star with a strong magnetic field, then the source should get much brighter at X-ray and radio wavelengths in about a couple of years when the bubble of high energy particles catches up with the decelerating shock wave.
If it is indeed a black hole, astronomers expect it to continue to fade, as has recently been observed, as the shock wave weakens. |
Multiplication and Division Rules to help!
Multiplication and Division are opposites See the Family of Facts: 4 x 3 = 12 and 12 ÷ 3 = 4
It's all about Groups Multiplication means groups of Division means share into groups e.g. 3 x 2 means 3 groups of 2 e.g. 6 ÷ 3 means 6 shared into 3 groups
Multiplying by 0 When you multiply by 0 the answer is always 0 e.g. 4 x 0 = 0 e.g. 57,000 x 0 = 0
Multiplying by 1 When you multiply a number by 1, the answer is always the same as the number e.g. 5 x 1 = 5 e.g. 3 x 1 = 3 e.g. 35 x 1 = 35 e.g. 23,000 x 1 = 23,000
Multiplying by 2 Multiplying by 2 is the same as Doubling or adding the number to itself e.g. 5 x 2 = 10 Double 5 is 5 + 5 = 10
Dividing by 2 When you divide by 2 it is the same as halving the number e.g. 50 ÷ 2 = 25 Half of 50 is 25
Multiplying by 4 If you want to quickly multiply a number by 4, then just double it twice! e.g. 6 x 4 = 24 double 6 is 12 double 12 is 24
Multiplying by 5 The 5 times table is easy to learn, if you remember that all answers end in a 5 or a 0
Multiplying by 10 When you multiply by 10, all the digits move 1 place to the left and a 0 is added.
|
For 246 years, the damaging institution of slavery categorized Black men, women and children as property and wealth, not people. More than a century and a half after being granted their freedom, Black people are struggling to build wealth as homeowners.
Blacks have the lowest homeownership rates of all ethnic groups.
At its high in 2004, Black homeownership peaked at 49.7%, trailing well behind the record 76.2% homeownership rate for white people. The gap in rates between Black and white families is wider now than in 1960 when housing discrimination was legal.
Credit disparities across generations have been one of the barriers contributing to the Black homeownership gap.
“We know the outputs of lower credit scores, missing credit scores, fewer financial resources and lower incomes,” says Michael Neal, principal research associate in the Housing Finance Policy Center at the Urban Institute. “A part of that is economics, but what informs it is partly due to structural racism and discrimination.”
From Slavery to Sharecropping
Today’s racial wealth gap is a legacy of unequal wealth for Black and white Americans following the Civil War.
After emancipation, 4 million slaves were freed with no money, education or means of survival. Reparations were paid to slaveholders, not slaves, and the ‘40 Acres and a Mule’ proposal to grant former slaves land never materialized. Credit and its impact on wealth-building for Black families began with a system known as sharecropping, a form of indentured servitude.
Poor and in need of income, most freed Blacks were forced into sharecropping, renting and farming a portion of landowners’ property in return for a crop’s share. Landowners extended credit to the sharecroppers to purchase materials like seeds and fertilizer from them. Sharecroppers also relied on short-term credit from local merchants to pay for essentials like food and clothing.
Landowners and merchants often charged higher prices and interest rates for goods bought on credit. When it was time to sell the crop, Black farmers were often cheated. Sharecropping created a cycle of poverty and debt, preventing Blacks from being property owners and accumulating wealth.
Redlining Destroys Black Wealth
Redlining is the discriminatory practice of denying loans to creditworthy borrowers because of their race or where they live.
“An example of this is in the 1930s following the Great Depression,” says Nikitra Bailey, Executive VP of the National Fair Housing Alliance. “The New Deal’s Federal Home Owners Loan Corporation developed one of the most harmful policy decisions in housing and financial services markets by creating a system that included race as a fundamental factor in determining the desirability and value of neighborhoods.”
Neighborhoods with high Black and immigrant populations were shaded red or considered high risk. That was a signal to lenders to avoid providing loans in these areas.
“It wasn’t based on whether a person could afford a loan,” says Bailey.
Redlining prevented Black families from participating in the homeownership boom of the 1950s and 1960s. While the Fair Housing Act of 1968 made redlining illegal, the economic impact of the discriminatory practice is still felt today.
A report from the National Community Reinvestment Coalition (NCRC) says over 8 million people living in formerly redlined areas experience lower home values, higher rates of poverty, greater vacancy and abandonment and poor health outcomes.
Learn More with DiversityInc Best Practices: The Ongoing Impact of Environmental Racism and Redlining
The Subprime Mortgage Crisis
During the housing boom of the 2000s, some lenders approved high-risk mortgages to borrowers with weak credit histories.
African American borrowers were more likely to receive subprime purchase and refinance loans than comparable white borrowers, according to The Center for Responsible Lending’s research of 2004 Home Mortgage Disclosure Act data.
When the Great Recession took hold between 2007 and 2009, the housing market collapsed, job losses accelerated and interest rates began to rise. Borrowers were unable to pay their mortgages, resulting in foreclosures.
“Families living in those communities, not the people who experienced foreclosure itself, but people who lived in proximity to the foreclosure lost a trillion dollars in wealth in Black and Latino communities,” says Bailey. “Black and Latino families were disproportionately targeted in subprime loans despite data showing many qualified for homeownership on safer and more affordable terms.”
The subprime crisis had an immediate impact on Black homeownership rates. In 2010, fewer than half of African American and Hispanic families owned homes. When they do own homes, Black families buy them at least eight years later, delaying their ability to accumulate wealth.
Black Homeownership and the Racial Wealth Gap
A home is the biggest asset many will own in their lifetime. Black people being locked out of homeownership has eroded their wealth across generations.
The median white family had $184,000 in wealth in 2019 compared to $23,000 for the median Black family.
“Credit is seen as a prerequisite to homeownership,” says Megan Haberle, Senior Director of Policy at the NCRC. “Because of the lack of intergenerational wealth and other kinds of persisting economic injustice, there’s a very skewed challenge facing many Black households in building strong credit and credit scores within our existing credit system.”
Black applicants are more likely than white borrowers to be denied a mortgage, with credit history cited as the most common reason. Black individuals are more likely to have thin credit histories or be credit invisible. Black people with a credit profile have an average credit score over 40 points lower than white applicants.
“About one-third of Black adults do not have a FICO score and another one-third of Black households have FICO scores under 620,” says Jung Hyun Choi, Senior Research Associate at the Urban Institute. “It makes it difficult for Black households to access a mortgage. If they access it and become homeowners, they still have to pay higher mortgage costs.”
Learn More with DiversityInc Best Practices: Can Labor Unions Shrink the Racial Wealth Gap?
Special Purpose Credit Programs
Slavery and segregation, racism and redlining — these exclusionary policies and practices were implemented for decades, creating significant disadvantages for Black families.
Special Purpose Credit Programs (SPCPs) are one way of responding to those inequities. SPCPs are lending products designed to meet the credit needs of underserved groups subjected to discrimination.
In 2022, Wells Fargo (No. 29 on DiversityInc’s 2022 Top 50 companies for Diversity list) announced an SPCP to help minority homeowners whose mortgages are currently serviced by Wells Fargo refinance their mortgages. The company recently broadened its initial $150 million investment to include purchase loans.
“Systemic inequities in the United States have prevented too many minority families from achieving their homeownership and wealth-building goals for too long,” says Kristy Fercho, Head of Home Lending and head of Diverse Segments, Representation and Inclusion at Wells Fargo.
Also in 2022, TD Bank (No. 13 on DiversityInc’s 2022 Top 50 companies for Diversity list) introduced TD Home Access Mortgage, a new mortgage loan product designed to improve homeownership opportunities in Black and Hispanic communities.
“Our program was designed to provide a $5,000 lender credit, more flexible debt-to-income ratios and credit score requirements,” says Michael Innis-Thompson, Senior Vice President, Head of Community Lending & Development & Fair Lending Center of Excellence at TD Bank. “We’ll go as low as 620 FICO scores. It addresses the issue that Blacks and Hispanics are more likely to have credit scores below 660.”
Improving Access to Homeownership
Experts say enforcing fair housing laws is essential to improving Black homeownership and the racial wealth gap.
“Even when prospective homebuyers have qualifying credit scores, they may be excluded from homeownership based on the neighborhood that they live in. That is illegal in the face of the Fair Housing Act and the Equal Credit Opportunity Act,” says Haberle. “This enforcement is important to complement policy reforms that target the credit score models themselves.”
Implementation of alternative credit data — payment history for everyday bills like utilities or cell phones — has been cited as one way to help Black people enter the credit system and shrink the Black homeownership gap. Fannie Mae and Freddie Mac have begun allowing lenders to consider up to 12 months of rent payment history for loan applicants.
“Rental payment is underreported in the credit bureau system,” says Choi. “It will take time for that to make an impact, but those are the things happening to help households of color without good credit scores or with low credit scores have a better opportunity in accessing homeownership.”
Community Development Financial Institutions or CDFIs can also be crucial in expanding mortgage credit in Black communities. CDFIs are financial institutions that provide financial services to under-resourced and low-income communities that need access to financing.
“Part of their mission focus, which is critical, is the relationship banking method they often employ to expand access to credit,” says Neal. “It’s not just we’re gonna look at your credit score, your debt to income, some of the traditional measures associated with mortgage lending. They’re going to work to build a relationship and develop a product that fits the needs of their clients a little better.”
Neal says that shrinking the Black homeownership gap would represent great philosophical and economic progress.
“There’s something philosophical about what it means to be an American. In my mind, that’s tied closely with homeownership,” he says. “In the economic sense, the hope is that we have been able to extend the promise of homeownership to everyone who wants it. The flip side of that is — even if you own a home — if the benefits that homeownership promises do not accrue to you, then we can’t raise the flag of victory.”
READ: Addressing the Wealth Gap: Study Shows the Movement Toward Equity Has a Long Way to Go |
Chart Figures and Their Interpretation

Charts are powerful tools that provide a visual representation of data, enabling us to understand complex information quickly and easily. Whether in business, finance, or scientific research, charts offer a concise way to present data and facilitate data-driven decision-making. However, to fully utilize the potential of charts, it is crucial to understand the various figures they contain and how to interpret them accurately. In this article, we will explore common chart figures and delve into their interpretation.

- Axis Labels: Axes serve as the reference points for chart figures. The x-axis represents the horizontal scale, while the y-axis represents the vertical scale. Axis labels provide information about the units of measurement, allowing us to understand the context of the data being presented.
- Data Points: Data points are the individual values plotted on the chart. They are represented by markers such as dots, squares, or lines. Data points can represent various variables, such as sales figures, population growth rates, or experimental measurements. The position of each data point on the chart corresponds to its value on the respective axes.
- Trendlines: Trendlines are lines drawn on a chart to highlight patterns or trends within the data. They help us identify the general direction of the data and make predictions about future values. Trendlines can be linear, representing a steady increase or decrease, or nonlinear, indicating a more complex relationship between variables.
- Bar Charts: Bar charts use rectangular bars to represent data values. The length or height of each bar corresponds to the value being depicted. Bar charts are useful for comparing different categories or groups and showing the magnitude of a particular variable.
- Pie Charts: Pie charts divide a circle into sectors that represent the proportion of each category within a dataset. The size of each sector corresponds to the relative percentage or frequency of the data it represents. Pie charts are effective for displaying parts of a whole and making comparisons between different categories.
- Line Charts: Line charts use connected data points to display trends over time or continuous variables. They are ideal for showing changes in data values and identifying patterns or fluctuations. Line charts are commonly used in finance, stock market analysis, and tracking progress over time.
- Scatter Plots: Scatter plots use individual data points plotted on a graph to display the relationship between two variables. They help us identify correlations, clusters, or outliers within the data. Scatter plots are valuable in scientific research, enabling researchers to investigate the relationship between independent and dependent variables.
- Histograms: Histograms represent the distribution of a dataset by dividing it into bins or intervals along the x-axis. The height of each bar corresponds to the frequency or relative frequency of values within each bin. Histograms are useful for understanding the shape of data distributions, identifying outliers, and detecting patterns.
- Heat Maps: Heat maps use color gradients to represent values within a matrix or table. They are particularly effective in displaying large datasets and patterns within complex relationships. Heat maps are commonly used in fields such as data visualization, genetics, and geographical mapping.

When interpreting chart figures, it is essential to consider the context, scales, and labels provided.
Understanding the purpose of the chart and the variables being presented is crucial for accurate interpretation. Additionally, examining patterns, trends, outliers, and correlations within the data aids in drawing meaningful insights.

In conclusion, chart figures provide a visual representation of data, enabling efficient analysis and decision-making. By familiarizing ourselves with the common chart figures and their interpretation, we can effectively utilize charts to comprehend complex information, identify patterns, and make informed decisions based on data-driven insights. |
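Several of the chart types described above can be sketched quickly with a plotting library. Below is a minimal, illustrative example using Python's matplotlib and numpy (assumed installed); the data arrays are invented purely for demonstration and are not from any source discussed here.

```python
# Minimal illustration of several chart figures described above.
# Requires matplotlib and numpy; the data below is invented for illustration only.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = np.arange(1, 11)                          # shared x values
y = 2.0 * x + rng.normal(0, 1.5, x.size)      # noisy linear trend

fig, axes = plt.subplots(2, 2, figsize=(8, 6))

# Line chart: trends over an ordered variable (e.g., time)
axes[0, 0].plot(x, y, marker="o")
axes[0, 0].set_title("Line chart")

# Bar chart: comparing magnitudes across categories
axes[0, 1].bar(["A", "B", "C"], [5, 9, 3])
axes[0, 1].set_title("Bar chart")

# Scatter plot with a least-squares linear trendline
coeffs = np.polyfit(x, y, deg=1)
axes[1, 0].scatter(x, y)
axes[1, 0].plot(x, np.polyval(coeffs, x), color="red")
axes[1, 0].set_title("Scatter + trendline")

# Histogram: distribution of a sample
axes[1, 1].hist(rng.normal(0, 1, 500), bins=20)
axes[1, 1].set_title("Histogram")

fig.tight_layout()
plt.show()
```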
In physics, surface tension is an effect within the surface layer of a liquid that causes that layer to behave as an elastic sheet. This effect allows insects (such as the water strider) to walk on water. It allows small metal objects such as needles, razor blades, or foil fragments to float on the surface of water, and causes capillary action. Interface tension is the name of the same effect when it takes place between two liquids.
The cause of surface tension
Surface tension is caused by the attraction between the molecules of the liquid by various intermolecular forces. In the bulk of the liquid each molecule is pulled equally in all directions by neighboring liquid molecules, resulting in a net force of zero. At the surface of the liquid, the molecules are pulled inwards by other molecules deeper inside the liquid but they are not attracted as intensely by the molecules in the neighboring medium (be it vacuum, air or another liquid). Therefore, all of the molecules at the surface are subject to an inward force of molecular attraction which can be balanced only by the resistance of the liquid to compression. Thus, the liquid squeezes itself together until it has the locally lowest surface area possible.
Another way to think about it is that a molecule in contact with a neighbor is in a lower state of energy than if it weren't in contact with a neighbor. The interior molecules all have as many neighbors as they can possibly have. But the boundary molecules have fewer neighbors than interior molecules and are therefore in a higher state of energy. For the liquid to minimize its energy state, it must minimize its number of boundary molecules and therefore minimize its surface area.
As a result of this minimizing of surface area, the surface will want to assume the smoothest flattest shape it can (rigorous proof that "smooth" shapes minimize surface area relies on use of the Euler-Lagrange Equation). Since any curvature in the surface shape results in higher area, a higher energy will also result. Consequently, the surface will push back on the disturbing object in much the same way a ball pushed uphill will push back to minimize its gravitational energy.
Surface tension in everyday life
Some examples of the effects of surface tension seen with ordinary water:
- Beading of rain water on the surface of a waxed automobile. Water adheres weakly to wax and strongly to itself, so water clusters in drops. Surface tension gives them their near-spherical shape, because a sphere has the smallest possible surface area to volume ratio.
- Formation of drops occurs when a mass of liquid is stretched. The animation shows water adhering to the faucet gaining mass until it is stretched to a point where the surface tension can no longer bind it to the faucet. It then separates and surface tension forms the drop into a sphere. If a stream of water were running from the faucet, the stream would break up into drops during its fall. This is because of gravity stretching the stream, and surface tension then pinching it into spheres.
Surface tension has a big influence on other common phenomena, especially when certain substances, surfactants, are used to decrease it:
- Soap Bubbles have very large surface areas for very small masses. Bubbles cannot be formed from pure water because water has very high surface tension, but the use of surfactants can reduce the surface tension more than tenfold, making it very easy to increase its surface area.
- Colloids are a type of solution where surface tension is also very important. Oil will not spontaneously mix with water, but the presence of a surfactant provides a decrease in surface tension that allows the formation of small droplets of oil in the bulk of water (or vice versa).
Physics definition of surface tension
Surface tension is represented by the symbol σ, γ or T and is defined as the force along a line of unit length where the force is parallel to the surface but perpendicular to the line. One way to picture this is to imagine a flat soap film bounded on one side by a taut thread of length, L. The thread will be pulled toward the interior of the film by a force equal to γL. Surface tension is therefore measured in newtons per meter (N·m⁻¹), although the cgs unit of dynes per cm is normally used.
A better definition of surface tension, in order to treat its thermodynamics, is work done per unit area. As such, in order to increase the surface area of a mass of liquid an amount, δA, a quantity of work, γδA, is needed. Since mechanical systems try to find a state of minimum potential energy, a free droplet of liquid naturally assumes a spherical shape. This is because a sphere has the minimum surface area for a given volume. Therefore surface tension can be also measured in joules per square meter (J·m⁻²), or, in the cgs system, ergs per cm².
The equivalence of both units can be proven by dimensional analysis.
A related quantity is the energy of cohesion, which is the energy released when two bodies of the same liquid become joined by a boundary of unit area. Since this process involves the removal of a unit area of surface from each of the two bodies of liquid, the energy of cohesion is equal to twice the surface energy. A similar concept, the energy of adhesion, applies to two bodies of different liquids. Energy of adhesion is linked to the surface tension of an interface between two liquids.
See also Cassie's law.
Water strider physics
The photograph shows water striders standing on the surface of a pond. It is clearly visible that its feet cause indentations in the water's surface. And it is intuitively evident that the surface with indentations has more surface area than a flat surface. If surface tension tends to minimize surface area, how is it that the water striders are increasing the surface area?
Recall that what nature really tries to minimize is potential energy. By increasing the surface area of the water, the water striders have increased the potential energy of that surface. But note also that the water striders' center of mass is lower than it would be if they were standing on a flat surface. So their potential energy is decreased. Indeed when you combine the two effects, the net potential energy is minimized. If the water striders depressed the surface any more, the increased surface energy would more than cancel the decreased energy of lowering the insects' center of mass. If they depressed the surface any less, their higher center of mass would more than cancel the reduction in surface energy.
The photo of the water striders also illustrates the notion of surface tension being like having an elastic film over the surface of the liquid. In the surface depressions at their feet it is easy to see that the reaction of that imagined elastic film is exactly countering the weight of the insects.
Liquid in a vertical tube
An old style mercury barometer consists of a vertical glass tube about 1 cm in diameter partially filled with mercury, and with a vacuum in the unfilled volume (see diagram to the right). Notice that the mercury level at the center of the tube is higher than at the edges, making the upper surface of the mercury dome-shaped. The center of mass of the entire column of mercury would be slightly lower if the top surface of the mercury were flat over the entire cross-section of the tube. But the dome-shaped top gives slightly less surface area to the entire mass of mercury. Again the two effects combine to minimize the total potential energy. Such a surface shape is known as a convex meniscus.
The reason people consider the surface area of the entire mass of mercury, including the part of the surface that is in contact with the glass, is because mercury does not adhere at all to glass. So the surface tension of the mercury acts over its entire surface area, including where it is in contact with the glass. If instead of glass, the tube were made out of copper, the situation would be very different. Mercury aggressively adheres to copper. So in a copper tube, the level of mercury at the center of the tube will be lower rather than higher than at the edges (that is, it would be a concave meniscus). In a situation where the liquid adheres to the walls of its container, we consider the part of the fluid's surface area that is in contact with the container to have negative surface tension. The fluid then works to maximize the contact surface area. So in this case increasing the area in contact with the container decreases rather than increases the potential energy. That decrease is enough to compensate for the increased potential energy associated with lifting the fluid near the walls of the container.
The angle of contact of the surface of the liquid with the wall of the container can be used to determine the surface tension of the liquid-solid interface provided that the surface tension of the liquid-air interface is known. The relationship is given by:

\[ \gamma_{ls} = -\gamma_{la}\cos\theta \]

where
- \(\gamma_{ls}\) is the liquid-solid surface tension,
- \(\gamma_{la}\) is the liquid-air surface tension,
- \(\theta\) is the contact angle, where a concave meniscus has contact angle less than 90° and a convex meniscus has contact angle of greater than 90°.
If a tube is sufficiently narrow and the liquid adhesion to its walls is sufficiently strong, surface tension can draw liquid up the tube in a phenomenon known as capillary action. The height the column is lifted to is given by:

\[ h = \frac{2\gamma_{la}\cos\theta}{\rho g r} \]

where
- \(h\) is the height the liquid is lifted,
- \(\gamma_{la}\) is the liquid-air surface tension,
- \(\rho\) is the density of the liquid,
- \(r\) is the radius of the capillary,
- \(g\) is the acceleration of gravity,
- \(\theta\) is the angle of contact described above. Note that if \(\theta\) is greater than 90°, as with mercury in a glass container, the liquid will be depressed rather than lifted. (A worked numerical sketch follows below.)
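As a rough numerical check of the capillary-rise formula, the short Python sketch below computes h for water in a narrow glass tube. The surface tension is taken from the table later in this article; the contact angle, tube radius, and density are assumed values chosen only for illustration.

```python
import math

# Capillary rise: h = 2 * gamma * cos(theta) / (rho * g * r)
gamma = 0.0729   # N/m, water-air surface tension near 20 C (from the table below)
theta = 0.0      # radians; assumed near-zero contact angle for clean glass
rho   = 998.0    # kg/m^3, density of water (assumed)
g     = 9.8      # m/s^2
r     = 0.5e-3   # m, assumed capillary radius (0.5 mm)

h = 2 * gamma * math.cos(theta) / (rho * g * r)
print(f"predicted rise: {h * 1000:.1f} mm")   # roughly 30 mm for these assumed values
```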
Pool of liquid on a nonadhesive surface
Pouring mercury onto a horizontal flat sheet of glass results in a puddle that has a perceptible thickness (do not try this except under a fume hood. Mercury vapor is a toxic hazard). The puddle will spread out only to the point where it is a little under half a centimeter thick, and no thinner. Again this is due to the action of mercury's strong surface tension. The liquid mass flattens out because that brings as much of the mercury to as low a level as possible. But the surface tension, at the same time, is acting to reduce the total surface area. The result is the compromise of a puddle of a nearly fixed thickness.
The same surface tension demonstration can be done with water, but only on a surface made of a substance that the water does not adhere to. Wax is such a substance. Water poured onto a smooth, flat, horizontal wax surface, say a waxed sheet of glass, will behave similarly to the mercury poured onto glass.
The thickness of a puddle of liquid on a nonadhesive horizontal surface is given by

\[ h = 2\sqrt{\frac{\gamma}{g\rho}} \]

where
- \(h\) is the depth of the puddle in centimeters or meters,
- \(\gamma\) is the surface tension of the liquid in dynes per centimeter or newtons per meter,
- \(g\) is the acceleration due to gravity and is equal to 980 cm/s² or 9.8 m/s²,
- \(\rho\) is the density of the liquid in grams per cubic centimeter or kilograms per cubic meter.

For mercury, \(\gamma \approx 487\) dyn/cm and \(\rho \approx 13.5\) g/cm³, which gives \(h \approx 0.38\) cm. For water at 25 °C, \(\gamma \approx 72\) dyn/cm and \(\rho \approx 1.0\) g/cm³, which gives \(h \approx 0.54\) cm.
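The two puddle depths quoted above can be reproduced with a few lines of Python. The surface tensions come from the table later in this article and the densities are standard handbook values, so treat the outputs as approximate illustrations rather than measured data.

```python
import math

def puddle_depth_cm(gamma_dyn_per_cm, rho_g_per_cm3, g=980.0):
    """Depth h = 2*sqrt(gamma/(g*rho)) of a puddle on a nonadhesive surface (cgs units)."""
    return 2.0 * math.sqrt(gamma_dyn_per_cm / (g * rho_g_per_cm3))

print(f"mercury on glass: {puddle_depth_cm(487.0, 13.5):.2f} cm")  # about 0.38 cm
print(f"water on wax:     {puddle_depth_cm(72.0, 1.0):.2f} cm")    # about 0.54 cm
```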
In reality, the thicknesses of the puddles will be slightly less than these calculated values. This is due to the fact that surface tension of the mercury-glass interface is slightly less than that of the mercury-air interface. Likewise, the surface tension of the water-wax interface is less than that of the water-air interface. The contact angle, as described in the previous subsection, determines by how much the puddle thickness is reduced from the theoretical.
Liquid surfaces as minimization solver
To find the shape of the minimal surface bounded by some arbitrary shaped frame using strictly mathematical means can be a daunting task. Yet by fashioning the frame out of wire and dipping it in soap-solution, an approximately minimal surface will appear in the resulting soap-film within seconds. Without a single calculation, the soap-film arrives at a solution to a complex minimization equation on its own.
- Du Noüy Ring method: The traditional method used to measure surface or interfacial tension. Wetting properties of the surface or interface have little influence on this measuring technique. Maximum pull exerted on the ring by the surface is measured.
- Wilhelmy plate method: A universal method especially suited to check surface tension over long time intervals. A vertical plate of known perimeter is attached to a balance, and the force due to wetting is measured.
- Spinning drop method: This technique is ideal for measuring low interfacial tensions. The diameter of a drop within a heavy phase is measured while both are rotated.
- Pendant drop method: Surface and interfacial tension can be measured by this technique, even at elevated temperatures and pressures. Geometry of a drop is analyzed optically.
- Bubble pressure method (Jaeger's method): A measurement technique for determining surface tension at short surface ages. Maximum pressure of each bubble is measured.
- Drop volume method: A method for determining interfacial tension as a function of interface age. Liquid of one density is pumped into a second liquid of a different density and time between drops produced is measured.
- Capillary rise method: The end of a capillary is immersed into the solution. The height at which the solution reaches inside the capillary is related to the surface tension by the previously discussed equation.
- Stalagmometric method: A method of weighing and reading a drop of liquid.
Surface tension and thermodynamics
As stated above, the mechanical work needed to increase a surface is \(\delta W = \gamma\,\delta A\). For a reversible process, \(\delta W = dG\), therefore at constant temperature and pressure, surface tension equals Gibbs free energy per surface area:

\[ \gamma = \left(\frac{\partial G}{\partial A}\right)_{T,P} \]

where \(G\) is the Gibbs free energy and \(A\) is the area.
Influence of temperature on surface tension
Surface tension depends on temperature; for that reason, when a value is given for the surface tension of an interface, temperature must be explicitly stated. The general trend is that surface tension decreases with the increase of temperature, reaching a value of 0 at the critical temperature. There are only empirical equations to relate surface tension and temperature.
Influence of solute concentration on surface tension
Solutes can have different effects on surface tension depending on their structure:
- No effect, for example sugar
- Increase of surface tension, inorganic salts
- Decrease surface tension progressively, alcohols
- Decrease surface tension and, once a minimum is reached, no more effect: Surfactants
Pressure jump across a curved surface
If viscous forces are absent, the pressure jump across a curved surface is given by the Young-Laplace equation, which relates the pressure inside a liquid to the pressure outside it, the surface tension and the geometry of the surface:

\[ \Delta p = \gamma\left(\frac{1}{R_x} + \frac{1}{R_y}\right) \]

This equation can be applied to any surface:
- For a flat surface, \(R_x = R_y = \infty\), so \(\Delta p = 0\) and the pressure inside is the same as the pressure outside.
- For a spherical surface of radius \(R\), \(\Delta p = \dfrac{2\gamma}{R}\).
- For a toroidal surface, \(\Delta p = \gamma\left(\dfrac{1}{r} + \dfrac{1}{R}\right)\), where r and R are the radii of the toroid.
The table shows an example of how the pressure increases: for all but the smallest drops the effect is subtle, but the pressure difference becomes enormous when the drop sizes approach the molecular scale (a drop with a 1 nm radius contains approximately 100 water molecules). This can be attributed to the fact that at a very small scale the laws of continuum physics can no longer be applied.
ΔP for water drops of different radii at STP (droplet radii: 1 mm, 0.1 mm, 1 μm, 10 nm).
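For a spherical drop, the relation Δp = 2γ/R makes it easy to fill in rough numbers for the droplet radii listed above. The sketch below uses the water-air surface tension from the table later in this article, so the printed values are approximate calculations rather than figures quoted from the original table.

```python
# Pressure jump across a spherical water drop: dp = 2*gamma/R
gamma = 0.072  # N/m, water-air surface tension near room temperature (from the table below)
radii = {"1 mm": 1e-3, "0.1 mm": 1e-4, "1 um": 1e-6, "10 nm": 1e-8}

for label, R in radii.items():
    dp = 2 * gamma / R                                   # pressure jump in pascals
    print(f"R = {label:>6}: dp = {dp:10.3g} Pa  ({dp / 101325:.4g} atm)")
```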
Influence of particle size on vapor pressure
Starting from the Clausius-Clapeyron relation, Kelvin Equation II can be obtained; it explains that, because of surface tension, the vapor pressure of small droplets of liquid in suspension is greater than the standard vapor pressure of that same liquid when the interface is flat. That is to say, when a liquid forms small droplets, the concentration of its vapor in the surroundings is greater; this is because the pressure inside the droplet is greater than outside:

\[ P_v = P_v^{0}\,\exp\!\left(\frac{2\gamma V}{r_k R T}\right) \]

where
- \(P_v^{0}\) is the standard vapor pressure for that liquid at that temperature and pressure,
- \(V\) is the molar volume,
- \(R\) is the gas constant,
- \(r_k\) is the Kelvin radius, the radius of the droplets,
- \(\gamma\) is the surface tension and \(T\) the absolute temperature.
This equation is used in catalyst chemistry to assess mesoporosity for solids.
The table shows some calculated values of this effect for water at different drop sizes:
P/P0 for water drops of different radii at STP (droplet radii: 1000 nm, 100 nm, 10 nm, 1 nm).
The effect becomes clear for very small drop sizes, as a drop of 1 nm radius has about 100 molecules inside, which is a quantity small enough to require a quantum mechanical analysis.
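A quick way to get a feel for the Kelvin-equation numbers is to evaluate the exponential directly. The sketch below uses assumed handbook values for water at room temperature, so the ratios it prints are indicative only and are not taken from the original table.

```python
import math

# Kelvin equation for small droplets: P/P0 = exp(2*gamma*Vm / (r * R * T))
gamma = 0.072     # N/m, water-air surface tension (assumed, near room temperature)
Vm    = 18e-6     # m^3/mol, molar volume of liquid water (assumed)
Rgas  = 8.314     # J/(mol K), gas constant
T     = 298.0     # K, assumed temperature

for r_nm in (1000, 100, 10, 1):
    r = r_nm * 1e-9
    ratio = math.exp(2 * gamma * Vm / (r * Rgas * T))
    print(f"r = {r_nm:>4} nm: P/P0 = {ratio:.3f}")
```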
Surface tension values
Surface tension values for some interfaces:

| Interface | Temperature | γ (mN·m⁻¹) |
|---|---|---|
| Water - air | 20 °C | 72.86±0.05 |
| Water - air | 21.5 °C | 72.75 |
| Water - air | 25 °C | 71.99±0.05 |
| Methylene iodide - air | 20 °C | 67.00 |
| Methylene iodide - air | 21.5 °C | 63.11 |
| Ethylene glycol - air | 25 °C | 47.3 |
| Ethylene glycol - air | 40 °C | 46.3 |
| Dimethyl sulfoxide - air | 20 °C | 43.54 |
| Propylene carbonate - air | 20 °C | 41.1 |
| Benzene - air | 20 °C | 28.88 |
| Benzene - air | 30 °C | 27.56 |
| Toluene - air | 20 °C | 28.52 |
| Chloroform - air | 25 °C | 26.67 |
| Propionic acid - air | 20 °C | 26.69 |
| Butyric acid - air | 20 °C | 26.51 |
| Carbon tetrachloride - air | 25 °C | 26.43 |
| Butyl acetate - air | 20 °C | 25.09 |
| Diethylene Glycol - air | 20 °C | 30.09 |
| Nonane - air | 20 °C | 22.85 |
| Methanol - air | 20 °C | 22.50 |
| Ethanol - air | 20 °C | 22.39 |
| Ethanol - air | 30 °C | 21.55 |
| Octane - air | 20 °C | 21.62 |
| Heptane - air | 20 °C | 20.14 |
| Ether - air | 25 °C | 20.14 |
| Mercury - air | 20 °C | 486.5 |
| Mercury - air | 25 °C | 485.5 |
| Mercury - air | 30 °C | 484.5 |
| NaCl - air | 1073 °C | 115 |
| KClO3 - air | 20 °C | 81 |
| Water - 1-Butanol | 20 °C | 1.8 |
| Water - Ethyl acetate | 20 °C | 6.8 |
| Water - Heptanoic acid | 20 °C | 7.0 |
| Water - Benzaldehyde | 20 °C | 15.5 |
| Water - Mercury | 20 °C | 415 |
| Ethanol - Mercury | 20 °C | 389 |

Surface tension values for some interfaces at the indicated temperatures. Note that the SI units millinewtons per meter (mN·m⁻¹) are equivalent to the cgs units, dynes per centimeter (dyn·cm⁻¹).
- Contact angle, the angle the surface makes with the wall of a container.
- Cheerios effect, the tendency for small wettable floating objects to attract one another
- Water striders, insects that rely on the surface tension of water to walk on top of it
- Wetting and dewetting
- Meniscus, surface curvature formed by a liquid in a container
- Tolman length, leading term in correcting the surface tension for curved surfaces
- Surfactants, substances which reduce surface tension
- Eötvös rule, a rule for predicting surface tension dependent on temperature
- The Dortmund Data Bank contains experimental temperature-dependent surface tensions
- Harvey E. White, Modern College Physics (van Nostrand, 1948).
- MIT, MIT Lecture Notes on Surface Tension, lecture 5. Retrieved April 1, 2007.
- MIT, Lecture Notes on Surface Tension, lecture 1. Retrieved April 1, 2007.
- MIT, MIT Lecture Notes on Surface Tension, lecture 3. Retrieved April 1, 2007.
- Francis Weston Sears and Mark W. Zemanski, University Physics, 2nd ed. (Addison Wesley, 1955).
- Scott Aaronson, NP-Complete Problems and physical reality. Retrieved November 14, 2008.
- Sir Horace Lamb, Hydrodynamics, 6th ed. (Dover, 1932).
- G. Ertl, H. Knözinger, and J. Weitkamp, Handbook of Heterogeneous Catalysis, Vol. 2 (Weinheim: Wiley-VCH, 1997).
- N.R. Pallas and Y. Harrison, Colloids and Surfaces 43 (1990): 169–194.
- A. W. Adamson and A. P. Gast, Physical Chemistry of Surfaces, 6th ed. (Wiley, 1997).
References
- Adamson, Arthur W., and Alice P. Gast. Physical Chemistry of Surfaces, 6th ed. New York: John Wiley, 1997. ISBN 0471148733.
- Lamb, Sir Horace. Hydrodynamics, 6th ed. Dover, 1932.
- Savino, Raffaele. Surface Tension-Driven Flows and Applications. Research Signpost, 2006. ISBN 8130800659.
- Sears, Francis Weston, and Mark W. Zemanski. University Physics, 2nd ed. Addison Wesley, 1955.
- Venables, John A. Introduction to Surface and Thin Film Processes. Cambridge, UK: Cambridge University Press, 2000. ISBN 0521785006.
- White, Harvey E. Modern College Physics. van Nostrand, 1948.
- Zangwill, Andrew. Physics at Surfaces. Cambridge, UK: Cambridge University Press, 2001. ISBN 0521347521.
All links retrieved February 26, 2023.
- On surface tension and interesting real-world cases
- Surface tension values of some common test liquids for surface energy analysis
|
Ohm’s Law Mitsuko J. Osugi Physics 409D Winter 2004 UBC Physics Outreach
Ohm’s Law Current through an ideal conductor is proportional to the applied voltage –Conductor is also known as a resistor –An ideal conductor is a material whose resistance does not change with temperature For an ohmic device, V = IR, where V = Voltage (Volts = V), I = Current (Amperes = A), R = Resistance (Ohms = Ω)
Current and Voltage Defined Conventional Current: (the current in electrical circuits) Flow of current from positive terminal to the negative terminal. - has units of Amperes (A) and is measured using ammeters. Voltage: Energy required to move a charge from one point to another. - has units of Volts (V) and is measured using voltmeters. Think of voltage as what pushes the electrons along in the circuit, and current as a group of electrons that are constantly trying to reach a state of equilibrium.
Ohmic Resistors Metals obey Ohm’s Law linearly so long as their temperature is held constant –Their resistance values do not fluctuate with temperature i.e. the resistance for each resistor is a constant Most ohmic resistors will behave non-linearly outside of a given range of temperature, pressure, etc.
Voltage and Current Relationship for Linear Resistors Voltage and current are linear when resistance is held constant.
We’ve now looked at how basic electrical circuits work with resistors that obey Ohm’s Law linearly. We understand quantitatively how these resistors work using the relationship V=IR, but lets see qualitatively using light bulbs.
The Light Bulb and its Components Has two metal contacts at the base which connect to the ends of an electrical circuit The metal contacts are attached to two stiff wires, which are attached to a thin metal filament. The filament is in the middle of the bulb, held up by a glass mount. The wires and the filament are housed in a glass bulb, which is filled with an inert gas, such as argon.
Light bulbs and Power Power dissipated by a bulb relates to the brightness of the bulb. The higher the power, the brighter the bulb. Power is measured in Watts [W] For example, think of the bulbs you use at home. The 100W bulbs are brighter than the 50W bulbs.
Bulbs in series experiment One bulb connected to the batteries. Add another bulb to the circuit in series. Q: When the second bulb is added, will the bulbs become brighter, dimmer, or not change? We can use Ohm’s Law to approximate what will happen in the circuit in theory:
Bulbs in parallel experiment One bulb connected to the batteries. Add a second bulb to the circuit in parallel. Q: What happens when the second bulb is added? We can use Ohm’s Law to approximate what will happen in the circuit:
Light bulbs are not linear The resistance of light bulbs increases with temperature. The filaments of light bulbs are made of Tungsten, which is a very good conductor. It heats up easily.
As light bulbs warm up, their resistance increases. If the current through them remains constant: They glow slightly dimmer when first plugged in. Why? The bulbs are cooler when first plugged in, so their resistance is lower. As they heat up, their resistance increases; since I remains constant, P = I²R increases. Most ohmic resistors will behave non-linearly outside of a given range of temperature, pressure, etc.
Voltage versus Current for Constant Resistance The light bulb does not have a linear relationship. The resistance of the bulb increases as the temperature of the bulb increases.
“Memory Bulbs” Experiment Touch each bulb in succession with the wire, each time completing the series circuit Q:What is going to happen? Pay close attention to what happens to each of the bulbs as I close each circuit.
“Memory Bulbs” Continued… Filaments stay hot after having been turned off. In series, the current through each resistor is the same –the smallest resistance (coolest bulb) has the least power dissipation, therefore it is the dimmest bulb. How did THAT happen?? Temperature of a bulb increases, so its resistance increases, so its power dissipation (brightness) increases.
Conclusion Ohmic resistors obey Ohm’s Law linearly Resistance is affected by temperature. The resistance of a conductor increases as its temperature increases. Light bulbs do not obey Ohm’s Law linearly –As their temperature increases, the power dissipated by the bulb increases i.e. They are brighter when they are hotter
Your turn to do some experiments! Now you get to try some experiments of your own, but first, a quick tutorial on the equipment you will be using
The equipment you’ll be using: - Voltmeter - Breadboard - Resistors - 9V battery Let’s do a quick review…
How to use a voltmeter: Voltmeter: - connect either end of the meter to each side of the resistor If you are reading a negative value, you have the probes switched. There should be no continuity beeping. If you hear beeping, STOP what you are doing and ask someone for help!
Voltage: Probes connect to either side of the resistor Measuring Voltage
Breadboards You encountered breadboards early in the year. Let’s review them: The breadboard How the holes on the top of the board are connected:
Series Resistors are connected such that the current can only take one path
Parallel Resistors are connected such that the current can take multiple paths
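Before collecting real measurements, it can help to compute what the ideal, linear Ohm's Law model predicts for the two layouts just described. The Python sketch below assumes a 9 V battery and two identical 100-ohm ideal resistors; as the slides point out, real bulbs will deviate because their resistance changes with temperature.

```python
# Ideal Ohm's Law predictions for two resistors in series and in parallel.
# Assumed values: a 9 V battery and two identical 100-ohm resistors.
V = 9.0          # volts
R1 = R2 = 100.0  # ohms

# Series: one path, resistances add
R_series = R1 + R2
I_series = V / R_series             # same current flows through both resistors
P_each_series = I_series**2 * R1    # power dissipated by each resistor

# Parallel: multiple paths, reciprocals of resistance add
R_parallel = 1.0 / (1.0 / R1 + 1.0 / R2)
I_branch = V / R1                   # current through each branch
P_each_parallel = V**2 / R1         # power dissipated by each resistor

print(f"series:   R={R_series:.0f} ohm, I={I_series * 1000:.0f} mA, "
      f"P per resistor={P_each_series:.3f} W")
print(f"parallel: R={R_parallel:.0f} ohm, branch I={I_branch * 1000:.0f} mA, "
      f"P per resistor={P_each_parallel:.3f} W")
```

With these assumed numbers each parallel resistor dissipates four times the power of each series resistor, which is the quantitative version of why two bulbs in parallel glow brighter than two in series.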
Real data In reality, the data we get is not the same as what we get in theory. Why? Because when we calculate numbers in theory, we are dealing with an ideal system. In reality there are sources of error in every aspect, which make our numbers imperfect. |
In statistical surveys, when subpopulations within an overall population vary, it could be advantageous to sample each subpopulation (stratum) independently. Stratification is the process of dividing members of the population into homogeneous subgroups before sampling. The strata should define a partition of the population. That is, it should be collectively exhaustive and mutually exclusive: every element in the population must be assigned to one and only one stratum. Then simple random sampling or systematic sampling is applied within each stratum. The objective is to improve the precision of the sample by reducing sampling error. It can produce a weighted mean that has less variability than the arithmetic mean of a simple random sample of the population.
Assume that we need to estimate the average number of votes for each candidate in an election. Assume that a country has 3 towns: Town A has 1 million factory workers, Town B has 2 million office workers and Town C has 3 million retirees. We can choose to get a random sample of size 60 over the entire population but there is some chance that the resulting random sample is poorly balanced across these towns and hence is biased, causing a significant error in estimation. Instead if we choose to take a random sample of 10, 20 and 30 from Town A, B and C respectively, then we can produce a smaller error in estimation for the same total sample size. This method is generally used when a population is not a homogeneous group.
Stratified sampling strategies
- Proportionate allocation uses a sampling fraction in each of the strata that is proportional to that of the total population. For instance, if the population consists of X total individuals, m of which are male and f female (and where m + f = X), then the relative size of the two samples (x1 = m/X males, x2 = f/X females) should reflect this proportion.
- Optimum allocation (or disproportionate allocation) - The sampling fraction of each stratum is proportionate to both the proportion (as above) and the standard deviation of the distribution of the variable. Larger samples are taken in the strata with the greatest variability to generate the least possible overall sampling variance.
A real-world example of using stratified sampling would be for a political survey. If the respondents needed to reflect the diversity of the population, the researcher would specifically seek to include participants of various minority groups such as race or religion, based on their proportionality to the total population as mentioned above. A stratified survey could thus claim to be more representative of the population than a survey of simple random sampling or systematic sampling.
The reasons to use stratified sampling rather than simple random sampling include
- If measurements within strata have lower standard deviation, stratification gives smaller error in estimation.
- For many applications, measurements become more manageable and/or cheaper when the population is grouped into strata.
- It is often desirable to have estimates of population parameters for groups within the population.
If the population density varies greatly within a region, stratified sampling will ensure that estimates can be made with equal accuracy in different parts of the region, and that comparisons of sub-regions can be made with equal statistical power. For example, in Ontario a survey taken throughout the province might use a larger sampling fraction in the less populated north, since the disparity in population between north and south is so great that a sampling fraction based on the provincial sample as a whole might result in the collection of only a handful of data from the north.
Stratified sampling is not useful when the population cannot be exhaustively partitioned into disjoint subgroups. It would be a misapplication of the technique to make subgroups' sample sizes proportional to the amount of data available from the subgroups, rather than scaling sample sizes to subgroup sizes (or to their variances, if known to vary significantly—e.g. by means of an F Test). Data representing each subgroup are taken to be of equal importance if suspected variation among them warrants stratified sampling. If subgroup variances differ significantly and the data needs to be stratified by variance, it is not possible to simultaneously make each subgroup sample size proportional to subgroup size within the total population. For an efficient way to partition sampling resources among groups that vary in their means, variance and costs, see "optimum allocation". The problem of stratified sampling in the case of unknown class priors (ratio of subpopulations in the entire population) can have deleterious effect on the performance of any analysis on the dataset, e.g. classification. In that regard, minimax sampling ratio can be used to make the dataset robust with respect to uncertainty in the underlying data generating process.
Combining sub-strata to ensure adequate numbers can lead to Simpson's paradox, where trends that actually exist in different groups of data disappear or even reverse when the groups are combined.
Mean and standard error

The stratified estimate of the mean and the variance of that estimator are given by:

\[ \bar{x}_s = \frac{1}{N}\sum_{h=1}^{L} N_h\,\bar{x}_h \]

\[ s_{\bar{x}_s}^2 = \sum_{h=1}^{L} \left(\frac{N_h}{N}\right)^2 \left(\frac{N_h - n_h}{N_h}\right)\frac{s_h^2}{n_h} \]

where
- \(L\) is the number of strata,
- \(N\) is the sum of all stratum sizes,
- \(N_h\) is the size of stratum \(h\),
- \(\bar{x}_h\) is the sample mean of stratum \(h\),
- \(n_h\) is the number of observations in stratum \(h\),
- \(s_h\) is the sample standard deviation of stratum \(h\).

Note that the term \((N_h - n_h)/N_h\), which equals \((1 - n_h/N_h)\), is a finite population correction and must be expressed in "sample units". Foregoing the finite population correction gives:

\[ s_{\bar{x}_s}^2 = \sum_{h=1}^{L} W_h^2\,\frac{s_h^2}{n_h} \]

where \(W_h = N_h/N\) is the population weight of stratum \(h\).
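A minimal Python sketch of these estimators is shown below; it uses only the standard library, and the stratum sizes, sample sizes, means, and standard deviations are invented numbers for illustration only.

```python
import math

# Stratified estimate of a population mean and its standard error.
# Each stratum: (N_h population size, n_h sample size, xbar_h sample mean, s_h sample std dev)
# The numbers below are invented for illustration.
strata = [
    (1_000_000, 10, 0.45, 0.10),   # e.g. Town A
    (2_000_000, 20, 0.55, 0.12),   # e.g. Town B
    (3_000_000, 30, 0.60, 0.08),   # e.g. Town C
]

N = sum(N_h for N_h, _, _, _ in strata)

# Weighted mean: xbar = (1/N) * sum(N_h * xbar_h)
xbar = sum(N_h * m for N_h, _, m, _ in strata) / N

# Variance of the estimator, with finite population correction (N_h - n_h)/N_h
var = sum(
    (N_h / N) ** 2 * ((N_h - n_h) / N_h) * (s_h ** 2) / n_h
    for N_h, n_h, _, s_h in strata
)

print(f"stratified mean = {xbar:.4f}, standard error = {math.sqrt(var):.4f}")
```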
Sample size allocation
- male, full-time: 90
- male, part-time: 18
- female, full-time: 9
- female, part-time: 63
- total: 180
and we are asked to take a sample of 40 staff, stratified according to the above categories.
The first step is to calculate the percentage of each group of the total.
- % male, full-time = 90 ÷ 180 = 50%
- % male, part-time = 18 ÷ 180 = 10%
- % female, full-time = 9 ÷ 180 = 5%
- % female, part-time = 63 ÷ 180 = 35%
This tells us that of our sample of 40,
- 50% (20 individuals) should be male, full-time.
- 10% (4 individuals) should be male, part-time.
- 5% (2 individuals) should be female, full-time.
- 35% (14 individuals) should be female, part-time.
Another easy way without having to calculate the percentage is to multiply each group size by the sample size and divide by the total population size (size of entire staff):
- male, full-time = 90 × (40 ÷ 180) = 20
- male, part-time = 18 × (40 ÷ 180) = 4
- female, full-time = 9 × (40 ÷ 180) = 2
- female, part-time = 63 × (40 ÷ 180) = 14
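The same arithmetic can be written as a small helper function. The group names and counts below simply restate the staff example above; the rounding step is an assumption, since real surveys handle fractional allocations in various ways.

```python
# Proportionate allocation: each stratum's share of the sample mirrors its share of the population.
def proportional_allocation(group_sizes, sample_size):
    total = sum(group_sizes.values())
    return {name: round(size * sample_size / total) for name, size in group_sizes.items()}

staff = {
    "male, full-time": 90,
    "male, part-time": 18,
    "female, full-time": 9,
    "female, part-time": 63,
}

print(proportional_allocation(staff, 40))
# {'male, full-time': 20, 'male, part-time': 4, 'female, full-time': 2, 'female, part-time': 14}
```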
- Botev, Z.; Ridder, A. (2017). "Variance Reduction". Wiley StatsRef: Statistics Reference Online: 1–6. doi:10.1002/9781118445112.stat07975.
- "6.1 How to Use Stratified Sampling | STAT 506". onlinecourses.science.psu.edu. Retrieved 2015-07-23.
- Shahrokh Esfahani, Mohammad; Dougherty, Edward R. (2014). "Effect of separate sampling on classification accuracy". Bioinformatics. 30 (2): 242–250. doi:10.1093/bioinformatics/btt662. PMID 24257187.
- Hunt, Neville; Tyrrell, Sidney (2001). "Stratified Sampling". Webpage at Coventry University. Archived from the original on 13 October 2013. Retrieved 12 July 2012.
- Särndal, Carl-Erik; et al. (2003). "Stratified Sampling". Model Assisted Survey Sampling. New York: Springer. pp. 100–109. ISBN 0-387-40620-4. |
Mathematical induction assignment
Induction is a proof technique that is useful for proving statements that deal with an infinite number of items in a countable infinity such as integers. For example, consider the following problem:
Prove that for every integer N >= 1, The sum from 1 to N == (N*(N+1)) / 2
How can we prove such a statement is true? It makes a statement about an infinite (countably infinite) number of possibilities. We certainly could not, in finite time, solve each of the infinite equations individually. That is, though we could begin plugging values in for N such as
P(1) 1 = (1*2)/ 2 is true
P(2) 1+2 = (2*3)/2 = 3 is true
P(3) 1+2+3 = (3*4)/2 = 6 is true
And so on…..
but since there is an infinite set of values that N can take on we would never be able to stop this process and KNOW for certain that there is not an exception somewhere down the line.
NOTE – to disprove statements such as this is a trivial task. That is, since the statement says that the equation holds for ALL possible values of N, all we need to do to disprove it is produce a single example where it fails. So, for example if the problem said “For all integers N >= 1, N divided by 2 produces a remainder of 0” – we could easily disprove it by taking a specific value of N for which the statement is false. For example N = 5 (or any odd integer).
However, we cannot prove such a statement true simply by looking at a single example! If we could, then we could have proved the previous statement as true (which it isn’t) by pointing out that if N = 8 then it does have a 0 remainder when divided by 2. Although this is obviously incorrect, students often attempt to prove statements about countably infinite sets in such a manner – it is known as an “Attempt to prove by example” and it is simply incorrect and causes math and science professors to have strokes and give poor grades.
The solution to our problem is known as mathematical induction or simply “induction”. It is a fairly simple process to understand though it can, at times, appear to be “proof by magic”. To understand how it works, consider an infinite collection of dominoes lined up one after the other. To show that we can knock them all over we would begin by directly proving that we can knock over the first domino. By directly proving, we mean that we would use any of the proof techniques studied earlier to show with certainty that the first domino can be knocked over.
The next thing we do is to ASSUME that an arbitrary domino later in the list (Let’s call it domino k) can be knocked down and then use this assumption to prove directly that domino k+1 must also fall. IF we can use our assumption that domino k falls to prove domino k+1 must also fall, then we can combine this fact with the fact that we can knock over the first to prove that all of the dominoes must fall.
Why? Because, we have shown that we can directly knock over the first and if we let k = 1 then we know k+1 = 2 must fall – BUT that means if k = 2 then k+1 = 3 must fall as well and so on and so on.
To see how this would work on our initial problem we will examine an inductive proof.
Problem : For every integer N >= 1, The sum from 1 to N == (N*(N+1)) / 2
We begin our inductive proof by proving what is known as the “base case”. This is equivalent to our concept of proving that we can knock down our first domino. Since our proof indicates that we are solving for all integers >= 1, our base case requires that we prove the problem statement with N == 1. This is usually trivial in most cases, as it is with this problem. We simply plug in 1 for N and easily show that the equation holds.
P(1): 1 = (2*1)/2 = 2/2 = 1. BASE CASE
The next step in an inductive proof is to ASSUME that we can solve our problem for some arbitrary value of N (we often refer to it as the value k) and then show that this assumption can be used to prove that the statement holds for the value k+1 as well. This assumption is known as the “inductive hypothesis”.
P(k): ASSUME that 1+2+3+…+k = (k*(k+1))/2. INDUCTIVE HYPOTHESIS
Next, we must show that P(k+1) is true by using our inductive hypothesis at some point. This is known as the “inductive step”. Note that it is essential that we prove P(k+1) using our inductive hypothesis. This has the effect of showing that if one domino falls then the next one will necessarily fall as well. We simply plug in the value k+1 for N in our problem and use standard proof techniques/rules to solve the problem BUT note that we can use our assumption of the validity of P(k).
Show P(k+1). That is prove that 1+2+3+…+k+(k+1) = ((k+1)*(k+2))/2
1+2+…+k+(k+1) = [1+2+….+k] + (k+1) by Associativity of addition
= (k*(k+1))/2 + (k+1) by our Inductive Hypothesis!!!!
= [(k*(k+1)) + 2*(k+1)] / 2 by finding a common denominator
= [k^2 + 3k + 2] / 2 by algebra
= [(k+1)*(k+2)] / 2 by factoring
End of proof.
Note the second step in our proof of P(k+1). This is our inductive step. It is where we apply our inductive hypothesis P(k) to the proof. By replacing part of our equation with the right hand side of our hypothesis we were able, with relatively simple algebra, to solve P(k+1).
As shown in the previous example, inductive proofs take a specific form.
Base case P(1) – prove it directly. Usually it is trivial to do so.
Inductive hypothesis – ASSUME P(k) holds for an arbitrary value k.
Show P(k+1) – do this by applying the right-hand side of the inductive hypothesis at some point in the proof and using standard proof techniques and algebra.
Problem P: For N >= 4, 2^N >= N^2
Proof by induction on N.
Base P(4): 2^4 = 16 >= 4^2 = 16
Note that in this example, our base case is not 1 but rather 4. In fact, the given inequality does not hold for N = 3 (since 2^3 = 8 < 3^2 = 9), which is why it was written for all N >= 4.
Assume P(k): 2^k >= k^2.
Show P(k+1): 2^(k+1) >= (k+1)^2
2^(k+1) = 2 * 2^k by rules of multiplication and exponents
>= 2 * k^2 inductive step – by applying our inductive hypothesis assumption
Note that in our inductive step above we went from using an = to using >=. This is because of the application of our inductive hypothesis. We replaced 2^k by k^2 which, by the assumption, is <= 2^k.
>= (k+1)^2 for k >= 4, since 2*k^2 = k^2 + k^2 >= k^2 + 2k + 1 = (k+1)^2 whenever k^2 >= 2k + 1, which holds for every k >= 4. End of proof.
Problem P: For all n >= 1, 8^n – 3^n is divisible by 5.
Base Case P(1): 8^1 – 3^1 = 8 – 3 = 5, which is clearly divisible by 5.
Assume, P(k): 8^k – 3^k is divisible by 5.
Show P(k+1): 8^(k+1) – 3^(k+1) is divisible by 5.
8^(k+1) – 3^(k+1) = 8^(k+1) – 3·8^k + 3·8^k – 3^(k+1) since adding X and subtracting X has no effect. In this case we subtract 3·8^k as well as add it so it does not change the final value.
= 8^k(8 – 3) + 3(8^k – 3^k) by factoring/algebra
= 8^k(5) + 3(8^k – 3^k) by subtraction
The first term in 8^k(5) + 3(8^k – 3^k) has 5 as a factor (explicitly), and the second term is divisible by 5 (by our inductive hypothesis!). Since we can factor a 5 out of both terms, the entire expression, 8^k(5) + 3(8^k – 3^k) = 8^(k+1) – 3^(k+1), must be divisible by 5.
End of proof.
The form of induction we have looked at so far is known as weak induction. This means that when we attempt to prove P(k+1) we can only use our base case and our inductive hypothesis P(k) to assist in our proof.
It turns out that we can actually strengthen our assumption so that we do not only assume that P(k) is true but can also assume that our hypothesis holds for ALL values less than k+1. Thus we can assume not only P(k), but also P(k-1), P(k-2) and so on, right down to our base case.
Although this is not always necessary, it can make a number of proofs much easier.
Problem P: All integers N >= 2 are divisible by a prime number.
Base P(2) – trivial as 2 is a prime and thus divisible by itself.
Assume for ALL values x, such that 2 <= x < k+1, P(x) is true.
Show P(k+1) is true.
k+1 is either prime or it is not. If it is prime then it is obviously divisible by itself and thus P(k+1) is true.
If k+1 is not prime then it must be a composite number meaning k+1 = A * B.
Note that A and B must be >= 2 and also < k+1.
So, 2 <= A < k+1.
By our inductive hypothesis, we can assume that P(A) is true and thus A is divisible by some prime number q. Since q divides A and A divides k+1, we know by transitivity that q divides k+1.
The above argument holds for B as well but it is not necessary. We showed that a prime number q divides k+1.
Note how the strong induction allowed us to use an A that was anywhere from our base case to P(k).
Induction is also used to prove things about certain types of structures. In particular, if a data structure, say a tree, is defined recursively then induction can often be used to prove properties for a tree.
A Tree can be defined recursively (NOTE: Recursion and induction are the same fundamental principle!!!!) as follows:
A Tree is either (a) a single node (the root), OR (b) >= 2 trees connected to a new root node. End of recursive definition!
Base case : A single node is a tree and that node is the ROOT of the tree.
Induction/Recursion If T1, T2, T3, …., Tk are all trees then we can build a new tree by creating a new node N as the root of the tree and adding edges from N to the roots of the trees T1,T2,T3….Tk.
Problem: Prove that every tree has exactly one more node than it has edges.
Our induction will be on the number of nodes in the tree.
Base P(1) : A tree with one node has no edges so this is trivially true.
Assume P(x) for 1 <= x <= k : A tree with x nodes has x-1 edges. STRONG
Show P(k+1) : A tree with k+1 nodes has k edges.
All trees have a unique root node. This means that our tree with k+1 nodes has such a root. This root must be connected to k different trees via k edges. If we temporarily remove the root node and all k edges connecting it to the k subtrees we are left with k trees. Since each of them must be smaller than k+1 we know the inductive hypothesis holds. This means that for each of the k subtrees, there is one more node than edges.
This means that taken together we have k more nodes than edges among the k trees, right? Now let’s add back what we took out earlier. First we will add back the root. This means that we have k+1 more nodes than edges. Now we connect the root to the roots of the k subtrees and thus we add k edges. This gives us (k+1) – k = 1 more node than edges, and we are done.
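To make the statement concrete, here is a small Python sketch that builds trees by the recursive definition above and counts nodes and edges; it is only an illustration of the claim, not part of the proof:

    class Tree:
        def __init__(self, children=()):
            self.children = list(children)   # each child is itself a Tree

    def count(tree):
        """Return (nodes, edges) for a tree."""
        nodes, edges = 1, 0                  # count the root itself
        for child in tree.children:
            n, e = count(child)
            nodes += n
            edges += e + 1                   # +1 for the edge from this root to the child
        return nodes, edges

    t = Tree([Tree(), Tree([Tree(), Tree()])])
    print(count(t))                          # (5, 4): one more node than edges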
A word of warning. There are some common mistakes people make when using induction.
Proving the wrong base case.
Sounds stupid but it can easily happen. Be careful not to “prove” something for a base case it does not truly hold for.
Proving nothing in the induction.
Again, sounds silly but I see a lot of cases where students, when they get to the final step, state what they are trying to prove, P(k+1), and then start with the left hand side of an equation. They follow it with line after line of algebraic manipulation and end up “proving” that the left hand side equals itself! Sort of like walking in circles.
Failing to apply the inductive hypothesis
It is very likely that your proof is wrong if you never, in fact, use the inductive hypothesis. It is essential to the concept of inductive proofs.
Examples of using induction to design an algorithm….
- Sorting – OK we know this one but suppose we didn’t….
Given a list of N numbers sort them
BASE: We can sort N numbers for N == 1. Trivial: a one-element list is, by definition, sorted.
ASSUME we can sort a list of size k == N-1 numbers.
Show for P(N) - We don’t know how to sort a list of length N, so let’s turn it into a smaller list – perhaps one of length N-1. How? Strip off an element – say the last one in the list. What remains? A list of length N-1, right? But by the IH we can sort that! So do it. Now we have a sorted list of N-1 elements and that last element we stripped off.
Now we look for a simple way to combine them. Easy right? Just swap it down the list one element at a time until we find its correct position. So the resulting algorithm looks something like
    def sort(lst, length):
        # usage: sort(values, len(values))
        if length == 1:
            return lst
        sortlist = sort(lst, length - 1)      # by the IH: the first length-1 elements get sorted
        pos = length - 1                      # index of the element we stripped off
        while pos > 0 and sortlist[pos - 1] > sortlist[pos]:
            sortlist[pos - 1], sortlist[pos] = sortlist[pos], sortlist[pos - 1]   # swap it down
            pos -= 1
        return sortlist
What is it? The insertion sort! Not a great solution (unless list is close to sorted to begin with) but it works.
NOTE – we chose to reduce the list by one element – that last one – Could we have selected differently? Sure – induction doesn’t care which element we choose just that we reduce the size of the list. So suppose instead of just taking the last element we took say the largest element out? The induction-approach would say
Remove the largest – sort the remaining and place the largest in the last spot! Done. This, of course is the pathetic Selection sort which always sucks.
BUT NOTE – we have been only exploring weak induction. Perhaps strong induction would work. This means we don’t always have to reduce the problem by one – we could reduce it to 2 smaller versions and rely on strong induction.
IH – we can sort any list of < N elements. Now, when given a list of size N we will break it in half (roughly) and can assume that both halves can be sorted via our IH!!!! So – given a list of size N we split it in half, sort both halves (using the IH), and then need only combine the two sorted lists. How? Pretty easy, right – just merge them together. Obviously this results in the Merge sort algorithm, which is quite an improvement.
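A minimal Python sketch of that strong-induction design (split, sort each half by the inductive hypothesis, then merge the two sorted halves):

    def merge_sort(lst):
        if len(lst) <= 1:                    # base case: already sorted
            return lst
        mid = len(lst) // 2
        left = merge_sort(lst[:mid])         # by the (strong) IH both halves...
        right = merge_sort(lst[mid:])        # ...can be sorted
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):   # merge step
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]

    print(merge_sort([5, 2, 9, 1, 7]))       # [1, 2, 5, 7, 9]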
The problems we deal with can come from all areas of computing. For example consider the following graphics-related problem.
Given a 2D plane and a collection of N lines dividing the plane into regions color the regions using only blue and red so that no two neighboring regions are the same color.
Our induction will be on the number of lines.
BASE CASE – 1 line. Results in two regions color one red and one blue – done.
ASSUME it works for k < N lines.
Color the regions for N lines. How? We don’t know, right? Well, reduce the number of lines!!!! Take one out; now we have N-1 lines. OK, well, our IH says we can do it, so do it! Now we need to figure out how to take our solution for N-1 lines, add the Nth line back in, and color the resulting regions. Any ideas?
You have a room with N people in it. One of them is a celebrity. A celebrity is defined as an individual who is known by everyone else in the room but does not know anyone there. Your task is to identify the celebrity by asking only questions of the form “Excuse me person X do you know person Y over there?”
Now I think we could all come up with a solution pretty quickly but can we improve on it via induction? I mean, one way is to ask all N people if they know each of the other N-1 people and keep track of the answers and thus identify the celebrity when we are done. That is N*(N-1) total questions and thus O(N^2).
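A sketch of that brute-force idea in Python; here knows(x, y) is a placeholder for asking “person x, do you know person y?” and is not something defined in the assignment:

    def find_celebrity_naive(people, knows):
        """O(N^2): ask everyone about everyone else."""
        for candidate in people:
            others = [p for p in people if p != candidate]
            knows_nobody = all(not knows(candidate, other) for other in others)
            known_by_all = all(knows(other, candidate) for other in others)
            if knows_nobody and known_by_all:
                return candidate
        return None   # no celebrity in the room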
Let’s try induction….
Base – trivial
Assume we can identify celeb in room of <N people.
Given a room with N people, what do we do? Select one person to leave out. Identify the celeb, if one exists, among the remaining N-1 people. There are 3 possibilities:
- The celeb is found from the N-1.
- The person we left out is the celeb
- No celeb in the room.
Case a is trivial – just ask the potential celeb if they know the person we left out (the answer must be no) and ask the person we left out if they know the celeb (the answer must be yes); if both answers check out then we have confirmed the celeb.
Case b oops….. now we have problems….. we’d have to go through everyone asking 2 questions each…… yuck….. we’re back to our O(N^2) algorithm….
Hmmm….. OK but let’s ask if we can reduce the problem in a different manner than simply grabbing someone at random and removing them….. OK – who do we wish to pull out???? Obviously NOT the celebrity because that caused us problems – we want to remove a non-celebrity….doesn’t matter which but we want one…. How do we find one EASILY?
In the early morning darkness on April 15, 1912, as the R.M.S. Titanic was sinking in the freezing Atlantic, survivors witnessed a large number of streaking lights in the sky, which many believed to be the souls of their drowning loved ones passing to heaven.
Says Kevin Luhman, what they most likely were seeing was the peak of the Lyrid meteor shower, an annual event occurring in mid-to-late April.
Though folklore of many cultures describes shooting or falling stars as rare events, “they’re hardly rare or even stars,” says Luhman, Penn State assistant professor of astronomy and astrophysics.
“From the dawn of civilization people have seen these streaks of light that looked like stars, but were moving quickly across the sky,” he notes. “These ‘shooting stars’ are actually space rocks—meteoroids—made visible by the heat generated when they enter the Earth’s atmosphere at high speeds.” These bits of ice and debris range in size from a speck of sand to a boulder. Larger objects are called asteroids, and smaller, planetary dust, Luhman explains.
Most meteoroids are about the size of a pebble and become visible 40 to 75 miles above the earth.
The largest meteoroids, called “fireballs” or “bolides,” explode into flashes so bright they can be seen during the day, says Luhman.
More common, though, are falling meteors too dim to see in daylight. “Meteors are falling all the time,” he adds. “There’s debris all throughout the solar system. Every minute somewhere around the Earth there’s some little piece of rock or ice that’s falling from space.”
A dark spot in the northern hemisphere promises the best viewing of falling meteors, notes Luhman. The pre-dawn hours of any clear night are the best time because, as Earth slowly spins around its axis, the side facing into its orbit tends to encounter more space grit. “You’re better off using just the naked eye versus the telescope, which shows just a small patch of sky,” he suggests. “If you look at the entire sky, that gives you a better chance of spotting the meteor.”
The very best viewing times, when “you might see one or two meteors per minute,” says Luhman, are the eleven or so meteor showers each year when Earth passes through a debris trail left behind by a comet—a giant ball of ice and grit also orbiting the sun. “They are named after the constellation in the sky out of which the meteors appear to come,” Luhman explains, noting the Leonid and Perseid showers as the most famous and spectacular.
The Perseids, appearing every August and named for Perseus, occur as Earth moves through a thousand-year-old cloud of cosmic debris ejected from the comet Swift-Tuttle, last seen in 1992, and—at six miles across—the largest object known to make repeated passes near Earth.
Says Luhman, it would have been hard to miss the falling lights of the 1833 Leonid meteor storm, which were so bright and fell so fast, about 100,000 in an hour, that many feared the end of the world. The storm, widely regarded as the birth of modern meteor astronomy, marked the discovery of the Leonids, visible every November 17 and caused by debris from the comet Tempel-Tuttle.
While meteors are now well-understood, meteorites—fragments of meteoroids and asteroids that survive both the passage through our atmosphere and the ground impact—are helping scientists to learn of the solar system’s origins, says Luhman. “These rocks are basically leftovers or raw ingredients from when the solar system was born 4.5 billion years ago.”
Some meteorites even tell us about planets. “If a comet or asteroid hits Mars, it can throw some of the pieces of the crust into space and, after millions of years, some of that material falls down to Earth’s surface,” says Luhman.
One of 34 Martian meteorites reached fame in 1996 when NASA scientists announced it showed signs of primitive life from more than 3.6 billion years ago. Closer study explained the formation as a geologic effect, all but ending the scientific controversy, explains Luhman.
Yet, shooting stars have hardly fallen out of our everyday conversations. Our movies, songs and poetry still speak highly of the bright lights as a magical sight, worthy of wishes. As the Disney company’s theme song has taught generations of children, “When you wish upon a star, your dreams come true.”
Source: by Lisa Duchene, Penn State
Diffusion is the net movement of a substance (e.g., an atom, ion or molecule) from a region of high concentration to a region of low concentration. This is also referred to as the movement of a substance down a concentration gradient. A gradient is the change in the value of a quantity (e.g., concentration, pressure, temperature) with the change in another variable (e.g., distance). For example, a change in concentration over a distance is called a concentration gradient, a change in pressure over a distance is called a pressure gradient, and a change in temperature over a distance is a called a temperature gradient.
The word diffusion is derived from the Latin word, "diffundere", which means "to spread out" (if a substance is “spreading out”, it is moving from an area of high concentration to an area of low concentration). A distinguishing feature of diffusion is that it results in mixing or mass transport, without requiring bulk motion (bulk flow). Thus, diffusion should not be confused with convection, or advection, which are other transport phenomena that utilize bulk motion to move particles from one place to another.
- 1 Diffusion vs. Bulk Flow
- 2 Diffusion in the context of different disciplines
- 3 Random Walk (Random Motion)
- 4 History of diffusion in physics
- 5 Basic models of diffusion
- 5.1 Diffusion flux
- 5.2 Fick's law and equations
- 5.3 Onsager's equations for multicomponent diffusion and thermodiffusion
- 5.4 Nondiagonal diffusion must be nonlinear
- 5.5 Einstein's mobility and Teorell formula
- 5.6 Jumps on the surface and in solids
- 5.7 Diffusion in porous media
- 6 Diffusion in physics
- 7 See also
- 8 References
- 9 External links
Diffusion vs. Bulk Flow
An example of a situation in which bulk flow and diffusion can be differentiated is the mechanism by which oxygen enters the body during external respiration (breathing). The lungs are located in the thoracic cavity, which is expanded as the first step in external respiration. This expansion leads to an increase in volume of the alveoli in the lungs, which causes a decrease in pressure in the alveoli. This creates a pressure gradient between the air outside the body (relatively high pressure) and the alveoli (relatively low pressure). The air moves down the pressure gradient through the airways of the lungs and into the alveoli until the pressure of the air and that in the alveoli are equal (i.e., the movement of air by bulk flow stops once there is no longer a pressure gradient).
The air arriving in the alveoli has a higher concentration of oxygen than the “stale” air in the alveoli. The increase in oxygen concentration creates a concentration gradient for oxygen between the air in the alveoli and the blood in the capillaries that surround the alveoli. Oxygen then moves by diffusion, down the concentration gradient, into the blood. The other consequence of the air arriving in alveoli is that the concentration of carbon dioxide in the alveoli decreases (air has a very low concentration of carbon dioxide compared to the blood in the body). This creates a concentration gradient for carbon dioxide to diffuse from the blood into the alveoli.
The blood is then transported around the body by the pumping action of the heart. As the left ventricle of the heart contracts, the volume decreases, which causes the pressure in the ventricle to increase. This creates a pressure gradient between the heart and the capillaries, and blood moves through blood vessels by bulk flow (down the pressure gradient). As the thoracic cavity contracts during expiration, the volume of the alveoli decreases and creates a pressure gradient between the alveoli and the air outside the body, and air moves by bulk flow down the pressure gradient.
Diffusion in the context of different disciplines
The concept of diffusion is widely used in: physics (particle diffusion), chemistry, biology, sociology, economics, and finance (diffusion of people, ideas and of price values). However, in each case, the object (e.g., atom, idea, etc.) that is undergoing diffusion is “spreading out” from a point or location at which there is a higher concentration of that object.
There are two ways to introduce the notion of diffusion: either a phenomenological approach starting with Fick's laws of diffusion and their mathematical consequences, or a physical and atomistic one, by considering the random walk of the diffusing particles.
In the phenomenological approach, diffusion is the movement of a substance from a region of high concentration to a region of low concentration without bulk motion. According to Fick's laws, the diffusion flux is proportional to the negative gradient of concentrations. It goes from regions of higher concentration to regions of lower concentration. Some time later, various generalizations of Fick's laws were developed in the frame of thermodynamics and non-equilibrium thermodynamics.
From the atomistic point of view, diffusion is considered as a result of the random walk of the diffusing particles. In molecular diffusion, the moving molecules are self-propelled by thermal energy. Random walk of small particles in suspension in a fluid was discovered in 1827 by Robert Brown. The theory of the Brownian motion and the atomistic backgrounds of diffusion were developed by Albert Einstein. The concept of diffusion is typically applied to any subject matter involving random walks in ensembles of individuals.
In biology, the terms "net movement" or "net diffusion" are often used when considering the movement of ions or molecules by diffusion. For example, oxygen can diffuse through cell membranes and if there is a higher concentration of oxygen outside the cell than inside, oxygen molecules will diffuse into the cell. However, because the movement of molecules is random, occasionally oxygen molecules will move out of the cell (against the concentration gradient). Because there are more oxygen molecules outside the cell, the probability that oxygen molecules will enter the cell is higher than the probability that oxygen molecules will leave the cell. Therefore, the "net" movement of oxygen molecules (the difference between the number of molecules either entering or leaving the cell) will be into the cell. In other words, there will be a net movement of oxygen molecules down the concentration gradient.
Random Walk (Random Motion)
One common misconception is that individual atoms, ions or molecules move “randomly”, which they do not. An individual ion follows what looks like a “random” path, but this motion is not truly random: it is the result of “collisions” with other ions. As such, the movement of a single atom, ion, or molecule within a mixture just appears to be random when viewed in isolation. The movement of a substance within a mixture by “random walk” is governed by the kinetic energy within the system, which can be affected by changes in concentration, pressure or temperature.
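As a toy illustration (a deliberately crude sketch, not a physical simulation of collisions), one can follow a single particle whose every step stands in for the net outcome of such collisions:

    import random

    def random_walk_1d(steps):
        position = 0
        for _ in range(steps):
            position += random.choice((-1, 1))   # each step: net effect of many collisions
        return position

    # Independent walkers end up scattered around the origin even though
    # no individual step has a preferred direction.
    print([random_walk_1d(1000) for _ in range(5)])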
History of diffusion in physics
Diffusion in solids was used in practice long before the theory of diffusion was created. For example, Pliny the Elder had described the cementation process, which produces steel from the element iron (Fe) through carbon diffusion. Another example, well known for many centuries, is the diffusion of colours in stained glass, earthenware and Chinese ceramics.
The first systematic experimental study of diffusion was performed by Thomas Graham, who studied diffusion in gases and described his observations in 1833:
"...gases of different nature, when brought into contact, do not arrange themselves according to their density, the heaviest undermost, and the lighter uppermost, but they spontaneously diffuse, mutually and equally, through each other, and so remain in the intimate state of mixture for any length of time.”
The measurements of Graham contributed to James Clerk Maxwell deriving, in 1867, the coefficient of diffusion for CO2 in air. The error is less than 5%.
In 1855, Adolf Fick, the 26-year old anatomy demonstrator from Zürich, proposed his law of diffusion. He used Graham's research, stating his goal as "the development of a fundamental law, for the operation of diffusion in a single element of space". He asserted a deep analogy between diffusion and conduction of heat or electricity, creating a formalism that is similar to Fourier's law for heat conduction (1822) and Ohm's law for electrical current (1827).
Robert Boyle demonstrated diffusion in solids in the 17th century by the penetration of zinc into a copper coin. Nevertheless, diffusion in solids was not systematically studied until the second part of the 19th century. William Chandler Roberts-Austen, the well-known British metallurgist and former assistant of Thomas Graham, systematically studied solid state diffusion using the example of gold in lead in 1896:
"... My long connection with Graham's researches made it almost a duty to attempt to extend his work on liquid diffusion to metals."
In 1858, Rudolf Clausius introduced the concept of the mean free path. In the same year, James Clerk Maxwell developed the first atomistic theory of transport processes in gases. The modern atomistic theory of diffusion and Brownian motion was developed by Albert Einstein, Marian Smoluchowski and Jean-Baptiste Perrin. Ludwig Boltzmann, in the development of the atomistic backgrounds of the macroscopic transport processes, introduced the Boltzmann equation, which has served mathematics and physics with a source of transport process ideas and concerns for more than 140 years.
Yakov Frenkel (sometimes, Jakov/Jacov Frenkel) proposed, and elaborated in 1926, the idea of diffusion in crystals through local defects (vacancies and interstitial atoms). He concluded that the diffusion process in condensed matter is an ensemble of elementary jumps and quasichemical interactions of particles and defects. He introduced several mechanisms of diffusion and found rate constants from experimental data.
Some time later, Carl Wagner and Walter H. Schottky developed Frenkel's ideas about mechanisms of diffusion further. Presently, it is universally recognized that atomic defects are necessary to mediate diffusion in crystals.
Henry Eyring, with co-authors, applied his theory of absolute reaction rates to Frenkel's quasichemical model of diffusion. The analogy between reaction kinetics and diffusion leads to various nonlinear versions of Fick's law.
Basic models of diffusion
Each model of diffusion expresses the diffusion flux through concentrations, densities and their derivatives. Flux is a vector J. The transfer of a physical quantity N through a small area ΔS with normal ν per time Δt is
ΔN = (J, ν) ΔS Δt + o(ΔS Δt),
where (J, ν) is the inner product.
The dimension of the diffusion flux is [flux] = [quantity]/([time]·[area]). The diffusing physical quantity may be the number of particles, mass, energy, electric charge, or any other scalar extensive quantity. For its density, n, the diffusion equation has the form
∂n/∂t = −∇·J + W,
where W is the intensity of any local source of this quantity (the rate of a chemical reaction, for example). For the diffusion equation, the no-flux boundary conditions can be formulated as (J(x), ν(x)) = 0 on the boundary, where ν is the normal to the boundary at point x.
Fick's law and equations
Fick's first law: the diffusion flux is proportional to the negative of the concentration gradient:
J = −D ∇n.
The corresponding diffusion equation (Fick's second law) is
∂n(x,t)/∂t = D Δn(x,t),
where Δ is the Laplace operator.
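A minimal numerical sketch of Fick's second law in one dimension (explicit finite differences with periodic boundaries; the diffusion coefficient, grid spacing and time step are arbitrary illustrative values, chosen so that D·dt/dx^2 stays below the 1/2 stability limit of the explicit scheme):

    import numpy as np

    D, dx, dt = 1.0, 0.1, 0.004        # illustrative values only
    n = np.zeros(100)
    n[45:55] = 1.0                     # initial concentration: a localized "blob"

    for _ in range(500):
        laplacian = (np.roll(n, 1) - 2 * n + np.roll(n, -1)) / dx ** 2
        n = n + dt * D * laplacian     # dn/dt = D * Laplacian(n)

    # The peak spreads out and flattens while the total amount is conserved.
    print(n.max(), n.sum() * dx)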
Onsager's equations for multicomponent diffusion and thermodiffusion
Fick's law describes diffusion of an admixture in a medium. The concentration of this admixture should be small and the gradient of this concentration should also be small. The driving force of diffusion in Fick's law is the antigradient of concentration, −∇n.
In 1931, Lars Onsager included multicomponent transport processes in the general context of linear non-equilibrium thermodynamics. For multi-component transport,
J_i = Σ_j L_ij X_j,
where J_i is the flux of the ith physical quantity (component) and X_j is the jth thermodynamic force.
The thermodynamic forces for the transport processes were introduced by Onsager as the space gradients of the derivatives of the entropy density s (he used the term "force" in quotation marks or "driving force"):
X_i = grad ∂s(x)/∂x_i,
where x_i are the "thermodynamic coordinates". For the heat and mass transfer one can take x_0 = u (the density of internal energy) and x_i as the concentration of the ith component. The corresponding driving forces are the space vectors
X_0 = ∇(1/T),   X_i = −∇(μ_i/T)  (i > 0),
where T is the absolute temperature and μ_i is the chemical potential of the ith component. It should be stressed that the separate diffusion equations describe the mixing or mass transport without bulk motion. Therefore, the terms with variation of the total pressure are neglected. It is possible for diffusion of small admixtures and for small gradients.
For the linear Onsager equations, we must take the thermodynamic forces in the linear approximation near equilibrium:
The transport equations are
Here, all the indexes i, j, k=0,1,2,... are related to the internal energy (0) and various components. The expression in the square brackets is the matrix of the diffusion (i,k>0), thermodiffusion (i>0, k=0 or k>0, i=0) and thermal conductivity (i=k=0) coefficients.
Under isothermal conditions T=const. The relevant thermodynamic potential is the free energy (or the free entropy). The thermodynamic driving forces for the isothermal diffusion are antigradients of chemical potentials, , and the matrix of diffusion coefficients is
There is intrinsic arbitrariness in the definition of the thermodynamic forces and kinetic coefficients because they are not measurable separately and only their combinations can be measured. For example, in the original work of Onsager the thermodynamic forces include an additional multiplier T, whereas in the Course of Theoretical Physics this multiplier is omitted but the sign of the thermodynamic forces is opposite. All these changes are supplemented by the corresponding changes in the coefficients and do not affect the measurable quantities.
Nondiagonal diffusion must be nonlinear
The formalism of linear irreversible thermodynamics (Onsager) generates the systems of linear diffusion equations in the form
If the matrix of diffusion coefficients is diagonal, then this system of equations is just a collection of decoupled Fick's equations for the various components. Assume that diffusion is non-diagonal, for example D_12 ≠ 0, and consider the state with c_2 = … = c_n = 0. At this state, ∂c_2/∂t = D_12 Δc_1. If D_12 Δc_1(x) < 0 at some points, then c_2 becomes negative at these points in a short time. Therefore, linear non-diagonal diffusion does not preserve positivity of concentrations. Non-diagonal equations of multicomponent diffusion must be non-linear.
Einstein's mobility and Teorell formula
Below, to combine in the same formula the chemical potential μ and the mobility, we use for mobility the notation .
The mobility—based approach was further applied by T. Teorell. In 1935, he studied the diffusion of ions through a membrane. He formulated the essence of his approach in the formula:
- the flux is equal to mobility×concentration×force per gram ion.
This is the so-called Teorell formula.
The force under isothermal conditions consists of two parts:
- Diffusion force caused by concentration gradient:
- Electrostatic force caused by electric potential gradient:
Here R is the gas constant, T is the absolute temperature, n is the concentration, the equilibrium concentration is marked by a superscript "eq", q is the charge and φ is the electric potential.
The simple but crucial difference between the Teorell formula and the Onsager laws is the concentration factor in the Teorell expression for the flux. In the Einstein-Teorell approach, if for a finite force the concentration tends to zero, then the flux also tends to zero, whereas the Onsager equations violate this simple and physically obvious rule.
The general formulation of the Teorell formula for non-perfect systems under isothermal conditions is
where μ is the chemical potential, μ0 is the standard value of the chemical potential. The expression is the so-called activity. It measures the "effective concentration" of a species in a non-ideal mixture. In this notation, the Teorell formula for the flux has a very simple form
The standard derivation of the activity includes a normalization factor and for small concentrations , where is the standard concentration. Therefore this formula for the flux describes the flux of the normalized dimensionless quantity, ,
Teorell formula for multicomponent diffusion
The Teorell formula with combination of Onsager's definition of the diffusion force gives
where is the mobility of the ith component, is its activity, is the matrix of the coefficients, is the themodynamic diffusion force, . For the isothermal perfect systems, . Therefore, the Einstein-Teorell approach gives the following multicomponent generalization of the Fick's law for multicomponent diffusion:
Jumps on the surface and in solids
Diffusion of reagents on the surface of a catalyst may play an important role in heterogeneous catalysis. The model of diffusion in the ideal monolayer is based on the jumps of the reagents on the nearest free places. This model was used for CO on Pt oxidation under low gas pressure.
The system includes several reagents on the surface. Their surface concentrations are c_i. The surface is a lattice of adsorption places. Each reagent molecule fills a place on the surface. Some of the places are free. The concentration of the free places is z = c_0. The sum of all c_i (including free places) is constant, the density of adsorption places b.
The jump model gives for the diffusion flux of (i=1,...,n):
The corresponding diffusion equation is:
Due to the conservation law, and we have the system of m diffusion equations. For one component we get Fick's law and linear equations because . For two and more components the equations are nonlinear.
If all particles can exchange their positions with their closest neighbours then a simple generalization gives
where is a symmetric matrix of coefficients which characterize the intensities of jumps. The free places (vacancies) should be considered as special "particles" with concentration .
Various versions of these jump models are also suitable for simple diffusion mechanisms in solids.
Diffusion in porous media
For diffusion in porous media the basic equations are:
where D is the diffusion coefficient, n is the concentration, m>0 (usually m>1, the case m=1 corresponds to Fick's law).
For diffusion of gases in porous media this equation is the formalisation of Darcy's law: the velocity of a gas in the porous media is
For underground water infiltration the Boussinesq approximation gives the same equation with m=2.
For plasma with the high level of radiation the Zeldovich-Raizer equation gives m>4 for the heat transfer.
Diffusion in physics
Elementary theory of diffusion coefficient in gases
The diffusion coefficient D is the coefficient in Fick's first law J = −D ∂n/∂x, where J is the diffusion flux (amount of substance) per unit area per unit time, n (for ideal mixtures) is the concentration, and x is the position [length].
Let us consider two gases with molecules of the same diameter d and mass m (self-diffusion). In this case, the elementary mean free path theory of diffusion gives for the diffusion coefficient
We can see that the diffusion coefficient in the mean free path approximation grows with T as T^(3/2) and decreases with P as 1/P. If we use for P the ideal gas law P = RnT with the total concentration n, then we can see that for given concentration n the diffusion coefficient grows with T as T^(1/2), and for given temperature it decreases with the total concentration as 1/n.
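A tiny sketch that uses only this scaling (D proportional to T^(3/2)/P in the mean free path approximation); the reference coefficient D0 is an arbitrary placeholder, not a measured value:

    def rescale_diffusion_coefficient(D0, T0, P0, T, P):
        """Rescale D0, known at (T0, P0), to conditions (T, P) using D ~ T**1.5 / P."""
        return D0 * (T / T0) ** 1.5 * (P0 / P)

    # Doubling the absolute temperature at constant pressure multiplies D by 2**1.5 (about 2.8).
    print(rescale_diffusion_coefficient(1.0, 300.0, 1.0, 600.0, 1.0))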
For two different gases, A and B, with molecular masses mA, mB and molecular diameters dA, dB, the mean free path estimate of the diffusion coefficient of A in B and B in A is:
The theory of diffusion in gases based on Boltzmann's equation
In Boltzmann's kinetics of the mixture of gases, each gas has its own distribution function, f_i(x, c, t), where t is the time moment, x is position and c is the velocity of a molecule of the ith component of the mixture. Each component has its mean velocity C_i(x, t). If the velocities do not coincide then there exists diffusion.
- individual concentrations of particles, (particles per volume),
- density of momentum (m_i is the ith particle mass),
- density of kinetic energy .
The kinetic temperature T and pressure P are defined in 3D space as
- ; ,
where is the total density.
For two gases, the difference between velocities, is given by the expression:
where is the force applied to the molecules of the ith component and is the thermodiffusion ratio.
The coefficient D12 is positive. This is the diffusion coefficient. Four terms in the formula for C1-C2 describe four main effects in the diffusion of gases:
- describes the flux of the first component from the areas with the high ratio n1/n to the areas with lower values of this ratio (and, analogously the flux of the second component from high n2/n to low n2/n because n2/n=1-n1/n);
- describes the flux of the heavier molecules to the areas with higher pressure and the lighter molecules to the areas with lower pressure, this is barodiffusion;
- describes diffusion caused by the difference of the forces applied to molecules of different types. For example, in the Earth's gravitational field, the heavier molecules should go down, or in an electric field the charged molecules should move, until this effect is balanced by the sum of the other terms. This effect should not be confused with barodiffusion caused by the pressure gradient.
- describes thermodiffusion, the diffusion flux caused by the temperature gradient.
All these effects are called diffusion because they describe the differences between velocities of different components in the mixture. Therefore, these effects cannot be described as a bulk transport and differ from advection or convection.
In the first approximation,
- for rigid spheres;
- for repulsing force .
The number is defined by quadratures (formulas (3.7), (3.9), Ch. 10 of the classical Chapman and Cowling book)
We can see that the dependence on T for the rigid spheres is the same as for the simple mean free path theory but for the power repulsion laws the exponent is different. Dependence on a total concentration n for a given temperature has always the same character, 1/n.
In applications to gas dynamics, the diffusion flux and the bulk flow should be joined in one system of transport equations. The bulk flow describes the mass transfer. Its velocity V is the mass average velocity. It is defined through the momentum density and the mass concentrations:
where is the mass concentration of the ith species, is the mass density.
By definition, the diffusion velocity of the ith component is , . The mass transfer of the ith component is described by the continuity equation
where is the net mass production rate in chemical reactions, .
In these equations, the term describes advection of the ith component and the term represents diffusion of this component.
In 1948, Wendell H. Furry proposed to use the form of the diffusion rates found in kinetic theory as a framework for the new phenomenological approach to diffusion in gases. This approach was developed further by F.A. Williams and S.H. Lam. For the diffusion velocities in multicomponent gases (N components) they used
Here, D_ij is the diffusion coefficient matrix, D_i^T is the thermal diffusion coefficient, f_i is the body force per unit mass acting on the ith species, X_i = P_i/P is the partial pressure fraction of the ith species (and P_i is the partial pressure), Y_i = ρ_i/ρ is the mass fraction of the ith species, and Σ_i X_i = Σ_i Y_i = 1.
Separation of diffusion from convection in gases
While Brownian motion of multi-molecular mesoscopic particles (like the pollen grains studied by Brown) is observable under an optical microscope, molecular diffusion can only be probed in carefully controlled experimental conditions. Since Graham's experiments, it has been well known that avoiding convection is necessary, and this may be a non-trivial task.
Under normal conditions, molecular diffusion dominates only on length scales between nanometer and millimeter. On larger length scales, transport in liquids and gases is normally due to another transport phenomenon, convection, and to study diffusion on the larger scale, special efforts are needed.
Therefore, some often-cited examples of diffusion are wrong: if cologne is sprayed in one place, it will soon be smelled in the entire room, but a simple calculation shows that this can't be due to diffusion. Convective motion persists in the room because of temperature inhomogeneity. If ink is dropped in water, one usually observes an inhomogeneous evolution of the spatial distribution, which clearly indicates convection (caused, in particular, by the dropping itself).
In contrast, heat conduction through solid media is an everyday occurrence (e.g. a metal spoon partly immersed in a hot liquid). This explains why the diffusion of heat was explained mathematically before the diffusion of mass.
Other types of diffusion
- Anisotropic diffusion, also known as the Perona-Malik equation, enhances high gradients
- Anomalous diffusion, in porous medium
- Atomic diffusion, in solids
- Eddy diffusion, in coarse-grained description of turbulent flow
- Effusion of a gas through small holes
- Electronic diffusion, resulting in an electric current called the diffusion current
- Facilitated diffusion, present in some organisms
- Gaseous diffusion, used for isotope separation
- Heat equation, diffusion of thermal energy
- Itō diffusion, mathematisation of Brownian motion, continuous stochastic process.
- Knudsen diffusion of gas in long pores with frequent wall collisions
- Momentum diffusion ex. the diffusion of the hydrodynamic velocity field
- Photon diffusion
- Plasma diffusion
- Random walk, model for diffusion
- Reverse diffusion, against the concentration gradient, in phase separation
- Rotational diffusion, random reorientations of molecules
- Surface diffusion, diffusion of adparticles on a surface
- Turbulent diffusion, transport of mass, heat, or momentum within a turbulent fluid
- Diffusion-limited aggregation
- Fick's laws of diffusion
- False diffusion
- Isobaric counterdiffusion
- J. Philibert (2005). One and a half century of diffusion: Fick, Einstein, before and beyond. Diffusion Fundamentals, 2, 1.1–1.10.
- S.R. De Groot, P. Mazur (1962). Non-equilibrium Thermodynamics. North-Holland, Amsterdam.
- A. Einstein (1905). "Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen". Ann. Phys. 17 (8): 549–560. Bibcode:1905AnP...322..549E. doi:10.1002/andp.19053220806.
- Diffusion Processes, Thomas Graham Symposium, ed. J.N. Sherwood, A.V. Chadwick, W.M.Muir, F.L. Swinton, Gordon and Breach, London, 1971.
- L.W. Barr (1997), In: Diffusion in Materials, DIMAT 96, ed. H.Mehrer, Chr. Herzig, N.A. Stolwijk, H. Bracht, Scitec Publications, Vol.1, pp. 1–9.
- H. Mehrer, N.A. Stolwijk (2009). "Heroes and Highlights in the History of Diffusion". Diffusion Fundamentals 11 (1): 1–32.
- S. Chapman, T. G. Cowling (1970) The Mathematical Theory of Non-uniform Gases: An Account of the Kinetic Theory of Viscosity, Thermal Conduction and Diffusion in Gases, Cambridge University Press (3rd edition), ISBN 052140844X.
- J.F. Kincaid, H. Eyring, A.E. Stearn (1941). "The theory of absolute reaction rates and its application to viscosity and diffusion in the liquid State". Chem. Rev. 28 (2): 301–365. doi:10.1021/cr60090a005.
- A.N. Gorban, H.P. Sargsyan and H.A. Wahab (2011). "Quasichemical Models of Multicomponent Nonlinear Diffusion". Mathematical Modelling of Natural Phenomena 6 (5): 184–262. arXiv:1012.2908. doi:10.1051/mmnp/20116509.
- Onsager, L. (1931). "Reciprocal Relations in Irreversible Processes. I". Physical Review 37 (4): 405–426. Bibcode:1931PhRv...37..405O. doi:10.1103/PhysRev.37.405.
- L.D. Landau, E.M. Lifshitz (1980). Statistical Physics. Vol. 5 (3rd ed.). Butterworth-Heinemann. ISBN 978-0-7506-3372-7.
- S. Bromberg, K.A. Dill (2002), Molecular Driving Forces: Statistical Thermodynamics in Chemistry and Biology, Garland Science, ISBN 0815320515.
- T. Teorell (1935). "Studies on the "Diffusion Effect" upon Ionic Distribution. Some Theoretical Considerations". Proceedings of the National Academy of Sciences of the United States of America 21 (3): 152–61. Bibcode:1935PNAS...21..152T. doi:10.1073/pnas.21.3.152. PMC 1076553. PMID 16587950.
- J. L. Vázquez (2006), The Porous Medium Equation. Mathematical Theory, Oxford Univ. Press, ISBN 0198569033.
- S. H. Lam (2006). "Multicomponent diffusion revisited". Physics of Fluids 18 (7): 073101. Bibcode:2006PhFl...18g3101L. doi:10.1063/1.2221312.
- D. Ben-Avraham and S. Havlin (2000). Diffusion and Reactions in Fractals and Disordered Systems. Cambridge University Press. ISBN 0521622786.
- Weiss, G. (1994). Aspects and Applications of the Random Walk. North-Holland. ISBN 0444816062.
- Diffusion in a Bipolar Junction Transistor Demo
- Diffusion Furnace for doping of semiconductor wafers. POCl3 doping of Silicon.
- A Java applet implementing Diffusion
By using high-energy X-rays at the ESRF, an international team defined the size and characteristics of this gap. The knowledge of the structure of a hydrophobic interface is important because they are crucial in biological systems, and can give insight in protein folding and stability. The researchers publish their results this week in PNAS Early Online Edition.
The repulsion of water is a phenomenon present in many aspects of our lives. Detergent molecules are made up of components attracted to water (hydrophilic) and others that repel it (hydrophobic). Proteins also use the interaction with water to assemble into complexes. However, studying hydrophobic structures and what occurs when they encounter water is not entirely straightforward, as the measurements are influenced by several factors. Early studies of the gap formed between water and a hydrophobic surface did not show a coherent picture.
Scientists from the Max Planck Institute for Metals Research (Germany), the University of South Australia (Adelaide) and the ESRF carried out experiments on silicon wafers covered by a water-repulsive layer at the surface. The wafers were then immersed in water in a special cell. Studies of the water structure at the interface of the hydrophobic layer confirmed that a gap is formed between the layer and the water and that its size is about the diameter of a water molecule, somewhere between 0.1 and 0.5 nanometres. The integrated density deficit at the interface amounts to half a monolayer of water molecules.
The scientists did further experiments in order to test the influence of gas, which is naturally present in water, on the hydrophobic water gap. During all their experiments they kept the water ultra clean (unlike water in nature), and afterwards gas was introduced into the cell until saturation. The result shows that, contrary to previous reports, gas does not play a role in the structure of water at flat interfaces.
This is the first time that high energy synchrotron X-rays have been used as a tool to measure the properties of this gap. "Some teams have used neutrons, but they didn't have enough resolution, after all, the gap is extremely small and difficult to track," explained Harald Reichert, the paper’s corresponding author. Despite the superior quality of the X-ray beam, the experiment was still a challenge: the water-repellent layer on the silicon wafer can survive only 50 seconds under the beam, so measurements had to be completed very quickly.
The next step for the team is to produce porous structures and study the properties of water at confined pore interfaces. "These studies will increase our knowledge of how water behaves in different environments. The structure of water in these environments is still, somewhat a mystery to us, despite the fact that our world is surrounded by water", declared Reichert.
Montserrat Capellas
Before the age of electronics, the closest thing to a computer was the abacus, although, strictly speaking, the abacus is actually a calculator since it requires a human operator. Computers, on the other hand, perform calculations automatically by following a series of built-in commands called software.
In the 20th century, breakthroughs in technology allowed for the ever-evolving computing machines that we now depend upon so totally, we practically never give them a second thought. But even prior to the advent of microprocessors and supercomputers, there were certain notable scientists and inventors who helped lay the groundwork for the technology that’s since drastically reshaped every facet of modern life.
The Language Before the Hardware
The universal language in which computers carry out processor instructions originated in the 17th century in the form of the binary numerical system. Developed by German philosopher and mathematician Gottfried Wilhelm Leibniz, the system came about as a way to represent decimal numbers using only two digits: the number zero and the number one. Leibniz’s system was partly inspired by philosophical explanations in the classical Chinese text the “I Ching,” which explained the universe in terms of dualities such as light and darkness and male and female. While there was no practical use for his newly codified system at the time, Leibniz believed that it was possible for a machine to someday make use of these long strings of binary numbers.
In 1847, English mathematician George Boole introduced a newly devised algebraic language built on Leibniz’s work. His “Boolean Algebra” was actually a system of logic, with mathematical equations used to represent statements in logic. Equally important was that it employed a binary approach in which the relationship between different mathematical quantities would be either true or false, 0 or 1.
As with Leibniz, there were no obvious applications for Boole’s algebra at the time, however, mathematician Charles Sanders Pierce spent decades expanding the system, and in 1886, determined that the calculations could be carried out with electrical switching circuits. As a result, Boolean logic would eventually become instrumental in the design of electronic computers.
The Earliest Processors
English mathematician Charles Babbage is credited with having assembled the first mechanical computers—at least technically speaking. His early 19th-century machines featured a way to input numbers, memory, and a processor, along with a way to output the results. Babbage called his initial attempt to build the world’s first computing machine the “difference engine.” The design called for a machine that calculated values and printed the results automatically onto a table. It was to be hand-cranked and would have weighed four tons. But Babbage’s baby was a costly endeavor. More than £17,000 was spent on the difference engine’s early development. The project was eventually scrapped after the British government cut off Babbage’s funding in 1842.
This forced Babbage to move on to another idea, an “analytical engine,” which was more ambitious in scope than its predecessor and was to be used for general-purpose computing rather than just arithmetic. While he was never able to follow through and build a working device, Babbage’s design featured essentially the same logical structure as electronic computers that would come into use in the 20th century. The analytical engine had integrated memory—a form of information storage found in all computers—that allows for branching, or the ability for a computer to execute a set of instructions that deviate from the default sequence order, as well as loops, which are sequences of instructions carried out repeatedly in succession.
Despite his failures to produce a fully functional computing machine, Babbage remained steadfastly undeterred in pursuing his ideas. Between 1847 and 1849, he drew up designs for a new and improved second version of his difference engine. This time, it calculated decimal numbers up to 30 digits long, performed calculations more quickly, and was simplified to require fewer parts. Still, the British government did not feel it was worth their investment. In the end, the most progress Babbage ever made on a prototype was completing one-seventh of his first design.
During this early era of computing, there were a few notable achievements: The tide-predicting machine, invented by Scotch-Irish mathematician, physicist, and engineer Sir William Thomson in 1872, was considered the first modern analog computer. Four years later, his older brother, James Thomson, came up with a concept for a computer that solved mathematical problems known as differential equations. He called his device an “integrating machine” and in later years, it would serve as the foundation for systems known as differential analyzers. In 1927, American scientist Vannevar Bush started development on the first machine to be named as such and published a description of his new invention in a scientific journal in 1931.
Dawn of Modern Computers
Up until the early 20th century, the evolution of computing was little more than scientists dabbling in the design of machines capable of efficiently performing various kinds of calculations for various purposes. It wasn’t until 1936 that a unified theory on what constitutes a “general-purpose computer” and how it should function was finally put forth. That year, English mathematician Alan Turing published a paper titled, “On Computable Numbers, with an Application to the Entscheidungsproblem,” which outlined how a theoretical device called a “Turing machine” could be used to carry out any conceivable mathematical computation by executing instructions. In theory, the machine would have limitless memory, read data, write results, and store a program of instructions.
While Turing’s computer was an abstract concept, it was a German engineer named Konrad Zuse who would go on to build the world’s first programmable computer. His first attempt at developing an electronic computer, the Z1, was a binary-driven calculator that read instructions from punched 35-millimeter film. The technology was unreliable, however, so he followed it up with the Z2, a similar device that used electromechanical relay circuits. While an improvement, it was in assembling his third model that everything came together for Zuse. Unveiled in 1941, the Z3 was faster, more reliable, and better able to perform complicated calculations. The biggest difference in this third incarnation was that the instructions were stored on an external tape, thus allowing it to function as a fully operational program-controlled system.
What’s perhaps most remarkable is that Zuse did much of his work in isolation. He’d been unaware that the Z3 was “Turing complete,” or in other words, capable of solving any computable mathematical problem—at least in theory. Nor did he have any knowledge of similar projects underway around the same time in other parts of the world.
Among the most notable of these was the IBM-funded Harvard Mark I, which debuted in 1944. Even more promising, though, was the development of electronic systems such as Great Britain’s 1943 computing prototype Colossus and the ENIAC, the first fully-operational electronic general-purpose computer that was put into service at the University of Pennsylvania in 1946.
Out of the ENIAC project came the next big leap in computing technology. John von Neumann, a Hungarian mathematician who’d consulted on the ENIAC project, would lay the groundwork for a stored program computer. Up to this point, computers operated on fixed programs; altering their function, for example from performing calculations to word processing, required the time-consuming process of manually rewiring and restructuring them. (It took several days to reprogram ENIAC.) Turing had proposed that, ideally, having a program stored in memory would allow the computer to modify itself at a much faster pace. Von Neumann was intrigued by the concept and in 1945 drafted a report that provided in detail a feasible architecture for stored program computing.
His published paper would be widely circulated among competing teams of researchers working on various computer designs. In 1948, a group in England introduced the Manchester Small-Scale Experimental Machine, the first computer to run a stored program based on the Von Neumann architecture. Nicknamed “Baby,” the Manchester Machine was an experimental computer that served as the predecessor to the Manchester Mark I. The EDVAC, the computer design for which Von Neumann’s report was originally intended, wasn’t completed until 1949.
Transitioning Toward Transistors
The first modern computers were nothing like the commercial products used by consumers today. They were elaborate hulking contraptions that often took up the space of an entire room. They also sucked enormous amounts of energy and were notoriously buggy. And since these early computers ran on bulky vacuum tubes, scientists hoping to improve processing speeds would either have to find bigger rooms—or come up with an alternative.
Fortunately, that much-needed breakthrough was already in the works. In 1947, a group of scientists at Bell Telephone Laboratories developed a new technology called point-contact transistors. Like vacuum tubes, transistors amplify electrical current and can be used as switches. More importantly, they were much smaller (about the size of an aspirin capsule), more reliable, and they used much less power overall. The co-inventors John Bardeen, Walter Brattain, and William Shockley would eventually be awarded the Nobel Prize in physics in 1956.
While Bardeen and Brattain continued doing research work, Shockley moved to further develop and commercialize transistor technology. One of the first hires at his newly founded company was an electrical engineer named Robert Noyce, who eventually split off and formed his own firm, Fairchild Semiconductor, a division of Fairchild Camera and Instrument. At the time, Noyce was looking into ways to seamlessly combine the transistor and other components into one integrated circuit to eliminate the process in which they had to be pieced together by hand. Thinking along similar lines, Jack Kilby, an engineer at Texas Instruments, ended up filing a patent first. It was Noyce’s design, however, that would be widely adopted.
Where integrated circuits had the most significant impact was in paving the way for the new era of personal computing. Over time, it opened up the possibility of running processes powered by millions of circuits—all on a microchip the size of a postage stamp. In essence, it’s what has enabled the ubiquitous handheld gadgets we use every day, which are, ironically, much more powerful than the earliest computers that took up entire rooms. |
Public Key Cryptography (PKC) is an asymmetric encryption technique that relies on a pair of keys to secure data communication. The public key is the encryption key shared with everyone to receive transactions, and the private key is the decryption key which must be kept secret.
This technique is at the heart of cryptocurrencies and guarantees the integrity and authenticity of cryptographic transactions.
Key points to remember
- Public Key Cryptography (PKC) is an asymmetric encryption technique that relies on public/private key encryption to secure data communication.
- The public key is the encryption key shared with everyone to receive transactions, and the private key is the decryption key which must be kept secret.
- The recipient’s public key is used to encrypt the data; the recipient’s private key is used to decrypt the data.
- This technique is at the heart of cryptocurrencies and guarantees the integrity and authenticity of cryptographic transactions.
- Remember to keep your private keys private and secure at all times.
What is cryptographic key encryption?
The encryption algorithm used in blockchains is the cryptographic key encryption method to encrypt and decrypt data. A cryptographic key is a random string of data, such as numbers and letters, generated to encrypt data and decrypt encrypted data.
Cryptographic encryption can be either symmetric or asymmetric. In symmetric encryption, a single key is used to both encrypt and decrypt data. In asymmetric encryption, two keys are required to encrypt and decrypt messages through a complex mathematical algorithm. The key pairs used in asymmetric cryptography are called public keys and private keys. A public key is used to encrypt messages and is widely shared and publicly displayed, much like your email address or bank account number, so that you can receive cryptocurrency. In contrast, a private key is the decryption key used to decrypt messages and should be kept secret, like your password, to protect your cryptocurrencies.
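As a rough illustration of the difference, here is a minimal Python sketch, assuming the third-party cryptography package is installed (pip install cryptography); it is purely illustrative and not taken from any particular blockchain implementation.

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa

# Symmetric: one shared key both encrypts and decrypts.
shared_key = Fernet.generate_key()
box = Fernet(shared_key)
assert box.decrypt(box.encrypt(b"hello")) == b"hello"

# Asymmetric: a key pair; the public half can be shared freely,
# while the private half must be kept secret.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()  # safe to publish; used to encrypt or verify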
Public key cryptography is used in cryptocurrency transactions to ensure that only the intended recipient can access the message. Integrity is ensured by asymmetric encryption because only a private key can decrypt information encrypted with the corresponding public key. The decryption process verifies that the message received matches the message sent, thereby authenticating that the data has not been altered or tampered with.
History and common methods
In the early days of cryptography, distributing key pairs between two parties was quite difficult. The parties would first exchange a key which was to be kept in absolute secrecy using a face-to-face meeting or a trusted courier, then use the key to share encrypted messages.
Nowadays, the Diffie-Hellman key exchange method allows two parties without prior knowledge of each other to establish a shared secret key together over an insecure channel.
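A hedged sketch of the idea, using the Diffie-Hellman primitives from the Python cryptography package (an assumption about tooling, not part of the original protocol description; parameter generation can take a while):

from cryptography.hazmat.primitives.asymmetric import dh

parameters = dh.generate_parameters(generator=2, key_size=2048)  # public, agreed upon

alice_private = parameters.generate_private_key()
bob_private = parameters.generate_private_key()

# Each side transmits only its public key over the insecure channel...
alice_shared = alice_private.exchange(bob_private.public_key())
bob_shared = bob_private.exchange(alice_private.public_key())

# ...yet both derive the same shared secret without ever sending it.
assert alice_shared == bob_shared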
Some of the most commonly used algorithms for generating public keys are Rivest-Shamir-Adleman (RSA), Elliptic Curve Cryptography (ECC), and Digital Signature Standard (DSS).
The ECC algorithm uses elliptic curves to generate keys and is used for digital signatures and key agreement. The RSA algorithm is the oldest of the three and is used in the transmission of shared keys for symmetric key cryptography. DSS is a Federal Information Processing Standard, published by NIST, that specifies the algorithms that can be used to generate digital signatures.
How does public key cryptography work?
In public-key cryptography, also known as asymmetric encryption, anyone can encrypt messages using a public key, but only the holder of the matching private key can decrypt them. First, the unencrypted data, or plaintext, is fed into a cryptographic algorithm along with the public key. The output, called ciphertext, appears as random data. Finally, anyone with the corresponding private key can decrypt the ciphertext and recover the original plaintext.
For example, Jane (sender) wants to send 1 BTC to Alice (receiver). Jane knows Alice’s public key and uses it to encrypt the transaction. After receiving it, the transaction is decrypted using Alice’s private key. Alice should be the only person who can authorize the transaction, because no one else knows her private key.
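The same encrypt-with-public, decrypt-with-private flow can be sketched in a few lines of Python, assuming the cryptography package; this is a simplified illustration, not how an actual Bitcoin transaction is constructed.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

alice_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
alice_public = alice_private.public_key()  # this is what Jane knows

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = alice_public.encrypt(b"send 1 BTC to Alice", oaep)  # Jane encrypts
plaintext = alice_private.decrypt(ciphertext, oaep)              # only Alice can decrypt
print(plaintext)  # b'send 1 BTC to Alice'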
Public key encryption can also be used to create digital signatures. Here are the steps to generate and verify a digital signature, followed by a short code sketch after the list:
- The sender selects the file to be digitally signed.
- The sender’s computer calculates the unique hash value for the contents of the file.
- The hash value is encrypted with the sender’s private key, creating the digital signature.
- The original file and digital signature are sent to the recipient.
- The recipient uses the associated document application, which identifies that the file has been digitally signed.
- The recipient’s computer decrypts the digital signature using the sender’s public key and verifies that the decrypted hash value matches the hash of the original file.
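A minimal sketch of those signing and verification steps, again assuming the Python cryptography package (the library hashes the message internally, so steps 2 and 3 collapse into a single sign call):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

sender_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sender_public = sender_private.public_key()

document = b"contents of the file to be signed"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

signature = sender_private.sign(document, pss, hashes.SHA256())  # steps 2-3

try:  # step 6: verification fails if the file or signature was altered
    sender_public.verify(signature, document, pss, hashes.SHA256())
    print("signature is valid")
except InvalidSignature:
    print("file was tampered with or signed with a different key")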
Encryption and decryption in this scheme rely on the recipient’s key pair. The public key is used to encrypt messages and is widely shared and publicly displayed. In contrast, the private key is the decryption key used to decrypt messages and must be kept secret.
Differences between public and private keys
The public key is the user’s public address on the blockchain used to receive cryptocurrencies. Anyone can use it to send you digital currencies; however, only you can spend them using your private key. The public key is used to encrypt messages before sending them.
A private key is similar to the key to the front door of your house. The public knows where the house is because the address (the public key) can easily be found, but only you, holding the front-door key (the private key), can enter it. Private keys are used to decrypt messages created with the corresponding public key. The private key must be kept secret; if it is lost, access to your funds cannot be restored.
Risks associated with public key encryption
Although the indisputable advantage of public key cryptography is robust data security, there are still some risks associated with it, such as:
Poor quality key
A poorly designed asymmetric key algorithm, or a key that is simply too short, is a security risk. Thus, the issuance, renewal, and revocation of encryption keys must be handled with the utmost care.
Loss of private key
As mentioned earlier, private keys cannot be shared publicly and must remain private and secure. This is because once the private key is lost, there is no way to access any data or funds stored in a crypto wallet.
Public key encryption is also vulnerable to a man-in-the-middle (MitM) attack, in which the communication of public keys is intercepted by a third party (the “man in the middle”) and modified to substitute different public keys in their place.
The main safeguard when establishing a secure connection with servers is to first verify their digital certificates.
Secure Sockets Layer and Transport Layer Security (SSL/TLS) connections use public key encryption to enable the Secure Hypertext Transfer Protocol (HTTPS) and create a secure connection between server and client. The communication session is first established using asymmetric encryption to verify the identity of both parties and to exchange a shared session key that enables symmetric encryption.
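The hybrid pattern described above can be sketched as follows, assuming the Python cryptography package; real TLS is far more involved, so treat this only as an illustration of wrapping a symmetric session key with an asymmetric key.

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

server_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_public = server_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Client side: create a session key, encrypt it for the server,
# then encrypt the actual data symmetrically.
session_key = Fernet.generate_key()
wrapped_key = server_public.encrypt(session_key, oaep)
ciphertext = Fernet(session_key).encrypt(b"GET /index.html")

# Server side: unwrap the session key, then decrypt the data symmetrically.
recovered_key = server_private.decrypt(wrapped_key, oaep)
print(Fernet(recovered_key).decrypt(ciphertext))  # b'GET /index.html'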
Cryptography is essential for securing cryptocurrency transactions and ensuring that your data has not been tampered with. This is why private and public keys are essential to authorize these transactions.
Remember to keep your private keys private and secure at all times. Write your recovery phrase on paper and store it in a fireproof safe. You can also go the extra mile by having your private keys engraved on a metal plate to protect them from high temperatures, humidity and harsh chemicals! |
Summary: The most Pythonic way to define a function in a single line is to (1) create an anonymous lambda function and (2) assign the function object to a variable name. You can then call the function by name just like any other regularly-defined function. For example, the statement f = lambda x: x+1 creates a function f that increments the argument x by one and returns the result.
Problem: How to define a function in a single line of Python code? This article explores this mission-critical question in all detail!
Example: Say, you want to write the following function in a single line of code:
def f(x):
    return str(x * 3) + '!'

print(f(1))         # 3!
print(f('python'))  # pythonpythonpython!
Let’s get a quick overview of how to accomplish this first:
Exercise: Change the one-liner functions to return the uppercase version of the generated string using the
string.upper() function. Then run the code to see if your output is correct!
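One possible solution to the exercise, shown as a sketch for the single-line def variant so you can check your own output (it simply chains the string method .upper() onto the generated string):

def f_upper(x): return (str(x * 3) + '!').upper()

print(f_upper('python'))  # PYTHONPYTHONPYTHON!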
Method 1: Single-Line Definition
The first and most straightforward way of defining a function in a single line is to just remove the line break:
def f1(x): return str(x * 3) + '!'

print(f1(1))
print(f1('python'))
The function definition is identical to the original one with one difference: you removed the new line and the indentation from the definition. While this works for functions with single-line function bodies, you can easily extend it by using the semicolon as a separator:
>>> def fxx(): x=1; x=2; return x
>>> fxx()
2
Sure, readability gets hurt if you’re doing this but you should still know the syntax in case you see code like this in a practical code project (you will)!
Method 2: Lambda Function
A lambda function is an anonymous function in Python. It starts with the keyword
lambda, followed by a comma-separated list of zero or more arguments, followed by the colon and the return expression. For example,
lambda x, y, z: x+y+z would calculate the sum of the three argument values
Here’s the most Pythonic way to write a function in a single line of code:
f2 = lambda x: str(x * 3) + '!'

print(f2(1))
print(f2('python'))
You create a lambda function and assign the new function object to the variable
f2. This variable can now be used like any other function name defined in a regular function definition.
Method 3: exec()
Now, let’s get as unpythonic as we can get, shall we? The
exec() function takes one string as an argument. It then executes the code defined in the string argument. In combination with the newline character '\n', this enables you to run complicated multi-line code snippets from a single line. Hackers often use this technique to cram malicious scripts into a seemingly harmless single line of Python code. Powerful, I know.
# Method 3: exec()
f3 = "def f(x):\n return str(x * 3) + '!'"
exec(f3 + '\nprint(f(1))')
exec(f3 + "\nprint(f('python'))")
f3 contains a two-line function definition of our original function
f. You then concatenate this string with a new line to print the result of running this newly created function in your script by passing arbitrary arguments into it.
Is it possible to write the if-then-else statement in a single line of code?
Yes, you can write most if statements in a single line of Python using any of the following methods:
- Write the if statement without else branch as a Python one-liner:
if 42 in range(100): print("42").
- If you want to set a variable, use the ternary operator:
x = "Alice" if "Jon" in "My name is Jonas" else "Bob".
- If you want to conditionally execute a function, still use the ternary operator:
print("42") if 42 in range(100) else print("21").
Python One-Liners Book: Master the Single Line First!
Python programmers will improve their computer science skills with these useful one-liners.
Python One-Liners will teach you how to read and write “one-liners”: concise statements of useful functionality packed into a single line of code. You’ll learn how to systematically unpack and understand any line of Python code, and write eloquent, powerfully compressed Python like an expert.
The book’s five chapters cover (1) tips and tricks, (2) regular expressions, (3) machine learning, (4) core data science topics, and (5) useful algorithms.
Detailed explanations of one-liners introduce key computer science concepts and boost your coding and analytical skills. You’ll learn about advanced Python features such as list comprehension, slicing, lambda functions, regular expressions, map and reduce functions, and slice assignments.
You’ll also learn how to:
- Leverage data structures to solve real-world problems, like using Boolean indexing to find cities with above-average pollution
- Use NumPy basics such as array, shape, axis, type, broadcasting, advanced indexing, slicing, sorting, searching, aggregating, and statistics
- Calculate basic statistics of multidimensional data arrays and the K-Means algorithms for unsupervised learning
- Create more advanced regular expressions using grouping and named groups, negative lookaheads, escaped characters, whitespaces, character sets (and negative characters sets), and greedy/nongreedy operators
- Understand a wide range of computer science topics, including anagrams, palindromes, supersets, permutations, factorials, prime numbers, Fibonacci numbers, obfuscation, searching, and algorithmic sorting
By the end of the book, you’ll know how to write Python at its most refined, and create concise, beautiful pieces of “Python art” in merely a single line.
While working as a researcher in distributed systems, Dr. Christian Mayer found his love for teaching computer science students.
To help students reach higher levels of Python success, he founded the programming education website Finxter.com that has taught exponential skills to millions of coders worldwide. He’s the author of the best-selling programming books Python One-Liners (NoStarch 2020), The Art of Clean Code (NoStarch 2022), and The Book of Dash (NoStarch 2022). Chris also coauthored the Coffee Break Python series of self-published books. He’s a computer science enthusiast, freelancer, and owner of one of the top 10 largest Python blogs worldwide.
His passions are writing, reading, and coding. But his greatest passion is to serve aspiring coders through Finxter and help them to boost their skills. You can join his free email academy here. |
Exoplanets – planets outside our solar system – are probably more hospitable to life than most people think, says a team of astrophysicists from the University of Toronto. Many of them probably have liquid water.
Lead author Jérémy Leconte, a postdoctoral fellow at the Canadian Institute for Theoretical Astrophysics (CITA) at the University of Toronto, said:
“Planets with potential oceans could have a climate that is much more similar to Earth’s than previously expected.”
The study was published in the journal Science.
Astronomers have long believed that many exoplanets orbit their stars with one hemisphere permanently facing their sun, while the other side exists in permanent darkness. If this were the case, those exoplanets would rotate in sync with their stars.
Mr. Leconte says exoplanets’ atmospheres could be making them spin faster. (Image: Leconte’s homepage)
Leconte’s study suggests that exoplanets spin very much like our Earth does, and exhibit day-night cycles.
“If we are correct, there is no permanent, cold night side on exoplanets causing water to remain trapped in a gigantic ice sheet. Whether this new understanding of exoplanets’ climate increases the ability of these planets to develop life remains an open question.”
The scientists reached their conclusions through a 3-dimensional climate model they developed to determine the effect of a given planet’s atmosphere on its rotation speed, which would lead to changes in its climate.
“Atmosphere is a key factor affecting a planet’s spin, the impact of which can be of enough significance to overcome synchronous rotation and put a planet in a day-night cycle.”
Many astronomers believe that several exoplanets should be able to maintain an atmosphere as plentiful as that of Earth, even though we have yet to find observational evidence.
In the case of our planet, with its relatively thin atmosphere, most sunlight reaches the surface of Earth, maximizing the heating effect throughout the atmosphere, which leads to a more moderate climate across the whole planet.
The day-night cycle as well as the difference between the equator and the poles, both of which create differences in the temperature at the surface, drive winds that redistribute the mass of the atmosphere.
The impact is such that it overcomes the effect of tidal friction exerted by a sun (star) on whatever satellite is orbiting it, much like our planet does on the Moon.
“The Moon always shows us the same side, because the tides raised by Earth create a friction that alters its spin.”
“The Moon is in synchronous rotation with Earth because the time it takes to spin once on its axis equals the time it takes for it to orbit around Earth. That is why there is a dark side of the moon. The tidal theory, however, neglects the effects of an atmosphere.”
The team in Toronto say that many known terrestrial exoplanets should not be in a state of synchronous rotation, as initially believed.
While their models show that these exoplanets would have day-night cycles, much like Earth does, each day/night might last from a few weeks to several months.
Venus, the planet in our Solar System with an atmosphere that is closest to the Sun, does not spin synchronously. The Earth doesn’t either, but we are too far away for the tidal friction to be significant. Venus, in fact, rotates backward – the Sun sets in the East and rises in the West. Tidal friction, however, does affect Venus’ spin (it rotates once every 243 days).
“Planets with potential oceans could thus have a climate that is much more similar to the Earth’s than previously expected. When a diurnal cycle is present, there is no permanent, cold night side where water can remain trapped in a gigantic ice sheet. Does this increase the ability of these planets to develop life as we know it? This is still an open question.”
The study was supported by grants from the Natural Sciences and Engineering Research Council of Canada.
Citation: “Asynchronous rotation of Earth-mass planets in the habitable zone of lower-mass stars,” Jérémy Leconte, Hanbo Wu, Kristen Menou, and Norman Murray. Science 1258686. Published online 15 January, 2015. [DOI:10.1126/science.1258686]. |
The Nullification Crisis was a sectional crisis during the presidency of Andrew Jackson created by South Carolina's 1832 Ordinance of Nullification. This ordinance declared by the power of the State that the federal Tariffs of 1828 and 1832 were unconstitutional and therefore null and void within the sovereign boundaries of South Carolina. The controversial and highly protective Tariff of 1828 (known to its detractors as the "Tariff of Abominations") was enacted into law during the presidency of John Quincy Adams. The tariff was opposed in the South and parts of New England. Its opponents expected that the election of Jackson as President would result in the tariff being significantly reduced.
The nation had suffered an economic downturn throughout the 1820s, and South Carolina was particularly affected. Many South Carolina politicians blamed the change in fortunes on the national tariff policy that developed after the War of 1812 to promote American manufacturing over its European competition. By 1828 South Carolina state politics increasingly organized around the tariff issue. When the Jackson administration failed to take any actions to address their concerns, the most radical faction in the state began to advocate that the state itself declare the tariff null and void within South Carolina. In Washington, an open split on the issue occurred between Jackson and Vice President John C. Calhoun, the most effective proponent of the constitutional theory of state nullification.
On July 14, 1832, after Calhoun had resigned the Vice Presidency in order to run for the Senate where he could more effectively defend nullification, Jackson signed into law the Tariff of 1832. This compromise tariff received the support of most northerners and half of the southerners in Congress. The reductions were too little for South Carolina, and in November 1832 a state convention declared that the tariffs of both 1828 and 1832 were unconstitutional and unenforceable in South Carolina after February 1, 1833. Military preparations to resist anticipated federal enforcement were initiated by the state. In late February both a Force Bill, authorizing the President to use military forces against South Carolina, and a new negotiated tariff satisfactory to South Carolina were passed by Congress. The South Carolina convention reconvened and repealed its Nullification Ordinance on March 11, 1833.
The crisis was over, and both sides could find reasons to claim victory. The tariff rates were reduced and stayed low to the satisfaction of the South, but the states’ rights doctrine of nullification remained controversial. By the 1850s the issues of the expansion of slavery into the western territories and the threat of the Slave Power became the central issues in the nation.
Since the Nullification Crisis, the doctrine of states' rights has been asserted again by opponents of the Fugitive Slave Act of 1850, proponents of California's Specific Contract Act of 1863 (which nullified the Legal Tender Act of 1862), opponents of Federal acts prohibiting the sale and possession of marijuana in the first decade of the 21st century, and opponents of implementation of laws and regulations pertaining to firearms from the late 1900s up to 2013.
Background (1787-1816)
The historian Richard E. Ellis wrote:
“By creating a national government with the authority to act directly upon individuals, by denying to the state many of the prerogatives that they formerly had, and by leaving open to the central government the possibility of claiming for itself many powers not explicitly assigned to it, the Constitution and Bill of Rights as finally ratified substantially increased the strength of the central government at the expense of the states.”
The extent of this change and the problem of the actual distribution of powers between state and the federal governments would be a matter of political and ideological discussion up to the Civil War and beyond. In the early 1790s the debate centered on Alexander Hamilton's nationalistic financial program versus Jefferson's democratic and agrarian program, a conflict that led to the formation of two opposing national political parties. Later in the decade the Alien and Sedition Acts led to the states' rights position being articulated in the Kentucky and Virginia Resolutions. The Kentucky Resolutions, written by Thomas Jefferson, contained the following, which has often been cited as a justification for both nullification and secession:
“… that in cases of an abuse of the delegated powers, the members of the general government, being chosen by the people, a change by the people would be the constitutional remedy; but, where powers are assumed which have not been delegated, a nullification of the act is the rightful remedy: that every State has a natural right in cases not within the compact, (casus non fœderis) to nullify of their own authority all assumptions of power by others within their limits: that without this right, they would be under the dominion, absolute and unlimited, of whosoever might exercise this right of judgment for them: that nevertheless, this commonwealth, from motives of regard and respect for its co-States, has wished to communicate with them on the subject: that with them alone it is proper to communicate, they alone being parties to the compact, and solely authorized to judge in the last resort of the powers exercised under it… .”
The Virginia Resolutions, written by James Madison, hold a similar argument:
“The resolutions, having taken this view of the Federal compact, proceed to infer that, in cases of a deliberate, palpable, and dangerous exercise of other powers, not granted by the said compact, the States, who are parties thereto, have the right, and are in duty bound to interpose to arrest the evil, and for maintaining, within their respective limits, the authorities, rights, and liberties appertaining to them. ...The Constitution of the United States was formed by the sanction of the States, given by each in its sovereign capacity. It adds to the stability and dignity, as well as to the authority of the Constitution, that it rests on this solid foundation. The States, then, being parties to the constitutional compact, and in their sovereign capacity, it follows of necessity that there can be no tribunal above their authority to decide, in the last resort, whether the compact made by them be violated; and, consequently, as parties to it, they must themselves decide, in the last resort, such questions as may be of sufficient magnitude to require their interposition.”
Historians differ over the extent to which either resolution advocated the doctrine of nullification. Historian Lance Banning wrote, “The legislators of Kentucky (or more likely, John Breckinridge, the Kentucky legislator who sponsored the resolution) deleted Jefferson's suggestion that the rightful remedy for federal usurpations was a "nullification" of such acts by each state acting on its own to prevent their operation within its respective borders. Rather than suggesting individual, although concerted, measures of this sort, Kentucky was content to ask its sisters to unite in declarations that the acts were "void and of no force", and in "requesting their repeal" at the succeeding session of the Congress.” The key sentence, and the word "nullification", was used in supplementary Resolutions passed by Kentucky in 1799.
Madison's judgment is clearer. He was chairman of a committee of the Virginia Legislature which issued a book-length Report on the Resolutions of 1798, published in 1800 after they had been decried by several states. This asserted that the state did not claim legal force. "The declarations in such cases are expressions of opinion, unaccompanied by other effect than what they may produce upon opinion, by exciting reflection. The opinions of the judiciary, on the other hand, are carried into immediate effect by force." If the states collectively agreed in their declarations, there were several methods by which it might prevail, from persuading Congress to repeal the unconstitutional law, to calling a constitutional convention, as two-thirds of the states may. When, at the time of the Nullification Crisis, he was presented with the Kentucky resolutions of 1799, he argued that the resolutions themselves were not Jefferson's words, and that Jefferson meant this not as a constitutional but as a revolutionary right.
Madison biographer Ralph Ketcham wrote:
“Though Madison agreed entirely with the specific condemnation of the Alien and Sedition Acts, with the concept of the limited delegated power of the general government, and even with the proposition that laws contrary to the Constitution were illegal, he drew back from the declaration that each state legislature had the power to act within its borders against the authority of the general government to oppose laws the legislature deemed unconstitutional.”
Historian Sean Wilentz explains the widespread opposition to these resolutions:
“Several states followed Maryland's House of Delegates in rejecting the idea that any state could, by legislative action, even claim that a federal law was unconstitutional, and suggested that any effort to do so was treasonous. A few northern states, including Massachusetts, denied the powers claimed by Kentucky and Virginia and insisted that the Sedition law was perfectly constitutional .... Ten state legislatures with heavy Federalist majorities from around the country censured Kentucky and Virginia for usurping powers that supposedly belonged to the federal judiciary. Northern Republicans supported the resolutions' objections to the alien and sedition acts, but opposed the idea of state review of federal laws. Southern Republicans outside Virginia and Kentucky were eloquently silent about the matter, and no southern legislature heeded the call to battle.”
The election of 1800 was a turning point in national politics as the Federalists were replaced by the Democratic-Republican Party led by Thomas Jefferson and James Madison, the authors of the Kentucky and Virginia Resolutions. But, the four presidential terms spanning the period from 1800 to 1817 “did little to advance the cause of states’ rights and much to weaken it.” Over Jefferson’s opposition, the power of the federal judiciary, led by Federalist Chief Justice John Marshall, increased. Jefferson expanded federal powers with the acquisition of the Louisiana Territory and his use of a national embargo designed to prevent involvement in a European war. Madison in 1809 used national troops to enforce a Supreme Court decision in Pennsylvania, appointed an “extreme nationalist” in Joseph Story to the Supreme Court, signed the bill creating the Second Bank of the United States, and called for a constitutional amendment to promote internal improvements.
Opposition to the War of 1812 was centered in New England. Delegates to a convention in Hartford, Connecticut met in December 1814 to consider a New England response to Madison’s war policy. The debate allowed many radicals to argue the cause of states’ rights and state sovereignty. In the end, moderate voices dominated and the final product was not secession or nullification, but a series of proposed constitutional amendments. Identifying the South’s domination of the government as the cause of much of their problems, the proposed amendments included “the repeal of the three-fifths clause, a requirement that two-thirds of both houses of Congress agree before any new state could be admitted to the Union, limits on the length of embargoes, and the outlawing of the election of a president from the same state to successive terms, clearly aimed at the Virginians.” The war was over before the proposals were submitted to President Madison.
After the conclusion of the War of 1812 Sean Wilentz notes:
“Madison’s speech [his 1815 annual message to Congress] affirmed that the war had reinforced the evolution of mainstream Republicanism, moving it further away from its original and localist assumptions. The war’s immense strain on the treasury led to new calls from nationalist Republicans for a national bank. The difficulties in moving and supplying troops exposed the wretchedness of the country’s transportation links, and the need for extensive new roads and canals. A boom in American manufacturing during the prolonged cessation of trade with Britain created an entirely new class of enterprisers, most of them tied politically to the Republicans, who might not survive without tariff protection. More broadly, the war reinforced feelings of national identity and connection.”
This spirit of nationalism was linked to the tremendous growth and economic prosperity of this post war era. However in 1819 the nation suffered its first financial panic and the 1820s turned out to be a decade of political turmoil that again led to fierce debates over competing views of the exact nature of American federalism. The “extreme democratic and agrarian rhetoric” that had been so effective in 1798 led to renewed attacks on the “numerous market-oriented enterprises, particularly banks, corporations, creditors, and absentee landholders”.
Tariffs (1816-1828)
The Tariff of 1816 had some protective features, and it received support throughout the nation, including that of John C. Calhoun and fellow South Carolinian William Lowndes. The first explicitly protective tariff linked to a specific program of internal improvements was the Tariff of 1824. Sponsored by Henry Clay, this tariff provided a general level of protection at 35% ad valorem (compared to 25% with the 1816 act) and hiked duties on iron, woolens, cotton, hemp, and wool and cotton bagging. The bill barely passed the federal House of Representatives by a vote of 107 to 102. The Middle states and Northwest supported the bill, the South and Southwest opposed it, and New England split its vote with a majority opposing it. In the Senate the bill, with the support of Tennessee Senator Andrew Jackson, passed by four votes, and President James Monroe, the Virginia heir to the Jefferson-Madison control of the White House, signed the bill on March 25, 1824. Daniel Webster of Massachusetts led the New England opposition to this tariff.
Protest against the prospect and the constitutionality of higher tariffs began in 1826 and 1827 with William Branch Giles, who had the Virginia legislature pass resolutions denying the power of Congress to pass protective tariffs, citing the Virginia Resolutions of 1798 and James Madison's 1800 defense of them. Madison denied both the appeal to nullification and the unconstitutionality; he had always held that the power to regulate commerce included protection. Jefferson had, at the end of his life, written against protective tariffs.
The Tariff of 1828 was largely the work of Martin Van Buren (although Silas Wright Jr. of New York prepared the main provisions) and was partly a political ploy to elect Andrew Jackson president. Van Buren calculated that the South would vote for Jackson regardless of the issues so he ignored their interests in drafting the bill. New England, he thought, was just as likely to support the incumbent John Quincy Adams, so the bill levied heavy taxes on raw materials consumed by New England such as hemp, flax, molasses, iron and sail duck. With an additional tariff on iron to satisfy Pennsylvania interests, Van Buren expected the tariff to help deliver Pennsylvania, New York, Missouri, Ohio, and Kentucky to Jackson. Over opposition from the South and some from New England, the tariff was passed with the full support of many Jackson supporters in Congress and signed by President Adams in early 1828.
As expected, Jackson and his running mate John Calhoun carried the entire South with overwhelming numbers in all the states but Louisiana where Adams drew 47% of the vote in a losing effort. However many Southerners became dissatisfied as Jackson, in his first two annual messages to Congress, failed to launch a strong attack on the tariff. Historian William J. Cooper Jr. writes:
“The most doctrinaire ideologues of the Old Republican group [supporters of the Jefferson and Madison position in the late 1790s] first found Jackson wanting. These purists identified the tariff of 1828, the hated Tariff of Abominations, as the most heinous manifestation of the nationalist policy they abhorred. That protective tariff violated their constitutional theory, for, as they interpreted the document, it gave no permission for a protective tariff. Moreover, they saw protection as benefiting the North and hurting the South.”
South Carolina Background (1819-1828)
South Carolina had been adversely affected by the national economic decline of the 1820s. During this decade, the population decreased by 56,000 whites and 30,000 slaves, out of a total free and slave population of 580,000. The whites left for better places; they took slaves with them or sold them to traders moving slaves to the Deep South for sale.
Historian Richard E. Ellis describes the situation:
“Throughout the colonial and early national periods, South Carolina had sustained substantial economic growth and prosperity. This had created an extremely wealthy and extravagant low country aristocracy whose fortunes were based first on the cultivation of rice and indigo, and then on cotton. Then the state was devastated by the Panic of 1819. The depression that followed was more severe than in almost any other state of the Union. Moreover, competition from the newer cotton producing areas along the Gulf Coast, blessed with fertile lands that produced a higher crop-yield per acre, made recovery painfully slow. To make matters worse, in large areas of South Carolina slaves vastly outnumbered whites, and there existed both considerable fear of slave rebellion and a growing sensitivity to even the smallest criticism of “the peculiar institution.””
State leaders, led by states’ rights advocates like William Smith and Thomas Cooper, blamed most of the state’s economic problems on the Tariff of 1816 and national internal improvement projects. Soil erosion and competition from the New Southwest were also very significant reasons for the state’s declining fortunes. George McDuffie was a particularly effective speaker for the anti-tariff forces, and he popularized the Forty Bale theory. McDuffie argued that the 40% tariff on cotton finished goods meant that “the manufacturer actually invades your barns, and plunders you of 40 out of every 100 bales that you produce.” Mathematically incorrect, this argument still struck a nerve with his constituency. Nationalists such as Calhoun were forced by the increasing power of such leaders to retreat from their previous positions and adopt, in the words of Ellis, "an even more extreme version of the states' rights doctrine" in order to maintain political significance within South Carolina.
South Carolina’s first effort at nullification occurred in 1822. Its planters believed that free black sailors had assisted Denmark Vesey in his planned slave rebellion. South Carolina passed a Negro Seamen Act, which required that all black foreign seamen be imprisoned while their ships were docked in Charleston. Britain strongly objected, especially as it was recruiting more Africans as sailors. What was worse, if the captains did not pay the fees to cover the cost of jailing, South Carolina would sell the sailors into slavery. Other southern states also passed laws against free black sailors.
Supreme Court Justice William Johnson, in his capacity as a circuit judge, declared the South Carolina law as unconstitutional since it violated United States treaties with Great Britain. The South Carolina Senate announced that the judge’s ruling was invalid and that the Act would be enforced. The federal government did not attempt to carry out Johnson's decision.
Route to nullification in South Carolina (1828-1832)
Historian Avery Craven argues that, for the most part, the debate from 1828-1832 was a local South Carolina affair. The state's leaders were not united and the sides were roughly equal. The western part of the state and a faction in Charleston, led by Joel Poinsett, would remain loyal to Jackson almost to the end. Only in small part was the conflict between “a National North against a States’-right South”.
After the final vote on the Tariff of 1828, the South Carolina congressional delegation held two caucuses, the second at the home of Senator Robert Y. Hayne. They were rebuffed in their efforts to coordinate a united Southern response and focused on how their state representatives would react. While many agreed with George McDuffie that tariff policy could lead to secession at some future date, they all agreed that as much as possible, the issue should be kept out of the upcoming presidential election. Calhoun, while not at this meeting, served as a moderating influence. He felt that the first step in reducing the tariff was to defeat Adams and his supporters in the upcoming election. William C. Preston, on behalf of the South Carolina legislature, asked Calhoun to prepare a report on the tariff situation. Calhoun readily accepted this challenge and in a few weeks time had a 35,000-word draft of what would become his “Exposition and Protest”.
Calhoun’s “Exposition” was completed late in 1828. He argued that the tariff of 1828 was unconstitutional because it favored manufacturing over commerce and agriculture. He thought that the tariff power could only be used to generate revenue, not to provide protection from foreign competition for American industries. He believed that the people of a state or several states, acting in a democratically elected convention, had the retained power to veto any act of the federal government which violated the Constitution. This veto, the core of the doctrine of nullification, was explained by Calhoun in the Exposition:
“If it be conceded, as it must be by every one who is the least conversant with our institutions, that the sovereign powers delegated are divided between the General and State Governments, and that the latter hold their portion by the same tenure as the former, it would seem impossible to deny to the States the right of deciding on the infractions of their powers, and the proper remedy to be applied for their correction. The right of judging, in such cases, is an essential attribute of sovereignty, of which the States cannot be divested without losing their sovereignty itself, and being reduced to a subordinate corporate condition. In fact, to divide power, and to give to one of the parties the exclusive right of judging of the portion allotted to each, is, in reality, not to divide it at all; and to reserve such exclusive right to the General Government (it matters not by what department to be exercised), is to convert it, in fact, into a great consolidated government, with unlimited powers, and to divest the States, in reality, of all their rights. It is impossible to understand the force of terms, and to deny so plain a conclusion.”
The report also detailed the specific southern grievances over the tariff that led to the current dissatisfaction. Calhoun feared that “hotheads” such as McDuffie might force the legislature into taking some drastic action against the federal government; historian John Niven describes Calhoun’s political purpose in the document:
“All through that hot and humid summer, emotions among the vociferous planter population had been worked up to a near-frenzy of excitement. The whole tenor of the argument built up in the “Exposition” was aimed to present the case in a cool, considered manner that would dampen any drastic moves yet would set in motion the machinery for repeal of the tariff act. It would also warn other sections of the Union against any future legislation that an increasingly self-conscious South might consider punitive, especially on the subject of slavery.”
The report was submitted to the state legislature which had 5,000 copies printed and distributed. Calhoun, who still had designs on succeeding Jackson as president, was not identified as the author but word on this soon leaked out. The legislature took no action on the report at that time.
In the summer of 1828 Robert Barnwell Rhett, soon to be considered the most radical of the South Carolinians, entered the fray over the tariff. As a state representative, Rhett called for the governor to convene a special session of the legislature. An outstanding orator, Rhett appealed to his constituents to resist the majority in Congress. Rhett addressed the danger of doing nothing:
“But if you are doubtful of yourselves – if you are not prepared to follow up your principles wherever they may lead, to their very last consequence – if you love life better than honor, -- prefer ease to perilous liberty and glory; awake not! Stir not! -- Impotent resistance will add vengeance to your ruin. Live in smiling peace with your insatiable Oppressors, and die with the noble consolation that your submissive patience will survive triumphant your beggary and despair.”
Rhett’s rhetoric about revolution and war was too radical in the summer of 1828 but, with the election of Jackson assured, James Hamilton Jr. on October 28 in the Colleton County Courthouse in Walterborough “launched the formal nullification campaign.” Renouncing his former nationalism, Hamilton warned the people that, “Your task-master must soon become a tyrant, from the very abuses and corruption of the system, without the bowels of compassion, or a jot of human sympathy.” He called for implementation of Mr. Jefferson’s “rightful remedy” of nullification. Hamilton sent a copy of the speech directly to President-elect Jackson. But, despite a statewide campaign by Hamilton and McDuffie, a proposal to call a nullification convention in 1829 was defeated by the South Carolina legislature meeting at the end of 1828. State leaders such as Calhoun, Hayne, Smith, and William Drayton were all able to remain publicly non-committal or opposed to nullification for the next couple of years.
The division in the state between radicals and conservatives continued throughout 1829 and 1830. After the failure of a state project to arrange financing of a railroad within the state to promote internal trade, the state petitioned Congress to invest $250,000 in the company trying to build the railroad. After Congress tabled the measure, the debate in South Carolina resumed between those who wanted state investment and those who wanted to work to get Congress' support. The debate demonstrated that a significant minority of the state did have an interest in Clay’s American System. The effect of the Webster-Hayne debate was to energize the radicals, and some moderates started to move in their direction.
The state election campaign of 1830 focused on the tariff issue and the need for a state convention. On the defensive, radicals underplayed the intent of the convention as pro-nullification. When voters were presented with races where an unpledged convention was the issue, the radicals generally won. When conservatives effectively characterized the race as being about nullification, the radicals lost. The October election was narrowly carried by the radicals, although the blurring of the issues left them without any specific mandate. In South Carolina, the governor was selected by the legislature, which selected James Hamilton, the leader of the radical movement, as governor and fellow radical Henry L. Pinckney as speaker of the South Carolina House. For the open Senate seat, the legislature chose the more radical Stephen Miller over William Smith.
With radicals in leading positions, the nullification movement began to gain momentum in 1831. State politics became sharply divided along Nullifier and Unionist lines. Still, the margin in the legislature fell short of the two-thirds majority needed for a convention. Many of the radicals felt that convincing Calhoun of the futility of his plans for the presidency would lead him into their ranks. Calhoun meanwhile had concluded that Martin Van Buren was clearly establishing himself as Jackson’s heir apparent. At Hamilton’s prompting, George McDuffie made a three-hour speech in Charleston demanding nullification of the tariff at any cost. In the state, the success of McDuffie’s speech seemed to open up the possibilities of both military confrontation with the federal government and civil war within the state. With silence no longer an acceptable alternative, Calhoun looked for the opportunity to take control of the anti-tariff faction in the state; by June he was preparing what would be known as his Fort Hill Address.
Published on July 26, 1831, the address repeated and expanded the positions Calhoun had made in the “Exposition”. While the logic of much of the speech was consistent with the states’ rights position of most Jacksonians, and even Daniel Webster remarked that it “was the ablest and most plausible, and therefore the most dangerous vindication of that particular form of Revolution”, the speech still placed Calhoun clearly in the nullifier camp. Within South Carolina, his gestures at moderation in the speech were drowned out as planters received word of the Nat Turner insurrection in Virginia. Calhoun was not alone in finding a connection between the abolition movement and the sectional aspects of the tariff issue. It confirmed for Calhoun what he had written in a September 11, 1830 letter:
“I consider the tariff act as the occasion, rather than the real cause of the present unhappy state of things. The truth can no longer be disguised, that the peculiar domestick [sic] institution of the Southern States and the consequent direction which that and her soil have given to her industry, has placed them in regard to taxation and appropriations in opposite relation to the majority of the Union, against the danger of which, if there be no protective power in the reserved rights of the states they must in the end be forced to rebel, or, submit to have their paramount interests sacrificed, their domestic institutions subordinated by Colonization and other schemes, and themselves and children reduced to wretchedness.”
From this point, the nullifiers accelerated their organization and rhetoric. In July 1831 the States Rights and Free Trade Association was formed in Charleston and expanded throughout the state. Unlike state political organizations in the past, which were led by the South Carolina planter aristocracy, this group appealed to all segments of the population, including non-slaveholder farmers, small slaveholders, and the Charleston non-agricultural class. Governor Hamilton was instrumental in seeing that the association, which was both a political and a social organization, expanded throughout the state. In the winter of 1831 and spring of 1832, the governor held conventions and rallies throughout the state to mobilize the nullification movement. The conservatives were unable to match the radicals in either organization or leadership.
The state elections of 1832 were “charged with tension and bespattered with violence,” and “polite debates often degenerated into frontier brawls.” Unlike the previous year’s election, the choice was clear between nullifiers and unionists. The nullifiers won and on October 20, 1832, Governor Hamilton called the legislature into a special session to consider a convention. The legislative vote was 96-25 in the House and 31-13 in the Senate.
In November 1832 the Nullification Convention met. The convention declared that the tariffs of 1828 and 1832 were unconstitutional and unenforceable within the state of South Carolina after February 1, 1833. They said that attempts to use force to collect the taxes would lead to the state’s secession. Robert Hayne, who followed Hamilton as governor in 1833, established a 2,000-man group of mounted minutemen and 25,000 infantry who would march to Charleston in the event of a military conflict. These troops were to be armed with $100,000 in arms purchased in the North.
The enabling legislation passed by the legislature was carefully constructed to avoid clashes if at all possible and to create an aura of legality in the process. To avoid conflicts with Unionists, it allowed importers to pay the tariff if they so desired. Other merchants could pay the tariff by obtaining a paper tariff bond from the customs officer. They would then refuse to pay the bond when due, and if the customs official seized the goods, the merchant would file for a writ of replevin to recover the goods in state court. Customs officials who refused to return the goods (by placing them under the protection of federal troops) would be civilly liable for twice the value of the goods. To insure that state officials and judges supported the law, a "test oath" would be required for all new state officials, binding them to support the ordinance of nullification.
Governor Hayne in his inaugural address announced South Carolina's position:
“If the sacred soil of Carolina should be polluted by the footsteps of an invader, or be stained with the blood of her citizens, shed in defense, I trust in Almighty God that no son of hers … who has been nourished at her bosom … will be found raising a parricidal arm against our common mother. And even should she stand ALONE in this great struggle for constitutional liberty … that there will not be found, in the wider limits of the state, one recreant son who will not fly to the rescue, and be ready to lay down his life in her defense.”
Washington, D.C. (1828-1832)
When President Jackson took office in March 1829 he was well aware of the turmoil created by the “Tariff of Abominations”. While he may have abandoned some of his earlier beliefs that had allowed him to vote for the Tariff of 1824, he still felt protectionism was justified for products essential to military preparedness and did not believe that the current tariff should be reduced until the national debt was fully paid off. He addressed the issue in his inaugural address and his first three messages to Congress, but offered no specific relief. In December 1831, with the proponents of nullification in South Carolina gaining momentum, Jackson was recommending “the exercise of that spirit of concession and conciliation which has distinguished the friends of our Union in all great emergencies.” However on the constitutional issue of nullification, despite his strong beliefs in states’ rights, Jackson did not waver.
Calhoun’s “Exposition and Protest” did start a national debate over the doctrine of nullification. The leading proponents of the nationalistic view included Daniel Webster, Supreme Court Justice Joseph Story, Judge William Alexander Duer, John Quincy Adams, Nathaniel Chipman, and Nathan Dane. These people rejected the compact theory advanced by Calhoun, claiming that the Constitution was the product of the people, not the states. According to the nationalist position, the Supreme Court had the final say on the constitutionality of legislation, the national union was perpetual and had supreme authority over individual states. The nullifiers, on the other hand, asserted that the central government was not to be the ultimate arbiter of its own power, and that the states, as the contracting entities, could judge for themselves what was or was not constitutional. While Calhoun’s “Exposition” claimed that nullification was based on the reasoning behind the Kentucky and Virginia Resolutions, an aging James Madison in an August 28, 1830 letter to Edward Everett, intended for publication, disagreed. Madison wrote, denying that any individual state could alter the compact:
“Can more be necessary to demonstrate the inadmissibility of such a doctrine than that it puts it in the power of the smallest fraction over 1/4 of the U. S. — that is, of 7 States out of 24 — to give the law and even the Constn. to 17 States, each of the 17 having as parties to the Constn. an equal right with each of the 7 to expound it & to insist on the exposition. That the 7 might, in particular instances be right and the 17 wrong, is more than possible. But to establish a positive & permanent rule giving such a power to such a minority over such a majority, would overturn the first principle of free Govt. and in practice necessarily overturn the Govt. itself.”
Part of the South’s strategy to force repeal of the tariff was to arrange an alliance with the West. Under the plan, the South would support the West’s demand for free lands in the public domain if the West would support repeal of the tariff. With this purpose Robert Hayne took the floor of the Senate in early 1830, thus beginning “the most celebrated debate in the Senate’s history.” Daniel Webster’s response shifted the debate, subsequently styled the Webster-Hayne debates, from the specific issue of western lands to a general debate on the very nature of the United States. Webster’s position differed from Madison’s: Webster asserted that the people of the United States acted as one aggregate body, while Madison held that the people of the several states had acted collectively. John Rowan spoke against Webster on that issue, and Madison wrote, congratulating Webster, but explaining his own position. The debate presented the fullest articulation of the differences over nullification, and 40,000 copies of Webster’s response, which concluded with “liberty and Union, now and forever, one and inseparable”, were distributed nationwide.
Many people expected the states’ rights Jackson to side with Hayne. However, once the debate shifted to secession and nullification, Jackson sided with Webster. On April 13, 1830 at the traditional Democratic Party celebration honoring Thomas Jefferson’s birthday, Jackson chose to make his position clear. In a battle of toasts, Hayne proposed, “The Union of the States, and the Sovereignty of the States.” Jackson’s response, when his turn came, was, “Our Federal Union: It must be preserved.” To those attending, the effect was dramatic. Calhoun would respond with his own toast, in a play on Webster’s closing remarks in the earlier debate, “The Union. Next to our liberty, the most dear.” Finally Martin Van Buren would offer, “Mutual forbearance and reciprocal concession. Through their agency the Union was established. The patriotic spirit from which they emanated will forever sustain it.”
Van Buren wrote in his autobiography of Jackson’s toast, “The veil was rent – the incantations of the night were exposed to the light of day.” Thomas Hart Benton, in his memoirs, stated that the toast “electrified the country.” Jackson would have the final words a few days later when a visitor from South Carolina asked if Jackson had any message he wanted relayed to his friends back in the state. Jackson’s reply was:
“Yes I have; please give my compliments to my friends in your State and say to them, that if a single drop of blood shall be shed there in opposition to the laws of the United States, I will hang the first man I can lay my hand on engaged in such treasonable conduct, upon the first tree I can reach.”
Issues other than the tariff were still being decided. In May 1830 Jackson vetoed an important (especially to Kentucky and Henry Clay) internal improvements program in the Maysville Road Bill and then followed this with additional vetoes of other such projects shortly before Congress adjourned at the end of May. Clay would use these vetoes to launch his presidential campaign. In 1831 the re-chartering of the Bank of the United States, with Clay and Jackson on opposite sides, reopened a long-simmering problem. This issue was featured at the December 1831 National Republican convention in Baltimore which nominated Henry Clay for president, and the proposal to re-charter was formally introduced into Congress on January 6, 1832. The Calhoun-Jackson split took center stage when Calhoun, as vice-president presiding over the Senate, cast the tie-breaking vote to deny Martin Van Buren the post of minister to England. Van Buren was subsequently selected as Jackson’s running mate at the 1832 Democratic National Convention held in May.
In February 1832 Henry Clay, back in the Senate after an absence of two decades, made a three-day speech calling for a new tariff schedule and an expansion of his American System. In an effort to reach out to John Calhoun and other southerners, Clay’s proposal provided for a ten million dollar revenue reduction based on the amount of budget surplus he anticipated for the coming year. Significant protection was still part of the plan as the reduction primarily came on those imports not in competition with domestic producers. Jackson proposed an alternative that reduced overall tariffs to 28%. John Quincy Adams, now in the House of Representatives, used his Committee on Manufactures to produce a compromise bill that, in its final form, reduced revenues by five million dollars, lowered duties on non-competitive products, and retained high tariffs on woolens, iron, and cotton products. In the course of the political maneuvering, George McDuffie’s Ways and Means Committee, the normal originator of such bills, prepared a bill with drastic reductions across the board. McDuffie’s bill went nowhere. Jackson signed the Tariff of 1832 on July 14, 1832, a few days after he vetoed the Bank of the United States re-charter bill. Congress adjourned after it failed to override Jackson’s veto.
With Congress in adjournment, Jackson anxiously watched events in South Carolina. The nullifiers found no significant compromise in the Tariff of 1832 and acted accordingly (see the above section). Jackson heard rumors of efforts to subvert members of the army and navy in Charleston and he ordered the secretaries of the army and navy to begin rotating troops and officers based on their loyalty. He ordered General Winfield Scott to prepare for military operations and ordered a naval squadron in Norfolk to prepare to go to Charleston. Jackson kept lines of communication open with unionists like Joel Poinsett, William Drayton, and James L. Petigru and sent George Breathitt, brother of the Kentucky governor, to independently obtain political and military intelligence. After their defeat at the polls in October, Petigru advised Jackson that he should " Be prepared to hear very shortly of a State Convention and an act of Nullification.” On October 19, 1832 Jackson wrote to his Secretary of War, “The attempt will be made to surprise the Forts and garrisons by the militia, and must be guarded against with vestal vigilance and any attempt by force repelled with prompt and exemplary punishment.” By mid-November Jackson’s reelection was assured.
On December 3, 1832 Jackson sent his fourth annual message to Congress. The message “was stridently states’ rights and agrarian in its tone and thrust” and he disavowed protection as anything other than a temporary expedient. His intent regarding nullification, as communicated to Van Buren, was “to pass it barely in review, as a mere buble [sic], view the existing laws as competent to check and put it down.” He hoped to create a “moral force” that would transcend political parties and sections. The paragraph in the message that addressed nullification was:
“It is my painful duty to state that in one quarter of the United States opposition to the revenue laws has arisen to a height which threatens to thwart their execution, if not to endanger the integrity of the Union. What ever obstructions may be thrown in the way of the judicial authorities of the General Government, it is hoped they will be able peaceably to overcome them by the prudence of their own officers and the patriotism of the people. But should this reasonable reliance on the moderation and good sense of all portions of our fellow citizens be disappointed, it is believed that the laws themselves are fully adequate to the suppression of such attempts as may be immediately made. Should the exigency arise rendering the execution of the existing laws impracticable from any cause what ever, prompt notice of it will be given to Congress, with a suggestion of such views and measures as may be deemed necessary to meet it.”
On December 10 Jackson issued the Proclamation to the People of South Carolina, in which he characterized the positions of the nullifiers as "impractical absurdity" and "a metaphysical subtlety, in pursuit of an impractical theory." He provided this concise statement of his belief:
“I consider, then, the power to annul a law of the United States, assumed by one State, incompatible with the existence of the Union, contradicted expressly by the letter of the Constitution, unauthorized by its spirit, inconsistent with every principle on which it was founded, and destructive of the great object for which it was formed.”
The language used by Jackson, combined with the reports coming out of South Carolina, raised the spectre of military confrontation for many on both sides of the issue. A group of Democrats, led by Van Buren and Thomas Hart Benton among others, saw the only solution to the crisis in a substantial reduction of the tariff.
Negotiation and Confrontation (1833)
In apparent contradiction of his previous claim that the tariff could be enforced with existing laws, on January 16 Jackson sent his Force Bill Message to Congress. Custom houses in Beaufort and Georgetown would be closed and replaced by ships located at each port. In Charleston the custom house would be moved to either Castle Pinckney or Fort Moultrie in Charleston harbor. Direct payment rather than bonds would be required, and federal jails would be established for violators that the state refused to arrest and all cases arising under the state’s nullification act could be removed to the United States Circuit Court. In the most controversial part, the militia acts of 1795 and 1807 would be revised to permit the enforcement of the custom laws by both the militia and the regular United States military. Attempts were made in South Carolina to shift the debate away from nullification by focusing instead on the proposed enforcement.
The Force Bill went to the Senate Judiciary Committee chaired by Pennsylvania protectionist William Wilkins and supported by members Daniel Webster and Theodore Frelinghuysen of New Jersey; it gave Jackson everything he asked for. On January 28 the Senate defeated, by a vote of 30 to 15, a motion to postpone debate on the bill. All but two of the votes to delay were from the lower South and only three from this section voted against the motion. This did not signal any increased support for nullification but did signify doubts about enforcement. In order to draw more votes, proposals were made to limit the duration of the coercive powers and restrict the use of force to suppressing, rather than preventing, civil disorder. In the House the Judiciary Committee, in a 4-3 vote, rejected Jackson’s request to use force. By the time Calhoun made a major speech on February 15 strongly opposing it, the Force Bill was temporarily stalled.
On the tariff issue, the drafting of a compromise tariff was assigned in December to the House Ways and Means Committee, now headed by Gulian C. Verplanck. Debate on the committee’s product on the House floor began in January 1833. The Verplanck tariff proposed reductions back to the 1816 levels over the course of the next two years while maintaining the basic principle of protectionism. The anti-Jackson protectionists saw this as an economic disaster that did not allow the Tariff of 1832 to even be tested and "an undignified truckling to the menaces and blustering of South Carolina." Northern Democrats did not oppose it in principle but still demanded protection for the varying interests of their own constituents. Those sympathetic to the nullifiers wanted a specific abandonment of the principle of protectionism and were willing to offer a longer transition period as a bargaining point. It was clear that the Verplanck tariff was not going to be implemented.
In South Carolina, efforts were being made to avoid an unnecessary confrontation. Governor Hayne ordered the 25,000 troops he had created to train at home rather than gathering in Charleston. At a mass meeting in Charleston on January 21, it was decided to postpone the February 1 deadline for implementing nullification while Congress worked on a compromise tariff. At the same time a commissioner from Virginia, Benjamin Watkins Leigh, arrived in Charleston bearing resolutions that criticized both Jackson and the nullifiers and offering his state as a mediator.
Henry Clay had not taken his defeat in the presidential election well and was unsure of what position he could take in the tariff negotiations. His long-term concern was that Jackson was ultimately determined to kill protectionism along with the American System. In February, after consulting with manufacturers and sugar interests in Louisiana who favored protection for the sugar industry, Clay started to work on a specific compromise plan. As a starting point, he accepted the nullifiers' offer of a transition period but extended it from seven and a half years to nine years with a final target of a 20% ad valorem rate. After first securing the support of his protectionist base, Clay, through an intermediary, broached the subject with Calhoun. Calhoun was receptive, and after a private meeting with Clay at Clay’s boardinghouse, negotiations proceeded.
Clay introduced the negotiated tariff bill on February 12, and it was immediately referred to a select committee consisting of Clay as chairman, Felix Grundy of Tennessee, George M. Dallas of Pennsylvania, William Cabell Rives of Virginia, Webster, John M. Clayton of Delaware, and Calhoun. On February 21 the committee reported a bill to the floor of the Senate which was largely the original bill proposed by Clay. The Tariff of 1832 would continue except that all rates above 20% would be reduced by one-tenth every two years, with the final reductions back to 20% coming in 1842. Protectionism as a principle was not abandoned and provisions were made for raising the tariff if national interests demanded it.
Although not specifically linked by any negotiated agreement, it became clear that the Force Bill and Compromise Tariff of 1833 were inexorably linked. In his February 25 speech ending the debate on the tariff, Clay captured the spirit of the voices for compromise by condemning Jackson's Proclamation to South Carolina as inflammatory, admitting the same problem with the Force Bill but indicating its necessity, and praising the Compromise Tariff as the final measure to restore balance, promote the rule of law, and avoid the "sacked cities," "desolated fields," and "smoking ruins" that he said would be the product of the failure to reach a final accord. The House passed the Compromise Tariff by 119-85 and the Force Bill by 149-48. In the Senate the tariff passed 29-16 and the Force bill by 32-1 with many opponents of it walking out rather than voting for it.
Calhoun rushed to Charleston with the news of the final compromises. The Nullification Convention met again on March 11. It repealed the November Nullification Ordinance and also, "in a purely symbolic gesture", nullified the Force Bill. The nullifiers claimed victory on the tariff issue, even though they had made concessions, but the verdict was very different on nullification. The majority had, in the end, ruled, and this boded ill for the South and its minority hold on slavery. Rhett summed this up at the convention on March 13. Warning that, "A people, owning slaves, are mad, or worse than mad, who do not hold their destinies in their own hands," he continued:
“Every stride of this Government, over your rights, brings it nearer and nearer to your peculiar policy. …The whole world are in arms against your institutions … Let Gentlemen not be deceived. It is not the Tariff – not Internal Improvement – nor yet the Force bill, which constitutes the great evil against which we are contending. … These are but the forms in which the despotic nature of the government is evinced – but it is the despotism which constitutes the evil: and until this Government is made a limited Government … there is no liberty – no security for the South.”
People reflected on the meaning of the nullification crisis and its outcome for the country. On May 1, 1833 Jackson wrote, "the tariff was only a pretext, and disunion and southern confederacy the real object. The next pretext will be the negro, or slavery question."
The final resolution of the crisis and Jackson’s leadership had appeal throughout the North and South. Robert Remini, the historian and Jackson biographer, described the opposition that nullification drew from traditionally states’ rights Southern states:
The Alabama legislature, for example, pronounced the doctrine “unsound in theory and dangerous in practice.” Georgia said it was “mischievous,” “rash and revolutionary.” Mississippi lawmakers chided the South Carolinians for acting with “reckless precipitancy.”
Forest McDonald, describing the split over nullification among proponents of states rights, wrote, “The doctrine of states’ rights, as embraced by most Americans, was not concerned exclusively, or even primarily with state resistance to federal authority.” But, by the end of the nullification crisis, many southerners started to question whether the Jacksonian Democrats still represented Southern interests. The historian William J. Cooper notes that, “Numerous southerners had begun to perceive it [the Jacksonian Democratic Party] as a spear aimed at the South rather than a shield defending the South.”
In the political vacuum created by this alienation, the southern wing of the Whig Party was formed. The party was a coalition of interests united by the common thread of opposition to Andrew Jackson and, more specifically, his “definition of federal and executive power.” The party included former National Republicans with an “urban, commercial, and nationalist outlook” as well as former nullifiers. Emphasizing that “they were more southern than the Democrats,” the party grew within the South by going “after the abolition issue with unabashed vigor and glee.” With both parties arguing who could best defend southern institutions, the nuances of the differences between free soil and abolitionism, which became an issue in the late 1840s with the Mexican War and territorial expansion, never became part of the political dialogue. This failure increased the volatility of the slavery issues.
Richard Ellis argues that the end of the crisis signified the beginning of a new era. Within the states’ rights movement, the traditional desire for simply “a weak, inactive, and frugal government” was challenged. Ellis states that “in the years leading up to the Civil War the nullifiers and their pro-slavery allies used the doctrine of states’ rights and state sovereignty in such a way as to try to expand the powers of the federal government so that it could more effectively protect the peculiar institution.” By the 1850s, states’ rights had become a call for state equality under the Constitution.
Madison reacted to this incipient tendency by writing two paragraphs of "Advice to My Country," found among his papers. It said that the Union "should be cherished and perpetuated. Let the open enemy to it be regarded as a Pandora with her box opened; and the disguised one, as the Serpent creeping with his deadly wiles into paradise." Richard Rush published this "Advice" in 1850, by which time Southern spirit was so high that it was denounced as a forgery.
The first test for the South over the slavery issue began during the final congressional session of 1835. In what became known as the Gag Rule Debates, abolitionists flooded the Congress with anti-slavery petitions to end slavery and the slave trade in Washington, D.C. The debate was reopened each session as Southerners, led by South Carolinians Henry Pinckney and James Henry Hammond, prevented the petitions from even being officially received by Congress. Led by John Quincy Adams, the slavery debate remained on the national stage until late 1844 when Congress lifted all restrictions on processing the petitions.
Describing the legacy of the crisis, Sean Wilentz writes:
“The battle between Jacksonian democratic nationalists, northern and southern, and nullifier sectionalists would resound through the politics of slavery and antislavery for decades to come. Jackson’s victory, ironically, would help accelerate the emergence of southern pro-slavery as a coherent and articulate political force, which would help solidify northern antislavery opinion, inside as well as outside Jackson’s party. Those developments would accelerate the emergence of two fundamentally incompatible democracies, one in the slave South, the other in the free North.”
For South Carolina, the legacy of the crisis involved both the divisions within the state during the crisis and the apparent isolation of the state as the crisis was resolved. By 1860, when South Carolina became the first state to secede, the state was more internally united than any other southern state. Historian Charles Edward Cauthen writes:
“Probably to a greater extent than in any other Southern state South Carolina had been prepared by her leaders over a period of thirty years for the issues of 1860. Indoctrination in the principles of state sovereignty, education in the necessity of maintaining Southern institutions, warnings of the dangers of control of the federal government by a section hostile to its interests – in a word, the education of the masses in the principles and necessity of secession under certain circumstances – had been carried on with a skill and success hardly inferior to the masterly propaganda of the abolitionists themselves. It was this education, this propaganda, by South Carolina leaders which made secession the almost spontaneous movement that it was.”
See also
- Origins of the American Civil War
- American System (economic plan)
- American School (economics)
- Alexander Hamilton
- Friedrich List
- Nullification Convention
- Remini, Andrew Jackson, v2 pp. 136-137. Niven pg. 135-137. Freehling, Prelude to Civil War pg 143
- Freehling, The Road to Disunion, pg. 255. Craven pg. 60. Ellis pg. 7
- Craven pg.65. Niven pg. 135-137. Freehling, Prelude to Civil War pg 143
- Niven p. 192. Calhoun replaced Robert Y. Hayne as senator so that Hayne could follow James Hamilton as governor. Niven writes, "There is no doubt that these moves were part of a well-thought-out plan whereby Hayne would restrain the hotheads in the state legislature and Calhoun would defend his brainchild, nullification, in Washington against administration stalwarts and the likes of Daniel Webster, the new apostle of northern nationalism."
- Howe p. 410. In the Senate only Virginia and South Carolina voted against the 1832 tariff. Howe writes, "Most southerners saw the measure as a significant amelioration of their grievance and were now content to back Jackson for reelection rather than pursue the more drastic remedy such as the one South Carolina was touting."
- Freehling, Prelude to Civil War pg. 1-3. Freehling writes, “In Charleston Governor Robert Y. Hayne ... tried to form an army which could hope to challenge the forces of ‘Old Hickory.’ Hayne recruited a brigade of mounted minutemen, 2,000 strong, which could swoop down on Charleston the moment fighting broke out, and a volunteer army of 25,000 men which could march on foot to save the beleaguered city. In the North Governor Hayne’s agents bought over $100,000 worth of arms; in Charleston Hamilton readied his volunteers for an assault on the federal forts.”
- Wilentz pg. 388
- Woods pg. 78
- Tuttle, California Digest 26 pg. 47
- Ellis pg. 4
- McDonald pg. vii. McDonald wrote, “Of all the problems that beset the United States during the century from the Declaration of Independence to the end of Reconstruction, the most pervasive concerned disagreements about the nature of the Union and the line to be drawn between the authority of the general government and that of the several states. At times the issue bubbled silently and unseen between the surface of public consciousness; at times it exploded: now and again the balance between general and local authority seemed to be settled in one direction or another, only to be upset anew and to move back toward the opposite position, but the contention never went away.”
- Ellis pg. 1-2.
- For full text of the resolutions, see Kentucky Resolutions of 1798 and Kentucky Resolutions of 1799.
- James Madison, Virginia Resolutions of 1798
- Banning pg. 388
- Brant, p. 297, 629
- Brant, pp. 298.
- Brant, p.629
- Ketchum pg. 396
- Wilentz pg. 80.
- Ellis p.5. Madison called for the constitutional amendment because he believed much of the American System was unconstitutional. Historian Richard Buel Jr. notes that in preparing for the worst from the Hartford Convention, the Madison administration made preparation to intervene militarily in case of New England secession. Troops from the Canadian border were moved near Albany so that they could move into either Massachusetts or Connecticut if necessary. New England troops were also returned to their recruitment areas in order to serve as a focus for loyalists. Buel pg.220-221
- McDonald pg. 69-70
- Wilentz pg.166
- Wilentz pg. 181
- Ellis pg. 6. Wilentz pg. 182.
- Freehling, Prelude to Civil War pg. 92-93
- Wilentz pg. 243. Economic historian Frank Taussig notes “The act of 1816, which is generally said to mark the beginning of a distinctly protective policy in this country, belongs rather to the earlier series of acts, beginning with that of 1789, than to the group of acts of 1824, 1828, and 1832. Its highest permanent rate of duty was twenty per cent., an increase over the previous rates which is chiefly accounted for by the heavy interest charge on the debt incurred during the war. But after the crash of 1819, a movement in favor of protection set in, which was backed by a strong popular feeling such as had been absent in the earlier years.” http://teachingamericanhistory.org/library/index.asp?document=1136
- Remini, Henry Clay pg. 232. Freehling, The Road to Disunion, pg. 257.
- McDonald pg. 95
- Brant, p. 622
- Remini, Andrew Jackson, v2 pp. 136-137. McDonald presents a slightly different rationale. He stated that the bill would “adversely affect New England woolen manufacturers, ship builders, and shipowners” and Van Buren calculated that New England and the South would unite to defeat the bill, allowing Jacksonians to have it both ways – in the North they could claim they tried but failed to pass a needed tariff and in the South they could claim that they had thwarted an effort to increase import duties. McDonald pg. 94-95
- Cooper pg. 11-12.
- Freehling, The Road to Disunion, pg. 255. Historian Avery Craven wrote, “Historians have generally ignored the fact that the South Carolina statesmen, in the so-called Nullification controversy, were struggling against a practical situation. They have conjured up a great struggle between nationalism and States’ rights and described these men as theorists reveling in constitutional refinements for the mere sake of logic. Yet here was a clear case of commercial and agricultural depression.” Craven pg. 60
- Ellis pg. 7. Freehling notes that divisions over nullification in the state generally corresponded to the extent that the section suffered economically. The exception was the “Low country rice and luxury cotton planters” who supported nullification despite their ability to survive the economic depression. This section had the highest percentage of slave population. Freehling, Prelude to Civil War, pg. 25.
- Cauthen pg. 1
- Ellis pg. 7. Freehling, Road to Disunion, pg. 256
- Gerald Horne, Negro Comrades of the Crown: African Americans and the British Empire Fight the U.S. Before Emancipation, New York University (NYU) Press, 2012, pp. 97-98
- Freehling, Road to Disunion, p. 254
- Craven pg.65.
- Niven pg. 135-137. Freehling, Prelude to Civil War pg 143.
- South Carolina Exposition and Protest
- Niven pg. 158-162
- Niven pg. 161
- Niven pg. 163-164
- Walther pg. 123. Craven pg. 63-64.
- Freehling, Prelude to Civil War pg. 149
- Freehling, Prelude to Civil War pg. 152-155, 173-175. A two-thirds vote of each house of the legislature was required to convene a state convention.
- Freehling, Prelude to Civil War pg. 177-186
- Freehling, Prelude to Civil War, pg. 205-213
- Freehling, Prelude to Civil War, pg. 213-218
- Peterson pg. 189-192. Niven pg. 174-181. Calhoun wrote of McDuffie’s speech, “I think it every way imprudent and have so written Hamilton … I see clearly it brings matters to a crisis, and that I must meet it promptly and manfully.” Freehling in his works frequently refers to the radicals as “Calhounites” even before 1831. This is because the radicals, rallying around Calhoun’s “Exposition,” were linked ideologically, if not yet practically, with Calhoun.
- Niven pg. 181-184
- Ellis pg. 193. Freehling, Prelude to Civil War, pg. 257.
- Freehling pg. 224-239
- Freehling, Prelude to Civil War pg. 252-260
- Freehling, Prelude to Civil War pg. 1-3.
- Ellis pg. 97-98
- Remini, Andrew Jackson, v. 3 pg. 14
- Ellis pg. 41-43
- Ellis p. 9
- Ellis pg. 9
- Brant, p.627.
- Ellis pg. 10. Ellis wrote, "But the nullifiers' attempt to legitimize their controversial doctrine by claiming it was a logical extension of the principles embodied in the Kentucky and Virginia Resolutions upset him. In a private letter he deliberately wrote for publication, Madison denied many of the assertions of the nullifiers and lashed out in particular at South Carolina's claim that if a state nullified an act of the federal government it could only be overruled by an amendment to the Constitution." Full text of the letter is available at http://www.constitution.org/jm/18300828_everett.htm.
- Brant, pp. 626-7. Webster never asserted the consolidating position again.
- McDonald pg.105-106
- Remini, Andrew Jackson, v.2 pg. 233-235.
- Remini, Andrew Jackson, v.2 pg. 233-237.
- Remini, Andrew Jackson, v.2 pg. 255-256 Peterson pg. 196-197.
- Remini, Andrew Jackson, v.2 pg. 343-348
- Remini, Andrew Jackson, v.2 pg. 347-355
- Remini, Andrew Jackson, v.2 pg. 358-373. Peterson pg. 203-212
- Remini, Andrew Jackson, v.2 pg. 382-389
- Ellis pg. 82
- Remini, Andrew Jackson, v. 3 pg. 9-11. Full text of his message available at http://www.thisnation.com/library/sotu/1832aj.html
- Ellis pg 83-84. Full document available at: http://www.yale.edu/lawweb/avalon/presiden/proclamations/jack01.htm
- Ellis pg. 93-95
- Ellis pg. 160-165. Peterson pg. 222-224. Peterson differs with Ellis in arguing that passage of the Force Bill “was never in doubt.”
- Ellis pg. 99-100. Peterson pg. 217.
- Wilentz pg. 384-385.
- Peterson pg. 217-226
- Peterson pg. 226-228
- Peterson pg. 229-232
- Freehling, Prelude to Civil War, pg. 295-297
- Freehling, Prelude to Civil War, pg. 297. Wilentz pg. 388
- Jon Meacham (2009), American Lion: Andrew Jackson in the White House, New York: Random House, p. 247; Correspondence of Andrew Jackson, Vol. V, p. 72.
- Remini, Andrew Jackson, v3. pg. 42.
- McDonald pg. 110
- Cooper pg. 53-65
- Ellis pg. 198
- Brant p. 646; Rush produced a copy in Mrs. Madison's hand; the original also survives. The contemporary letter to Edward Coles (Brant, p. 639) makes plain that the enemy in question is the nullifier.
- Freehling, Prelude to Civil War pg. 346-356. McDonald (pg 121-122) saw states’ rights in the period from 1833-1847 as almost totally successful in creating a “virtually nonfunctional” federal government. This did not insure political harmony, as “the national political arena became the center of heated controversy concerning the newly raised issue of slavery, a controversy that reached the flash point during the debates about the annexation of the Republic of Texas” pg. 121-122
- Cauthen pg. 32
- Brant, Irving: The Fourth President: A Life of James Madison Bobbs Merrill, 1970.
- Buel, Richard Jr. America on the Brink: How the Political Struggle Over the War of 1812 Almost Destroyed the Young Republic. (2005) ISBN 1-4039-6238-3
- Cauthen, Charles Edward. South Carolina Goes to War. (1950) ISBN 1-57003-560-1
- Cooper, William J. Jr. The South and the Politics of Slavery 1828-1856 (1978) ISBN 0-8071-0385-3
- Craven, Avery. The Coming of the Civil War (1942) ISBN 0-226-11894-0
- Ellis, Richard E. The Union at Risk: Jacksonian Democracy, States' Rights, and the Nullification Crisis (1987)
- Freehling, William W. The Road to Disunion: Secessionists at Bay, 1776-1854 (1991), Vol. 1
- Freehling, William W. Prelude to Civil War: The Nullification Crisis in South Carolina 1816-1836. (1965) ISBN 0-19-507681-8
- Howe, Daniel Walker. What Hath God Wrought: The Transformation of America, 1815-1848. (2007) ISBN 978-0-19-507894-7
- McDonald, Forrest. States’ Rights and the Union: Imperium in Imperio 1776-1876 (2000) ISBN 0-7006-1040-5
- Niven, John. John C. Calhoun and the Price of Union (1988) ISBN 0-8071-1451-0
- Peterson, Merrill D. The Great Triumvirate: Webster, Clay, and Calhoun. (1987) ISBN 0-19-503877-0
- Remini, Robert V. Andrew Jackson and the Course of American Freedom, 1822-1832,v2 (1981) ISBN 0-06-014844-6
- Remini, Robert V. Andrew Jackson and the Course of American Democracy, 1833-1845, v3 (1984) ISBN 0-06-015279-6
- Remini, Robert V. Henry Clay: Statesman for the Union (1991) ISBN 0-393-31088-4
- Tuttle, Charles A. (Court Reporter) California Digest: A Digest of the Reports of the Supreme Court of California, Volume 26 (1906)
- Walther, Eric C. The Fire-Eaters (1992) ISBN 0-8071-1731-5
- Wilentz, Sean. The Rise of American Democracy: Jefferson to Lincoln. (2005) ISBN 0-393-05820-4
- Woods, Thomas E. Jr. Nullification (2010) ISBN 978-1-59698-149-2
Further reading
- Barnwell, John. Love of Order: South Carolina's First Secession Crisis (1982)
- Capers, Gerald M. John C. Calhoun, Opportunist: A Reappraisal (1960)
- Coit, Margaret L. John C. Calhoun: American Portrait (1950)
- Houston, David Franklin (1896). A Critical Study of Nullification in South Carolina. Longmans, Green, and Co.
- Latner, Richard B. "The Nullification Crisis and Republican Subversion," Journal of Southern History 43 (1977): 18-38, in JSTOR
- McCurry, Stephanie. Masters of Small Worlds. New York: Oxford UP, 1993.
- Pease, Jane H. and William H. Pease, "The Economics and Politics of Charleston's Nullification Crisis", Journal of Southern History 47 (1981): 335-62, in JSTOR
- Ratcliffe, Donald. "The Nullification Crisis, Southern Discontents, and the American Political Process", American Nineteenth Century History. Vol 1: 2 (2000) pp. 1–30
- Wiltse, Charles. John C. Calhoun, nullifier, 1829-1839 (1949)
- South Carolina Exposition and Protest, by Calhoun, 1828.
- The Fort Hill Address: On the Relations of the States and the Federal Government, by Calhoun, July 1831.
- South Carolina Ordinance of Nullification, November 24, 1832.
- President Jackson's Proclamation to South Carolina, December 10, 1832.
- Primary Documents in American History: Nullification Proclamation (Library of Congress)
- President Jackson's Message to the Senate and House Regarding South Carolina's Nullification Ordinance, January 16, 1833
- Nullification Revisited: An article examining the constitutionality of nullification (from a favorable aspect, and with regard to both recent and historical events). |
Of all the particles that we know of, the elusive neutrino is by far the most difficult to explain. We know there are three types of neutrino: the electron neutrino (νe), the muon neutrino (νμ), and the tau neutrino (ντ), as well as their antimatter counterparts (ν̄e, ν̄μ, and ν̄τ). We know that they have extremely tiny but non-zero masses: even at the heaviest they're allowed to be, it would take over 4 million of them to add up to the mass of an electron, the next-lightest particle.
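As a rough back-of-the-envelope reading of that claim (using the standard electron mass, a value the article doesn't quote directly):

```latex
m_e \approx 511{,}000\ \mathrm{eV}/c^2,
\qquad
\frac{511{,}000\ \mathrm{eV}/c^2}{4\times 10^{6}} \approx 0.13\ \mathrm{eV}/c^2,
```

so "over 4 million" corresponds to an upper limit on the heaviest neutrino mass of roughly a tenth of an electron-volt.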
We know that they oscillate — or transform — from one type into another as they travel through space. We know that when we calculate the number of neutrinos produced by the Sun from nuclear fusion, only about a third of the expected number arrive on Earth. We know that they're generated in the atmosphere from cosmic rays, and from accelerators and reactors when particles decay. According to the Standard Model, there should be only three.
But that story doesn't add up.
The story began back in 1930, when we were measuring the products of some radioactive decays. In some of those decays, a neutron in an unstable nucleus would get converted into a proton, emitting an electron in the process. But if you added up the mass and energy of the decay products, they were always less than the initial mass of the reactants: it was like energy wasn't conserved.
To preserve energy conservation, Wolfgang Pauli postulated a new type of particle: the neutrino. Although he lamented having done a terrible thing by proposing a particle that could not be detected, it only took 26 years to demonstrate that neutrinos existed. Specifically, the electron antineutrino (ν̄e) was detected streaming out of nuclear reactors. Neutrinos were extremely low in mass, but they existed.
Over time, the discoveries continued, as did the surprises. We modeled the nuclear reactions in the Sun and calculated how many neutrinos should arrive on Earth. When we detected them, however, we saw only a third of the expected number. When we measured the neutrinos produced from cosmic ray showers, we again only saw a fraction of what we expected, but it was a different fraction than for the neutrinos produced by the Sun.
One possible explanation put forth was based on the quantum mechanical phenomenon of mixing. If you have two particles with identical (or almost identical) quantum properties, they can mix together to form new physical states. If we had three types of neutrino with almost identical masses and other properties, perhaps they could mix together to form the neutrinos (νe, νμ, and ντ) and antineutrinos (ν̄e, ν̄μ, and ν̄τ) we observe in our Universe?
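In the standard treatment (implied by "mixing" above, though the article never writes it out), each flavor state is a quantum superposition of three mass states, weighted by a unitary mixing matrix U:

```latex
|\nu_\alpha\rangle \;=\; \sum_{i=1}^{3} U_{\alpha i}^{*}\,|\nu_i\rangle,
\qquad \alpha \in \{e,\ \mu,\ \tau\},
```

and it is the interference between the differently-massed components as they travel that shows up as the oscillations described below.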
The key measurements first came in during the 1990s, when we were able to measure both atmospheric and solar neutrinos to unprecedented precision. These two measurements told us how the neutrinos mix together, and allowed us to calculate the mass-squared differences between the different types. With three neutrinos there are only two independent mass splittings, so measuring two of them determines the third.
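A minimal sketch of that bookkeeping, in the conventional notation for mass-squared splittings (symbols the article doesn't introduce explicitly):

```latex
\Delta m^2_{ij} \equiv m_i^2 - m_j^2,
\qquad
\Delta m^2_{31} = \Delta m^2_{32} + \Delta m^2_{21},
```

so once the solar splitting (roughly $\Delta m^2_{21}$) and the atmospheric splitting (roughly $|\Delta m^2_{32}|$) are measured, there is no room left for an independent third value among only three neutrinos.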
Meanwhile, we knew from particle colliders that there could only be three types of neutrino that coupled to the Standard Model particles, and we learned limits on the sum of the neutrino masses from cosmological observations.
From all of this, we were able to conclude:
- there are three types of neutrino,
- they have tiny, non-zero masses,
- they oscillate over large distances from one flavor (electron, muon, or tau) into another,
- and they can only make up a tiny fraction of the dark matter.
All of this was consistent, until one pesky experiment gave results we absolutely couldn't explain: the LSND (Liquid Scintillator Neutrino Detector) experiment.
Imagine producing an unstable particle like a muon and letting it decay. You'll produce an electron, an anti-electron neutrino, and a muon neutrino. Given the mass differences implied by the solar and atmospheric neutrinos, you expect a negligible amount of oscillation over very short distances. But instead, LSND showed that the neutrinos were oscillating: from one type into another, over distances far less than even one kilometer.
In the physical models we make, there are simple relationships between the distance a neutrino travels, the neutrino energy, and the differences in mass between the different types of neutrinos. The distance-to-energy ratio determines which mass difference an experiment is sensitive to, and from solar and atmospheric neutrinos, we got mass differences at the ~milli-electron-volt (meV) scale. But the short baseline of the LSND experiment implied mass differences that were about 1000 times greater: at the ~electron-volt (eV) scale.
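To make that distance-to-energy bookkeeping concrete, here is a minimal sketch using the standard two-flavor oscillation probability. The baseline and energy below are rough, illustrative numbers rather than the experiments' actual parameters, and the mixing is set to maximal purely for clarity:

```python
import numpy as np

def oscillation_probability(delta_m2_ev2, L_km, E_GeV, sin2_2theta=1.0):
    """Two-flavor appearance probability:
    P = sin^2(2*theta) * sin^2(1.27 * dm^2[eV^2] * L[km] / E[GeV])."""
    return sin2_2theta * np.sin(1.27 * delta_m2_ev2 * L_km / E_GeV) ** 2

# A ~30 m baseline with ~40 MeV neutrinos (LSND-like, order of magnitude only):
L_km, E_GeV = 0.030, 0.040

# Atmospheric-scale splitting (~2.5e-3 eV^2): essentially no oscillation this close.
print(oscillation_probability(2.5e-3, L_km, E_GeV))  # ~6e-6

# An eV^2-scale splitting, as the LSND signal would imply: large at the same L/E.
print(oscillation_probability(1.0, L_km, E_GeV))     # ~0.66 (with maximal mixing)
```

The point is simply that a short baseline which leaves the small solar and atmospheric splittings invisible becomes very sensitive once the splitting is pushed up toward the eV² scale.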
These three measurements — the solar neutrino measurements, the atmospheric neutrino measurements, and the LSND results — are mutually incompatible with the three Standard Model neutrinos we know.
Many people dismissed the LSND results, claiming that there must be an error there. After all, the mass difference it implied was the outlier (far too high), it was only one experiment, and there were many solar and atmospheric measurements from independent experiments over many years. If neutrinos were as massive as LSND said, the cosmic microwave background shouldn't display the properties we see. And if there's a hot neutrino component to dark matter, it would conflict with the Lyman-alpha forest: the absorption features that foreground gas clouds imprint on light from distant sources.
When it comes to science, though, experiments and not theories are the ultimate arbiter of what is correct. You cannot simply say, "this experiment is wrong but I don't know what's wrong about it." You have to try and reproduce it with an independent check, and see what you get. That was the idea of the MiniBooNE experiment at Fermilab, which produced neutrinos using the Booster accelerator from the old Tevatron complex.
Collide these high-energy particles, produce charged pions, and then the pions decay to muons, creating muon neutrinos (νμ) and muon anti-neutrinos (ν̄μ). With the same distance-to-energy ratio as the LSND experiment, MiniBooNE's goal was to either confirm or refute the results of LSND. After 16 years of data-taking, MiniBooNE is not only consistent with LSND, it has extended it.
This is a historic moment for neutrinos. We create muon neutrinos in a particular region, and then just 541 meters downstream, detect that they've been oscillating in a fashion that's inconsistent with the other measurements. If you interpret the signal as a two-neutrino oscillation, there must be at least four neutrino types, which means at least one of them must be sterile: it cannot couple to the strong, electromagnetic, or weak forces.
But this does not necessarily mean that there is a fourth (or more) neutrino! The experiments, which have now reached a combined statistical significance of 6.0σ, have exceeded the standard for discovery in particle physics. But that only means that the experimental results are robust; interpreting what they mean is another story entirely.
Could there be a more complicated type of mixing between neutrinos than we presently know? Could neutrinos couple to dark matter or dark energy? Could they couple to themselves in a new way that isn't described by Standard Model interactions? Could the density of the material they pass through — or even the density of the material they're detected in — make a difference? Could this distance-to-energy ratio be just one component to unlocking a far greater puzzle?
There are planned and ongoing experiments designed to gather more data about exactly this puzzle.
Nuclear reactors, for example, have already shown a deficit of electron neutrinos and anti-neutrinos (νe and ν̄e) relative to what's predicted. The PROSPECT collaboration will measure disappearing reactor neutrinos better than ever before, teaching us whether they might be oscillating into the same sterile state.
The MicroBooNE detector, expecting results next year, will improve on MiniBooNE with a slightly shorter baseline and a detector material of a different density: liquid argon instead of mineral oil. Further down the road, ICARUS and SBND, both to be set up at Fermilab as well, will have significantly longer and shorter baselines, respectively, and will also use liquid argon for their detectors. If there's something fishy going on that's either consistent with a new, sterile neutrino or something else entirely, these experiments will lead the way.
Regardless of what the ultimate explanation is, it's quite clear that the normal Standard Model, with three neutrinos that oscillate between electron/muon/tau types, cannot account for everything we've observed up to this point. The LSND results, once dismissed as a baffling experimental result that must surely be wrong, have been confirmed in a big way. With the reactor deficits, MiniBooNE's results, and three new experiments on the horizon to gather more data about these mysteriously misbehaving particles, we may be poised for a new revolution in physics.
The high-energy frontier is only one way we have of learning about the Universe on a fundamental level. Sometimes, we just have to know what the right question to ask truly is. By looking at the lowest-energy particles at different distances from where they're generated, we just might take the next great leap in our knowledge of physics. Welcome to the era of the neutrino, which is taking us, at last, beyond the Standard Model.
Thanks to Bill Louis of Los Alamos National Laboratory for an incredibly insightful and informative interview about LSND, MiniBooNE, and neutrino experiments.
Social inequality occurs when resources in a given society are distributed unevenly, typically through norms of allocation that engender specific patterns along lines of socially defined categories of persons. Economic inequality, usually described on the basis of the unequal distribution of income or wealth, is a frequently studied type of social inequality. Though the disciplines of economics and sociology generally use different theoretical approaches to examine and explain economic inequality, both fields are actively involved in researching it. However, social and natural resources other than purely economic resources are also unevenly distributed in most societies and may contribute to social status. Norms of allocation can also affect the distribution of rights and privileges, social power, access to public goods such as education or the judicial system, adequate housing, transportation, credit and financial services such as banking, and other social goods and services.
While many societies worldwide hold that their resources are distributed on the basis of merit, research shows that the distribution of resources often follows delineations that distinguish different social categories of persons on the basis of other socially defined characteristics. For example, social inequality is linked to racial inequality, gender inequality, and ethnic inequality as well as other status characteristics and these forms can be related to corruption.
- 1 Overview
- 2 Inequality and ideology
- 3 Inequality and social class
- 4 Patterns of inequality
- 5 Global inequality
- 6 Inequality and economic growth
- 7 See also
- 8 References
- 9 Further reading
- 10 External links
Social inequality is found in almost every society. Social inequality is shaped by a range of structural factors, such as geographical location or citizenship status, and is often underpinned by cultural discourses and identities defining, for example, whether the poor are 'deserving' or 'undeserving'. In simple societies, those that have few social roles and statuses occupied by their members, social inequality may be very low. In tribal societies, for example, a tribal head or chieftain may hold some privileges, use some tools, or wear marks of office to which others do not have access, but the daily life of the chieftain is very much like the daily life of any other tribal member. Anthropologists identify such highly egalitarian cultures as "kinship-oriented," which appear to value social harmony more than wealth or status. These cultures are contrasted with materially oriented cultures in which status and wealth are prized and competition and conflict are common. Kinship-oriented cultures may actively work to prevent social hierarchies from developing because they believe that could lead to conflict and instability. In today's world, most of the population lives in complex rather than simple societies. As social complexity increases, inequality tends to increase along with a widening gap between the poorest and the most wealthy members of society.
Social status is accorded to persons in a society on at least two bases: ascribed characteristics and achieved characteristics. Ascribed characteristics are those present at birth or assigned by others and over which an individual has little or no control. Examples include sex, skin colour, eye shape, place of birth, sexuality, gender identity, parentage and social status of parents. Achieved characteristics are those which we earn or choose; examples include level of education, marital status, leadership status and other measures of merit. In most societies, an individual's social status is a combination of ascribed and achieved factors. In some societies, however, only ascribed statuses are considered in determining one's social status and there exists little to no social mobility and, therefore, few paths to more social equality. This type of social inequality is generally referred to as caste inequality.
One's social location in a society's overall structure of social stratification affects and is affected by almost every aspect of social life and one's life chances. The single best predictor of an individual's future social status is the social status into which they were born. Theoretical approaches to explaining social inequality concentrate on questions about how such social differentiations arise, what types of resources are being allocated, what roles human cooperation and conflict play in allocating resources, and how these differing types and forms of inequality affect the overall functioning of a society.
The variables considered most important in explaining inequality and the manner in which those variables combine to produce the inequities and their social consequences in a given society can change across time and place. In addition to interest in comparing and contrasting social inequality at local and national levels, in the wake of today's globalizing processes, the most interesting question becomes: what does inequality look like on a worldwide scale and what does such global inequality bode for the future? In effect, globalization reduces the distances of time and space, producing a global interaction of cultures and societies and social roles that can increase global inequities.
Inequality and ideology
Philosophical questions about social ethics and the desirability or inevitability of inequality in human societies have given rise to a spate of ideologies to address such questions. We can broadly classify these ideologies on the basis of whether they justify or legitimize inequality, casting it as desirable or inevitable, or whether they cast equality as desirable and inequality as a feature of society to be reduced or eliminated. One end of this ideological continuum can be called "Individualist", the other "Collectivist". In Western societies, there is a long history associated with the idea of individual ownership of property and economic liberalism, the ideological belief in organizing the economy on individualist lines such that the greatest possible number of economic decisions are made by individuals and not by collective institutions or organizations. Laissez-faire, free market ideologies—including classical liberalism, neoliberalism, and libertarianism—are formed around the idea that social inequality is a "natural" feature of societies, is therefore inevitable and, in some philosophies, even desirable. Inequality provides for differing goods and services to be offered on the open market, spurs ambition, and provides incentive for industriousness and innovation. At the other end of the continuum, collectivists place little to no trust in "free market" economic systems, noting widespread lack of access among specific groups or classes of individuals to the costs of entry to the market. Widespread inequalities often lead to conflict and dissatisfaction with the current social order. Such ideologies include Fabianism, socialism, and Marxism or communism. Inequality, in these ideologies, must be reduced, eliminated, or kept under tight control through collective regulation.
Though the above discussion is limited to specific Western ideologies, it should be noted that similar thinking can be found, historically, in differing societies throughout the world. While, in general, eastern societies tend toward collectivism, elements of individualism and free market organization can be found in certain regions and historical eras. Classic Chinese society in the Han and Tang dynasties, for example, while highly organized into tight hierarchies of horizontal inequality with a distinct power elite also had many elements of free trade among its various regions and subcultures.
Today, some hold the belief that social inequality often creates political conflict, and there is growing consensus that political structures determine how such conflicts are resolved. Under this line of thinking, adequately designed social and political institutions are seen as ensuring the smooth functioning of economic markets such that there is political stability, which improves the long-term outlook, enhances labour and capital productivity and so stimulates economic growth. With higher economic growth, net gains are positive across all levels and political reforms are easier to sustain. This may explain why, over time, in more egalitarian societies fiscal performance is better, stimulating greater accumulation of capital and higher growth.
Socioeconomic status (SES) is a combined total measure of a person's work experience and of an individual's or family's economic and social position in relation to others, based on income, education, and occupation. It is often used as synonymous with social class, a set of hierarchical social categories that indicate an individual's or household's relative position in a stratified matrix of social relationships. Social class is delineated by a number of variables, some of which change across time and place. For Karl Marx, there exist two major social classes with significant inequality between the two. The two are delineated by their relationship to the means of production in a given society. Those two classes are defined as the owners of the means of production and those who sell their labour to the owners of the means of production. In capitalistic societies, the two classifications represent the opposing social interests of its members, capital gain for the capitalists and good wages for the labourers, creating social conflict.
Max Weber uses social classes to examine wealth and status. For him, social class is strongly associated with prestige and privileges. It may explain social reproduction, the tendency of social classes to remain stable across generations maintaining most of their inequalities as well. Such inequalities include differences in income, wealth, access to education, pension levels, social status, socioeconomic safety-net. In general, social class can be defined as a large category of similarly ranked people located in a hierarchy and distinguished from other large categories in the hierarchy by such traits as occupation, education, income, and wealth.
In modern Western societies, inequalities are often broadly classified into three major divisions of social class: upper class, middle class, and lower class. Each of these classes can be further subdivided into smaller classes (e.g. "upper middle"). Members of different classes have varied access to financial resources, which affects their placement in the social stratification system.
The quantitative variables most often used as indicators of social inequality are income and wealth. In a given society, the distribution of individual or household accumulations of wealth tells us more about variation in well-being than does income alone. Gross Domestic Product (GDP), especially per capita GDP, is sometimes used to describe economic inequality at the international or global level. A better measure at that level, however, is the Gini coefficient, a measure of statistical dispersion used to represent the distribution of a specific quantity, such as income or wealth, at a global level, among a nation's residents, or even within a metropolitan area. Other widely used measures of economic inequality are the percentage of people living on less than US$1.25 or $2 a day and the ratio of the income share of the wealthiest 10% of the population to that of the poorest 40%, known as the Palma measure.
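The Gini coefficient mentioned above can be computed directly from a list of incomes or wealth holdings. The sketch below is a minimal Python illustration using the standard formula over the sorted values; the sample figures are invented purely for demonstration.

```python
def gini(values):
    """Gini coefficient of a list of incomes or wealth holdings.

    0 indicates perfect equality; values approaching 1 indicate that the
    quantity is concentrated in very few hands.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Closed form over the sorted values: G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n,
    # with ranks i starting at 1.
    weighted_sum = sum(i * x for i, x in enumerate(xs, start=1))
    return (2.0 * weighted_sum) / (n * total) - (n + 1.0) / n

# Hypothetical household incomes (illustrative only).
print(gini([10, 10, 10, 10]))   # 0.0  -> perfect equality
print(gini([1, 1, 1, 97]))      # 0.72 -> highly concentrated
```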
Patterns of inequality
There are a number of socially defined characteristics of individuals that contribute to social status and, therefore, equality or inequality within a society. When researchers use quantitative variables such as income or wealth to measure inequality, examination of the data reveals patterns indicating that these other social variables contribute to income or wealth as intervening variables. Significant inequalities in income and wealth are found when specific socially defined categories of people are compared. Among the most pervasive of these variables are sex/gender, race, and ethnicity. This is not to say, in societies wherein merit is considered to be the primary factor determining one's place or rank in the social order, that merit has no effect on variations in income or wealth. It is to say that these other socially defined characteristics can, and often do, intervene in the valuation of merit.
Sex- and gender-based prejudice and discrimination, called sexism, are major contributing factors to social inequality. Most societies, even agricultural ones, have some sexual division of labour and gender-based division of labour tends to increase during industrialization. The emphasis on gender inequality is born out of the deepening division in the roles assigned to men and women, particularly in the economic, political and educational spheres. Women are underrepresented in political activities and decision making processes in most states in both the Global North and Global South.
Gender discrimination, especially concerning the lower social status of women, has been a topic of serious discussion not only within academic and activist communities but also by governmental agencies and international bodies such as the United Nations. These discussions seek to identify and remedy widespread, institutionalized barriers to access for women in their societies. By making use of gender analysis, researchers try to understand the social expectations, responsibilities, resources and priorities of women and men within a specific context, examining the social, economic and environmental factors which influence their roles and decision-making capacity. By enforcing artificial separations between the social and economic roles of men and women, the lives of women and girls are negatively impacted and this can have the effect of limiting social and economic development.
Cultural ideals about women's work can also affect men whose outward gender expression is considered "feminine" within a given society. Transgender and gender-variant persons may express their gender through their appearance, the statements they make, or official documents they present. In this context, gender normativity, which is understood as the social expectations placed on us when we present particular bodies, produces widespread cultural/institutional devaluations of trans identities, homosexuality and femininity. Trans persons, in particular, have been defined as socially unproductive and disruptive.
A variety of global issues like HIV/AIDS, illiteracy, and poverty are often seen as "women's issues" since women are disproportionately affected. In many countries, women and girls face problems such as lack of access to education, which limit their opportunities to succeed, and further limits their ability to contribute economically to their society. Women are underrepresented in political activities and decision making processes throughout most of the world. As of 2007, around 20 percent of women were below the $1.25/day international poverty line and 40 percent below the $2/day mark. More than one-quarter of females under the age of 25 were below the $1.25/day international poverty line and about half on less than $2/day.
Women's participation in work has been increasing globally, but women are still faced with wage discrepancies and differences compared to what men earn. This is true globally, even in the agricultural and rural sector in developed as well as developing countries. Structural impediments to women's ability to pursue and advance in their chosen professions often result in a phenomenon known as the glass ceiling, which refers to unseen, and often unacknowledged, barriers that prevent minorities and women from rising to the upper rungs of the corporate ladder, regardless of their qualifications or achievements. This effect can be seen in the corporate and bureaucratic environments of many countries, lowering the chances of women to excel. It prevents women from succeeding and making the maximum use of their potential, which comes at a cost for women as well as for society's development. Ensuring that women's rights are protected and endorsed can promote a sense of belonging that motivates women to contribute to their society. Once able to work, women should be entitled to the same job security and safe working environments as men. Until such safeguards are in place, women and girls will continue to experience not only barriers to work and opportunities to earn, but will continue to be the primary victims of discrimination, oppression, and gender-based violence.
Women and persons whose gender identity does not conform to patriarchal beliefs about sex (only male and female) continue to face violence on global domestic, interpersonal, institutional and administrative scales. While first-wave Liberal Feminist initiatives raised awareness about the lack of fundamental rights and freedoms that women have access to, second-wave feminism (see also Radical Feminism) highlighted the structural forces that underlie gender-based violence. Masculinities are generally constructed so as to subordinate femininities and other expressions of gender that are not heterosexual, assertive and dominant. Gender sociologist and author Raewyn Connell discusses in her 2009 book, Gender, how masculinity is dangerous, heterosexual, violent and authoritative. These structures of masculinity ultimately contribute to the vast amounts of gendered violence, marginalization and suppression that women, queer, transgender, gender variant and gender non-conforming persons face. Some scholars suggest that women's underrepresentation in political systems speaks to the idea that "formal citizenship does not always imply full social membership". Men, male bodies and expressions of masculinity are linked to ideas about work and citizenship. Others point out that patriarchal states tend to scale back and claw back their social policies to the disadvantage of women. This process ensures that women encounter resistance when seeking entry into meaningful positions of power in institutions, administrations, political systems and communities.
Racial and ethnic inequality
Racial or ethnic inequality is the result of hierarchical social distinctions between racial and ethnic categories within a society, often established on the basis of characteristics such as skin color and other physical features, or an individual's place of origin or culture. Even though race has no biological basis, it has become a socially constructed category capable of restricting or enabling social status. Unequal treatment and opportunities between such categories are usually the result of some categories being considered superior to others. This inequality can manifest through discriminatory hiring and pay practices. In some cases, employers have been shown to prefer hiring potential employees based on the perceived ethnicity of a candidate's given name, even if all they have to go by in their decision are resumes featuring identical qualifications. These sorts of discriminatory practices stem from prejudice and stereotyping, which occur when people form assumptions about the tendencies and characteristics of certain social categories, often rooted in assumptions about biology, cognitive capabilities, or even inherent moral failings. These negative attributions are then disseminated through a society through a number of different media, including television, newspapers and the internet, all of which play a role in promoting preconceived notions of race that disadvantage and marginalize groups of people. This, along with xenophobia and other forms of discrimination, continues to occur in societies even as globalization advances.
Racial inequality can also result in diminished opportunities for members of marginalized groups, which in turn can lead to cycles of poverty and political marginalization. Racial and ethnic categories may become minority categories within a society. Minority members in such a society are often subjected to discriminatory actions resulting from majority policies, including assimilation, exclusion, oppression, expulsion, and extermination. For example, during the run-up to the 2012 federal elections in the United States, legislation in certain "battleground states" that claimed to target voter fraud had the effect of disenfranchising tens of thousands of primarily African American voters. These types of institutional barriers to full and equal social participation have far-reaching effects within marginalized communities, including reduced economic opportunity and output, reduced educational outcomes and opportunities, and reduced levels of overall health.
In the United States, Angela Davis argues that mass incarceration has been a modern tool of the state to impose inequality, repression, and discrimination upon African Americans and Hispanics. The War on Drugs has been a campaign with disparate effects, ensuring the constant incarceration of poor, vulnerable, and marginalized populations in North America. Over a million African Americans are incarcerated in the US, many of whom have been convicted of a drug possession charge. With the states of Colorado and Washington having legalized the possession of marijuana, drug reformists and anti-war-on-drugs lobbyists are hopeful that drug issues will be interpreted and dealt with from a healthcare perspective instead of as a matter of criminal law. In Canada, Aboriginal, First Nations and Indigenous persons represent over a quarter of the federal prison population, even though they represent only 3% of the country's population.
Age discrimination is defined as the unfair treatment of people with regard to promotions, recruitment, resources, or privileges because of their age. It is also known as ageism: the stereotyping of and discrimination against individuals or groups based upon their age. It is a set of beliefs, attitudes, norms, and values used to justify age-based prejudice, discrimination, and subordination. One form of ageism is adultism, which is the discrimination against children and people under the legal adult age. An example of an act of adultism might be the policy of a certain establishment, restaurant, or place of business to not allow those under the legal adult age to enter their premises after a certain time or at all. While some people may benefit or enjoy these practices, some find them offensive and discriminatory. Discrimination against those under the age of 40 however is not illegal under the current U.S. Age Discrimination in Employment Act (ADEA).
As implied in the definitions above, treating people differently based upon their age is not necessarily discrimination. Virtually every society has age stratification, meaning that different age groups occupy different positions and roles within the social structure. In most cultures, there are different social role expectations for people of different ages to perform. Every society manages people's ageing by allocating certain roles to different age groups. Age discrimination primarily occurs when age is used as an unfair criterion for allocating more or fewer resources. Scholars of age inequality have suggested that certain social organizations favor particular age inequalities. For instance, because of their emphasis on training and maintaining productive citizens, modern capitalist societies may dedicate disproportionate resources to training the young and maintaining the middle-aged worker to the detriment of the elderly and the retired (especially those already disadvantaged by income/wealth inequality).
In modern, technologically advanced societies, there is a tendency for both the young and the old to be relatively disadvantaged. More recently, however, in the United States the tendency is for the young to be most disadvantaged. For example, poverty levels in the U.S. have been decreasing among people aged 65 and older since the early 1970s, whereas the number of children under 18 in poverty has steadily risen. While the elderly have often had the opportunity to build their wealth throughout their lives, younger people have the disadvantage of having recently entered, or not yet entered, the economic sphere. The larger contributor to this, however, is the increase in the number of people over 65 receiving Social Security and Medicare benefits in the U.S.
When we compare income distribution among youth across the globe, we find that about half (48.5 percent) of the world's young people are confined to the bottom two income brackets as of 2007. This means that, out of the three billion persons under the age of 24 in the world as of 2007, approximately 1.5 billion were living in situations in which they and their families had access to just nine percent of global income. Moving up the income distribution ladder, children and youth do not fare much better: more than two-thirds of the world's youth have access to less than 20 percent of global wealth, with 86 percent of all young people living on about one-third of world income. For the just over 400 million youth who are fortunate enough to rank among families or situations at the top of the income distribution, however, opportunities improve greatly with more than 60 percent of global income within their reach.
Although this does not exhaust the scope of age discrimination, in modern societies it is often discussed primarily with regard to the work environment. Indeed, non-participation in the labour force and unequal access to rewarding jobs mean that the elderly and the young are often subject to unfair disadvantages because of their age. On the one hand, the elderly are less likely to be involved in the workforce. At the same time, old age may or may not put one at a disadvantage in accessing positions of prestige: old age may benefit one in such positions, but it may also disadvantage one because of negative ageist stereotyping of old people. On the other hand, young people are often disadvantaged from accessing prestigious or relatively rewarding jobs because of their recent entry to the work force or because they are still completing their education. Typically, once they enter the labour force or take a part-time job while in school, they start at entry-level positions with low wages. Furthermore, because of their lack of prior work experience, they can also often be forced to take marginal jobs, where they can be taken advantage of by their employers.
Inequalities in health
Health inequalities can be defined as differences in health status or in the distribution of health determinants between different population groups. Health inequalities are in many cases related to access to health care. In industrialized nations, health inequalities are most prevalent in countries that have not implemented a universal health care system, such as the United States. Because the US health care system is heavily privatized, access to health care is dependent upon one's economic capital: health care is not a right but a commodity that can be purchased through private insurance companies (or that is sometimes provided through an employer). The way health care is organized in the U.S. contributes to health inequalities based on gender, socioeconomic status and race/ethnicity. As Wright and Perry assert, "social status differences in health care are a primary mechanism of health inequalities". In the United States, over 48 million people are without medical care coverage. This means that almost one sixth of the population is without health insurance, mostly people belonging to the lower classes of society.
While universal access to health care may not completely eliminate health inequalities, it has been shown that it greatly reduces them. In this context, privatization gives individuals the 'power' to purchase their own health care (through private health insurance companies), but this leads to social inequality by only allowing people who have economic resources to access health care. Citizens are seen as consumers who have a 'choice' to buy the best health care they can afford; in alignment with neoliberal ideology, this puts the burden on the individual rather than the government or the community.
In countries that have a universal health care system, health inequalities have been reduced. In Canada, for example, equity in the availability of health services has been improved dramatically through Medicare. People don't have to worry about how they will pay for health care, or rely on emergency rooms for care, since health care is provided for the entire population. However, inequality issues still remain. For example, not everyone has the same level of access to services. Inequalities in health are not, however, only related to access to health care. Even if everyone had the same level of access, inequalities might still remain, because health status is a product of more than just how much medical care people have available to them. While Medicare has equalized access to health care by removing the need for direct payments at the time of service, which improved the health of low-status people, inequities in health are still prevalent in Canada. This may be due to the current social system, which bears other types of inequalities such as economic, racial and gender inequality.
A lack of health equity is also evident in the developing world, where the importance of equitable access to healthcare has been cited as crucial to achieving many of the Millennium Development Goals. Health inequalities can vary greatly depending on the country one is looking at. Inequalities in health are often associated with socioeconomic status and access to health care. Health inequities can occur when the distribution of public health services is unequal. For example, in Indonesia in 1990, only 12% of government spending for health was for services consumed by the poorest 20% of households, while the wealthiest 20% consumed 29% of the government subsidy in the health sector. Access to health care is heavily influenced by socioeconomic status as well, as wealthier population groups have a higher probability of obtaining care when they need it. A study by Makinen et al. (2000) found that in the majority of developing countries they looked at, there was an upward trend by quintile in health care use for those reporting illness. Wealthier groups are also more likely to be seen by doctors and to receive medicine.
The economies of the world have developed unevenly, historically, such that entire geographical regions were left mired in poverty and disease while others began to reduce poverty and disease on a wholesale basis. This was represented by a type of North–South divide that existed after WWII between First world, more developed, industrialized, wealthy countries and Third world countries, primarily as measured by GDP. From around 1980, however, through at least 2011, the GDP gap, while still wide, appeared to be closing and, in some more rapidly developing countries, life expectancies began to rise. However, there are numerous limitations of GDP as an economic indicator of social "well-being."
If we look at the Gini coefficient for world income over time, after WWII the global Gini coefficient sat at just under .45. Between around 1959 and 1966, the global Gini increased sharply, to a peak of around .48 in 1966. After falling and leveling off a couple of times during the period from around 1967 to 1984, the Gini began to climb again in the mid-eighties until reaching a high of around .54 in 2000, then jumped again to around .70 in 2002. Since the late 1980s, the gap between some regions has markedly narrowed—between Asia and the advanced economies of the West, for example—but huge gaps remain globally. Overall equality across humanity, considered as individuals, has improved very little. Within the decade between 2003 and 2013, income inequality grew even in traditionally egalitarian countries like Germany, Sweden and Denmark. With a few exceptions—France, Japan, Spain—the top 10 percent of earners in most advanced economies raced ahead, while the bottom 10 percent fell further behind. By 2013, a tiny elite of multibillionaires, 85 to be exact, had amassed wealth equivalent to all the wealth owned by the poorest half (3.5 billion) of the world's total population of 7 billion. Country of citizenship (an ascribed status characteristic) explains 60% of variability in global income; citizenship and parental income class (both ascribed status characteristics) combined explain more than 80% of income variability.
Inequality and economic growth
The concept of economic growth is fundamental in capitalist economies. Productivity must grow as population grows and capital must grow to feed into increased productivity. Investment of capital leads to returns on investment (ROI) and increased capital accumulation. The hypothesis that economic inequality is a necessary precondition for economic growth has been a mainstay of liberal economic theory. Recent research, particularly over the first two decades of the 21st century, has called this basic assumption into question. While growing inequality does have a positive correlation with economic growth under specific sets of conditions, inequality in general is not positively correlated with economic growth and, under some conditions, shows a negative correlation with economic growth.
Milanovic (2011) points out that, overall, global inequality between countries is more important to the growth of the world economy than inequality within countries. While global economic growth may be a policy priority, recent evidence about regional and national inequalities cannot be dismissed when more local economic growth is a policy objective. The recent financial crisis and global recession hit countries and shook financial systems all over the world. This led to the implementation of large-scale fiscal expansionary interventions and, as a result, to massive public debt issuance in some countries. Governmental bailouts of the banking system further burdened fiscal balances and raised considerable concern about the fiscal solvency of some countries. Most governments want to keep deficits under control, but rolling back the expansionary measures or cutting spending and raising taxes implies an enormous wealth transfer from taxpayers to the private financial sector. Expansionary fiscal policies shift resources and cause worries about growing inequality within countries. Moreover, recent data confirm an ongoing trend of increasing income inequality since the early nineties. Increasing inequality within countries has been accompanied by a redistribution of economic resources between developed economies and emerging markets. Davtyan (2014) studied the interaction of these fiscal conditions and changes in fiscal and economic policies with income inequality in the UK, Canada, and the US. The study finds that income inequality has a negative effect on economic growth in the case of the UK but a positive effect in the cases of the US and Canada. Income inequality generally reduces government net lending/borrowing for all the countries. Economic growth, it finds, leads to an increase of income inequality in the case of the UK and to a decline of inequality in the cases of the US and Canada. At the same time, economic growth improves government net lending/borrowing in all the countries. Government spending leads to a decline in inequality in the UK but to its increase in the US and Canada.
Following the results of Alesina and Rodrik (1994), Bourguignon (2004), and Birdsall (2005), which show that developing countries with high inequality tend to grow more slowly, Ortiz and Cummins (2011) reach a similar conclusion. For 131 countries for which they could estimate the change in Gini index values between 1990 and 2008, they find that those countries that increased levels of inequality experienced slower annual per capita GDP growth over the same time period. Noting a lack of data for national wealth, they build an index using the Forbes list of billionaires by country, normalized by GDP and validated through correlation with a Gini coefficient for wealth and the share of wealth going to the top decile. They find that many countries generating low rates of economic growth are also characterized by a high level of wealth inequality, with wealth concentrated among a class of entrenched elites. They conclude that extreme inequality in the distribution of wealth globally, regionally and nationally, coupled with the negative effects of higher levels of income disparities, should make us question current economic development approaches and examine the need to place equity at the center of the development agenda.
Ostry, et al. (2014) reject the hypothesis that there is a major trade-off between a reduction of income inequality (through income redistribution) and economic growth. If that were the case, they hold, then redistribution that reduces income inequality would on average be bad for growth, taking into account both the direct effect of higher redistribution and the effect of the resulting lower inequality. Their research shows rather the opposite: increasing income inequality always has a significant and, in most cases, negative effect on economic growth while redistribution has an overall pro-growth effect (in one sample) or no growth effect. Their conclusion is that increasing inequality, particularly when inequality is already high, results in low growth, if any, and such growth may be unsustainable over long periods.
Piketty and Saez (2014) note that there are important differences between income and wealth inequality dynamics. First, wealth concentration is always much higher than income concentration. The top 10 percent wealth share typically falls in the 60 to 90 percent range of all wealth, whereas the top 10 percent income share is in the 30 to 50 percent range. The bottom 50 percent wealth share is always less than 5 percent, whereas the bottom 50 percent income share generally falls in the 20 to 30 percent range. The bottom half of the population hardly owns any wealth, but it does earn appreciable income: on average, members of the bottom half of the population in terms of wealth own less than one-tenth of the average wealth, while members of the bottom half of the population in terms of income earn about half the average income. The inequality of labor income can be high, but it is usually much less extreme than the inequality of wealth. In sum, the concentration of capital ownership is always extreme, so that the very notion of capital is fairly abstract for large segments—if not the majority—of the population. Piketty (2014) finds that wealth-income ratios today seem to be returning to very high levels in low economic growth countries, similar to what he calls the "classic patrimonial" wealth-based societies of the 19th century, wherein a minority lives off its wealth while the rest of the population works for subsistence living. He surmises that wealth accumulation is high because growth is low.
- Civil rights
- Digital divide
- Educational inequality
- Gini coefficient
- Global justice
- Health equity
- Horizontal inequality
- List of countries by income inequality
- List of countries by distribution of wealth
- LGBT social movements
- Social apartheid
- Social equality
- Social justice
- Social exclusion
- Social mobility
- Social stratification
- Structural violence
- Tax evasion
- Sernau, Scott (2013). Social Inequality in a Global Age (4th edition). Thousand Oaks, CA: Sage. ISBN 978-1452205403.
- Rugaber, Christopher S.; Boak, Josh (January 27, 2014). "Wealth gap: A guide to what it is, why it matters". AP News. Retrieved January 27, 2014.
- Walker, Dr. Charles. "New Dimensions of Social Inequality". www.ceelbas.ac.uk. Retrieved 2015-09-22.
- Deji, Olanike F. (2011). Gender and Rural Development. London: LIT Verlag Münster. p. 93. ISBN 978-3643901033.
- Neckerman, Kathryn M. and Florencia Torche (2007). "Inequality: Causes and Consequences". Annual Review of Sociology. 33: 335–357. JSTOR 29737766. doi:10.2307/29737766.
- George, Victor and Paul Wilding (1990). Ideology and Social Welfare (2nd edition). Routledge. ISBN 978-0415051019.
- Adams, Ian (2001). Political Ideology Today. Manchester: Manchester University Press. ISBN 978-0719060205.
- Ebrey, Patricia Buckley; Walthall, Anne; Palais, James (2006). East Asia: A Cultural, Social, and Political History. Boston: Houghton Mifflin Company.
- Davtyan, Karen (2014). "Interrelation among Economic Growth, Income Inequality, and Fiscal Performance: Evidence from Anglo-Saxon Countries". Research Institute of Applied Economics Working Paper 2014/05. Regional Quantitative Analysis Research Group. p. 45. Retrieved 9 July 2014.
- Stiglitz, Joseph. 2012. The Price of Inequality. New York, NY: Norton.
- Gilbert, Dennis. 2011: The American Class Structure in an Age of Growing Inequality, 8th ed. Thousand Oaks, CA: Pine Forge Press.
- Saunders, Peter (1990). Social Class and Stratification. Routledge. ISBN 978-0-415-04125-6.
- Doob, B. Christopher (2013). Social Inequality and Social Stratification in US Society (1st ed.). Upper Saddle River, New Jersey: Pearson Education. ISBN 0-205-79241-3.
- Domhoff, G. William (2013). Who Rules America? The Triumph of the Corporate Rich. McGraw-Hill. p. 288. ISBN 978-0078026713.
- Gini, C. (1936). "On the Measure of Concentration with Special Reference to Income and Statistics", Colorado College Publication, General Series No. 208, 73–79.
- Cobham, Alex and Andy Sumner (2013). Is It All About the Tails? The Palma Measure of Income Inequality (Working Paper 343). Washington, D.C.: Centre for Global Development.
- Collins, Patricia Hill (1998). "Toward a new vision: race, class and gender as categories of analysis and connection" in Social Class and Stratification: Classic Statements and Theoretical Debates. Boston: Rowman & Littlefield. pp. 231–247.
- Struening, Karen (2002). New Family Values: Liberty, Equality, Diversity. New York: Rowman & Littlefield. ISBN 978-0-7425-1231-3.
- "About us". Un.org. 2003-12-31. Retrieved 2013-07-17.
- Issac Kwaka Acheampong and Sidharta Sarkar. Gender, Poverty & Sustainable Livelihood. p. 108.
- Stanley, E. A. (2011). " Fugitive flesh: Gender self-determination, queer abolition, and trans resistance" in E. Stanley, A. and N. Smith (eds.), Captive genders: Trans embodiment and the prison industrial complex. Edinburgh, UK: AK Press.
- Irving, D. (2008). "Normalized transgressions: Legitimizing the transsexual body as productive.". Radical History Review. 100: 38–59. doi:10.1215/01636545-2007-021.
- "Empowering Women as Key Change Agents".
- "Platform for Action". United Nations Fourth World Conference on Women. Retrieved 9 April 2013.
- "Meeting the Needs of the World's Women".
- Ortiz, Isabel and Matthew Cummins (2011). Global Inequality: Beyond the Bottom Billion (PDF). UNICEF SOCIAL AND ECONOMIC POLICY WORKING PAPER.
- "Women, Poverty & Economics".
- "UN: Gender discrimination accounts for 90% of wage gap between men and women".
- "The Glass Ceiling Effect" (PDF).
- Janet Henshall Momsen (2004). Gender and Development. Routledge.
- "Goal 3: Promote Gender Equity and Empower Women" (PDF).
- "UN Women and ILO join forces to promote women's empowerment in the workplace".
- Connell, R.W. (1995). Masculinities. University of California Press. ISBN 978-0520246980.
- O'Connor 1993 p.504
- Mandel 2012
- Furlong, Andy (2013). Youth Studies: An Introduction. New York: Routledge. p. 37. ISBN 978-0-415-56479-3.
- Rooth, Dan-Olof (April 2007). "Implicit Discrimination in Hiring: Real World Evidence". IZA Discussion Paper.
- Dubow, Saul (1995). Scientific Racism in Modern South Africa. Cambridge University Press. p. 121. ISBN 0-521-47907-X.
- "The World Conference against racism, racial discrimination, xenophobia and related intolerance".
- Henrard, Kristen (2000). Devising an Adequate System of Minority Protection: Individual Human Rights, Minority Rights and the Right to Self-Determination. New York: Springer. ISBN 978-9041113597.
- Alvarez, R. Michael; Baily, Delia; Katz, Jonathan (January 2008). "The Effect of Voter Identification Laws on Turnout". California Institute of Technology Social Science Working Paper No. 1267R.
- Thompson, Teresa L. (2012). The Routledge Handbook of Health Communication. Routledge. pp. 241–42.
- Davis, Angela Y. Abolition Democracy: Beyond Prisons, Torture, and Empire. Seven Stories. p. 160. ISBN 1583226958.
- Kirkpatrick, George R.; Katsiaficas, George N.; Kirkpatrick, Robert George; Mary Lou Emery (1987). Introduction to critical sociology. Ardent Media. p. 261. ISBN 978-0-8290-1595-9. Retrieved 28 January 2011.
- Lauter And Howe (1971) Conspiracy of the Young. Meridian Press.
- Sargeant, Malcolm (ed.) (2011). Age Discrimination and Diversity Multiple Discrimination from an Age Perspective. Cambridge University Press. ISBN 978-1107003774.
- Ortiz, Isabel and Matthew Cummins (April 2011). "Global inequality: Beyond the bottom billion". UNICEF SOCIAL AND ECONOMIC POLICY WORKING PAPER. UNICEF. Retrieved 9 July 2014.
- "United Nations Health Impact Assessment: Glossary of Terms Used". Retrieved 10 April 2013.
- Wright, Eric R.; Perry, Brea L. (2010). "Medical Sociology and Health Services Research: Past Accomplishments and Future Policy Challenges". Journal of Health and Social Behaviour: 107–119.
- "US Census".
- Veugelers, P.; Yip, A. (2003). "Socioeconomic Disparities in Health Care Use: Does Universal Coverage Reduce Inequalities in Health?". Journal of Epidemiology and Community Health. 57 (6). doi:10.1136/jech.57.6.424.
- Hacker, Jacob S. (2006). The Great Risk Shift: The Assault on American Jobs, Families, Health Care, and Retirement - and How You Can Fight Back. Oxford University Press.
- Grant, Karen R. (1994). Health and Health Care in Essentials of Contemporary Sociology. Toronto: Copp Clark Longman. p. 275.
- Grant, K.R. (1998). The Inverse Care Law in Canada: Differential Access Under Universal Free Health Insurance. Toronto: Harcourt Brace Jovanovich. pp. 118–134.
- World Bank (1993). World Development Report. New York: Oxford University Press.
- Makinen, M.; et al. (January 2000). "Inequalities in Health Care Use and Expenditures: Empirical Data from Eight Developing Countries and Countries in Transition". Bulletin of the World Health Organization. 78 (1). doi:10.1590/S0042-96862000000100006.
- Graph: Gapminder.org
- Rosling, Hans (2013). "How much do you know about the world?". BBC. Retrieved 9 July 2014.
- "GDP: A Flawed Measure of Progress". New Economy Working Group. Retrieved 9 July 2014.
- Milanovic, Branko (2003). "The Two Faces of Globalization". World Development. 31 (4): 667–683. doi:10.1016/s0305-750x(03)00002-0.
- Stiglitz, Joseph E. (13 October 2013). "Inequality is a Choice". New York Times. Retrieved 9 July 2014.
- "Outlook on the Global Agenda 2014" (PDF). World Economic Forum. Retrieved 9 July 2014.
- Milanovic, Branko (Autumn 2011). "Global income inequality: the past two centuries and implications for 21st century" (PDF). World Bank. Retrieved 10 July 2014.
- Berg, Andrew G.; Ostry, Jonathan D. (2011). "Equality and Efficiency". Finance and Development. International Monetary Fund. 48 (3). Retrieved September 10, 2012.
- Ostry, Jonathan D. and Andrew Berg, Charalambos G. Tsangarides (April 2014). "Redistribution, Inequality, and Growth". International Monetary Fund. Retrieved 10 July 2014.
- Alesina, A. and D. Rodrik (1994). "Distributive Politics and Economic Growth". The Quarterly Journal of Economics (MIT Press). 109 (2): 465–90. doi:10.2307/2118470.
- Bourguignon, F. (2004). The Poverty-Growth-Inequality Triangle (PDF). Washington, D.C.: World Bank.
- Birdsall, N. (2005). Why Inequality Matters in a Globalizing World. Helsinki: UNU-WIDER Annual Lecture.
- "Inequality in the long run". Science. 344 (6186): 838–43. 2014. doi:10.1126/science.1251936.
- Piketty, Thomas (2014). Capital in the 21st century. Belknap Press. ISBN 978-0674430006.
- Abel, T (2008). "Cultural capital and social inequality in health". Journal of Epidemiology and Community Health. 62 (7): e13. doi:10.1136/jech.2007.066159.
- Acker, Joan (1990). "Hierarchies, jobs, bodies: a theory of gendered organizations". Gender and Society. 4: 139–58. doi:10.1177/089124390004002002.
- Bourdieu, Pierre. 1996.The State Nobility: Elite Schools in the Field of Power, translated by Lauretta C. Clough. Stanford: Stanford University Press.
- Brennan, S (2009). "Feminist Ethics and Everyday Inequalities". Hypatia. 24 (1): 141–159. doi:10.1111/j.1527-2001.2009.00011.x.
- Brenner, N (2010). "Variegated neoliberalization: geographies, modalities, pathways". Global networks. 10 (2): 182–222. doi:10.1111/j.1471-0374.2009.00277.x.
- Coburn, D (2004). "Beyond the income inequality hypothesis: class, neo-liberalism, and health inequalities". Social Science & Medicine. 58 (1): 41–56. doi:10.1016/s0277-9536(03)00159-x.
- Esping-Andersen, Gosta. 1999. "The Three Worlds of Welfare Capitalism." In The Welfare State Reader edited by Christopher Pierson and Francis G. Castles. Polity Press.
- Wilkinson, Richard; Pickett, Kate (2009). The Spirit Level: Why More Equal Societies Almost Always Do Better. Allen Lane. ISBN 978-1-84614-039-6.
- Frankfurt, H (1987). "Equality as a Moral Ideal". Ethics. 98 (1): 21–43. doi:10.1086/292913.
- Cruz, Adrienne and Sabine Klinger (2011). Gender-based violence in the world of work International Labour Organization
- Goldthorpe, J. H. (2010). "Analysing Social Inequality: A Critique of Two Recent Contributions from Economics and Epidemiology". European Sociological Review. 26 (6): 731–744. doi:10.1093/esr/jcp046.
- Irving, D (2008). "Normalized transgressions: Legitimizing the transsexual body as productive". Radical History Review. 100: 38–59. doi:10.1215/01636545-2007-021.
- Jin, Y.; Li, H.; et al. (2011). "Income inequality, consumption, and social-status seeking". Journal of Comparative Economics. 39 (2): 191–204. doi:10.1016/j.jce.2010.12.004.
- Lazzarato, M (2009). "Neoliberalism in Action: Inequality, Insecurity and the Reconstitution of the Social". Theory, Culture & Society. 26 (6): 109–133. doi:10.1177/0263276409350283.
- Mandel, Hadas (2012). "Winners and Losers: The Consequences of Welfare State Policies for Gender Wage Inequality". European Sociological Review. 28: 241–262. doi:10.1093/esr/jcq061.
- Ortiz, Isabel & Matthew Cummins. 2011. Global Inequality: Beyond the Bottom Billion – A Rapid Review of Income Distribution in 141 Countries. United Nations Children's Fund (UNICEF), New York.
- Pakulski, J.; Waters, M. (1996). "The Reshaping and Dissolution of Social Class in Advanced Society". Theory and Society. 25 (5): 667–691. doi:10.1007/bf00188101.
- Piketty, Thomas (2014). Capital in the 21st century. Belknap Press.
- Sernau, Scott (2013). Social Inequality in a Global Age (4th edition). Thousand Oaks, CA: Sage.
- Stanley, E. A. 2011. "Fugitive flesh: Gender self-determination, queer abolition, and trans resistance." In E. A. Stanley & N. Smith (Eds.), Captive genders: Trans embodiment and the prison industrial complex (pp. 1–14). Edinburgh, UK: AK Press.
- Stiglitz, Joseph. 2012. The Price of Inequality. New York: Norton.
- United Nations (UN) Inequality-adjusted Human Development Report (IHDR) 2013. United Nations Development Programme (UNDP).
- Weber, Max. 1946. "Power." In Max Weber: Essays in Sociology. Translated and Edited by H.H. Gerth and C. Wright Mills. New York: Oxford University Press.
- Weeden, K. A.; Grusky, D. B. (2012). "The Three Worlds of Inequality". American Journal of Sociology. 117 (6): 1723–1785. doi:10.1086/665035.
- Wright, E. O. (2000). "Working-Class Power, Capitalist-Class Interests, and Class Compromise". American Journal of Sociology. 105 (4): 957–1002. doi:10.1086/210397.
- Inequality watch
- "Wealth Gap" - A Guide (January 2014), AP News
- "New Oxfam report says half of global wealth held by the 1%: Oxfam warns of widening inequality gap" (2015-01-19). The Guardian. Guardian.com/business/2015/jan/19/global-wealth-oxfam-inequality-davos-economic-summit-switzerland
- How Much More (Or Less) Would You Make If We Rolled Back Inequality? (January 2015). "How much more (or less) would families be earning today if inequality had remained flat since 1979?" National Public Radio
- OECD - Education GPS: Gender differences in education
As the performance gap between processors and main memory continues to widen, increasingly aggressive implementations of cache memories are needed to bridge the gap. Cache memory is a small fast memory used to temporarily hold the contents of portions of the main memory that are most likely to be used.
In computing terms, a cache refers to a software or hardware component that holds data so that future requests for that data can be served faster. The stored data may be the result of an earlier computation or a copy of data saved elsewhere. When the requested data is found in a cache, a cache hit is said to occur; when the requested data is not found in a cache, a cache miss is said to occur.
Cache hits are served very quickly, since they avoid recomputing a result or reading it from a much slower data store. In simple terms, the more requests that can be served from the cache, the faster the system can respond.
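As a concrete illustration of hits and misses, the following minimal Python sketch caches the results of an artificially slow computation in a dictionary. The function names and the 0.1-second delay are invented for demonstration only.

```python
import time

cache = {}

def slow_square(n):
    """Stand-in for an expensive computation or a slow backing store."""
    time.sleep(0.1)
    return n * n

def cached_square(n):
    if n in cache:              # cache hit: answer is returned immediately
        return cache[n]
    result = slow_square(n)     # cache miss: fall back to the slow path
    cache[n] = result           # keep the result for future requests
    return result

cached_square(12)   # miss: takes roughly 0.1 s
cached_square(12)   # hit: effectively instant
```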
Today, caches and cache memory have become a vital component of all processors.
To be efficient and effective at storing data and serving requests, caches must be kept relatively small. Keeping them small also makes them cost-effective, since fast cache memory is considerably more expensive per byte than main memory.
The ability of caches to bridge the performance gap is determined by two primary factors: the time needed to retrieve data from the cache and the fraction of memory references that can be satisfied by the cache. These two factors are commonly referred to as the ''access (hit) time'' and the ''hit ratio''.
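These two factors combine in the standard average-memory-access-time calculation, sketched below. The latency figures are illustrative assumptions rather than measurements of any particular machine.

```python
def average_access_time(hit_time_ns, hit_ratio, miss_penalty_ns):
    """Average memory access time: every access pays the hit time, and the
    fraction of accesses that miss additionally pays the miss penalty."""
    miss_ratio = 1.0 - hit_ratio
    return hit_time_ns + miss_ratio * miss_penalty_ns

# Illustrative numbers: a 2 ns cache with a 95% hit ratio in front of a
# 100 ns main memory.
print(average_access_time(2.0, 0.95, 100.0))   # 7.0 ns on average
```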
A computer has three logical subsystems: the CPU, the memory and storage system, and the input/output system. Cache memory has advanced significantly since Wilkes proposed a ''two-level main store'' in 1965, in which a small, fast ''slave'' memory holds recently used contents of the larger, slower main store.
It has now become a conventional component of high-speed computing with increasing size and sophistication. The performance of cache is critical to overall system processing ability and the ongoing research in this area is attempting to reduce the speed gap between the CPU and the cache memory as much as possible.
The Different Types of Cache memory aspects –
- Cache Fetch Algorithm- The cache fetch algorithm decides when to bring information into the cache. Several possibilities exist: information can be fetched on demand, when it is needed, or before it is needed, that is, prefetched.
- Cache Placement Algorithm- Information could be retrieved from the cache fully associatively, but large associative memories are relatively slow and quite expensive. Hence, the cache is organized as a number of smaller associative memories, so that only one of them has to be searched to determine whether the desired information is in the cache. Each of these small associative memories is called a ''set'', and the number of elements over which the search is conducted is called the set size.
- Line Size- The fixed-size unit of information transferred between the cache and main memory is called the line. The line is analogous to the page, which is the unit of transfer between main memory and secondary storage.
- Replacement Algorithm- When information is requested by the CPU from main memory and the cache is full, some information already in the cache must be selected for replacement (a common choice, least recently used, is shown in the sketch after this list).
- Main Memory Update Algorithm- When the CPU performs a store, the update can be reflected in main memory in different ways, most commonly write-through (main memory is updated immediately) or copy-back/write-back (main memory is updated only when the modified line is evicted).
- Supervisor Cache- The frequent switching between user and supervisor state in most systems results in high miss ratios because the cache is repeatedly reloaded.
- Input/ Output- Input and output are additional sources of references to information in memory. It is important that an output request stream reference the most current values of the information transferred. Similarly, input data must be immediately reflected in all copies of those lines in memory.
- Data/ Instruction Cache- Another cache strategy is to split the cache into two parts- one for data and one for instructions.
- Virtual vs Real Addressing- In systems with virtual memory, the cache may be accessed with real addresses or virtual addresses.
- Cold start vs warm start- Most systems are multiprogrammed; that is, the CPU runs several processes, although only one runs at a time and they alternate every few milliseconds. When a process resumes after a switch, it initially runs with a ''cold'' cache that holds little of its data, whereas a ''warm'' cache already contains much of its working set.
- Multi-level Cache- As the cache grows in size, there comes a point where it is split into parts: a small, fast first-level cache and a larger, somewhat slower second-level cache.
- Cache Bandwidth- It is the rate at which the data can be read from or written to the cache.
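Several of the aspects listed above (placement into sets, line size, and replacement) can be tied together in a toy model. The following Python sketch is a simplified illustration rather than a description of any real hardware; the set count, associativity, and line size are arbitrary assumptions, and fetch_line stands in for a slow read of a whole line from main memory.

```python
from collections import OrderedDict

class SetAssociativeCache:
    """Toy set-associative cache: an address maps to one set via its index
    bits, each set holds `ways` lines, and the least recently used line in a
    full set is evicted on a miss."""

    def __init__(self, num_sets=4, ways=2, line_size=16):
        self.num_sets = num_sets
        self.ways = ways
        self.line_size = line_size
        # One ordered dict per set: tag -> line data, ordered by recency of use.
        self.sets = [OrderedDict() for _ in range(num_sets)]
        self.hits = 0
        self.misses = 0

    def _split(self, address):
        block = address // self.line_size     # drop the offset within the line
        index = block % self.num_sets         # which set to search
        tag = block // self.num_sets          # identifies the line within the set
        return index, tag

    def access(self, address, fetch_line):
        index, tag = self._split(address)
        lines = self.sets[index]
        if tag in lines:                      # hit: refresh recency and return
            self.hits += 1
            lines.move_to_end(tag)
            return lines[tag]
        self.misses += 1                      # miss: fetch the line, maybe evict
        if len(lines) >= self.ways:
            lines.popitem(last=False)         # evict the least recently used line
        lines[tag] = fetch_line(address)
        return lines[tag]

cache = SetAssociativeCache()
cache.access(0x40, fetch_line=lambda a: f"line@{a:#x}")   # first touch: miss
cache.access(0x44, fetch_line=lambda a: f"line@{a:#x}")   # same 16-byte line: hit
print(cache.hits, cache.misses)   # 1 1
```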
THE FUTURE PROSPECTS OF CACHE AND CACHE MEMORY
New memory technologies are blurring the previously distinct characteristics of adjacent layers in the memory hierarchy; adjacent layers no longer differ by orders of magnitude in capacity or latency. Beyond the traditional single-layer view of caching, this creates a data placement challenge. CHOPT is an offline algorithm for data placement across multiple tiers of memory with asymmetric read and write costs.
It is optimal and can therefore serve as an upper bound on the performance gain of any data placement algorithm. The authors also demonstrate an approximation of CHOPT that makes its execution time practical for long traces by sampling requests, incurring an average error of only about 0.2% on representative workloads at a sampling ratio of 1%.
It is also important to note that, in the near future, static (leakage) energy will dominate energy consumption in deep-submicron processes. In simulations using the SPEC95 integer benchmarks, the proposed technique saved up to about 45% of the cache's leakage energy, and about 28% on average. These results identify substantial opportunities for future online memory management research and for developing further aspects of cache memory.
In an experiment, a student launches a ball with an initial horizontal velocity of 5.00 meters/sec at an elevation 2.00 meters above ground. Draw and clearly label with appropriate values and units a graph of the ball's horizontal velocity vs. time and the ball's vertical velocity vs. time. The graph should cover the motion from the instant after the ball is launched until the instant before it hits the ground. Assume the downward direction is negative for this problem.
This is College Physics Answers with Shaun Dychko. A ball begins at an elevation of two meters above the ground, perhaps being shot out of some kind of spring-loaded device, and shoots out horizontally with a speed of 5.00 meters per second. It will then be in free fall, so it has a vertical acceleration due to gravity. It begins at a height of two meters, which we'll write down here. That is to say, the final position is zero and the initial position, y naught, equals two meters. The initial y-component of its velocity is zero. We're going to draw graphs of its horizontal velocity with time and its vertical velocity with time. In order to figure out how much time we should have in our column for time, we'll figure out how long it takes for it to hit the ground starting at a height of 2 meters. Equation 77 in Chapter 2 says this: the final y position equals the initial y position plus the initial y velocity times time minus one half times acceleration due to gravity times time squared. But given that the final position is zero and the initial y velocity is zero, we have just these two terms. We'll move this one to the left-hand side by adding one-half gt squared to both sides. Then we get this line, and then we'll multiply both sides by 2 over g. Then we end up with t squared equals y naught times 2 over g, and then we take the square root of both sides as we solve for time. Time is the square root of 2 times 2 meters divided by 9.8 meters per second squared, which is 0.6389 seconds. That's approximately the maximum time we need in our spreadsheet. I'm using Google Docs here, and here's the horizontal velocity with time. Because there's no acceleration horizontally, we just have 5 for every single time. We start at zero seconds, then 0.1, 0.2, 0.3 and so on up to 0.7, and the horizontal velocity is always 5, so this graph is just a horizontal straight line. That's all there is to that. Our graph has units on the axes, seconds for time and meters per second for velocity, and a descriptive title. Now the y velocity is a little bit more interesting. We can write an equation for the y velocity: it is the initial y velocity, which we've already said is zero, minus acceleration due to gravity times time. It essentially becomes this once you substitute zero for v y naught, and this is the formula that I'm putting in the spreadsheet for the y-direction. You can see that formula here. It's negative 9.8 times whatever the time is in the cell in column A with the same row, which is A4 in this case. Here, we have negative 9.8 times the time in A5, in A6, in A7, and so on. Then we make a graph of that, and we can see that it is a straight line, showing that the vertical velocity is changing linearly with time, becoming more and more negative, which we expect because this graph has a slope of negative g. It is in slope-intercept form, y equals mx plus b, where b is zero, and so this is of the form y equals mx, where m, the slope, is negative g. There we go.
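The same table the solution builds in a spreadsheet can be reproduced with a few lines of code. This is a minimal sketch of the calculation described above; the 0.1 s time step and the variable names are assumptions made for the illustration.

```python
import math

g = 9.8       # m/s^2; downward is taken as the negative direction
v0x = 5.00    # m/s, initial horizontal velocity
y0 = 2.00     # m, initial height above the ground

t_ground = math.sqrt(2 * y0 / g)    # about 0.639 s, time to reach the ground

t = 0.0
while t <= t_ground:
    vx = v0x        # no horizontal acceleration, so vx stays at 5.00 m/s
    vy = -g * t     # vertical velocity grows linearly in the negative direction
    print(f"t = {t:.1f} s   vx = {vx:.2f} m/s   vy = {vy:.2f} m/s")
    t += 0.1
```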
An Algorithm for Converting a Decimal Number to a Binary Number
By Kevin Ritzman
In this learning activity you'll examine a systematic method for converting a decimal number (base 10) into a binary number (base 2).
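To make the idea concrete, here is a minimal sketch of the usual repeated-division method: divide by 2, record the remainders, and read them in reverse. The Python function name is an illustrative choice, not part of the activity itself.

```python
def decimal_to_binary(n):
    """Convert a non-negative decimal integer to a binary string by
    repeatedly dividing by 2 and collecting the remainders."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))   # remainder is the next least-significant bit
        n //= 2
    return "".join(reversed(bits))

print(decimal_to_binary(37))   # '100101'  (32 + 4 + 1)
```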
An Algorithm for Converting a Binary Number to a Decimal Number
In this learning activity you'll examine two methods for converting a binary number to a decimal number.
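Two common approaches are sketched below, though they may not be exactly the ones used in the activity: one sums each bit times its positional weight, the other doubles a running total and adds each bit. The Python function names are illustrative only.

```python
def binary_to_decimal_weights(bits):
    """Method 1: sum each bit times its positional weight (a power of two)."""
    return sum(int(b) * 2 ** i for i, b in enumerate(reversed(bits)))

def binary_to_decimal_doubling(bits):
    """Method 2 (double-and-add): double the running total, then add each bit."""
    value = 0
    for b in bits:
        value = value * 2 + int(b)
    return value

print(binary_to_decimal_weights("100101"))    # 37
print(binary_to_decimal_doubling("100101"))   # 37
```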
What is the .Net Framework?
In this learning activity you'll discover what the .NET Framework is, what problems it solves, and how it came to be.
Visual Logic Basic Programming: Using Flow Charts
By Jason Vosters
In this learning activity you'll understand what flow charts are and how they're used to create computer programs.
Programming in C++ Using Constants
By Ryan Appel
In this learning activity you'll discover the const keyword and its uses in C++.
An Overview of the Raspberry Pi
By Joseph Wetzel
In this learning activity you'll discover what the Raspberry Pi is, how it's used, and some projects you can do at home.
Parameters and Arguments
In this learning activity, you’ll discover the difference between a parameter and an argument.
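A tiny example makes the distinction concrete: the names in a function definition are parameters, while the values supplied in a call are arguments. The function and values below are invented for illustration.

```python
def greet(name, punctuation="!"):          # 'name' and 'punctuation' are parameters
    return "Hello, " + name + punctuation

message = greet("Ada", punctuation="?")    # "Ada" and "?" are arguments
print(message)                             # Hello, Ada?
```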
Strings in the .NET Framework
By Brett Sheleski
In this activity, we will explain the string datatype in the .NET Framework and cover the peculiarities of a reference type that appears to behave like a value type.
Debugging: What is It?
Learn more about debugging including: what it is, why we use it, and what it looks like in action.
Object or Class?
By Jay Stulo
In this learning activity, you'll watch an animated explanation of the terms class and object as used by computer programmers, and then contrast the differences.
What is an Algorithm?
In this learning activity, we’ll learn what an algorithm is.
An interactive html page that allows the user to manipulate a Cardioid-like shape generated by lines.
How to Evaluate a Problem Statement using MEA and IPO Techniques
By Matthew Green
In this learning activity you'll discover how to evaluate a problem statement using both the MEA and the IPO techniques.
Value Types and Reference Types in the .Net Framework
In this learning activity you'll discover the functional differences between value and reference types within the .Net framework.
In this learning activity we will discuss breakpoints: what they are and how to use them.
Attributes of a Class: Fields and Properties
In this learning activity you'll discover how, in C#, attributes provide a way of associating data with an object in two forms: fields and properties.
The CLR: Overview of the Common Language Runtime
In this learning activity, you’ll explore the Common Language Runtime, or CLR.
CRUD Applications - Create Retrieve Update Delete
In this learning activity you'll discover what C.R.U.D. applications are and how they're used in everyday life.
By Rose Guthrie, Donna Gehl
Explore the AngularJS framework and learn how it's used to architect and organize code when building web applications.
Pointers In C++
In this learning activity you'll be introduced to programming pointers compatible with both the C and C++ languages.
Async and Await
In this learning activity, we’ll explore how async and await is used in mobile programming to allow multiple tasks to happen at once.
Visual Studio: Creating a New VSTS Repository
By Brian Foote
In this learning activity, you'll practice creating a new repository in Visual Studio Online, a cloud-based version control system.
Instantiation: Constructing an Object
In this learning activity you'll discover how constructors in the C# programming language are used to instantiate objects.
Recursion - See Recursion
In this learning object you'll learn what recursion is and how to use it. |
Color (American English) or colour (Commonwealth English) is the visual perceptual property corresponding in humans to the categories called red, blue, yellow, etc. Color derives from the spectrum of light (distribution of light power versus wavelength) interacting in the eye with the spectral sensitivities of the light receptors. Color categories and physical specifications of color are also associated with objects or materials based on their physical properties such as light absorption, reflection, or emission spectra. By defining a color space colors can be identified numerically by their coordinates.
Because perception of color stems from the varying spectral sensitivity of different types of cone cells in the retina to different parts of the spectrum, colors may be defined and quantified by the degree to which they stimulate these cells. These physical or physiological quantifications of color, however, do not fully explain the psychophysical perception of color appearance.
The science of color is sometimes called chromatics, colorimetry, or simply color science. It includes the perception of color by the human eye and brain, the origin of color in materials, color theory in art, and the physics of electromagnetic radiation in the visible range (that is, what we commonly refer to simply as light).
- 1 Physics of color
- 2 Perception
- 3 Associations
- 4 Spectral colors and color reproduction
- 5 Additive coloring
- 6 Subtractive coloring
- 7 Structural color
- 8 Mentions of color in social media
- 9 Additional terms
- 10 See also
- 11 References
- 12 External links and sources
Physics of color
| Color | Wavelength interval | Frequency interval |
| --- | --- | --- |
| Red | ~ 700–635 nm | ~ 430–480 THz |
| Orange | ~ 635–590 nm | ~ 480–510 THz |
| Yellow | ~ 590–560 nm | ~ 510–540 THz |
| Green | ~ 560–520 nm | ~ 540–580 THz |
| Cyan | ~ 520–490 nm | ~ 580–610 THz |
| Blue | ~ 490–450 nm | ~ 610–670 THz |
| Violet | ~ 450–400 nm | ~ 670–750 THz |
Electromagnetic radiation is characterized by its wavelength (or frequency) and its intensity. When the wavelength is within the visible spectrum (the range of wavelengths humans can perceive, approximately from 390 nm to 700 nm), it is known as "visible light".
Most light sources emit light at many different wavelengths; a source's spectrum is a distribution giving its intensity at each wavelength. Although the spectrum of light arriving at the eye from a given direction determines the color sensation in that direction, there are many more possible spectral combinations than color sensations. In fact, one may formally define a color as a class of spectra that give rise to the same color sensation, although such classes would vary widely among different species, and to a lesser extent among individuals within the same species. In each such class the members are called metamers of the color in question.
The familiar colors of the rainbow in the spectrum – named using the Latin word for appearance or apparition by Isaac Newton in 1671 – include all those colors that can be produced by visible light of a single wavelength only, the pure spectral or monochromatic colors. The table above shows approximate frequencies (in terahertz) and wavelengths (in nanometers) for various pure spectral colors. The wavelengths listed are as measured in air or vacuum (see refractive index).
The color table should not be interpreted as a definitive list – the pure spectral colors form a continuous spectrum, and how it is divided into distinct colors linguistically is a matter of culture and historical contingency (although people everywhere have been shown to perceive colors in the same way). A common list identifies six main bands: red, orange, yellow, green, blue, and violet. Newton's conception included a seventh color, indigo, between blue and violet. It is possible that what Newton referred to as blue is nearer to what today we call cyan, and that indigo was simply the dark blue of the indigo dye that was being imported at the time.
The intensity of a spectral color, relative to the context in which it is viewed, may alter its perception considerably; for example, a low-intensity orange-yellow is brown, and a low-intensity yellow-green is olive-green.
Color of objects
The color of an object depends on both the physics of the object in its environment and the characteristics of the perceiving eye and brain. Physically, objects can be said to have the color of the light leaving their surfaces, which normally depends on the spectrum of the incident illumination and the reflectance properties of the surface, as well as potentially on the angles of illumination and viewing. Some objects not only reflect light, but also transmit light or emit light themselves, which also contribute to the color. A viewer's perception of the object's color depends not only on the spectrum of the light leaving its surface, but also on a host of contextual cues, so that color differences between objects can be discerned mostly independent of the lighting spectrum, viewing angle, etc. This effect is known as color constancy.
Some generalizations of the physics can be drawn, neglecting perceptual effects for now:
- Light arriving at an opaque surface is either reflected "specularly" (that is, in the manner of a mirror), scattered (that is, reflected with diffuse scattering), or absorbed – or some combination of these.
- Opaque objects that do not reflect specularly (which tend to have rough surfaces) have their color determined by which wavelengths of light they scatter strongly (with the light that is not scattered being absorbed). If objects scatter all wavelengths with roughly equal strength, they appear white. If they absorb all wavelengths, they appear black.
- Opaque objects that specularly reflect light of different wavelengths with different efficiencies look like mirrors tinted with colors determined by those differences. An object that reflects some fraction of impinging light and absorbs the rest may look black but also be faintly reflective; examples are black objects coated with layers of enamel or lacquer.
- Objects that transmit light are either translucent (scattering the transmitted light) or transparent (not scattering the transmitted light). If they also absorb (or reflect) light of various wavelengths differentially, they appear tinted with a color determined by the nature of that absorption (or that reflectance).
- Objects may emit light that they generate from having excited electrons, rather than merely reflecting or transmitting light. The electrons may be excited due to elevated temperature (incandescence), as a result of chemical reactions (chemoluminescence), after absorbing light of other frequencies ("fluorescence" or "phosphorescence") or from electrical contacts as in light emitting diodes, or other light sources.
To summarize, the color of an object is a complex result of its surface properties, its transmission properties, and its emission properties, all of which contribute to the mix of wavelengths in the light leaving the surface of the object. The perceived color is then further conditioned by the nature of the ambient illumination, and by the color properties of other objects nearby, and via other characteristics of the perceiving eye and brain.
Development of theories of color vision
Although Aristotle and other ancient scientists had already written on the nature of light and color vision, it was not until Newton that light was identified as the source of the color sensation. In 1810, Goethe published his comprehensive Theory of Colors in which he ascribed physiological effects to color that are now understood as psychological.
In 1801 Thomas Young proposed his trichromatic theory, based on the observation that any color could be matched with a combination of three lights. This theory was later refined by James Clerk Maxwell and Hermann von Helmholtz. As Helmholtz puts it, "the principles of Newton's law of mixture were experimentally confirmed by Maxwell in 1856. Young's theory of color sensations, like so much else that this marvelous investigator achieved in advance of his time, remained unnoticed until Maxwell directed attention to it."
At the same time as Helmholtz, Ewald Hering developed the opponent process theory of color, noting that color blindness and afterimages typically come in opponent pairs (red-green, blue-orange, yellow-violet, and black-white). Ultimately these two theories were synthesized in 1957 by Hurvich and Jameson, who showed that retinal processing corresponds to the trichromatic theory, while processing at the level of the lateral geniculate nucleus corresponds to the opponent theory.
In 1931, an international group of experts known as the Commission internationale de l'éclairage (CIE) developed a mathematical color model, which mapped out the space of observable colors and assigned a set of three numbers to each.
Color in the eye
The ability of the human eye to distinguish colors is based upon the varying sensitivity of different cells in the retina to light of different wavelengths. Humans being trichromatic, the retina contains three types of color receptor cells, or cones. One type, relatively distinct from the other two, is most responsive to light that we perceive as blue or blue-violet, with wavelengths around 450 nm; cones of this type are sometimes called short-wavelength cones, S cones, or blue cones. The other two types are closely related genetically and chemically: middle-wavelength cones, M cones, or green cones are most sensitive to light perceived as green, with wavelengths around 540 nm, while the long-wavelength cones, L cones, or red cones, are most sensitive to light we perceive as greenish yellow, with wavelengths around 570 nm.
Light, no matter how complex its composition of wavelengths, is reduced to three color components by the eye. For each location in the visual field, the three types of cones yield three signals based on the extent to which each is stimulated. These amounts of stimulation are sometimes called tristimulus values.
The response curve as a function of wavelength varies for each type of cone. Because the curves overlap, some tristimulus values do not occur for any incoming light combination. For example, it is not possible to stimulate only the mid-wavelength (so-called "green") cones; the other cones will inevitably be stimulated to some degree at the same time. The set of all possible tristimulus values determines the human color space. It has been estimated that humans can distinguish roughly 10 million different colors.
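As a rough illustration of how tristimulus values arise, the sketch below weights a coarsely sampled spectrum by three cone-sensitivity curves and sums the products. The sample wavelengths and sensitivity numbers are invented for the example and are not real LMS colorimetric data; only the structure of the calculation is the point.

```python
# Illustrative only: a light spectrum sampled at a few wavelengths (nm) and
# hypothetical relative cone sensitivities at those same wavelengths.
wavelengths = [450, 540, 570, 630]            # nm, sample points (assumed)
spectrum    = [0.2, 0.9, 0.7, 0.1]            # relative power at each sample (assumed)

sensitivity = {                                # made-up sensitivities, not real data
    "S": [0.90, 0.05, 0.01, 0.00],
    "M": [0.10, 0.95, 0.70, 0.05],
    "L": [0.05, 0.60, 0.95, 0.20],
}

# Each tristimulus value is the spectrum weighted by that cone type's sensitivity.
tristimulus = {
    cone: sum(power * s for power, s in zip(spectrum, weights))
    for cone, weights in sensitivity.items()
}
print(tristimulus)   # three numbers: the eye's reduction of the whole spectrum
```

Two different spectra that happen to produce the same three sums would be metamers in the sense described above.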
The other type of light-sensitive cell in the eye, the rod, has a different response curve. In normal situations, when light is bright enough to strongly stimulate the cones, rods play virtually no role in vision at all. On the other hand, in dim light, the cones are understimulated leaving only the signal from the rods, resulting in a colorless response. (Furthermore, the rods are barely sensitive to light in the "red" range.) In certain conditions of intermediate illumination, the rod response and a weak cone response can together result in color discriminations not accounted for by cone responses alone. These effects, combined, are summarized also in the Kruithof curve, that describes the change of color perception and pleasingness of light as function of temperature and intensity.
Color in the brain
While the mechanisms of color vision at the level of the retina are well-described in terms of tristimulus values, color processing after that point is organized differently. A dominant theory of color vision proposes that color information is transmitted out of the eye by three opponent processes, or opponent channels, each constructed from the raw output of the cones: a red–green channel, a blue–yellow channel, and a black–white "luminance" channel. This theory has been supported by neurobiology, and accounts for the structure of our subjective color experience. Specifically, it explains why we cannot perceive a "reddish green" or "yellowish blue", and it predicts the color wheel: it is the collection of colors for which at least one of the two color channels measures a value at one of its extremes.
The exact nature of color perception beyond the processing already described, and indeed the status of color as a feature of the perceived world or rather as a feature of our perception of the world – a type of qualia – is a matter of complex and continuing philosophical dispute.
Nonstandard color perception
If one or more types of a person's color-sensing cones are missing or less responsive than normal to incoming light, that person can distinguish fewer colors and is said to be color deficient or color blind (though this latter term can be misleading; almost all color deficient individuals can distinguish at least some colors). Some kinds of color deficiency are caused by anomalies in the number or nature of cones in the retina. Others (like central or cortical achromatopsia) are caused by neural anomalies in those parts of the brain where visual processing takes place.
While most humans are trichromatic (having three types of color receptors), many animals, known as tetrachromats, have four types. These include some species of spiders, most marsupials, birds, reptiles, and many species of fish. Other species are sensitive to only two axes of color or do not perceive color at all; these are called dichromats and monochromats respectively. A distinction is made between retinal tetrachromacy (having four pigments in cone cells in the retina, compared to three in trichromats) and functional tetrachromacy (having the ability to make enhanced color discriminations based on that retinal difference). As many as half of all women are retinal tetrachromats. The phenomenon arises when an individual receives two slightly different copies of the gene for either the medium- or long-wavelength cones, which are carried on the X chromosome. To have two different genes, a person must have two X chromosomes, which is why the phenomenon only occurs in women. For some of these retinal tetrachromats, color discriminations are enhanced, making them functional tetrachromats.
In certain forms of synesthesia/ideasthesia, perceiving letters and numbers (grapheme–color synesthesia) or hearing musical sounds (music–color synesthesia) will lead to the unusual additional experiences of seeing colors. Behavioral and functional neuroimaging experiments have demonstrated that these color experiences lead to changes in behavioral tasks and lead to increased activation of brain regions involved in color perception, thus demonstrating their reality, and similarity to real color percepts, albeit evoked through a non-standard route.
After exposure to strong light in their sensitivity range, photoreceptors of a given type become desensitized. For a few seconds after the light ceases, they will continue to signal less strongly than they otherwise would. Colors observed during that period will appear to lack the color component detected by the desensitized photoreceptors. This effect is responsible for the phenomenon of afterimages, in which the eye may continue to see a bright figure after looking away from it, but in a complementary color.
Afterimage effects have also been utilized by artists, including Vincent van Gogh.
When an artist uses a limited color palette, the eye tends to compensate by seeing any gray or neutral color as the color which is missing from the color wheel. For example, in a limited palette consisting of red, yellow, black, and white, a mixture of yellow and black will appear as a variety of green, a mixture of red and black will appear as a variety of purple, and pure gray will appear bluish.
The trichromatic theory is strictly true when the visual system is in a fixed state of adaptation. In reality, the visual system is constantly adapting to changes in the environment and compares the various colors in a scene to reduce the effects of the illumination. If a scene is illuminated with one light, and then with another, as long as the difference between the light sources stays within a reasonable range, the colors in the scene appear relatively constant to us. This was studied by Edwin Land in the 1970s and led to his retinex theory of color constancy.
Both phenomena are readily explained and mathematically modeled with modern theories of chromatic adaptation and color appearance (e.g. CIECAM02, iCAM). There is no need to dismiss the trichromatic theory of vision; rather, it can be enhanced with an understanding of how the visual system adapts to changes in the viewing environment.
Colors vary in several different ways, including hue (shades of red, orange, yellow, green, blue, and violet), saturation, brightness, and gloss. Some color words are derived from the name of an object of that color, such as "orange" or "salmon", while others are abstract, like "red".
In the 1969 study Basic Color Terms: Their Universality and Evolution, Brent Berlin and Paul Kay describe a pattern in naming "basic" colors (like "red" but not "red-orange" or "dark red" or "blood red", which are "shades" of red). All languages that have two "basic" color names distinguish dark/cool colors from bright/warm colors. The next colors to be distinguished are usually red and then yellow or green. All languages with six "basic" colors include black, white, red, green, blue, and yellow. The pattern holds up to a set of twelve: black, gray, white, pink, red, orange, yellow, green, blue, purple, brown, and azure (distinct from blue in Russian and Italian, but not English).
Individual colors have a variety of cultural associations such as national colors (in general described in individual color articles and color symbolism). The field of color psychology attempts to identify the effects of color on human emotion and activity. Chromotherapy is a form of alternative medicine attributed to various Eastern traditions. Colors have different associations in different countries and cultures.
Different colors have been demonstrated to have effects on cognition. For example, researchers at the University of Linz in Austria demonstrated that the color red significantly decreases cognitive functioning in men.
Spectral colors and color reproduction
Most light sources are mixtures of various wavelengths of light. Many such sources can still effectively produce a spectral color, as the eye cannot distinguish them from single-wavelength sources. For example, most computer displays reproduce the spectral color orange as a combination of red and green light; it appears orange because the red and green are mixed in the right proportions to allow the eye's cones to respond the way they do to the spectral color orange.
A useful concept in understanding the perceived color of a non-monochromatic light source is the dominant wavelength, which identifies the single wavelength of light that produces a sensation most similar to the light source. Dominant wavelength is roughly akin to hue.
There are many color perceptions that by definition cannot be pure spectral colors due to desaturation or because they are purples (mixtures of red and violet light, from opposite ends of the spectrum). Some examples of necessarily non-spectral colors are the achromatic colors (black, gray, and white) and colors such as pink, tan, and magenta.
Two different light spectra that have the same effect on the three color receptors in the human eye will be perceived as the same color. They are metamers of that color. This is exemplified by the white light emitted by fluorescent lamps, which typically has a spectrum of a few narrow bands, while daylight has a continuous spectrum. The human eye cannot tell the difference between such light spectra just by looking into the light source, although reflected colors from objects can look different. (This is often exploited; for example, to make fruit or tomatoes look more intensely red.)
Similarly, most human color perceptions can be generated by a mixture of three colors called primaries. This is used to reproduce color scenes in photography, printing, television, and other media. There are a number of methods or color spaces for specifying a color in terms of three particular primary colors. Each method has its advantages and disadvantages depending on the particular application.
No mixture of colors, however, can produce a response truly identical to that of a spectral color, although one can get close, especially for the longer wavelengths, where the CIE 1931 color space chromaticity diagram has a nearly straight edge. For example, mixing green light (530 nm) and blue light (460 nm) produces cyan light that is slightly desaturated, because response of the red color receptor would be greater to the green and blue light in the mixture than it would be to a pure cyan light at 485 nm that has the same intensity as the mixture of blue and green.
Because of this, and because the primaries in color printing systems generally are not pure themselves, the colors reproduced are never perfectly saturated spectral colors, and so spectral colors cannot be matched exactly. However, natural scenes rarely contain fully saturated colors, thus such scenes can usually be approximated well by these systems. The range of colors that can be reproduced with a given color reproduction system is called the gamut. The CIE chromaticity diagram can be used to describe the gamut.
Another problem with color reproduction systems is connected with the acquisition devices, like cameras or scanners. The characteristics of the color sensors in the devices are often very far from the characteristics of the receptors in the human eye. In effect, acquisition of colors can be relatively poor if they have special, often very "jagged", spectra caused for example by unusual lighting of the photographed scene. A color reproduction system "tuned" to a human with normal color vision may give very inaccurate results for other observers.
The different color response of different devices can be problematic if not properly managed. For color information stored and transferred in digital form, color management techniques, such as those based on ICC profiles, can help to avoid distortions of the reproduced colors. Color management does not circumvent the gamut limitations of particular output devices, but can assist in finding good mapping of input colors into the gamut that can be reproduced.
Additive color is light created by mixing together light of two or more different colors. Red, green, and blue are the additive primary colors normally used in additive color systems such as projectors and computer terminals.
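A minimal sketch of additive mixing in an RGB representation: the intensities of the component lights are summed channel by channel. The values and the clipping at 1.0 are illustrative conventions for the example, not a colorimetric model.

```python
def add_light(*colors):
    """Additive mixing: sum light intensities channel by channel, clipped at 1.0."""
    return tuple(min(1.0, sum(channel)) for channel in zip(*colors))

red   = (1.0, 0.0, 0.0)
green = (0.0, 1.0, 0.0)
blue  = (0.0, 0.0, 1.0)

print(add_light(red, green))         # (1.0, 1.0, 0.0) -> perceived as yellow
print(add_light(red, green, blue))   # (1.0, 1.0, 1.0) -> white
```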
Subtractive coloring uses dyes, inks, pigments, or filters to absorb some wavelengths of light and not others. The color that a surface displays comes from the parts of the visible spectrum that are not absorbed and therefore remain visible. Without pigments or dye, fabric fibers, paint base and paper are usually made of particles that scatter white light (all colors) well in all directions. When a pigment or ink is added, wavelengths are absorbed or "subtracted" from white light, so light of another color reaches the eye.
If the light is not a pure white source (the case of nearly all forms of artificial lighting), the resulting spectrum will appear a slightly different color. Red paint, viewed under blue light, may appear black. Red paint is red because it scatters only the red components of the spectrum. If red paint is illuminated by blue light, the blue light will be absorbed by the red paint, creating the appearance of a black object.
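The red-paint example can be expressed as a simple channel-wise product of the incident light with the surface's reflectance. The reflectance numbers below are invented for illustration; the point is only that a surface can reflect no more of a component than the illumination supplies.

```python
def reflected(light, reflectance):
    """Subtractive behaviour: the surface returns only the part of the incident
    light that its reflectance does not absorb (channel-wise product)."""
    return tuple(l * r for l, r in zip(light, reflectance))

white_light = (1.0, 1.0, 1.0)
blue_light  = (0.0, 0.0, 1.0)
red_paint   = (0.90, 0.05, 0.05)   # assumed reflectance: mostly the red component

print(reflected(white_light, red_paint))   # (0.9, 0.05, 0.05) -> looks red
print(reflected(blue_light,  red_paint))   # (0.0, 0.0, 0.05)  -> nearly black
```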
Structural colors are colors caused by interference effects rather than by pigments. Color effects are produced when a material is scored with fine parallel lines, formed of one or more parallel thin layers, or otherwise composed of microstructures on the scale of the color's wavelength. If the microstructures are spaced randomly, light of shorter wavelengths will be scattered preferentially to produce Tyndall effect colors: the blue of the sky (Rayleigh scattering, caused by structures much smaller than the wavelength of light, in this case air molecules), the luster of opals, and the blue of human irises. If the microstructures are aligned in arrays, for example the array of pits in a CD, they behave as a diffraction grating: the grating reflects different wavelengths in different directions due to interference phenomena, separating mixed "white" light into light of different wavelengths. If the structure is one or more thin layers then it will reflect some wavelengths and transmit others, depending on the layers' thickness.
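For the CD example above, the directions into which different wavelengths are sent follow from the standard grating equation d·sin(θ) = m·λ. The sketch below assumes a typical CD track pitch of about 1.6 µm as the grating spacing and a few representative wavelengths; it simply shows that each wavelength emerges at a different first-order angle, which is why the disc looks rainbow-colored.

```python
import math

d = 1.6e-6                                   # m, assumed CD track pitch (grating spacing)
wavelengths = {"blue": 450e-9, "green": 530e-9, "red": 650e-9}   # m

# First-order diffraction (m = 1): d * sin(theta) = wavelength
for name, lam in wavelengths.items():
    theta = math.degrees(math.asin(lam / d))
    print(f"{name}: first-order diffraction angle ~ {theta:.1f} degrees")
```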
Structural color is studied in the field of thin-film optics. A layman's term that describes particularly the most ordered or the most changeable structural colors is iridescence. Structural color is responsible for the blues and greens of the feathers of many birds (the blue jay, for example), as well as certain butterfly wings and beetle shells. Variations in the pattern's spacing often give rise to an iridescent effect, as seen in peacock feathers, soap bubbles, films of oil, and mother of pearl, because the reflected color depends upon the viewing angle. Numerous scientists have carried out research in butterfly wings and beetle shells, including Isaac Newton and Robert Hooke. Since 1942, electron micrography has been used, advancing the development of products that exploit structural color, such as "photonic" cosmetics.
According to Pantone, the top three colors in social media for 2012 were red (186 million mentions; attributed to Taylor Swift's Red album, NASA's landing on Mars, and red carpet coverage), blue (125 million mentions; attributed to the 2012 United States presidential election, Mars rover Curiosity finding blue rocks, and blue sports teams), and green (102 million mentions; attributed to "environmental friendliness", the Green Bay Packers, and green-eyed girls).
- Color wheel: an illustrative organization of color hues in a circle that shows relationships.
- Colorfulness, chroma, purity, or saturation: how "intense" or "concentrated" a color is. Technical definitions distinguish between colorfulness, chroma, and saturation as distinct perceptual attributes and include purity as a physical quantity. These terms, and others related to light and color are internationally agreed upon and published in the CIE Lighting Vocabulary. More readily available texts on colorimetry also define and explain these terms.
- Dichromatism: a phenomenon where the hue is dependent on concentration and/or thickness of the absorbing substance.
- Hue: the color's direction from white, for example in a color wheel or chromaticity diagram.
- Shade: a color made darker by adding black.
- Tint: a color made lighter by adding white.
- Value, brightness, lightness, or luminosity: how light or dark a color is.
- Craig F. Bohren (2006). Fundamentals of Atmospheric Radiation: An Introduction with 400 Problems. Wiley-VCH. ISBN 3-527-40503-8.
- Berlin, B. and Kay, P., Basic Color Terms: Their Universality and Evolution, Berkeley: University of California Press, 1969.
- Waldman, Gary (2002). Introduction to light : the physics of light, vision, and color (Dover ed.). Mineola: Dover Publications. p. 193. ISBN 978-0-486-42118-6.
- Judd, Deane B.; Wyszecki, Günter (1975). Color in Business, Science and Industry. Wiley Series in Pure and Applied Optics (third ed.). New York: Wiley-Interscience. p. 388. ISBN 0-471-45212-2.
- Hermann von Helmholtz, Physiological Optics – The Sensations of Vision, 1866, as translated in Sources of Color Science, David L. MacAdam, ed., Cambridge: MIT Press, 1970.
- Palmer, S.E. (1999). Vision Science: Photons to Phenomenology, Cambridge, MA: MIT Press. ISBN 0-262-16183-4.
- "Under well-lit viewing conditions (photopic vision), cones ...are highly active and rods are inactive." Hirakawa, K.; Parks, T.W. (2005). Chromatic Adaptation and White-Balance Problem (PDF). IEEE ICIP. doi:10.1109/ICIP.2005.1530559. Archived from the original (PDF) on November 28, 2006.
- Jameson, K. A.; Highnote, S. M.; Wasserman, L. M. (2001). "Richer color experience in observers with multiple photopigment opsin genes." (PDF). Psychonomic Bulletin and Review 8 (2): 244–261. doi:10.3758/BF03196159. PMID 11495112.
- Depauw, Robert C. "United States Patent". Retrieved 20 March 2011.
- M.D. Fairchild, Color Appearance Models, 2nd Ed., Wiley, Chichester (2005).
- "Chart: Color Meanings by Culture". Retrieved 2010-06-29.[dead link]
- Gnambs, Timo; Appel, Markus; Batinic, Bernad. (2010). Color red in web-based knowledge testing. Computers in Human Behavior, 26, p1625-1631.
- "Economic and Social Research Council - Science in the Dock, Art in the Stocks". Archived from the original on November 2, 2007. Retrieved 2007-10-07.
- "Celebrate Color". pantone.com. Pantone. Retrieved 7 December 2014.
- CIE Pub. 17-4, International Lighting Vocabulary, 1987. http://www.cie.co.at/publ/abst/17-4-89.html
- R.S. Berns, Principles of Color Technology, 3rd Ed., Wiley, New York (2001).
- Bibliography Database on Color Theory, Buenos Aires University
- Maund, Barry. "Color". Stanford Encyclopedia of Philosophy.
- "Color". Internet Encyclopedia of Philosophy.
- Why Should Engineers and Scientists Be Worried About Color?
- Robert Ridgway's A Nomenclature of Colors (1886) and Color Standards and Color Nomenclature (1912) - text-searchable digital facsimiles at Linda Hall Library
- Albert Henry Munsell's A Color Notation, (1907) at Project Gutenberg
- AIC, International Colour Association
- The Effect of Color | OFF BOOK Documentary produced by Off Book (web series)
- Study of the history of colors |
CBSE Important Questions Class 10 Science Chapter 1
Important Questions Class 10 Science Chapter 1 – Chemical Reactions and Equations
Science is a fascinating subject, but at the same time, it has some concepts which are difficult to comprehend. Thus, students will have to make extra efforts to understand the chapters across Biology, Physics & Chemistry.
The first chapter of Class 10 Science is about ‘Chemical Reactions and Equations’. The chapter introduces students to various topics such as physical changes, chemical reactions, types of chemical reactions, chemical equilibrium and equations, etc. It’s an introductory chapter to the vast field of Chemistry that will expose students to the world of chemical reactions and chemical equations.
For Science, students are advised to practice many questions to score good marks in exams. At Extramarks, we recognise the importance of solving questions, and we have collated questions from various sources, including NCERT textbooks, NCERT exemplars, other reference books, past years’ exam papers, etc. Step-by-step solutions are prepared by our Science subject teachers to make it easy for students to understand the concepts. Students can register on the Extramarks website and access our full solutions for Important Questions Class 10 Science Chapter 1.
Regularly practising and solving questions from our question set of Science Class 10 Chapter 1 Important Questions will boost the student’s confidence. By referring to the given solutions, students will also understand the answer writing skills better.
Students must register on the Extramarks website and get access to our Important Questions Class 10 Science Chapter 1 question bank. Apart from the solutions to important questions, the Extramarks website has an abundance of study materials like NCERT Solutions, CBSE revision notes, past year question papers, NCERT books and much more.
CBSE Class 10 Science Important Questions

| Chapter | Title |
| --- | --- |
| 1 | Chemical Reactions and Equations |
| 2 | Acids, Bases and Salts |
| 3 | Metals and Non-metals |
| 4 | Carbon and Its Compounds |
| 5 | Periodic Classification of Elements |
| 7 | Control and Coordination |
| 8 | How do Organisms Reproduce? |
| 9 | Heredity and Evolution |
| 10 | Light Reflection and Refraction |
| 11 | Human Eye and Colourful World |
| 13 | Magnetic Effects of Electric Current |
| 14 | Sources of Energy |
| 16 | Management of Natural Resources |
Important Questions Class 10 Science Chapter 1 – With Solutions
At Extramarks, we highlight crucial concepts and questions from each chapter which help students with their studies right before their examinations. By solving the Important Questions Class 10 Science Chapter 1, the students will be familiar with the questions asked in final exams. Science is a subject which requires deep conceptual understanding, so cramming answers won’t help especially in higher classes.
So, while solving important questions, students must understand every concept to answer any question easily. This encourages the students to master the topic and increases their confidence in achieving high grades. Our step-by-step solutions given for all questions in our Class 10 Science Chapter 1 Important Questions help students revise the chapter while solving the questions.
Below are a few questions and their answers from our question bank of Important Questions Class 10 Science Chapter 1:
Question 1. A substance which oxidises itself and reduces others is known as
- Oxidising agent
- reducing agent
- Both (a) and (b)
- None of these.
Answer 1: Correct option is (B) Reducing agent
Explanation: Consider, for example, the reaction of MnO2 with HCl. Here the reducing agent is HCl, and MnO2 is reduced to MnCl2.
MnO2 + 4HCl → MnCl2 + Cl2 + 2H2O
(Mn: +4 → +2, decrease in oxidation number; Cl: −1 → 0, increase in oxidation number)
In this reaction, Mn is reduced from the +4 to the +2 oxidation state, and Cl is oxidised from −1 to 0, so
the substance reduced is MnO2, and the substance oxidised is HCl.
The substance that is reduced acts as the oxidising agent, while the substance that is oxidised acts as the reducing agent.
Hence the oxidising agent is MnO2 and the reducing agent is HCl. The correct answer is option (b).
Question 2. Why are food particles preferably packed in aluminium foil?
Answer 2: Food particles are mostly packed in an aluminium foil sheet because it does not corrode in the atmosphere. A protective coating of aluminium oxide (Al2O3) is formed on the surface of the foil, and it stops any further chemical reaction of the metal with air and water so that even if it is kept for a longer time, food particles do not get spoiled.
Question 3. Write balanced chemical equations for the following chemical reactions-
(a) Hydrogen + Chlorine → Hydrogen chloride
(b) Lead + Copper chloride → Lead chloride + Copper
(c) Zinc oxide + Carbon → Zinc + Carbon monoxide
- H2(g) + Cl2 (g) → 2HCl (g)
- Pb(s) + CuCl2 (aq) → PbCl2 (aq) + Cu(s)
- ZnO(s) + C(s) → Zn (s) + CO(g)
Explanation: Each of the above equations is balanced, with equal numbers of atoms of each element on both sides.
Question 4: Two grams of ferrous sulphate crystals are heated in a dry boiling tube.
- (a) List any two observations.
- (b) Name the type of chemical reaction taking place.
- (c) Write a balanced chemical equation for the reaction and name the products formed.

Answer 4:
(a) On heating, ferrous sulphate crystals (FeSO4·7H2O) lose their water of crystallisation and the colour of the crystals changes. On further heating, the salt decomposes to ferric oxide (Fe2O3), sulphur dioxide (SO2) and sulphur trioxide (SO3), giving off the odour of burning sulphur.
(b) This is a thermal decomposition reaction.
(c) 2FeSO4(s) → Fe2O3(s) + SO2(g) + SO3(g) (on heating)
(Ferrous sulphate → Ferric oxide + Sulphur dioxide + Sulphur trioxide)
Question 5: What happens when dilute hydrochloric acid is added to iron filings? Tick the correct answer.
- Hydrogen gas and iron chloride are produced.
- Chlorine gas and iron hydroxide are produced
- No reaction takes place
- Iron salt and water are produced
Answer 5: (A) Hydrogen gas and iron chloride are produced.
Explanation: Option (a) is correct because hydrogen gas and iron chloride are produced when iron filings are treated with dilute HCl.
Fe + 2HCl → FeCl2 + H2
Question 6: Dissolving sugar is an example of which change-
- Physical change
- Chemical change
- Redox Reaction
- None of these.
Answer 6: The correct option is (a) Physical change.
Explanation: Dissolving sugar in water is a physical change because the chemical composition of the sugar does not change, and the sugar can be recovered in its original form by evaporating the water.
Question 7: Recognise a metal for each case:
(i) It does not react with cold or hot water but reacts with steam (another physical state of water),
(ii) It does not react with any physical state of water.
(i) Aluminium (Al)
(ii) Copper (Cu)
Explanation: i) A layer of aluminium oxide forms on the surface of the metal; this oxide layer is not porous, so water cannot penetrate it to reach the metal, and the reaction with cold or hot water does not proceed any further. Aluminium does, however, react with steam.
ii) Copper does not react with water in any form because it lies below hydrogen in the reactivity series and therefore cannot displace hydrogen from water molecules.
Question 8: Which of the gases is used to store fat and oil-containing foods for a long time?
- a) Carbon dioxide
- b) Oxygen
- c) Nitrogen
- d) Neon
Answer: The correct option is (c) Nitrogen.
Explanation: Nitrogen, being unreactive, is used to store fat- and oil-containing foods for a long time. Rancidification of oils and fats causes a change in their colour, odour and taste. Oxygen promotes rancidity, so it cannot be used, and carbon dioxide is not used for this purpose; hence the other options are incorrect.
Question 9: Why do we store silver chloride in dark-coloured bottles?
Answer: Silver chloride is a light-sensitive chemical compound, and its decomposition in light is an example of a photolytic decomposition reaction. It decomposes rapidly in light, forming silver and chlorine gas, which is why silver chloride is stored in dark-coloured bottles.
Explanation: 2AgCl → 2Ag + Cl2 (in the presence of light)
Silver chloride decomposes into silver & chlorine gas when exposed to light. Dark-coloured bottles interrupt the path of light such that light cannot reach silver chloride in the bottles, and its decomposition is prevented.
Question 10: Identify the type of reaction in the following example:
2H2 (g) + O2 (g) → 2H2O (I)
- Combination reaction
- Decomposition reaction
- Displacement reaction
- Double displacement reaction
Answer 10: correct option is (a) combination reaction
Explanations: It is a combination reaction because, in this reaction, two substances combine to form a single substance.
Question 11: Write balanced chemical equations with state symbols for the following reactions:
- i) Solutions of Barium chloride and Sodium sulphate in water react to give insoluble Barium sulphate and a solution of Sodium chloride.
- ii) Sodium hydroxide solution in water interacts with hydrochloric acid to produce Sodium chloride solution and water.
iii) Hydrogen gas combines with nitrogen to form ammonia.
- iv) potassium metal reacts with water to give potassium hydroxide and hydrogen gas.
- i) BaCl2(aq) + Na2SO4(aq) → BaSO4(s) + 2NaCl(aq)
- ii) NaOH(aq) + HCl(aq) → NaCl(aq) + H2O(l)
- iii) 3H2(g) + N2(g) → 2NH3(g)
- iv) 2K(s) + 2H2O(l) → 2KOH(aq) + H2(g)

Explanation: (i) is a double displacement (precipitation) reaction, (ii) is a neutralisation reaction, (iii) is a combination reaction and (iv) is a displacement reaction.
Question 12: Which statements about the chemical reaction below are incorrect?-
2PbO(solid) + C(s) → 2Pb(s) + CO2(g)
(a) Lead is getting reduced
(b) Carbon Dioxide CO2 is getting oxidised
(c) Carbon is getting oxidised
(d) Lead oxide is getting reduced
(i) (a) and (b)
(ii) (a) and (c)
(iii) (a), (b) and (c)
Answer 12: Option (i) — statements (a) and (b) are incorrect.
Explanation: Statement (a) is incorrect because it is lead oxide, not lead, that is reduced (oxygen is removed from PbO). Statement (b) is incorrect because carbon, not carbon dioxide, is oxidised — the oxygen removed from lead oxide is added to elemental carbon to form carbon dioxide.
Question 13: What is a balanced chemical equation? Why should chemical equations be balanced?
Answer 13: A chemical equation represents a chemical reaction. The presentation of a chemical reaction in which the number of atoms of each element is equal on the reactant and product sides is known as a balanced chemical equation. Chemical reactions should be balanced because only a balanced equation explains the relative quantities of different reactants and products involved in the reaction.
Explanation: A balanced chemical equation has equal numbers of each type of atom on both sides of the arrow. A chemical equation is a written symbolic representation of a chemical reaction, with the reactants on the left side and the product(s) on the right side. The law of conservation of mass states that atoms can neither be created nor destroyed in a chemical reaction; therefore, the number of atoms present in the reactants has to balance the number of atoms in the products.
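For students who enjoy programming, here is a small Python sketch (not part of the syllabus) that checks whether a proposed equation is balanced by counting the atoms of each element on both sides, exactly as the law of conservation of mass requires. It is a simplified helper of my own devising and handles only plain formulas without brackets, such as H2, O2 and H2O.

```python
import re
from collections import Counter

def count_atoms(formula):
    """Count atoms in a simple formula such as 'H2O' or 'CO2' (no brackets)."""
    counts = Counter()
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(number) if number else 1
    return counts

def side_counts(side):
    """side is a list of (coefficient, formula) pairs, e.g. [(2, 'H2'), (1, 'O2')]."""
    total = Counter()
    for coefficient, formula in side:
        for element, n in count_atoms(formula).items():
            total[element] += coefficient * n
    return total

# Check 2H2 + O2 -> 2H2O
reactants = [(2, "H2"), (1, "O2")]
products = [(2, "H2O")]
print(side_counts(reactants) == side_counts(products))   # True, so the equation is balanced
```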
Question 14: Why is respiration considered an exothermic reaction? Explain.
Answer 14: Respiration is the process of burning food in the living body to produce energy. Respiration is considered an exothermic chemical reaction because glucose oxidation occurs in the respiration process, which creates a large amount of heat energy consumed in the form of ATP. During respiration, we inhale oxygen from the atmosphere, which reacts with glucose in our body cells to produce carbon dioxide and water. It is explained in the following chemical equation.
C6H12O6 + 6O2 → 6CO2 + 6H2O + energy (ATP)
Explanation: For survival, we require energy, which we obtain from the food we eat. Through digestion, food molecules are broken down into simpler molecules like glucose. These molecules combine with the oxygen in our body cells to produce carbon dioxide and water, along with energy in the form of ATP (adenosine triphosphate) — this is the process of respiration. Since energy is also released as heat (which maintains our body temperature), respiration is considered an exothermic reaction.
Question 15: Write one equation each for decomposition reactions in which energy is supplied in the form of heat, light or electricity.
(a) Thermal decomposition reaction (Thermolysis)
Decomposition of potassium chlorate: If heated strongly, potassium chlorate decomposes into potassium chloride and oxygen molecules. This reaction is commonly used for the synthesis of oxygen molecules.
2KClO3 + heat → 2KCl + 3O2
(b) Electrolytic decomposition reaction (Electrolysis)-
Decomposition of sodium chloride NaCl: On passing electricity through molten sodium chloride NaCl, it decomposes into sodium and chlorine.
2NaCl → 2Na + Cl2 (on electrolysis)
(c) Photodecomposition reaction (Photolysis)
Decomposition of Hydrogen peroxide- In the presence of light, hydrogen peroxide decomposes into water and oxygen molecules.
2H2O2 → 2H2O + O2 (in the presence of light)
Question 16: Why does the colour of copper sulphate solution change when an iron nail is dipped in it?
Answer 16: The colour of the copper sulphate solution changes when an iron nail is dipped in it because of the displacement of copper from the copper sulphate solution and the formation of iron sulphate solution. The brown deposit is of copper. The chemical reaction involved in this experiment is:
Fe(s) + CuSO4(aq, blue) → FeSO4(aq, green) + Cu(s)
Explanations: Iron displaces Cu from copper sulphate solution as iron is more reactive than copper. Therefore this is a displacement reaction.
Question 17: Fe2O3 + 2Al → Al2O3 + 2Fe The above reaction is an example of a
- Combination reaction.
- Double displacement reaction.
- Decomposition reaction.
- Displacement reaction.
Answer 17: Option is d Displacement reaction.
Explanation: Aluminium is more reactive than iron, so it displaces iron from ferric oxide (Fe2O3) to form aluminium oxide. A reaction in which one element takes the place of another in a compound is called a displacement reaction; here the less reactive metal (iron) is displaced by the more reactive metal (aluminium). Since only one displacement occurs, it is a single displacement reaction.
Question 18:Write the balanced chemical equation for the following and identify the type of reaction in each case.
(a) Potassium bromide(aq) + Barium iodide(aq) → Potassium iodide(aq) + Barium bromide(s)
(b) Zinc carbonate(s) → Zinc oxide(s) + Carbon dioxide(g)
(c) Hydrogen(g) + Chlorine(g) → Hydrogen chloride(g)
(d) Magnesium(s) + Hydrochloric acid(aq) → Magnesium chloride(aq) + Hydrogen(g)
(a) 2KBr(aq) + BaI2(aq) → 2KI(aq) + BaBr2(s)
Types- Double displacement reaction
(b) ZnCO3 (s) → ZnO (s) + CO2 (g)
Types- Decomposition reaction
(c) H2 (g) + Cl2 (g) → 2HCl(g)
Types – Combination or synthesis reaction
(d) Mg (s) + 2HCl (aqueous) → MgCl2 (aq) + H2 (g)
Types- Displacement reaction
Question 19- Why are decomposition reactions called the opposite of combination reactions? Write equations for decomposition reactions.
Answer 19: A combination reaction is a reaction in which two or more substances combine to form a larger molecule, whereas a decomposition reaction is the splitting of a large molecule into two or more smaller molecules. This is why a decomposition reaction is called the opposite of a combination reaction.
In most cases, the decomposition reaction is endothermic since the heat from the surrounding or induced heat is used to diffuse the bonds of the larger molecule. Some examples of decomposition reactions are
ZnCO3 → ZnO + CO2
CaCO3 + Energy → CaO + CO2
2HgO → 2Hg + O2
Explanations: In a decomposition reaction, a single substance breaks down into two or more substances, while in a combination reaction, two or more substances react to produce one substance. Therefore, decomposition reactions are called the opposite of combination reactions.
Question 20– What is the difference between displacement and double displacement reactions? Write relevant equations for the above.
Answer 20: A displacement reaction occurs when a more reactive substance replaces a less reactive substance from its salt solutions. A double displacement reaction occurs when a mutual exchange of metal ions happens between 2 compounds.
In this displacement reaction, only a single displacement occurs, whereas in the double displacement reaction, as the name suggests, two displacements occur between the molecules.
Displacement reaction:
Mg + 2HCl → MgCl2 + H2
Double displacement reaction:
2KBr + BaI2 → 2KI + BaBr2
Explanations: A displacement reaction occurs if a more reactive substance replaces a less reactive one from its salt solution. A double displacement reaction occurs when a mutual exchange of metal ions happens between two compounds. In this reaction, only a single displacement occurs, whereas in the double displacement reaction, as the name explains, two displacements occur between the molecules.
Question 21: Why do we apply paint on iron articles?
Answer 21: Iron articles are painted to prevent them from rusting. If left unpainted, the metal surface comes in contact with atmospheric oxygen and, in the presence of moisture, forms iron(III) oxide (Fe2O3), i.e. rust. Painting keeps the surface out of contact with moisture and air, thus preventing rusting.
Explanations: Paint is always applied to the iron articles to prevent them from corrosion by rust formation. Rust is an iron oxide, commonly red oxide, developed by the redox reactions of iron Fe and oxygen O in the presence of water or atmospheric air moisture. Paint always prevents iron from getting exposed to air and humidity.
Question 22: Explain the following topics with one example each.
(a) Corrosion (b) Rancidity
(a) Corrosion is a slow process where a refined metal atom is oxidised by atmospheric oxygen to create a more stable compound, like oxides. The metal atom gradually degrades during the corrosion process. Rusting of iron is an important example of corrosion where the iron is converted to Iron oxide. Millions of pounds are spent annually to prevent bridges and other monuments from rusting.
(b) Rancidity: The condition produced by the aerial oxidation of the oil and fat in the food material has an unpleasant taste and odour. The rancidity is retarded if the food is kept inside the refrigerator since the low temperature does not promote the oxidation reaction.
Explanation: Corrosion is a reaction in which a metal reacts with water, air or acid to form compounds such as oxides and carbonates; in the case of iron it is known as rusting. Another example is the black coating that forms on silver in air. Rancidity is the oxidation of fats and oils when they are kept in the open, or in the presence of oxygen, for a long time. Because of this, changes in the taste and odour of the food can be observed; for example, the taste and smell of butter change when it is kept for a long time. To prevent rancidity, food items are flushed with nitrogen or kept in airtight containers.
Question 23: In the refining of silver, the recovery of silver from Silver nitrate solution involves displacement reaction by Copper metal. Write down the reaction involved.
Answer 23: Cu(s) + 2AgNO3(aq) → Cu(NO3)2(aq) + 2Ag(s)
Explanation: Copper is more reactive than silver, so it displaces silver from silver nitrate solution. The metallic copper dissolves to form copper nitrate, while the silver ions in the solution are deposited as metallic silver.
Question 24: Explain the following in terms of the gain of oxygen with two examples each.
(a) Oxidation (b) Reduction
(a) In a chemical reaction, when oxygen is added to an element to form its oxide, that element is said to be oxidised. For example:
4Na(s) + O2(g) → 2Na2O(s)
2H2S + 3O2 → 2H2O + 2SO2
(b) In a chemical reaction, a compound is said to be reduced when oxygen is removed from it. For example:
CuO(s) + H2(g) → Cu(s) + H2O(l)
2HgO → 2Hg + O2(g)
Explanations: The process of adding oxygen or removing hydrogen in a chemical reaction is called an oxidation reaction. The method of adding hydrogen or removing oxygen in a chemical reaction is called a reduction reaction.
Question 25: Balance the following chemical equations properly.
- i) HNO3 + Ca(OH)2 → Ca(NO3)2 + H2O
- ii) NaOH + H2SO4 → Na2SO4 + H2O
iii) NaCl + AgNO3 → AgCl + NaNO3
- iv) BaCl2 + H2SO4 → BaSO4 + HCl
Answer 25: The balanced chemical equations are shown below:
- i) 2HNO3 + Ca(OH)2 → Ca(NO3)2 + 2H2O
- ii) 2NaOH + H2SO4 → Na2SO4 + 2H2O
- iii) NaCl + AgNO3 → AgCl + NaNO3
- iv) BaCl2 + H2SO4 → BaSO4 + 2HCl
Question 26: A shiny brown-coloured element ‘X’ on heating in the air becomes black. Name the element ‘X’ & the black-coloured compound formed.
The shiny brown-coloured element is Copper metal (Cu). If the metal is heated in air, it interacts with atmospheric oxygen to form copper oxide. Therefore, the black-coloured compound is copper oxide.
2Cu(s) + O2(g) → 2CuO(s)
Explanation: Copper is an element with a shiny brown appearance, so 'X' is copper. When copper is heated in air, it turns black due to the formation of copper oxide. This is an oxidation reaction in which copper gains oxygen to form copper oxide.
Question 27: The following chemical reaction is an example of a:
4NH3 (g) + 5O2 (g) → 4NO(g) + 6H2O(l)
(i) displacement reaction
(ii) combination reaction
(iii) redox reaction
(iv) neutralisation reaction
(A) (i) and (iv)
(B) (ii) and (iii)
(C) (i) and (iii)
(D) (iii) and (iv)
Answer 27: (C) (i) and (iii)
The given reaction undergoes both displacement and redox reactions.
Displacement reaction: Oxygen displaces hydrogen from ammonia to form nitric oxide (NO) and water.
Redox reaction: Ammonia interacts with oxygen atoms to undergo an oxidation reaction, and oxygen combines with hydrogen to undergo a reduction reaction.
Explanation: The chemical reaction provided is a mixture of displacement and redox reactions. Oxygen replaces hydrogen in the ammonia, making it a displacement reaction. Nitrogen gets oxidised, and oxygen is reduced, resulting in a redox reaction.
Question 28: Electrolysis of water is a decomposition reaction. The mole ratio of hydrogen & oxygen gases evolved during the electrolysis of water is
Answer 28: Correct option is (b) 2:1
On electrolysis of water, the water dissociates to liberate hydrogen and oxygen gas.
2H2O → 2H2 + O2
For every two moles of water decomposed, two moles of hydrogen gas and one mole of oxygen gas are evolved. Hence the mole ratio of hydrogen to oxygen is 2:1.
Question 29- Which of the following gases can be utilised for storing fresh samples of oil for a long time?
(a) Carbon dioxide or oxygen
(b) Nitrogen or oxygen
(c) Carbon dioxide or Helium
(d) Helium or nitrogen
Answer 29: Correct option is d. Helium or nitrogen
Oxygen cannot be used as it is an oxidising agent. Helium can be used as it is an inert (noble) gas. Nitrogen is also very unreactive and is less expensive than helium; in practice, nitrogen is flushed into food packets to prevent rancidity.
Explanation: Helium is a noble gas that does not react with fats and oils and so protects the oil from oxidation (rancidity). Nitrogen has a triple bond between its two atoms; because of this strong bond it behaves almost like an inert gas and does not react with fats and oils, thereby preventing rancidity.
Question 30: Which of the following processes involves chemical reactions?
(i) Storing of oxygen gas under pressure in a gas cylinder
(ii) Liquefaction of air
(iii) Keeping petrol in a china dish in the open
(iv) Heating copper wire in the presence of air at a high temperature
Answer 30: The correct answer is (iv) Heating copper wire in the presence of air at a high temperature.
In the first three options shown here, there is no involvement of a chemical reaction. If copper is heated in the presence of air at a high temperature, copper undergoes an oxidation reaction to give out copper oxide.
Question 31 : Which of the following is(are) double displacement reaction(s)?
(i) Pb + CuCl2 → PbCl2 + Cu
(ii) Na2SO4 + BaCl2 → BaSO4 + 2NaCl
(iii) C + O2 → CO2
(iv) CH4 + 2O2 → CO2 + 2H2O
(a) (i) and (iv)
(b) (ii) only
(c) (i) and (ii)
(d) (iii) and (iv)
Answer 31: The correct answer is (b) (ii) only
In reaction (ii), the ions of sodium sulphate and barium chloride exchange partners; hence it is a double displacement reaction.
Explanation: In a double displacement reaction, one compound exchanges its ions with the ions of another compound to form two new compounds. The Na+ ions of sodium sulphate (Na2SO4) combine with the Cl− ions of BaCl2 to form NaCl, whereas the Ba2+ ions combine with the SO4^2− ions to form BaSO4.
Question 32: The following reaction is used for the preparation of oxygen gas in the laboratory
2 KClO3 (s)→2 KCl (s) + 3O2 (g)
Which of the following statements is exactly correct about the reaction?
(a) It is a decomposition reaction & endothermic
(b) It is a combination reaction
(c) It is a decomposition reaction & accompanied by the release of heat
(d) It is a photochemical decomposition reaction & exothermic
Answer 32: Correct option is (a) It is a decomposition reaction and endothermic
Potassium chlorate decomposes to give potassium chloride KCl and oxygen. It is a decomposition reaction which is endothermic. The shown reaction is a thermal decomposition reaction, as the KClO3 decomposes to KCl salts and O2 gas on heating.
Heat has to be supplied to the reaction mixture for the decomposition to occur, so it is also an endothermic reaction.
Question 33: Solid calcium oxide interacts vigorously with water to form calcium hydroxide accompanied by the liberation of heat. This method is called slaking of lime. Calcium hydroxide dissolves in water to form its solution called lime water. Which among the following is (are) true about slaking of lime and the solution formed?
(i) It is an endothermic reaction
(ii) It is an exothermic reaction
(iii) The pH of the resulting solution will be more than seven
(iv) The pH of the resulting solution will be less than seven
(a) (i) and (ii)
(b) (ii) and (iii)
(c) (i) and (iv)
(d) (iii) and (iv)
Answer 33: The correct option is (b) (ii) & (iii)
Explanations: When solid calcium oxide reacts vigorously with water, it forms calcium hydroxide, accompanied by the release of heat. The release of heat shows that the reaction is exothermic. The pH value of the solution will be more than seven because the oxides and hydroxides of metals are basic (alkaline) in nature.
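The pH rule used in statement (iii) can be summarised in a small, illustrative Python snippet; the function name and the sample value are made up for the demonstration only.

```python
def classify_solution(ph):
    """Classify a solution by pH: above 7 basic, below 7 acidic, exactly 7 neutral."""
    if ph > 7:
        return "basic"
    if ph < 7:
        return "acidic"
    return "neutral"

# Lime water is basic, so its pH lies above 7
print(classify_solution(12))   # 'basic' (12 is just an illustrative value above 7)
```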
Question 34: In the double displacement reaction between aqueous potassium iodide & aqueous lead nitrate, a yellow precipitate of lead iodide is developed. While performing the activity, when lead nitrate is unavailable, which of the following can be used in place of lead nitrate?
(a) Lead sulphate (insoluble)
(b) Lead acetate
(c) Ammonium nitrate
(d) Potassium sulphate
Answer 34: The correct option is (b) Lead acetate
Explanations: To get a precipitate of lead iodide, we need a soluble compound containing lead, so ammonium nitrate and potassium sulphate are ruled out. Lead sulphate cannot be used because it is insoluble. Lead acetate is soluble, so the answer is (b) Lead acetate.
Question 35: Which of the following is not a physical change?
(i) Boiling of water to give water vapour
(ii) Melting of ice to give water
(iii) Dissolution of salt in water
(iv) Combustion of Liquefied Petroleum Gas (LPG)
Answer 35: Correct option is (iv) Combustion of Liquefied Petroleum Gas (LPG)
Explanation: Combustion is always a chemical change because new compounds are formed on burning and the change is irreversible.
Question 36: Complete the missing components/variables given as x and y in the following reactions-
(a) Pb(NO3 )2 (aq) + 2KI(aqueous) → PbI2 (x) + 2KNO3 (y)
(b) Cu(solid) + 2AgNO3 (aqueous) → Cu(NO3)2 (aq) + x(s)
(c) Zn(s) + H2SO4 (aqueous) → ZnSO4(x) + H2(y)
(d) CaCO3 (solid) → x CaO(s) + CO2(g)
Answer 36: The completed equations are:
(a) Pb(NO3)2(aq) + 2KI(aq) → PbI2(s) + 2KNO3(aq); here x is (s) and y is (aq)
(b) Cu(s) + 2AgNO3(aq) → Cu(NO3)2(aq) + 2Ag(s); here x is 2Ag
(c) Zn(s) + H2SO4(aq) → ZnSO4(aq) + H2(g); here x is (aq) and y is (g)
(d) CaCO3(s) → CaO(s) + CO2(g) (on heating); here x is heat
- When lead nitrate solution is mixed with aqueous potassium iodide, a precipitation (double displacement) reaction takes place: the ions exchange partners, giving a yellow precipitate of lead iodide in an aqueous solution of potassium nitrate.
- Copper metal is more reactive than silver and displaces silver from aqueous silver nitrate, forming an aqueous solution of copper nitrate and silver metal.
- When zinc metal is combined with sulphuric acid, hydrogen gas is evolved, and the aqueous solution of zinc sulphate is obtained.
- Calcium carbonate undergoes a decomposition reaction on heating to form calcium oxide and carbon dioxide gas.
Question 37: Identify the reducing agent in the following reactions
(a) 4NH3 + 5O2 → 4NO + 6H2O
(b) H2O + F2 → HF + HOF
(c) Fe2O3 + 3CO → 2Fe + 3CO2
(d) 2H2 + O2 → 2H2O
Answer 37: The reducing agents are
(a) NH3 (ammonia)
(b) H2O (water)
(c) CO (carbon monoxide)
(d) H2 (hydrogen)
- Ammonia reduces oxygen to water; the oxidation state of oxygen changes from 0 to –2.
- Water reduces fluorine to HF; the oxidation state of fluorine changes from 0 to –1.
- Carbon monoxide reduces iron(III) oxide to iron metal; the oxidation state of iron changes from +3 to 0.
- Hydrogen gas reduces oxygen to water and behaves as a reducing agent; the oxidation state of oxygen changes from 0 to –2.
Question 38: Write the balanced chemical equations for the following reactions
(a) Sodium carbonate on reaction with hydrochloric acid in equal molar concentrations gives sodium chloride and sodium hydrogen carbonate.
(b) Sodium hydrogen carbonate on reaction with hydrochloric acid gives sodium chloride and water, and carbon dioxide gas escapes.
(c) Copper sulphate interacts with potassium iodide, precipitates cuprous iodide (Cu2I2 ), liberates iodine gas, and forms potassium sulphate.
Answer 38 Solutions are:
(a) Na2CO3 + HCl → NaCl + NaHCO3
(b) NaHCO3 + HCl → NaCl + H2O + CO2
(c) 2CuSO4 + 4KI → 2K2SO4 + Cu2I2 + I2
Question 39: Ferrous sulphate compound decomposes with the evolution of a gas having a characteristic smell of burning sulphur. Write the chemical reaction and identify the various type of reaction.
Answer 39: The chemical reaction is: 2FeSO4 → Fe2O3 + SO2 + SO3 (on heating)
It is a decomposition reaction.
Explanations: Ferrous sulphate undergoes a decomposition reaction on heating to form a ferric oxide and release sulphur dioxide gas & sulphur trioxide gas, which have the characteristic odour of burning sulphur.
The given decomposition reaction is a thermal decomposition reaction as well as an endothermic reaction.
Question 40: Grapes hanging on the plant do not ferment, but after being plucked from the plant can be fermented. Under what conditions do these grapes ferment? Is it a physical as well as a chemical change?
Answer 40: While hanging on the plant, grapes do not ferment because of the plant's active immune system. After the grapes are plucked, microbes start acting on the sugar present in them and cause fermentation. During fermentation, the sugar in grapes is converted into ethanol and carbon dioxide. Since the chemical composition of the sugar changes, it is a chemical change.
Explanations: Grapes on the plant do not ferment because of the defence chemical mechanism of plants. If grapes are plucked from the plant, grapes interact with yeast to carry out fermentation. And sugar changes to alcohol, and it is a chemical change.
Question 41: Balance the following chemical equations and identify the type of chemical reaction.
(a) Mg(s) + Cl2(g) → MgCl2(s)
(b) HgO(s) → Hg(l) + O2(g) (on heating)
(c) Na(s) + S(s) → Na2S(s) (on fusing)
(d) TiCl4(l) + Mg(s) → Ti(s) + MgCl2(s)
(e) CaO(s) + SiO2(s) → CaSiO3(s)
(f) H2O2(l) → H2O(l) + O2(g) (in UV light)
Answer 41: Solutions are shown below:
(a) Mg(s) + Cl2(g) → MgCl2(s)
This is a combination (synthesis) reaction.
(b) 2HgO(s) → 2Hg(l) + O2(g) (on heating)
This is a thermal decomposition reaction.
(c) 2Na(s) + S(s) → Na2S(s) (on fusing)
This is a combination reaction.
(d) TiCl4(l) + 2Mg(s) → Ti(s) + 2MgCl2(s)
This is a displacement reaction.
(e) CaO(s) + SiO2(s) → CaSiO3(s)
This is a combination (synthesis) reaction.
(f) 2H2O2(l) → 2H2O(l) + O2(g) (in UV light)
This is a photodecomposition reaction.
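A balanced equation has the same number of atoms of each element on both sides. The sketch below is a minimal, illustrative Python check of that condition for simple formulas (no brackets or hydrates); the helper names are made up for this example.

```python
import re
from collections import Counter

def atom_counts(formula):
    """Count atoms in a simple formula such as 'Na2S' or 'MgCl2' (no brackets)."""
    counts = Counter()
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(number) if number else 1
    return counts

def side_counts(species):
    """Total atom counts for one side, given (coefficient, formula) pairs."""
    total = Counter()
    for coefficient, formula in species:
        for element, n in atom_counts(formula).items():
            total[element] += coefficient * n
    return total

# Check part (c): 2 Na + S -> Na2S
reactants = [(2, "Na"), (1, "S")]
products = [(1, "Na2S")]
print(side_counts(reactants) == side_counts(products))   # True -> balanced
```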
Question 42: A substance X, an oxide of a group 2 element, is used extensively in the cement industry. This element is also present in bones. On treatment with water, it forms a solution that turns red litmus blue. Identify X and write the chemical reactions involved.
Answer 42: Compound X is Calcium oxide. CaO is extensively utilised in the cement industry. On treatment with water, CaO produces calcium hydroxide Ca(OH)2, which is alkaline and turns red litmus into blue colour.
CaO(s) +H2O(l)→ Ca(OH)2(aq)
Explanations: Calcium is a group 2 element and is also present in bones. Calcium oxide, commonly known as quicklime, is used extensively in the cement industry. On treatment with water, calcium oxide forms alkaline calcium hydroxide, which turns red litmus blue.
Question 43: Why do fireflies glow at night?
Answer 43: Fireflies glow at night because of a chemical reaction inside their bodies. Oxygen reacts with calcium, ATP and luciferin in the presence of an enzyme called luciferase, producing light. This is called bioluminescence.
Explanations: Fireflies glow at night due to a chemical reaction inside their body that releases energy in the form of light. Fireflies contain luciferin, an organic compound. Oxygen reacts with calcium, adenosine triphosphate (ATP) and luciferin in the presence of luciferase, a bioluminescent enzyme, to create a new substance called oxyluciferin, along with the evolution of energy in the form of light. This type of light production is termed bioluminescence.
Question 44: A silver article commonly turns black when kept in the open for a few days. The article, if rubbed with toothpaste, again starts shining. (a) Why do silver articles turn black when retained in the open for a few days? Name the phenomenon involved. (b) Name the black substance developed and give its chemical formula.
Answer 44: (a) Silver reacts with H2S present in the atmosphere to form a black compound, silver sulphide. This phenomenon is known as corrosion (tarnishing of silver).
(b) The black compound formed is silver sulphide (Ag2S).
2Ag(solid)+ H2S(g) → Ag2S(s) + H2(g)
Explanations: (a) When silver articles are kept in the open for a few days, the silver reacts with sulphur compounds (such as hydrogen sulphide) present in the air and turns black due to the formation of Ag2S. This phenomenon is called corrosion.
2Ag(solid) + H2S(g) → Ag2S(s) + H2(g)
(silver + hydrogen sulphide → silver sulphide + hydrogen gas)
Toothpaste contains calcium carbonate and aluminium hydroxide that can remove the black layer of silver sulphide, and silver shines again.
3Ag2S + 2Al → 6 Ag + Al2S3
(silver sulphide + aluminium → silver metal + aluminium sulphide)
(b) The black substance formed is silver sulphide, with the chemical formula Ag2S.
Question 45: Zinc liberates hydrogen gas when reacted with dilute hydrochloric acid, whereas copper does not. Explain why?
Answer 45: Zinc is more reactive than copper: zinc is placed above hydrogen and copper below hydrogen in the activity series of metals. Hence zinc reacts with HCl, whereas copper does not.
Explanations: Zinc metal is placed above hydrogen in the reactivity series and is more reactive to replace hydrogen from dilute hydrochloric acid and liberate hydrogen gas.
Zn(s) + 2HCl(aq) → ZnCl2(aq) + H2(g)
(zinc + hydrochloric acid → zinc chloride + hydrogen gas)
Copper metal is positioned below hydrogen in the reactivity series, less reactive to displace hydrogen from dilute hydrochloric acid, and no reaction occurs.
Cu(s) + HCl(aq) → No reaction
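The rule applied here (a metal above hydrogen in the activity series displaces hydrogen from dilute acids; one below it does not) can be expressed as a short, illustrative Python sketch. The list below is a simplified subset of the series, ordered from most to least reactive, and is for demonstration only.

```python
# Simplified activity series, most reactive first (illustrative subset only)
ACTIVITY_SERIES = ["K", "Na", "Ca", "Mg", "Al", "Zn", "Fe", "Pb", "H", "Cu", "Ag", "Au"]

def displaces_hydrogen(metal):
    """True if the metal sits above hydrogen in the series, i.e. it liberates H2 from dilute acids."""
    return ACTIVITY_SERIES.index(metal) < ACTIVITY_SERIES.index("H")

print(displaces_hydrogen("Zn"))   # True  -> zinc + dilute HCl evolves hydrogen
print(displaces_hydrogen("Cu"))   # False -> copper + dilute HCl: no reaction
```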
Question 46: On heating blue coloured powder of copper (II) nitrate in a boiling tube, copper oxide (black), oxygen gas & a brown gas X are formed
(a) Write a balanced chemical equation of this reaction.
(b) Identify the brown gas (X) that evolved.
(c) Identify the reaction types.
(d) What could be the pH value range of the aqueous solution of the gas X?
Answer 46: (a) 2Cu(NO3)2(s) → 2CuO(s) + 4NO2(g) + O2(g)
(copper nitrate, blue → copper oxide, black + nitrogen dioxide, brown gas X + oxygen gas)
(b) The brown gas X is NO2 nitrogen dioxide.
(c) The reaction type is Thermal decomposition.
(d) NO2 dissolves in water to form an acidic solution, so the pH of the solution will be in the acidic range (below 7).
Explanations: a) The copper (II) nitrate decomposes on heating to synthesise black copper oxide, and oxygen gas is liberated along with brown nitrogen dioxide gas.
2Cu(NO3)2(s) → 2CuO(s) + 4NO2(g) + O2(g)
(copper nitrate, blue → copper oxide, black + nitrogen dioxide, brown gas X + oxygen gas)
(b) Identify the brown gas X evolved.
Ans: The brown gas X evolved is nitrogen dioxide (NO2).
(c) The type of reaction is thermal decomposition as the single reactant decomposes on heating to give three products.
(d) The oxides of non-metals are acidic; nitrogen is a non-metal, so nitrogen dioxide (the brown gas X) gives an acidic solution, and the pH of its aqueous solution lies in the acidic range, below 7.
Question 47: What happens when a piece of
(a) zinc metal is added to copper sulphate solution?
(b) aluminium metal is added to dilute hydrochloric acid?
(c) silver metal is added to copper sulphate solution?
Also, write the balanced chemical equation when the reaction occurs
Answer 47: (a) When zinc is added to copper sulphate solution, zinc displaces copper to form zinc sulphate.
Zn(s) + CuSO4(aqueous)→ ZnSO4(aq) + Cu(s)
(b) Aluminium metal reacts with dilute HCl to form aluminium chloride, and hydrogen gas is evolved.
2Al(s) + 6HCl(aq) → 2AlCl3(aq) + 3H2(g)
(c) When silver metal is added to copper sulphate solution, there is no reaction, as silver is less reactive than copper.
Explanations: (a) When zinc metal is added to blue copper sulphate solution, zinc, being more reactive than copper (it is placed above copper in the reactivity series), displaces copper from copper sulphate to form a colourless zinc sulphate solution, and reddish-brown copper metal settles at the bottom.
(b) When aluminium metal is added to dilute hydrochloric acid, an aqueous solution of aluminium chloride is formed and hydrogen gas is evolved.
(c) Silver is less reactive than copper, so it cannot displace copper from copper sulphate solution; therefore, no reaction occurs when silver is added to copper sulphate solution.
Ag(s) + CuSO4(aq) → No Reaction
(silver + copper sulphate: no reaction)
Question 48: You are provided with two containers of copper and Aluminium. You are also given dil HCl, dil HNO3, ZnCl2 and H2O solution. In which of the above containers solutions can be kept?
Answer 48: All of these solutions can be kept in the copper container, because copper is a less reactive metal and does not react with dilute HCl, dilute HNO3, zinc chloride solution or water. If the solutions are kept in the aluminium container, aluminium reacts with the acids and with zinc chloride (forming aluminium chloride), so only water and dilute nitric acid (which forms a protective oxide layer on aluminium) can safely be stored in it.
Explanations: Copper is a less reactive metal placed below hydrogen and zinc metal in the reactivity series and hence cannot replace hydrogen from acids and water as well as Zinc from zinc chloride. Therefore, all the dilute HCl, HNO3, ZnCl2 and H2O solutions can be kept in copper vessels.
Aluminium is a reactive metal and placed above hydrogen in the reactivity series; hence reacts with the following:
(i) With dilute hydrochloric acid (HCl):
2Al(s) + 6HCl(aq) → 2AlCl3(aq) + 3H2(g)
(aluminium + hydrochloric acid → aluminium chloride + hydrogen gas)
Aluminium interacts with dil HCl to develop aluminium chloride solution, and hydrogen gas is evolved. Therefore, dilute HCl solution cannot be stored in aluminium vessels.
(ii) With dilute nitric acid (HNO3):
Aluminium interacts with a dilute solution of nitric acid to develop the protective layer of aluminium oxide (Al2O3), which prevents the further reaction of an acid with aluminium vessels; hence, dilute nitric acid solution can be stored in the aluminium vessel.
(iii) With zinc chloride ZnCl2
2 Al(s) + 3 ZnCl2(aq) → 2 AlCl3(aq) + 3 Zn(s)
(aluminium + zinc chloride → aluminium chloride + zinc)
Aluminium is a more reactive metal than Zinc (Aluminium is placed above Zinc in the reactivity series) and displaces Zinc from zinc chloride solution to form aluminium chloride and zinc metal. Hence Zinc chloride solution cannot be stored in aluminium vessels.
(iv) With Water H2O:
(iv) With water (H2O): Aluminium metal does not react with cold water; with steam it forms a protective layer of aluminium oxide (Al2O3), which prevents further reaction with water. Hence, water can be stored in aluminium vessels.
Question 49: Give the characteristic tests for the following gases: (a) CO2 (b) SO2 (c) O2 (d) H2
Answer 49:
- CO2: Passing the gas through limewater turns the limewater milky. This is the confirmatory test for carbon dioxide.
- SO2: Its sharp, suffocating smell of burning sulphur is characteristic; it also turns acidified potassium permanganate solution colourless.
- O2: A glowing or burning matchstick brought near oxygen burns even more brightly.
- H2: When a burning matchstick is brought near hydrogen gas, the gas burns with a pop sound. This confirms hydrogen.
(a) CO2: Carbon dioxide gas represents the characteristic of turning limewater milky due to the formation of an insoluble precipitate of calcium carbonate.
Ca(OH)2(aq) + CO2(g) → CaCO3(s) + H2O(l)
(lime water + carbon dioxide → calcium carbonate + water)
(b) SO2: Sulphur dioxide gas turns the purple colour acidic solution of potassium permanganate colourless. The SO2 acts as a reducing agent and forms colourless potassium sulphate and colourless manganese sulphate.
2KMnO4(aq) + 2H2O(l) + 5SO2(g) → K2SO4(aq) + 2H2SO4(aq) + 2MnSO4(aq)
(potassium permanganate, purple → potassium sulphate and manganese sulphate, both colourless)
(c) O2: When a matchstick is brought near the oxygen gas, it burns with more intensity and with bright flames as the oxygen gas supports the burning.
(d) H2: When a burning matchstick is brought near hydrogen gas, it burns with a 'pop' sound.
Question 50: On adding a drop of barium chloride solution to an aqueous solution of sodium sulphite, a white precipitate is obtained.
(i) Write balanced chemical equations of the reaction involved?
(ii) What other name can be provided for this precipitation reaction?
(iii) On adding dilute hydrochloric acid (HCl) to the reaction mixture, the white precipitate disappears. Why?
Answer 50: (i) On mixing a drop of barium chloride solution with an aqueous solution of sodium sulphite, barium sulphite, a white precipitate, is formed.
BaCl2 + Na2SO3 → BaSO3 + 2NaCl
(ii) This precipitation reaction is also a double displacement reaction.
(iii) When dilute HCl is added to the reaction mixture, barium chloride, sulphur dioxide and water are formed. Barium chloride is soluble, so the white precipitate disappears.
BaSO3 + 2HCl → BaCl2 + SO2 + H2O
Benefits of Solving Important Questions Class 10 Science Chapter 1
Science demands a lot of practice. Classes 8, 9 and 10 are very important for students to build strong fundamentals. We recommend that students access Extramarks' comprehensive set of Important Questions Class 10 Science Chapter 1. By regularly solving questions and going through our answer solutions, students will gain the confidence to solve tough problems from the Chemical Reactions and Equations chapter.
Below are a few benefits of frequently solving questions from our Important Questions Class 10 Science Chapter 1:
- By referring to the detailed step-by-step solutions, students will learn how to write balanced chemical equations and understand the types of chemical reactions covered in Chapter 1 of the Class 10 Science syllabus.
- The questions and answers are based on the latest CBSE syllabus and as per CBSE guidelines. So students can rely on them fully.
- The questions covered in our set of Important Questions in Class 10 Science Chapter 1 are based on various topics covered in the Chemical reactions and equations chapter. So while solving these questions, students can revise the chapter and clarify any doubts.
- Practising questions similar to exam questions would help students perform better in their exams and score good marks.
Extramarks provides comprehensive learning solutions for students from Class 1 to Class 12. We have other study resources on our website, along with important questions and answers. Students can click on the links shown below to access some of these resources:
- NCERT books
- CBSE Revision Notes
- CBSE syllabus
- CBSE sample papers
- CBSE previous year’s question papers
- Important formulas
- CBSE extra questions
Q.4 The reaction shown in the given figure is an example of
(a) combination reaction
(b) displacement reaction
(c) oxidation reaction
(d) neutralisation reaction
Ans. (b) In this reaction, iron displaces copper from its salt (copper sulphate). Therefore, it is a displacement reaction.
Q.5 The balanced chemical equation that represents the formation of barium sulphate from barium chloride solution and aluminum sulphate solution is
(b) BaCl2 + Al2(SO4)3 → BaSO4 + AlCl3
The chemical equation represented in option (c) correctly represents the reaction as it clearly indicates the physical states of all reactants and products and is also balanced.
FAQs (Frequently Asked Questions)
1. Is the study resource of Important Questions Class 10 Science Chapter 1 enough to score good marks?
The solutions we have given are concise and written from an examination perspective. The answers to the exercise questions are clearly explained with examples. They are 100% accurate. These solutions will help students prepare for the exam as we follow the guidelines provided by NCERT and CBSE Science syllabus. These NCERT solutions will assist students in developing a conceptual foundation that explains all of the key concepts in an easy-to-understand language. This exercise covers all topics and subtopics that could be expected in your Class 10 Science exams.
Along with the study materials provided by Extramarks team, students should always refer to the official NCERT textbooks and exemplars provided as part of CBSE curriculum.
2. Apart from the NCERT textbook, Where can I find good study resources for Class 10 Science?
You can find the important study materials for Class 10 Science on the Extramarks official website. Our study materials cover all important topics from sources like NCERT textbooks, NCERT exemplar and other reference sources related to the CBSE curriculum. You can build your confidence and improve your scores by practising and revising from our study resources. The important questions and their solutions will help you better understand the concepts covered in the chapter.
You can access them easily by registering on the website. Apart from this, Extramarks also provides NCERT study material for Classes 1 to 12 and CBSE past years' question papers.
3. What are the important chapters covered in the Class 10 Science CBSE curriculum?
Class 10 Science plays a vital role in building the foundation for students in Classes 11 and 12. Every chapter covered in Class 10 Science has a critical role. The important chapters covered in Class 10 Science include the following:
- Chapter 1 Chemical Reactions and Equations
- Chapter 2 Acids, Bases and Salts
- Chapter 3 Metals and Non-metals
- Chapter 4 Carbon and its Compounds
- Chapter 5 Periodic Classification of Elements
- Chapter 6 Life Processes
- Chapter 7 Control and Coordination
- Chapter 8 How do Organisms Reproduce?
- Chapter 9 Heredity and Evolution
- Chapter 10 Light Reflection and Refraction
- Chapter 11 The Human Eye and The Colorful World
- Chapter 12 Electricity
- Chapter 13 Magnetic Effects of Electric Current
- Chapter 14 Sources of Energy
- Chapter 15 Our Environment
- Chapter 16 Sustainable Management of Natural Resources
by Kevin E. Trenberth*
The climate is changing. In general, temperatures are increasing (Figure 1), owing to human-induced changes in the composition of the atmosphere, notably increased carbon dioxide from the burning of fossil fuels (IPCC, 2007). Land is mostly warming faster than the ocean. A close examination of Figure 1, however, shows that the temperatures actually declined from 1901 to 2005 in the south-eastern USA and the North Atlantic. Why is this? In the North Atlantic, changes in ocean currents clearly contribute. Over the south-eastern USA, changes in the atmospheric circulation that brought cloudier and much wetter conditions played a major role (Trenberth et al., 2007). This non-uniformity of change highlights the challenges of regional climate change that has considerable spatial structure and temporal variability.
Figure 1 — Linear trend of annual temperatures for 1901 to 2005 (°C per century). Areas in grey have insufficient data to produce reliable trends. Trends significant at the 5% level are indicated by white + marks. (From Trenberth et al., Climate Change 2007: The Physical Science Basis, Intergovernmental Panel on Climate Change)
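A linear trend of this kind is simply a least-squares fit of temperature against time, with the slope expressed per century. The short Python sketch below illustrates the arithmetic on synthetic data; it is not the procedure used to produce Figure 1, and all numbers in it are made up.

```python
import numpy as np

# Synthetic annual temperatures for 1901-2005 (illustrative data only)
years = np.arange(1901, 2006)
temps = 0.007 * (years - 1901) + np.random.default_rng(0).normal(0.0, 0.1, years.size)

slope_per_year, intercept = np.polyfit(years, temps, 1)   # least-squares linear fit
print(f"linear trend = {slope_per_year * 100:.2f} °C per century")
```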
A foundation of climate research and future projections comes from the observations. These come from many and varied sources. Many are taken for weather forecasting purposes. Changes are common in instrumentation and siting, thereby disrupting the climate record, for which continuity and homogeneity are vitally important for assessing climate variations and change. Increasing volumes of observations come from space-based platforms, but satellites have a finite life time (typically five years or so), the orbit drifts and decays over time, the instruments degrade and, hence, the apparent climate record can become corrupted by spurious changes. An ongoing challenge is to create climate data records from the observations to serve many purposes.
Loss of Earth-observing satellites is also of concern, as documented in the recent National Research Council decadal survey (2007). Ground-based observations are not being adequately kept up in many countries. Calibration of climate records is critical. Small changes over a long time are characteristic of climate change but they occur in the midst of large variations associated with weather and natural climate variations, such as El Niño. Yet the climate is changing and it is imperative to track the changes and causes as they occur and identify what the prospects are for the future—to the extent that they are predictable. We need to build a system based on these observations to inform decision-makers about what is happening and why and what the predictions are for the future on several time horizons.
In this article, an outline is given of a subset of activities related to the needs of decision-makers for climate information for adaptation purposes. It builds on some discussions held at a workshop on learning from the Fourth Assessment of the Intergovernmental Panel on Climate Change (IPCC) (Sydney, Australia, 4-6 October 2007). The Workshop was sponsored by the Global Climate Observing System, the World Climate Research Programme (WCRP) and the International Geosphere-Biosphere Programme of the International Council for Science. Within WCRP, the WCRP Observations and Assimilation Panel (WOAP), which the author chairs, attempts to highlight outstanding issues and ways forward in addressing them.
Building an information base for adaptation
A detailed diagnosis of the vital signs of planet Earth has revealed that the planet is running a “fever” and the prognosis is that it is liable to become much worse. “Warming of the climate system is unequivocal” and it is “very likely” due to human activities. This is the verdict of the Fourth Assessment Report of the IPCC, known as AR4 (IPCC, 2007). Although mitigation of human climate change is vitally important, the evidence suggests that climate will continue to change substantially as a result of human activities over the next several decades, so that adaptation will be essential.
An imperative and essential first step is to build a climate information system (Trenberth et al., 2002; 2006) that informs decision-makers about what is happening and why, and what the immediate prospects are (see Figure 2). Overall what is required are the observations (that satisfy the climate-observing principles); a performance tracking system; the ingest, archival and stewardship of data; access to data, including data management and integration; analysis and re-analysis of the observations and derivation of products, especially including climate data records (National Research Council, 2005); assessment of what has happened and why (attribution), including likely impacts on humans and ecosystems; prediction of near-term climate change over several decades; and responsiveness to decision-makers and users.
Figure 2 — A schematic of the flow of the climate information system, as basic research feeds into applied and operational research and the development of climate services. The system is built on the climate observing system, which includes the analysis and assimilation of data using models to produce analyses and fields for initializing models, and the use of models for attribution and prediction; all the information is then assessed and assembled into products and information that are disseminated to users. The users in turn provide feedback on their needs and on how to improve the information.
This first means gathering the information on changes in climate and the external forcings and attributing, to the extent possible, why the changes have occurred. Any attribution activity is fundamentally about the science of climate predictability, including the predictability of climate variability (e.g. El Niño-Southern Oscillation (ENSO), seasonal variability, etc.), as well as long-term climate change. Indeed, understanding and attributing what has just happened would seem to be a prerequisite to making the next climate prediction. The central goal is to understand the causes of observed climate variability and change, including the uncertainties, and to be able to: (a) use this understanding to improve model realism and forecast skill; and (b) communicate this understanding to users of climate knowledge and the public in general. Attribution may take two stages. For instance, one stage entails running atmospheric models to determine the extent to which recent conditions could have been predicted, given the observed sea-surface temperatures (SSTs), soil moisture, sea ice and other anomalous influences on the atmosphere. The second step is to say why the SSTs and soil moisture, etc., are the way they are. As models have become better, climate events that could not be attributed in the past now can be.
Models are still far from perfect and are likely to underestimate what can actually be attributed. Individual researchers may be ahead of consensus. Accordingly, both steps should take account of shortcomings of models, and empirical (statistical, etc.) evidence can often be more compelling. A requirement is to have significantly expanded computer resources for ensemble simulations and simulations that have sufficient resolution for regional-scale climate attribution (e.g. droughts, hurricanes, floods). A research question is how to do this efficiently.
The development of this climate information system potentially takes on, in a more operational framework, a key part of what is currently done by the IPCC. The research questions are many on how to develop the system and what the system includes in ways that make it viable. The related follow-on activities are then to improve and initialize climate models and make ensemble predictions for the next 30 years or so, as given below.
Initialization and validation of decadal forecasts
Running atmospheric models with specified SSTs has often produced understanding of past climate anomalies. For instance, the Sahel drought (Giannini et al., 2003) and the “Dust Bowl” period of drought in the USA in the 1930s (Schubert et al., 2004; Seager et al., 2005) can be simulated in this way. Hurrell et al. (2004) find that some aspects of the North Atlantic Oscillation can be simulated with prescribed SSTs. It is essential to have the patterns of SSTs around the globe simulated much as observed, and it is clearly not possible to make such predictions without initialization of oceans and other aspects of the climate system.
The extent to which this leads to predictability is not yet clear but the underlying hypothesis is that there is significant predictability that can be exploited for improved adaptation and planning by decision-makers. Early tests of this approach (Smith et al., 2007) show the promise and benefit of initializing models, but the benefit thus far stems mainly from ENSO.
In a 30-year time frame, climate predictions are not sensitive to emissions scenarios and this aspect can hence be largely removed from consideration. Yet, forecasts in this time frame would be exceedingly valuable. Climate (change) predictions are therefore needed to provide information on a time-scale of 0-30 years but with estimates of uncertainty (ensembles) and estimates of sensitivity to errors in initial conditions. This also leads to improved models through regular testing against data, as noted below. The WCRP has initiated research in this area under the banner "seamless climate prediction" that calls for prediction on multiple time-scales, ranging from numerical weather prediction to extended range over weeks (see The Observing System Research and Predictability Experiment (THORPEX)), through interannual variability including ENSO, to multi-decadal predictions, all as initial value problems requiring specification of the initial observed state.
For weather prediction, detailed analyses of the atmosphere are required but uncertainties in the initial state grow rapidly over several days. For climate predictions, the initial state of the atmosphere is less critical; states separated by a day or so can be substituted. However, the initial states of other climate-system components, some of which may not be critical to day-to-day weather prediction, become vital. For predictions of a season to a year or so, SSTs, sea-ice extent and upper-ocean heat content, soil moisture, snow cover and state of surface vegetation over land are all important. Such initial value predictions are already operational for forecasting El Niño and extensions to the global oceans are underway. On longer time-scales, increased information throughout the ocean is essential. The mass, extent, thickness and state of sea ice and snow cover are vital at high latitudes. The states of soil moisture and surface vegetation are especially important in understanding and predicting warm season precipitation and temperature anomalies, along with other aspects of the land surface. Any information on systematic changes to the atmosphere (especially its composition and influences from volcanic eruptions), as well as external forcings, such as from changes in the Sun, is also needed. Uncertainties in the initial state and the lack of detailed predictability of the atmosphere and other aspects of climate mandate that ensembles of predictions must be made and statistical forecasts given.
The activity feeds directly into providing regional predictions, including downscaling, and should result in probability distributions for fields of interest. The results would have direct applications regionally where impacts are most felt and where planned adaptation can occur.
Predictability should arise from certain phenomena that evolve slowly or which have large thermal inertia, such as ocean current systems, including the meridional overturning circulation of the ocean, ice-sheets, sea-level and land properties. Some predictability can be determined from model experiments, but only to the extent that models themselves are adequate. Coping with systematic errors is a particular challenge in assimilating real observations. Having available high-quality comprehensive datasets both to initialize the models and test them in hindcast mode is vitally important and is linked to re-analysis of the climate system components, perhaps in coupled mode. Such data are also essential for improving models. Compromises have to be made over model resolution and fidelity versus multiple runs and perturbation ensembles, as well as multi-model ensembles. Metrics for evaluation are a developing field, but attention must be devoted to modes of variability, such as ENSO, and to how to cope with missing phenomena such as tropical cyclones.
Confronting models with observations
It is desirable to confront models with a variety of observational evidence in order to interpret observed historical changes in the climate system and to have confidence in projections of future change. The AR4 demonstrates that we now have a relatively good understanding of the causes of surface temperature changes observed over the 20th century on both global and continental scales. We are able to quantify the contributions to observed change from the main external influences (including human influences) on the climate over the past century. However, our capability of interpreting change in most other impacts-relevant variables (such as circulation change, precipitation change and changes in extremes of various types) remains more limited.
It is necessary to perform specific, designed experiments to isolate and correct the causes of long-standing model biases arising, for example, from persistent difficulties in representing convective processes (which lead to a poor representation of the diurnal cycle of precipitation), coupled air-sea modes of interaction and the distribution of marine stratus. Focused experimentation with climate models is needed to isolate the causes of specific and persistent model biases at the process level. For instance, the Cloud Feedback Model Intercomparison Project (CFMIP) focuses on cloud feedbacks. The Transpose Atmospheric Model Intercomparison Project employs climate models in weather-forecast mode to examine biases that develop rapidly in forecasts of up to five days.
There is a pressing need to develop and apply a set of community-accepted model metrics that could be used to weigh the many different models contributing to the large ensembles. The metrics could be based on the ability to simulate the mean annual cycle; observed climate variability on scales from hours (diurnal) to decadal; the features of the longer-term historical evolution of the climate system as estimated from paleo records of, for example, the last millennium, the Last Glacial Maximum, or other times when observational constraints are adequate; and the ability to produce short-term weather predictions as an initial value problem and short-term evolution of the climate system over the satellite period as an initial value and boundary forced problem.
Reprocessing and re-analyses
While we have generally seen continuing improvement in the Earth-observing satellite network, with significant enhancements in the measurements made by operational satellites, problems have arisen over establishing and maintaining climate observations from space that are highlighted by the de-scoping of the National Polar-orbiting Operational Environmental Satellite System, in which climate observations have been seriously compromised. Longer-term prospects for Earth observations are also not as good as they have been (National Research Council, 2007).
The continuing problems in establishing and maintaining global measurements of essential climate variables highlight the need for formal international coordination of these measurements across agencies and missions, in liaison with user groups from the climate community. Coordination is especially important in the design phase of missions (so that consistency and continuity can be maintained) and in the calibration and validation processes (so that spatially and temporally consistent data can be collected). Both these aspects have been identified by the Committee on Earth Observation Satellites through the virtual constellation concept and the Global Space-based Inter-Calibration System.
Much more work is needed to take advantage of observations already made. A key part of the overall strategy in creating climate data records is the need to have a vibrant programme of re-processing of past data (GCOS, 2005) and re-analysis of all the data into global fields. The AR4 IPCC report demonstrates shortcomings in many climate records, especially those from space. Related research has also demonstrated, however, the potential for improvements in the records as progress is made on algorithm development and solutions are found to problems. These include discontinuities in the record across different instruments and satellites, drifts in orbit effects and all issues related to the creation of true climate data records.
The WCRP Observations and Assimilation Panel has posted a set of guidelines on when and whether it is appropriate to carry out re-processing. Coordination among the major space agencies is highly desirable to agree on algorithms and calibration procedures. The fields would include temperature, water vapour, clouds, radiation, sea-surface temperatures, sea ice and snow cover and, especially, tropical storms and hurricanes. The research community can do this; but it requires substantial resources and coordination among international space agencies, in particular.
Global atmospheric analyses are produced in real-time operationally. As the assimilating model used to analyse the observations is improved, the analyses may change in character. Re-analysis is the name given to the re-processing of all these and other observations with a state-of-the-art system that is held constant in time, thereby improving the continuity of the resulting climate record. The challenge of dealing with the changing observing system is still before us. The result is a more coherent description of the changing atmosphere, ocean, land surface and other climate components that can be utilized by the many customers for climate products, including those that cannot be directly observed. Re-analysis thus contributes to the capacity-building objectives of programmes such as the Global Earth Observing System of Systems and should be considered an essential component of a climate observing system. WOAP promotes re-analysis and the Third International Re-analysis Conference is being held in Tokyo, Japan in January 2008.
Building a climate information system (see Figure 2) potentially integrates research and output from WCRP, the International Geosphere-Biosphere Programme, Diversitas (an international programme of biodiversity science (ESSP/IGBP/IHDP/WCRP)), the International Human Dimensions Programme on Global Environmental Change (ESSP/DIVERSITAS/IGBP/WCRP) and the Global Climate Observing System. Basic research feeds into applied and operational research that, in turn, develops climate products and services.
Many aspects require research on assembling observations, analysis and assimilation, attribution studies, establishing relationships among physical and environmental impact variables, running models and coping with model biases, predictions and projections, downscaling and regionalizing results and developing information systems and ways of interacting with users. Meeting the challenges in the above research requires adequate funding but potentially pays off with a valuable information system.
References

Global Climate Observing System, 2004: Implementation Plan for the Global Observing System for Climate in Support of the UNFCCC. GCOS-92, 143 pp.
Giannini, A., R. Saravanan and P. Chang, 2003: Oceanic forcing of Sahel rainfall on interannual to interdecadal time scales. Science, 302, 1027−1030.
Hoerling, M. and A. Kumar, 2003: The perfect ocean for drought. Science, 299, 691−694.
Hurrell, J.W., M.P. Hoerling, A.S. Phillips and T. Xu, 2004: Twentieth Century North Atlantic climate change. Part I: Assessing determinism. Climate Dyn., 23, 371-389.
Intergovernmental Panel on Climate Change (IPCC), 2007: Climate Change 2007—The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the IPCC. (S. Solomon, D. Qin, M. Manning, Z. Chen, M.C. Marquis, K.B. Avery, M. Tignor and H.L. Miller (Eds)). Cambridge University Press. Cambridge, UK, and New York, NY, USA, 996 pp.
National Research Council, 2004: Climate Data Records from Environmental Satellites: Interim Report, National Academy Press, Washington, DC, USA. 105 pp.
National Research Council, 2007: Earth Science and Applications from Space: National Imperatives for the Next Decade and Beyond. The National Academies Press, Washington, DC, USA.
Schubert, S.D., M.J. Suarez, P.J. Pegion, R.D. Koster and J.T. Bacmeister, 2004: Causes of long-term drought in the United States Great Plains. J. Climate, 17, 485−503.
Seager, R., Y. Kushnir, C. Herweijer, N. Naik and J. Velez, 2005: Modeling of tropical forcing of persistent droughts and pluvials over western North America: 1856-2000. J. Climate, 18, 4065-4088.
Smith, D.M., S. Cusack, A.W. Colman, C.K. Folland, G.R. Harris and J.M. Murphy, 2007: Improved surface temperature prediction for the coming decade from a global climate model. Science, 317, 796-799.
Trenberth, K.E., T.R. Karl and T.W. Spence, 2002: The need for a systems approach to climate observations. Bull. Amer. Meteor. Soc., 83, 1593–1602.
Trenberth, K.E., B. Moore, T.R. Karl and C. Nobre, 2006: Monitoring and prediction of the Earth’s climate: A future perspective. J. Climate, 19, 5001−5008.
Trenberth, K.E., P.D. Jones, P. Ambenje, R. Bojariu, D. Easterling, A. Klein Tank, D. Parker, F. Rahimzadeh, J.A. Renwick, M. Rusticucci, B. Soden and P. Zhai, 2007: Observations: Surface and Atmospheric Climate Change. In: Climate Change 2007. The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. (S. Solomon, D. Qin, M. Manning, Z. Chen, M.C. Marquis, K.B. Avery, M. Tignor and H.L. Miller (Eds)). Cambridge University Press. Cambridge, UK, and New York, NY, USA, 235−336, plus annex online.
* National Center for Atmospheric Research, Boulder, CO 80307, USA. E-mail: trenbert [at] ucar.edu
Heart rate is the speed of the heartbeat measured by the number of contractions (beats) of the heart per minute (bpm). The heart rate can vary according to the body's physical needs, including the need to absorb oxygen and excrete carbon dioxide. It is usually equal or close to the pulse measured at any peripheral point. Activities that can provoke change include physical exercise, sleep, anxiety, stress, illness, and ingestion of drugs.
The American Heart Association states the normal resting adult human heart rate is 60–100 bpm. Tachycardia is a fast heart rate, defined as above 100 bpm at rest. Bradycardia is a slow heart rate, defined as below 60 bpm at rest. During sleep a slow heartbeat with rates around 40–50 bpm is common and is considered normal. When the heart is not beating in a regular pattern, this is referred to as an arrhythmia. Abnormalities of heart rate sometimes indicate disease.
While heart rhythm is regulated entirely by the sinoatrial node under normal conditions, heart rate is regulated by sympathetic and parasympathetic input to the sinoatrial node. The accelerans nerve provides sympathetic input to the heart by releasing norepinephrine onto the cells of the sinoatrial node (SA node), and the vagus nerve provides parasympathetic input to the heart by releasing acetylcholine onto sinoatrial node cells. Therefore, stimulation of the accelerans nerve increases heart rate, while stimulation of the vagus nerve decreases it.
Because an individual's blood volume is relatively constant, one of the physiological ways to deliver more oxygen to an organ is to increase the heart rate so that blood passes by the organ more often. Normal resting heart rates range from 60 to 100 bpm. Bradycardia is defined as a resting heart rate below 60 bpm. However, heart rates from 50 to 60 bpm are common among healthy people and do not necessarily require special attention. Tachycardia is defined as a resting heart rate above 100 bpm, though persistent resting rates between 80 and 100 bpm, particularly if they are present during sleep, may be signs of hyperthyroidism or anemia (see below).
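The resting-rate categories described above can be written as a simple classification rule. The Python sketch below is an illustration only, using the thresholds quoted in the text; the function name is made up.

```python
def classify_resting_heart_rate(bpm):
    """Classify a resting heart rate using the thresholds quoted above."""
    if bpm < 60:
        return "bradycardia (may still be normal, e.g. during sleep or in trained athletes)"
    if bpm > 100:
        return "tachycardia"
    return "normal (60-100 bpm)"

print(classify_resting_heart_rate(52))    # bradycardia
print(classify_resting_heart_rate(72))    # normal
print(classify_resting_heart_rate(110))   # tachycardia
```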
- Central nervous system stimulants such as substituted amphetamines increase heart rate.
- Central nervous system depressants or sedatives decrease the heart rate (with some exceptions, such as ketamine, which can produce stimulant-like effects including tachycardia).
There are many ways in which the heart rate speeds up or slows down. Most involve the release of stimulant-like endorphins and hormones in the brain, some of which are triggered by the ingestion and processing of drugs.
The target heart rates discussed in this section apply to healthy persons and are inappropriately high for most persons with coronary artery disease.
Influences from the central nervous system
The heart rate is rhythmically generated by the sinoatrial node. It is also influenced by central factors through sympathetic and parasympathetic nerves. Nervous influence over the heart rate is centralized within the two paired cardiovascular centres of the medulla oblongata. The cardioaccelerator regions stimulate activity via sympathetic stimulation of the cardioaccelerator nerves, and the cardioinhibitory centers decrease heart activity via parasympathetic stimulation as one component of the vagus nerve. During rest, both centers provide slight stimulation to the heart, contributing to autonomic tone. This is a similar concept to tone in skeletal muscles. Normally, vagal stimulation predominates as, left unregulated, the SA node would initiate a sinus rhythm of approximately 100 bpm.
Both sympathetic and parasympathetic stimuli flow through the paired cardiac plexus near the base of the heart. The cardioaccelerator center also sends additional fibers, forming the cardiac nerves via sympathetic ganglia (the cervical ganglia plus superior thoracic ganglia T1–T4) to both the SA and AV nodes, plus additional fibers to the atria and ventricles. The ventricles are more richly innervated by sympathetic fibers than parasympathetic fibers. Sympathetic stimulation causes the release of the neurotransmitter norepinephrine (also known as noradrenaline) at the neuromuscular junction of the cardiac nerves. This shortens the repolarization period, thus speeding the rate of depolarization and contraction, which results in an increased heart rate. It opens chemical- or ligand-gated sodium and calcium ion channels, allowing an influx of positively charged ions.
Parasympathetic stimulation originates from the cardioinhibitory region with impulses traveling via the vagus nerve (cranial nerve X). The vagus nerve sends branches to both the SA and AV nodes, and to portions of both the atria and ventricles. Parasympathetic stimulation releases the neurotransmitter acetylcholine (ACh) at the neuromuscular junction. ACh slows HR by opening chemical- or ligand-gated potassium ion channels to slow the rate of spontaneous depolarization, which extends repolarization and increases the time before the next spontaneous depolarization occurs. Without any nervous stimulation, the SA node would establish a sinus rhythm of approximately 100 bpm. Since resting rates are considerably less than this, it becomes evident that parasympathetic stimulation normally slows HR. This is similar to an individual driving a car with one foot on the brake pedal. To speed up, one need merely remove one’s foot from the brake and let the engine increase speed. In the case of the heart, decreasing parasympathetic stimulation decreases the release of ACh, which allows HR to increase up to approximately 100 bpm. Any increases beyond this rate would require sympathetic stimulation.
Input to the cardiovascular centres
The cardiovascular centres receive input from a series of visceral receptors with impulses traveling through visceral sensory fibers within the vagus and sympathetic nerves via the cardiac plexus. Among these receptors are various proprioreceptors, baroreceptors, and chemoreceptors, plus stimuli from the limbic system which normally enable the precise regulation of heart function, via cardiac reflexes. Increased physical activity results in increased rates of firing by various proprioreceptors located in muscles, joint capsules, and tendons. The cardiovascular centres monitor these increased rates of firing, suppressing parasympathetic stimulation or increasing sympathetic stimulation as needed in order to increase blood flow.
Similarly, baroreceptors are stretch receptors located in the aortic sinus, carotid bodies, the venae cavae, and other locations, including pulmonary vessels and the right side of the heart itself. Rates of firing from the baroreceptors represent blood pressure, level of physical activity, and the relative distribution of blood. The cardiac centers monitor baroreceptor firing to maintain cardiac homeostasis, a mechanism called the baroreceptor reflex. With increased pressure and stretch, the rate of baroreceptor firing increases, and the cardiac centers decrease sympathetic stimulation and increase parasympathetic stimulation. As pressure and stretch decrease, the rate of baroreceptor firing decreases, and the cardiac centers increase sympathetic stimulation and decrease parasympathetic stimulation.
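The negative-feedback logic described in this paragraph can be sketched in a few lines of Python. This is a toy illustration of the qualitative rule, not a physiological model; the set point, units and names are invented for the example.

```python
def baroreflex_response(firing_rate, set_point=1.0):
    """Qualitative autonomic response to baroreceptor firing relative to a set point."""
    if firing_rate > set_point:      # increased pressure and stretch
        return {"sympathetic": "decrease", "parasympathetic": "increase"}   # heart rate falls
    if firing_rate < set_point:      # decreased pressure and stretch
        return {"sympathetic": "increase", "parasympathetic": "decrease"}   # heart rate rises
    return {"sympathetic": "unchanged", "parasympathetic": "unchanged"}

print(baroreflex_response(1.4))   # high pressure -> the reflex slows the heart
```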
There is a similar reflex, called the atrial reflex or Bainbridge reflex, associated with varying rates of blood flow to the atria. Increased venous return stretches the walls of the atria where specialized baroreceptors are located. However, as the atrial baroreceptors increase their rate of firing and as they stretch due to the increased blood pressure, the cardiac center responds by increasing sympathetic stimulation and inhibiting parasympathetic stimulation to increase HR. The opposite is also true.
Increased metabolic byproducts associated with increased activity, such as carbon dioxide, hydrogen ions, and lactic acid, plus falling oxygen levels, are detected by a suite of chemoreceptors innervated by the glossopharyngeal and vagus nerves. These chemoreceptors provide feedback to the cardiovascular centers about the need for increased or decreased blood flow, based on the relative levels of these substances.
The limbic system can also significantly impact HR related to emotional state. During periods of stress, it is not unusual to identify higher than normal HRs, often accompanied by a surge in the stress hormone cortisol. Individuals experiencing extreme anxiety may manifest panic attacks with symptoms that resemble those of heart attacks. These events are typically transient and treatable. Meditation techniques have been developed to ease anxiety and have been shown to lower HR effectively. Doing simple deep and slow breathing exercises with one’s eyes closed can also significantly reduce this anxiety and HR.
Factors influencing heart rate
Using a combination of autorhythmicity and innervation, the cardiovascular center is able to provide relatively precise control over the heart rate, but other factors can impact on this. These include hormones, notably epinephrine, norepinephrine, and thyroid hormones; levels of various ions including calcium, potassium, and sodium; body temperature; hypoxia; and pH balance.
Epinephrine and norepinephrine
The catecholamines, epinephrine and norepinephrine, secreted by the adrenal medulla form one component of the extended fight-or-flight mechanism. The other component is sympathetic stimulation. Epinephrine and norepinephrine have similar effects: binding to the beta-1 adrenergic receptors, and opening sodium and calcium ion chemical- or ligand-gated channels. The rate of depolarization is increased by this additional influx of positively charged ions, so the threshold is reached more quickly and the period of repolarization is shortened. However, massive releases of these hormones coupled with sympathetic stimulation may actually lead to arrhythmias. There is no parasympathetic stimulation to the adrenal medulla.
In general, increased levels of the thyroid hormones thyroxine (T4) and triiodothyronine (T3) increase the heart rate; excessive levels can trigger tachycardia. The impact of thyroid hormones is typically of a much longer duration than that of the catecholamines. Triiodothyronine, the physiologically active form, has been shown to directly enter cardiomyocytes and alter activity at the level of the genome. It also impacts the beta adrenergic response similarly to epinephrine and norepinephrine.
Calcium ion levels have a great impact on heart rate and contractility: increased calcium levels cause an increase in both. High levels of calcium ions result in hypercalcemia and excessive levels can induce cardiac arrest. Drugs known as calcium channel blockers slow HR by binding to these channels and blocking or slowing the inward movement of calcium ions.
Caffeine and nicotine
Caffeine and nicotine are both stimulants of the nervous system and of the cardiac centres causing an increased heart rate. Caffeine works by increasing the rates of depolarization at the SA node, whereas nicotine stimulates the activity of the sympathetic neurons that deliver impulses to the heart. Both stimulants are legal and unregulated, and are known to be very addictive.
Effects of stress
Both surprise and stress induce a physiological response that elevates heart rate substantially. In a study conducted on 8 female and male student actors aged 18 to 25, their reaction to an unforeseen occurrence (the cause of stress) during a performance was observed in terms of heart rate. In the data collected, there was a noticeable trend between the location of the actors (onstage or offstage) and their elevation in heart rate in response to stress: the actors who were offstage reacted to the stressor immediately, demonstrated by an immediate elevation in heart rate the minute the unexpected event occurred, whereas the actors who were onstage at the time of the stressor reacted over the following 5-minute period (demonstrated by their increasingly elevated heart rate). This trend regarding stress and heart rate is supported by previous studies: negative emotion or stimulus has a prolonged effect on heart rate in individuals who are directly impacted. In regard to the actors present onstage, a reduced startle response has been associated with a passive defence, and a diminished initial heart rate response has been predicted to indicate a greater tendency to dissociation. Heart rate is thus an accurate and easily observed measure of stress and the startle response, and can be used to determine the effects of certain stressors.
Factors decreasing heart rate
The heart rate can be slowed by altered sodium and potassium levels, hypoxia, acidosis, alkalosis, and hypothermia. The relationship between electrolytes and HR is complex, but maintaining electrolyte balance is critical to the normal wave of depolarization. Of the two ions, potassium has the greater clinical significance. Initially, both hyponatremia (low sodium levels) and hypernatremia (high sodium levels) may lead to tachycardia. Severe hypernatremia may lead to fibrillation, which may cause cardiac output to cease. Severe hyponatremia leads to both bradycardia and other arrhythmias. Hypokalemia (low potassium levels) also leads to arrhythmias, whereas hyperkalemia (high potassium levels) causes the heart to become weak and flaccid, and ultimately to fail.
Heart muscle relies exclusively on aerobic metabolism for energy. Hypoxia (an insufficient supply of oxygen) leads to decreasing HRs, since metabolic reactions fueling heart contraction are restricted.
Acidosis is a condition in which excess hydrogen ions are present, and the patient's blood expresses a low pH value. Alkalosis is a condition in which there are too few hydrogen ions, and the patient's blood has an elevated pH. Normal blood pH falls in the range of 7.35–7.45, so a number lower than this range represents acidosis and a higher number represents alkalosis. Enzymes, being the regulators or catalysts of virtually all biochemical reactions, are sensitive to pH and will change shape slightly with values outside their normal range. These variations in pH and accompanying slight physical changes to the active site on the enzyme decrease the rate of formation of the enzyme-substrate complex, subsequently decreasing the rate of many enzymatic reactions, which can have complex effects on HR. Severe changes in pH will lead to denaturation of the enzyme.
The last variable is body temperature. Elevated body temperature is called hyperthermia, and suppressed body temperature is called hypothermia. Slight hyperthermia results in increased HR and strength of contraction. Hypothermia slows the rate and strength of heart contractions. This distinct slowing of the heart is one component of the larger diving reflex that diverts blood to essential organs while submerged. If sufficiently chilled, the heart will stop beating; deliberate cooling of this kind may be employed during open heart surgery. In this case, the patient's blood is normally diverted to an artificial heart-lung machine to maintain the body's blood supply and gas exchange until the surgery is complete and sinus rhythm can be restored. Excessive hyperthermia and hypothermia will both result in death, as enzymes denature and body systems cease to function normally, beginning with the central nervous system.
In different circumstances
Heart rate is not a stable value and it increases or decreases in response to the body's need in a way to maintain an equilibrium (basal metabolic rate) between requirement and delivery of oxygen and nutrients. The normal SA node firing rate is affected by autonomic nervous system activity: sympathetic stimulation increases and parasympathetic stimulation decreases the firing rate. A number of different metrics are used to describe heart rate.
Resting heart rate
Normal pulse rates at rest, in beats per minute (BPM), are usually given by age group: 0–3 months old, 3–6 months, 6–12 months, 1–10 years, and children over 10 years and adults, including seniors.
The basal or resting heart rate (HRrest) is defined as the heart rate when a person is awake, in a neutrally temperate environment, and has not been subject to any recent exertion or stimulation, such as stress or surprise. A large body of evidence indicates that the normal range is 60-100 beats per minute. This resting heart rate is often correlated with mortality. For example, all-cause mortality is increased by 1.22 (hazard ratio) when heart rate exceeds 90 beats per minute. The mortality rate of patients with myocardial infarction increased from 15% to 41% if their admission heart rate was greater than 90 beats per minute. ECG of 46,129 individuals with low risk for cardiovascular disease revealed that 96% had resting heart rates ranging from 48-98 beats per minute. Finally, expert consensus reveals that 98% of cardiologists believe that the "60 to 100" range is too high, with a vast majority of them agreeing that 50 to 90 beats per minute is more appropriate. The normal resting heart rate is based on the at-rest firing rate of the heart's sinoatrial node, where the faster pacemaker cells driving the self-generated rhythmic firing and responsible for the heart's autorhythmicity are located. For endurance athletes at the elite level, it is not unusual to have a resting heart rate between 33 and 50 bpm.
Maximum heart rate
The maximum heart rate (HRmax) is the highest heart rate an individual can achieve without severe problems through exercise stress, and generally decreases with age. Since HRmax varies by individual, the most accurate way of measuring any single person's HRmax is via a cardiac stress test. In this test, a person is subjected to controlled physiologic stress (generally by treadmill) while being monitored by an ECG. The intensity of exercise is periodically increased until certain changes in heart function are detected on the ECG monitor, at which point the subject is directed to stop. Typical duration of the test ranges from ten to twenty minutes.
Adults who are beginning a new exercise regimen are often advised to perform this test only in the presence of medical staff due to risks associated with high heart rates. For general purposes, a formula is often employed to estimate a person's maximum heart rate. However, these predictive formulas have been criticized as inaccurate because they are generalized population averages and usually depend only on a person's age. It is well established that there is a "poor relationship between maximal heart rate and age" and that the standard deviations relative to predicted heart rates are large (see Limitations of Estimation Formulas).
A number of formulas are used to estimate HRmax.
Nes et al.
Based on measurements of 3,320 healthy men and women aged between 19 and 89, and accounting for the potential modifying effects of gender, body composition, and physical activity, Nes et al. found:
- HRmax = 211 − (0.64 × age)
This relationship was found to hold substantially regardless of gender, physical activity status, maximal oxygen uptake, smoking, or body mass index. However, a standard error of the estimate of 10.8 beats/min must be accounted for when applying the formula to clinical settings, and the researchers concluded that actual measurement via a maximal test may be preferable whenever possible.
Tanaka, Monahan, & Seals
From Tanaka, Monahan, & Seals (2001):
- HRmax = 208 − (0.7 × age)
Their meta-analysis (of 351 prior studies involving 492 groups and 18,712 subjects) and laboratory study (of 514 healthy subjects) concluded that, using this equation, HRmax was very strongly correlated to age (r = −0.90). The regression equation that was obtained in the laboratory-based study (209 − 0.7 × age) was virtually identical to that of the meta-study. The results showed HRmax to be independent of gender and independent of wide variations in habitual physical activity levels. This study found a standard deviation of ~10 beats per minute for individuals of any age, meaning the HRmax formula given has an accuracy of ±20 beats per minute.
In 2007, researchers at Oakland University analyzed maximum heart rates of 132 individuals recorded yearly over 25 years, and produced a linear equation very similar to the Tanaka formula, HRmax = 206.9 − (0.67 × age), and a nonlinear equation, HRmax = 191.5 − (0.007 × age²). The linear equation had a confidence interval of ±5–8 bpm and the nonlinear equation had a tighter range of ±2–5 bpm. A third nonlinear equation was also produced: HRmax = 163 + (1.16 × age) − (0.018 × age²).
Haskell & Fox
Notwithstanding the research of Tanaka, Monahan, & Seals, the most widely cited formula for HRmax (which contains no reference to any standard deviation) is still:
- HRmax = 220 − age
Although attributed to various sources, it is widely thought to have been devised in 1970 by Dr. William Haskell and Dr. Samuel Fox. Inquiry into the history of this formula reveals that it was not developed from original research, but resulted from observation based on data from approximately 11 references consisting of published research or unpublished scientific compilations. It gained widespread use through being used by Polar Electro in its heart rate monitors, which Dr. Haskell has "laughed about", as the formula "was never supposed to be an absolute guide to rule people's training."
While it is the most common (and easy to remember and calculate), this particular formula is not considered by reputable health and fitness professionals to be a good predictor of HRmax. Despite the widespread publication of this formula, research spanning two decades reveals its large inherent error, Sxy = 7–11 bpm. Consequently, the estimation calculated by HRmax = 220 − age has neither the accuracy nor the scientific merit for use in exercise physiology and related fields.
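To make the practical differences between these linear estimators concrete, the following is a minimal Python sketch that simply evaluates the Fox, Tanaka and Nes formulas quoted above at a few ages; the function names are illustrative only.

```python
# Sketch: evaluating the linear HRmax estimators quoted above.
# Coefficients come from the text; function names are illustrative.

def hrmax_fox(age):
    """Haskell & Fox: HRmax = 220 - age."""
    return 220 - age

def hrmax_tanaka(age):
    """Tanaka, Monahan & Seals (2001): HRmax = 208 - 0.7 * age."""
    return 208 - 0.7 * age

def hrmax_nes(age):
    """Nes et al. (2013): HRmax = 211 - 0.64 * age."""
    return 211 - 0.64 * age

for age in (20, 40, 60):
    print(f"age {age}: Fox {hrmax_fox(age):.0f}, "
          f"Tanaka {hrmax_tanaka(age):.0f}, Nes {hrmax_nes(age):.0f}")
```

Note that all three estimates carry standard errors of roughly 7–11 bpm according to the studies above, so any single predicted value should be read as the centre of a wide band rather than as an individual's true HRmax.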
Robergs & Landwehr
A 2002 study of 43 different formulas for HRmax (including that of Haskell and Fox – see above) published in the Journal of Exercise Physiology concluded that:
- no "acceptable" formula currently existed (they used the term "acceptable" to mean acceptable for both prediction of VO2, and prescription of exercise training HR ranges)
- the least objectionable formula (Inbar et al., 1994) was:
- HRmax = 205.8 − (0.685 × age)
- This had a standard deviation that, although large (6.4 bpm), was considered acceptable for prescribing exercise training HR ranges.
Gulati (for women)
Research conducted at Northwestern University by Martha Gulati, et al., in 2010 suggested a maximum heart rate formula for women:
- HRmax = 206 − (0.88 × age)
Wohlfart, B. and Farazdaghi, G.R.
A 2003 study from Lund, Sweden gives reference values (obtained during bicycle ergometry) for men:
- HRmax = 203.7 / ( 1 + exp( 0.033 × (age − 104.3) ) )
and for women:
- HRmax = 190.2 / ( 1 + exp( 0.0453 × (age − 107.5) ) )
Other commonly cited formulas include:
- HRmax = 206.3 − (0.711 × age)
- HRmax = 217 − (0.85 × age) (often attributed to "Miller et al. from Indiana University")
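As with the linear formulas, the Wohlfart and Farazdaghi logistic reference curves above are easy to evaluate directly; the sketch below simply transcribes the two equations (function names are illustrative, and the usual caveat about individual variation applies).

```python
import math

# Sketch: the Wohlfart & Farazdaghi bicycle-ergometry reference curves quoted above.

def hrmax_wohlfart_men(age):
    """HRmax = 203.7 / (1 + exp(0.033 * (age - 104.3)))"""
    return 203.7 / (1 + math.exp(0.033 * (age - 104.3)))

def hrmax_wohlfart_women(age):
    """HRmax = 190.2 / (1 + exp(0.0453 * (age - 107.5)))"""
    return 190.2 / (1 + math.exp(0.0453 * (age - 107.5)))

print(round(hrmax_wohlfart_men(40)))    # ~182 bpm
print(round(hrmax_wohlfart_women(40)))  # ~182 bpm
```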
Maximum heart rates vary significantly between individuals. Even within a single elite sports team, such as Olympic rowers in their 20s, maximum heart rates have been reported as varying from 160 to 220. Such a variation would equate to a 60 or 90 year age gap in the linear equations above, and would seem to indicate the extreme variation about these average figures.
Figures are generally considered averages, and depend greatly on individual physiology and fitness. For example, an endurance runner's rates will typically be lower due to the increased size of the heart required to support the exercise, while a sprinter's rates will be higher due to the improved response time and short duration. While each may have predicted heart rates of 180 (= 220 − age), these two people could have actual HRmax 20 beats apart (e.g., 170–190).
Further, note that individuals of the same age, the same training, in the same sport, on the same team, can have actual HRmax 60 bpm apart (160–220): the range is extremely broad, and some say "The heart rate is probably the least important variable in comparing athletes."
Heart rate reserve
Heart rate reserve (HRreserve) is the difference between a person's measured or predicted maximum heart rate and resting heart rate. Some methods of measurement of exercise intensity measure percentage of heart rate reserve. Additionally, as a person increases their cardiovascular fitness, their HRrest will drop, and the heart rate reserve will increase. Percentage of HRreserve is equivalent to percentage of VO2 reserve.
- HRreserve = HRmax − HRrest
This is often used to gauge exercise intensity (first used in 1957 by Karvonen).
Karvonen's study findings have been questioned, due to the following:
- The study did not use VO2 data to develop the equation.
- Only six subjects were used, and the correlation between the percentages of HRreserve and VO2 max was not statistically significant.
Target heart rate
For healthy people, the Target Heart Rate or Training Heart Rate (THR) is a desired range of heart rate reached during aerobic exercise which enables one's heart and lungs to receive the most benefit from a workout. This theoretical range varies based mostly on age; however, a person's physical condition, sex, and previous training also are used in the calculation. Below are two ways to calculate one's THR. In each of these methods, there is an element called "intensity" which is expressed as a percentage. The THR can be calculated as a range of 65–85% intensity. However, it is crucial to derive an accurate HRmax to ensure these calculations are meaningful.
Example for someone with a HRmax of 180 (age 40, estimating HRmax as 220 − age):
- 65% Intensity: (220 − (age = 40)) × 0.65 → 117 bpm
- 85% Intensity: (220 − (age = 40)) × 0.85 → 153 bpm
The Karvonen method factors in resting heart rate (HRrest) to calculate target heart rate (THR), using a range of 50–85% intensity:
- THR = ((HRmax − HRrest) × % intensity) + HRrest
- THR = (HRreserve × % intensity) + HRrest
Example for someone with a HRmax of 180 and a HRrest of 70 (and therefore a HRreserve of 110):
- 50% Intensity: ((180 − 70) × 0.50) + 70 = 125 bpm
- 85% Intensity: ((180 − 70) × 0.85) + 70 = 163 bpm
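The two approaches can be expressed directly in code. The following is a minimal Python sketch reproducing the worked examples in the text (HRmax 180 and, for the Karvonen method, HRrest 70); the function names are illustrative only.

```python
# Sketch: target heart rate by percent-of-HRmax and by the Karvonen (HRreserve) method.

def thr_percent_of_max(hr_max, intensity):
    """Simple method: a fixed percentage of HRmax."""
    return hr_max * intensity

def thr_karvonen(hr_max, hr_rest, intensity):
    """Karvonen method: a percentage of heart rate reserve added back to HRrest."""
    hr_reserve = hr_max - hr_rest
    return hr_reserve * intensity + hr_rest

print(thr_percent_of_max(180, 0.65))  # 117.0 bpm (65% of HRmax)
print(thr_karvonen(180, 70, 0.50))    # 125.0 bpm (50% intensity)
print(thr_karvonen(180, 70, 0.85))    # 163.5 bpm (85% intensity, quoted as 163 above)
```

For the same nominal intensity, the Karvonen figure sits higher than the percent-of-HRmax figure because it anchors the range to the resting heart rate rather than to zero.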
An alternative to the Karvonen method is the Zoladz method, which derives exercise zones by subtracting values from HRmax:
- THR = HRmax − Adjuster ± 5 bpm
- Zone 1 Adjuster = 50 bpm
- Zone 2 Adjuster = 40 bpm
- Zone 3 Adjuster = 30 bpm
- Zone 4 Adjuster = 20 bpm
- Zone 5 Adjuster = 10 bpm
Example for someone with a HRmax of 180:
- Zone 1 (easy exercise): 180 − 50 ± 5 → 125–135 bpm
- Zone 4 (tough exercise): 180 − 20 ± 5 → 155–165 bpm
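A small sketch of the Zoladz zone arithmetic follows; the adjusters and the ±5 bpm band are those listed above, and the helper name is illustrative.

```python
# Sketch: Zoladz exercise zones derived by subtracting fixed adjusters from HRmax.
ZOLADZ_ADJUSTERS = {1: 50, 2: 40, 3: 30, 4: 20, 5: 10}

def zoladz_zone(hr_max, zone):
    """Return the (low, high) bpm band for the given Zoladz zone."""
    centre = hr_max - ZOLADZ_ADJUSTERS[zone]
    return centre - 5, centre + 5

print(zoladz_zone(180, 1))  # (125, 135) -- easy exercise
print(zoladz_zone(180, 4))  # (155, 165) -- tough exercise
```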
Heart rate recovery
Heart rate recovery (HRrecovery) is the reduction in heart rate from peak exercise to the rate measured after a cool-down period of fixed duration. A greater reduction in heart rate after exercise during the reference period is associated with a higher level of cardiac fitness.
Heart rates that do not drop by more than 12 bpm one minute after stopping exercise are associated with an increased risk of death. Investigators of the Lipid Research Clinics Prevalence Study, which included 5,000 subjects, found that patients with an abnormal HRrecovery (defined as a decrease of 42 beats per minute or less at two minutes post-exercise) had a mortality rate 2.5 times greater than patients with a normal recovery. Another study, by Nishime et al., followed 9,454 patients for a median period of 5.2 years and found a four-fold increase in mortality in subjects with an abnormal HRrecovery (≤12 bpm reduction one minute after the cessation of exercise). Shetler et al. studied 2,193 patients for thirteen years and found that a HRrecovery of ≤22 bpm after two minutes "best identified high-risk patients". They also found that while HRrecovery had significant prognostic value it had no diagnostic value.
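Purely as an illustration, the recovery cut-offs quoted above can be written as a simple check; the thresholds are those reported by the cited studies, and the function itself is a hypothetical sketch rather than a clinical tool.

```python
# Sketch: flagging abnormal heart rate recovery using cut-offs quoted above.
# Illustrative only; not a diagnostic tool.

def abnormal_hr_recovery(drop_bpm, minutes_after_exercise):
    """Return True if the post-exercise drop in heart rate looks abnormal.

    Cut-offs taken from the studies cited in the text:
    - <= 12 bpm drop at 1 minute after stopping exercise
    - <= 22 bpm drop at 2 minutes after stopping exercise (Shetler et al.)
    """
    if minutes_after_exercise == 1:
        return drop_bpm <= 12
    if minutes_after_exercise == 2:
        return drop_bpm <= 22
    raise ValueError("cut-offs defined here only for 1 or 2 minutes post-exercise")

print(abnormal_hr_recovery(10, 1))  # True  (only 10 bpm drop at 1 minute)
print(abnormal_hr_recovery(30, 2))  # False (30 bpm drop at 2 minutes)
```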
The human heart beats more than 3.5 billion times in an average lifetime.
The heartbeat of a human embryo begins at approximately 21 days after conception, or five weeks after the last normal menstrual period (LMP), which is the date normally used to date pregnancy in the medical community. The electrical depolarizations that trigger cardiac myocytes to contract arise spontaneously within the myocyte itself. The heartbeat is initiated in the pacemaker regions and spreads to the rest of the heart through a conduction pathway. Pacemaker cells develop in the primitive atrium and the sinus venosus to form the sinoatrial node and the atrioventricular node respectively. Conductive cells develop the bundle of His and carry the depolarization into the lower heart.
The human heart begins beating at a rate near the mother’s, about 75–80 beats per minute (bpm). The embryonic heart rate then accelerates linearly for the first month of beating, peaking at 165–185 bpm during the early 7th week, (early 9th week after the LMP). This acceleration is approximately 3.3 bpm per day, or about 10 bpm every three days, an increase of 100 bpm in the first month.
After peaking at about 9.2 weeks after the LMP, it decelerates to about 150 bpm (+/-25 bpm) during the 15th week after the LMP. After the 15th week the deceleration slows reaching an average rate of about 145 (+/-25 bpm) bpm at term. The regression formula which describes this acceleration before the embryo reaches 25 mm in crown-rump length or 9.2 LMP weeks is:
There is no difference in male and female heart rates before birth.
Heart rate is measured by finding the pulse of the heart. This pulse rate can be found at any point on the body where the artery's pulsation is transmitted to the surface by pressing on it with the index and middle fingers; often the artery is compressed against an underlying structure such as bone. (A good area is on the neck, under the corner of the jaw.) The thumb should not be used for measuring another person's heart rate, as its strong pulse may interfere with the correct perception of the target pulse.
The radial artery is the easiest to use to check the heart rate. However, in emergency situations the most reliable arteries for measuring heart rate are the carotid arteries. This is important mainly in patients with atrial fibrillation, in whom heart beats are irregular and stroke volume differs largely from one beat to another. In beats following a shorter diastolic interval, the left ventricle does not fill properly, stroke volume is lower, and the pulse wave is not strong enough to be detected by palpation on a distal artery such as the radial artery. It can be detected, however, by Doppler.
Possible points for measuring the heart rate are:
- The ventral aspect of the wrist on the side of the thumb (radial artery).
- The ulnar artery.
- The neck (carotid artery).
- The inside of the elbow, or under the biceps muscle (brachial artery).
- The groin (femoral artery).
- Behind the medial malleolus on the feet (posterior tibial artery).
- Middle of dorsum of the foot (dorsalis pedis).
- Behind the knee (popliteal artery).
- Over the abdomen (abdominal aorta).
- The chest (apex of the heart), which can be felt with one's hand or fingers. It is also possible to auscultate the heart using a stethoscope.
- The temple (superficial temporal artery).
- The lateral edge of the mandible (facial artery).
- The side of the head near the ear (posterior auricular artery).
A more precise method of determining heart rate involves the use of an electrocardiograph, or ECG (also abbreviated EKG). An ECG generates a pattern based on electrical activity of the heart, which closely follows heart function. Continuous ECG monitoring is routinely done in many clinical settings, especially in critical care medicine. On the ECG, instantaneous heart rate is calculated using the R wave-to-R wave (RR) interval and multiplying/dividing in order to derive heart rate in heartbeats/min. Multiple methods exist:
- HR = 1,500/(RR interval in millimeters)
- HR = 60/(RR interval in seconds)
- HR = 300/number of "large" squares between successive R waves.
- HR = 1,500/number of "small" squares between successive R waves.
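These relationships are straightforward to compute. The sketch below assumes a standard ECG paper speed of 25 mm/s, so one small (1 mm) square corresponds to 0.04 s and one large (5 mm) square to 0.2 s; the function names are illustrative.

```python
# Sketch: instantaneous heart rate from the RR interval, per the relations above.
# Assumes standard ECG paper speed of 25 mm/s (1 small square = 1 mm = 0.04 s).

def hr_from_rr_seconds(rr_seconds):
    return 60.0 / rr_seconds

def hr_from_rr_millimeters(rr_mm):
    return 1500.0 / rr_mm           # same as 60 / (rr_mm * 0.04)

def hr_from_large_squares(n_large):
    return 300.0 / n_large          # one large square = 5 mm = 0.2 s

print(hr_from_rr_seconds(0.8))      # 75.0 bpm
print(hr_from_rr_millimeters(20))   # 75.0 bpm (20 mm = 0.8 s)
print(hr_from_large_squares(4))     # 75.0 bpm (4 large squares = 0.8 s)
```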
Heart rate monitors allow measurements to be taken continuously and can be used during exercise when manual measurement would be difficult or impossible (such as when the hands are being used). Various commercial heart rate monitors are also available. Some monitors, used during sport, consist of a chest strap with electrodes. The signal is transmitted to a wrist receiver for display.
Tachycardia is a resting heart rate of more than 100 beats per minute. This number can vary, as smaller people and children have faster heart rates than average adults.
Physiological conditions where tachycardia occurs:
- Emotional conditions such as anxiety or stress.
Pathological conditions where tachycardia occurs:
- Hypersecretion of catecholamines
- Valvular heart diseases
- Acute Radiation Syndrome
Bradycardia was defined as a heart rate less than 60 beats per minute when textbooks asserted that the normal range for heart rates was 60–100 bpm. The normal range has since been revised in textbooks to 50–90 bpm for a human at total rest. Setting a lower threshold for bradycardia prevents misclassification of fit individuals as having a pathologic heart rate. The normal heart rate number can vary as children and adolescents tend to have faster heart rates than average adults. Bradycardia may be associated with medical conditions such as hypothyroidism.
Trained athletes tend to have slow resting heart rates, and resting bradycardia in athletes should not be considered abnormal if the individual has no symptoms associated with it. For example, Miguel Indurain, a Spanish cyclist and five time Tour de France winner, had a resting heart rate of 28 beats per minute, one of the lowest ever recorded in a healthy human. Daniel Green achieved the world record for the slowest heartbeat in a healthy human with a heart rate of just 26 bpm in 2014.
Arrhythmias are abnormalities of the heart rate and rhythm (sometimes felt as palpitations). They can be divided into two broad categories: fast and slow heart rates. Some cause few or minimal symptoms. Others produce more serious symptoms of lightheadedness, dizziness and fainting.
Correlation with cardiovascular mortality risk
A number of investigations indicate that faster resting heart rate has emerged as a new risk factor for mortality in homeothermic mammals, particularly cardiovascular mortality in human beings. Faster heart rate may accompany increased production of inflammation molecules and increased production of reactive oxygen species in the cardiovascular system, in addition to increased mechanical stress to the heart. There is a correlation between increased resting rate and cardiovascular risk. This is not seen as "using an allotment of heart beats" but rather as an increased risk to the system from the increased rate.
An Australian-led international study of patients with cardiovascular disease has shown that heart rate is a key indicator of the risk of heart attack. The study, published in The Lancet in September 2008, followed 11,000 people across 33 countries who were being treated for heart problems. Those patients whose heart rate was above 70 beats per minute had a significantly higher incidence of heart attacks, hospital admissions and the need for surgery. A higher heart rate was thought to be correlated with an increase in heart attacks and with about a 46 percent increase in hospitalizations for non-fatal or fatal heart attack.
Other studies have shown that a high resting heart rate is associated with an increase in cardiovascular and all-cause mortality in the general population and in patients with chronic disease. A faster resting heart rate is associated with shorter life expectancy and is considered a strong risk factor for heart disease and heart failure, independent of level of physical fitness. Specifically, a resting heart rate above 65 beats per minute has been shown to have a strong independent effect on premature mortality; every 10 beats per minute increase in resting heart rate has been shown to be associated with a 10–20% increase in risk of death. In one study, men with no evidence of heart disease and a resting heart rate of more than 90 beats per minute had a five times higher risk of sudden cardiac death. Similarly, another study found that men with resting heart rates of over 90 beats per minute had an almost two-fold increase in risk for cardiovascular disease mortality; in women it was associated with a three-fold increase.
Given these data, heart rate should be considered in the assessment of cardiovascular risk, even in apparently healthy individuals. Heart rate has many advantages as a clinical parameter: It is inexpensive and quick to measure and is easily understandable. Although the accepted limits of heart rate are between 60 and 100 beats per minute, this was based for convenience on the scale of the squares on electrocardiogram paper; a better definition of normal sinus heart rate may be between 50 and 90 beats per minute.
Standard textbooks of physiology and medicine mention that heart rate (HR) is readily calculated from the ECG as follows:
- HR = 1,500/RR interval in millimeters, HR = 60/RR interval in seconds, or HR = 300/number of large squares between successive R waves. In each case, the authors are actually referring to instantaneous HR, which is the number of times the heart would beat if successive RR intervals were constant. However, because the above formula is almost always mentioned, students determine HR this way without looking at the ECG any further.
Lifestyle and pharmacological regimens may be beneficial to those with high resting heart rates. Exercise is one possible measure to take when an individual's heart rate is higher than 80 beats per minute. Diet has also been found to be beneficial in lowering resting heart rate: in studies of resting heart rate and the risk of death and cardiac complications in patients with type 2 diabetes, legumes were found to lower resting heart rate. This is thought to occur because, in addition to the direct beneficial effects of legumes, they also displace animal proteins in the diet, which are higher in saturated fat and cholesterol.
- "All About Heart Rate (Pulse)". All About Heart Rate (Pulse). American Heart Association. 22 Aug 2017. Retrieved 25 Jan 2018.
- "Tachycardia| Fast Heart Rate". Tachycardia. American Heart Association. 2 May 2013. Retrieved 21 May 2014.
- Fuster, Wayne & O'Rouke 2001, pp. 78–79.
- Schmidt-Nielsen, Knut (1997). Animal physiology: adaptation and environment (5th ed.). Cambridge: Cambridge Univ. Press. p. 104. ISBN 978-0-521-57098-5.
- Aladin, Amer I.; Whelton, Seamus P.; Al-Mallah, Mouaz H.; Blaha, Michael J.; Keteyian, Steven J.; Juraschek, Stephen P.; Rubin, Jonathan; Brawner, Clinton A.; Michos, Erin D. (2014-12-01). "Relation of resting heart rate to risk for all-cause mortality by gender after considering exercise capacity (the Henry Ford exercise testing project)". The American Journal of Cardiology. 114 (11): 1701–06. doi:10.1016/j.amjcard.2014.08.042. ISSN 1879-1913. PMID 25439450.
- Hjalmarson, A.; Gilpin, E. A.; Kjekshus, J.; Schieman, G.; Nicod, P.; Henning, H.; Ross, J. (1990-03-01). "Influence of heart rate on mortality after acute myocardial infarction". The American Journal of Cardiology. 65 (9): 547–53. doi:10.1016/0002-9149(90)91029-6. ISSN 0002-9149. PMID 1968702.
- Mason, Jay W.; Ramseth, Douglas J.; Chanter, Dennis O.; Moon, Thomas E.; Goodman, Daniel B.; Mendzelevski, Boaz (2007-07-01). "Electrocardiographic reference ranges derived from 79,743 ambulatory subjects". Journal of Electrocardiology. 40 (3): 228–34. doi:10.1016/j.jelectrocard.2006.09.003. ISSN 1532-8430. PMID 17276451.
- Spodick, D. H. (1993-08-15). "Survey of selected cardiologists for an operational definition of normal sinus heart rate". The American Journal of Cardiology. 72 (5): 487–88. doi:10.1016/0002-9149(93)91153-9. ISSN 0002-9149. PMID 8352202.
- Anderson JM (1991). "Rehabilitating elderly cardiac patients". West. J. Med. 154 (5): 573–78. PMC 1002834. PMID 1866953.
- Hall, Arthur C. Guyton, John E. (2005). Textbook of medical physiology (11th ed.). Philadelphia: W.B. Saunders. pp. 116–22. ISBN 978-0-7216-0240-0.
- Betts, J. Gordon (2013). Anatomy & physiology. pp. 787–846. ISBN 978-1938168130. Retrieved 11 August 2014.
- Mustonen, Veera; Pantzar, Mika (2013). "Tracking social rhythms of the heart". Approaching Religion. 3 (2): 16–21. doi:10.30664/ar.67512.
- Brosschot, J.F.; Thayer, J.F. (2003). "Heart rate response is longer after negative emotions than after positive emotions". International Journal of Psychophysiology. 50 (3): 181–87. doi:10.1016/s0167-8760(03)00146-6.
- Chou, C.Y.; Marca, R.L.; Steptoe, A.; Brewin, C.R. (2014). "Heart rate, startle response, and intrusive trauma memories". Psychophysiology. 51 (3): 236–46. doi:10.1111/psyp.12176. PMC 4283725. PMID 24397333.
- Sherwood, L. (2008). Human Physiology, From Cells to Systems. p. 327. ISBN 9780495391845. Retrieved 2013-03-10.
- U.S. Department of Health and Human Services – National Institutes of Health: Pulse.
- Berne, Robert; Levy, Matthew; Koeppen, Bruce; Stanton, Bruce (2004). Physiology. Elsevier Mosby. p. 276. ISBN 978-0-8243-0348-8.
- "HRmax (Fitness)". MiMi.
- Atwal S, Porter J, MacDonald P (February 2002). "Cardiovascular effects of strenuous exercise in adult recreational hockey: the Hockey Heart Study". CMAJ. 166 (3): 303–07. PMC 99308. PMID 11868637.
- Froelicher, Victor; Myers, Jonathan (2006). Exercise and the Heart (fifth ed.). Philadelphia: Elsevier. pp. ix, 108–12. ISBN 978-1-4160-0311-3.
- Nes, B.M.; Janszky, I.; Wisloff, U.; Stoylen, A.; Karlsen, T. (December 2013). "Age‐predicted maximal heart rate in healthy subjects: The HUNT Fitness Study". Scandinavian Journal of Medicine & Science in Sports. 23 (6): 697–704. doi:10.1111/j.1600-0838.2012.01445.x. PMID 22376273.
- Tanaka H, Monahan KD, Seals DR (January 2001). "Age-predicted maximal heart rate revisited". J. Am. Coll. Cardiol. 37 (1): 153–56. doi:10.1016/S0735-1097(00)01054-8. PMID 11153730.
- Gellish RL, Goslin BR, Olson RE, McDonald A, Russi GD, Moudgil VK (2007). "Longitudinal modeling of the relationship between age and maximal heart rate". Med Sci Sports Exerc. 39 (5): 822–29. doi:10.1097/mss.0b013e31803349c6. PMID 17468581.
- Kolata, Gina (2001-04-24). "'Maximum' Heart Rate Theory Is Challenged". New York Times.
- Robergs R, Landwehr R (2002). "The Surprising History of the 'HRmax=220-age' Equation" (PDF). Journal of Exercise Physiology. 5 (2): 1–10.
- Inbar, O. Oten, A., Scheinowitz, M., Rotstein, A., Dlin, R. and Casaburi, R. "Normal cardiopulmonary responses during incremental exercise in 20-70-yr-old men." Med Sci Sport Exerc 1994;26(5):538-546.
- Gulati M, Shaw LJ, Thisted RA, Black HR, Bairey Merz CN, Arnsdorf MF (2010). "Heart rate response to exercise stress testing in asymptomatic women: the st. James women take heart project". Circulation. 122 (2): 130–37. doi:10.1161/CIRCULATIONAHA.110.939249. PMID 20585008.
- Wohlfart B, Farazdaghi GR (May 2003). "Reference values for the physical work capacity on a bicycle ergometer for men -- a comparison with a previous study on women". Clin Physiol Funct Imaging. 23 (3): 166–70. doi:10.1046/j.1475-097X.2003.00491.x. PMID 12752560.
- Farazdaghi GR, Wohlfart B (November 2001). "Reference values for the physical work capacity on a bicycle ergometer for women between 20 and 80 years of age". Clin Physiol. 21 (6): 682–87. doi:10.1046/j.1365-2281.2001.00373.x. PMID 11722475.
- Lounana J, Campion F, Noakes TD, Medelli J (2007). "Relationship between %HRmax, %HR reserve, %VO2max, and %VO2 reserve in elite cyclists". Med Sci Sports Exerc. 39 (2): 350–57. doi:10.1249/01.mss.0000246996.63976.5f. PMID 17277600.
- Karvonen MJ, Kentala E, Mustala O (1957). "The effects of training on heart rate; a longitudinal study". Ann Med Exp Biol Fenn. 35 (3): 307–15. PMID 13470504.
- Swain DP, Leutholtz BC, King ME, Haas LA, Branch JD (1998). "Relationship between % heart rate reserve and % VO2 reserve in treadmill exercise". Med Sci Sports Exerc. 30 (2): 318–21. doi:10.1097/00005768-199802000-00022. PMID 9502363.
- Karvonen J, Vuorimaa T (May 1988). "Heart rate and exercise intensity during sports activities. Practical application". Sports Medicine. 5 (5): 303–11. doi:10.2165/00007256-198805050-00002. PMID 3387734.
- Cole CR, Blackstone EH, Pashkow FJ, Snader CE, Lauer MS (1999). "Heart-rate recovery immediately after exercise as a predictor of mortality". N. Engl. J. Med. 341 (18): 1351–57. doi:10.1056/NEJM199910283411804. PMID 10536127.
- Froelicher, Victor; Myers, Jonathan (2006). Exercise and the Heart (fifth ed.). Philadelphia: Elsevier. p. 114. ISBN 978-1-4160-0311-3.
- OBGYN.net "Embryonic Heart Rates Compared in Assisted and Non-Assisted Pregnancies" Archived 2006-06-30 at the Wayback Machine
- Terry J. DuBose Sex, Heart Rate and Age Archived 2012-06-15 at the Wayback Machine
- Fuster, Wayne & O'Rouke 2001, pp. 824–29.
- Regulation of Human Heart Rate. Serendip. Retrieved on June 27, 2007.
- Salerno DM, Zanetti J (1991). "Seismocardiography for monitoring changes in left ventricular function during ischemia". Chest. 100 (4): 991–93. doi:10.1378/chest.100.4.991.
- Guinness World Records 2004 (Bantam ed.). New York: Bantam Books. 2004. pp. 10–11. ISBN 978-0-553-58712-8.
- "Slowest heart rate: Daniel Green breaks Guinness World Records record". World Record Academy. 29 November 2014.
- Zhang GQ, Zhang W (2009). "Heart rate, lifespan, and mortality risk". Ageing Res. Rev. 8 (1): 52–60. doi:10.1016/j.arr.2008.10.001. PMID 19022405.
- Fox K, Ford I (2008). "Heart rate as a prognostic risk factor in patients with coronary artery disease and left-ventricular systolic dysfunction (BEAUTIFUL): a subgroup analysis of a randomised controlled trial". Lancet. 372 (6): 817–21. doi:10.1016/S0140-6736(08)61171-X. PMID 18757091.
- Cook, Stéphane; Hess, Otto M. (2010-03-01). "Resting heart rate and cardiovascular events: time for a new crusade?". European Heart Journal. 31 (5): 517–19. doi:10.1093/eurheartj/ehp484. ISSN 1522-9645. PMID 19933283.
- Cooney, Marie Therese; Vartiainen, Erkki; Laatikainen, Tiina; Laakitainen, Tinna; Juolevi, Anne; Dudina, Alexandra; Graham, Ian M. (2010-04-01). "Elevated resting heart rate is an independent risk factor for cardiovascular disease in healthy men and women". American Heart Journal. 159 (4): 612–19.e3. doi:10.1016/j.ahj.2009.12.029. ISSN 1097-6744. PMID 20362720.
- Teodorescu, Carmen; Reinier, Kyndaron; Uy-Evanado, Audrey; Gunson, Karen; Jui, Jonathan; Chugh, Sumeet S. (2013-08-01). "Resting heart rate and risk of sudden cardiac death in the general population: influence of left ventricular systolic dysfunction and heart rate-modulating drugs". Heart Rhythm. 10 (8): 1153–58. doi:10.1016/j.hrthm.2013.05.009. ISSN 1556-3871. PMC 3765077. PMID 23680897.
- Jensen, Magnus Thorsten; Suadicani, Poul; Hein, Hans Ole; Gyntelberg, Finn (2013-06-01). "Elevated resting heart rate, physical fitness and all-cause mortality: a 16-year follow-up in the Copenhagen Male Study". Heart. 99 (12): 882–87. doi:10.1136/heartjnl-2012-303375. ISSN 1468-201X. PMC 3664385. PMID 23595657.
- Woodward, Mark; Webster, Ruth; Murakami, Yoshitaka; Barzi, Federica; Lam, Tai-Hing; Fang, Xianghua; Suh, Il; Batty, G. David; Huxley, Rachel (2014-06-01). "The association between resting heart rate, cardiovascular disease and mortality: evidence from 112,680 men and women in 12 cohorts". European Journal of Preventive Cardiology. 21 (6): 719–26. doi:10.1177/2047487312452501. ISSN 2047-4881. PMID 22718796.
- Arnold, J. Malcolm; Fitchett, David H.; Howlett, Jonathan G.; Lonn, Eva M.; Tardif, Jean-Claude (2008-05-01). "Resting heart rate: a modifiable prognostic indicator of cardiovascular risk and outcomes?". The Canadian Journal of Cardiology. 24 Suppl A: 3A–8A. doi:10.1016/s0828-282x(08)71019-5. ISSN 1916-7075. PMC 2787005. PMID 18437251.
- Nauman, Javaid (2012-06-12). "Why measure resting heart rate?". Tidsskrift for den Norske Lægeforening: Tidsskrift for Praktisk Medicin, Ny Række. 132 (11): 1314. doi:10.4045/tidsskr.12.0553. ISSN 0807-7096. PMID 22717845.
- Spodick, DH (1992). "Operational definition of normal sinus heart rate". Am J Cardiol. 69 (14): 1245–46. doi:10.1016/0002-9149(92)90947-W.
- Sloan, Richard P.; Shapiro, Peter A.; DeMeersman, Ronald E.; Bagiella, Emilia; Brondolo, Elizabeth N.; McKinley, Paula S.; Slavov, Iordan; Fang, Yixin; Myers, Michael M. (2009-05-01). "The effect of aerobic training and cardiac autonomic regulation in young adults". American Journal of Public Health. 99 (5): 921–28. doi:10.2105/AJPH.2007.133165. ISSN 1541-0048. PMC 2667843. PMID 19299682.
- Jenkins, David J. A.; Kendall, Cyril W. C.; Augustin, Livia S. A.; Mitchell, Sandra; Sahye-Pudaruth, Sandhya; Blanco Mejia, Sonia; Chiavaroli, Laura; Mirrahimi, Arash; Ireland, Christopher (2012-11-26). "Effect of legumes as part of a low glycemic index diet on glycemic control and cardiovascular risk factors in type 2 diabetes mellitus: a randomized controlled trial". Archives of Internal Medicine. 172 (21): 1653–60. doi:10.1001/2013.jamainternmed.70. ISSN 1538-3679. PMID 23089999.
- "Atrioventricular Block: Practice Essentials, Background, Pathophysiology". Medscape Reference. 2 July 2018.
- This article incorporates text from the CC-BY book: OpenStax College, Anatomy & Physiology. OpenStax CNX. 30 Jul 2014.
Probability is the measure of the likelihood that an event will occur (see also the glossary of probability and statistics). Probability is quantified as a number between 0 and 1, where, loosely speaking, 0 indicates impossibility and 1 indicates certainty; the higher the probability of an event, the more likely it is that the event will occur. A simple example is the tossing of a fair coin: since the coin is fair, the two outcomes are equally probable. These concepts have been given an axiomatic mathematical formalization in probability theory, which is used in such areas of study as mathematics, finance, science, artificial intelligence/machine learning, computer science, game theory, and philosophy to, for example, draw inferences about the expected frequency of events. Probability theory is also used to describe the underlying mechanics and regularities of complex systems. When dealing with experiments that are random and well-defined in a purely theoretical setting, probabilities can be numerically described by the number of desired outcomes divided by the total number of all outcomes.
For example, tossing a fair coin twice will yield "head-head", "head-tail", "tail-head", "tail-tail" outcomes. The probability of getting an outcome of "head-head" is 1 out of 4 outcomes, or, in numerical terms, 1/4, 0.25 or 25%. However, when it comes to practical application, there are two major competing categories of probability interpretations, whose adherents possess different views about the fundamental nature of probability: Objectivists assign numbers to describe some objective or physical state of affairs; the most popular version of objective probability is frequentist probability, which claims that the probability of a random event denotes the relative frequency of occurrence of an experiment's outcome, when repeating the experiment. This interpretation considers probability to be the relative frequency "in the long run" of outcomes. A modification of this is propensity probability, which interprets probability as the tendency of some experiment to yield a certain outcome if it is performed only once.
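The two-toss example can be made explicit by enumerating the sample space; the following minimal Python sketch counts the favourable outcomes directly.

```python
from itertools import product

# Sketch: enumerate the sample space of two fair coin tosses and count outcomes.
outcomes = list(product("HT", repeat=2))        # HH, HT, TH, TT
favourable = [o for o in outcomes if o == ("H", "H")]

print(len(favourable) / len(outcomes))          # 0.25, i.e. a probability of 1/4
```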
Subjectivists assign numbers according to subjective probability, that is, as a degree of belief. The degree of belief has been interpreted as "the price at which you would buy or sell a bet that pays 1 unit of utility if E, 0 if not E." The most popular version of subjective probability is Bayesian probability, which includes expert knowledge as well as experimental data to produce probabilities. The expert knowledge is represented by some prior probability distribution, and the data are incorporated in a likelihood function. The product of the prior and the likelihood results in a posterior probability distribution that incorporates all the information known to date. By Aumann's agreement theorem, Bayesian agents whose prior beliefs are similar will end up with similar posterior beliefs. However, sufficiently different priors can lead to different conclusions regardless of how much information the agents share. The word probability derives from the Latin probabilitas, which can mean "probity", a measure of the authority of a witness in a legal case in Europe, and correlated with the witness's nobility.
In a sense, this differs much from the modern meaning of probability, which, in contrast, is a measure of the weight of empirical evidence and is arrived at from inductive reasoning and statistical inference. The scientific study of probability is a modern development of mathematics. Gambling shows that there has been an interest in quantifying the ideas of probability for millennia, but exact mathematical descriptions arose much later. There are reasons for the slow development of the mathematics of probability: whereas games of chance provided the impetus for the mathematical study of probability, fundamental issues are still obscured by the superstitions of gamblers. According to Richard Jeffrey, "Before the middle of the seventeenth century, the term 'probable' meant approvable, and was applied in that sense, unequivocally, to opinion and to action. A probable action or opinion was one such as sensible people would undertake or hold, in the circumstances." However, in legal contexts especially, 'probable' could apply to propositions for which there was good evidence.
The sixteenth-century Italian polymath Gerolamo Cardano demonstrated the efficacy of defining odds as the ratio of favourable to unfavourable outcomes. Aside from the elementary work by Cardano, the doctrine of probabilities dates to the correspondence of Pierre de Fermat and Blaise Pascal. Christiaan Huygens gave the earliest known scientific treatment of the subject. Jakob Bernoulli's Ars Conjectandi and Abraham de Moivre's Doctrine of Chances treated the subject as a branch of mathematics. See Ian Hacking's The Emergence of Probability and James Franklin's The Science of Conjecture for histories of the early development of the concept of mathematical probability. The theory of errors may be traced back to Roger Cotes's Opera Miscellanea, but a memoir prepared by Thomas Simpson in 1755 first applied the theory to the discussion of errors of observation. The reprint of this memoir lays down the axioms that positive and negative errors are equally probable, and that certain assignable limits define the range of all errors.
Simpson discusses c
God in Christianity
God in Christianity is the eternal being who created and preserves all things. Christians believe God to be both transcendent and immanent. Christian teachings of the immanence and involvement of God and his love for humanity exclude the belief that God is of the same substance as the created universe, but accept that God's divine nature was hypostatically united to human nature in the person of Jesus Christ, in an event known as the Incarnation. Early Christian views of God were expressed in the Pauline Epistles and the early creeds, which proclaimed one God and the divinity of Jesus in the same breath, as in 1 Corinthians: "For if there are so-called gods, whether in heaven or on earth, yet for us there is but one God, the Father, from whom all things came and for whom we live." Although the Judeo-Christian sect of the Ebionites protested against this apotheosis of Jesus, the great mass of Gentile Christians accepted it. This began to differentiate the Gentile Christian views of God from traditional Jewish teachings of the time.
The theology of the attributes and nature of God has been discussed since the earliest days of Christianity, with Irenaeus writing in the 2nd century: "His greatness lacks nothing, but contains all things". In the 8th century, John of Damascus listed eighteen attributes which remain accepted; as time passed, theologians developed systematic lists of these attributes, some based on statements in the Bible, others based on theological reasoning. The Kingdom of God is a prominent phrase in the Synoptic Gospels and while there is near unanimous agreement among scholars that it represents a key element of the teachings of Jesus, there is little scholarly agreement on its exact interpretation. Although the New Testament does not have a formal doctrine of the Trinity as such, "it does speak of the Father, the Son, the Holy Spirit... in such a way as to compel a Trinitarian understanding of God." This never becomes a tritheism. Around the year 200, Tertullian formulated a version of the doctrine of the Trinity which affirmed the divinity of Jesus and came close to the definitive form produced by the Ecumenical Council of 381.
The doctrine of the Trinity can be summed up as: "The One God exists in Three Persons and One Substance, as God the Father, God the Son and God the Holy Spirit." Trinitarians, who form the large majority of Christians, hold it as a core tenet of their faith. Nontrinitarian denominations define the Father, the Son, and the Holy Spirit in a number of different ways. Early Christian views of God are reflected in Apostle Paul's statement in 1 Corinthians, written ca. AD 53–54, i.e. about twenty years after the crucifixion of Jesus: "for us there is but one God, the Father, from whom all things came and for whom we live." Apart from asserting that there is but one God, Paul's statement includes a number of other significant elements: he distinguishes Christian belief from the Jewish background of the time by referring to Jesus and the Father in the same breath, and by conferring on Jesus the title of divine honor "Lord", as well as calling him Christ. In the Acts, during the Areopagus sermon given by Paul, he further characterizes the early Christian understanding: "The God that made the world and all things therein, he, being Lord of heaven and earth", and reflects on the relationship between God and Christians: "that they should seek God, if haply they might feel after him and find him, though he is not far from each one of us: for in him we live."
The Pauline Epistles include a number of references to the Holy Spirit, with the theme which appears in 1 Thessalonians "…God, the God who gives you his Holy Spirit" appearing throughout his epistles. In John 14:26 Jesus refers to "the Holy Spirit, whom the Father will send in my name". By the end of the 1st century, Clement of Rome had referred to the Father and Holy Spirit, linked the Father to creation, 1 Clement 19.2 stating: "let us look steadfastly to the Father and creator of the universe". By the middle of the 2nd century, in Against Heresies Irenaeus had emphasized that the Creator is the "one and only God" and the "maker of heaven and earth"; these preceded the formal presentation of the concept of Trinity by Tertullian early in the 3rd century. The period from the late 2nd century to the beginning of the 4th century is called the "epoch of the Great Church" and the Ante-Nicene Period and witnessed significant theological development, the consolidation and formalization of a number of Christian teachings.
From the 2nd century onward, western creeds started with an affirmation of belief in "God the Father" and the primary reference of this phrase was to "God in his capacity as Father and creator of the universe". This did not exclude either the fact the "eternal father of the universe was the Father of Jesus the Christ" or that he had "vouchsafed to adopt as his son by grace". Eastern creeds began with an affirmation of faith in "one God" and always expanded this by adding "the Father Almighty, Maker of all things visible and invisible" or words to that effect; as time passed and philosophers developed more precise understandin
Aristotelian theology and the scholastic view of God have been influential in Western intellectual history. In his first philosophy called the Metaphysics, Aristotle discusses the meaning of being as being, he refers to the unmoved movers, assigns one to each movement in the heavens and tasks future astronomers with correlating the estimated 47 to 55 motions of the Eudoxan planetary model with the most current and accurate observations. According to Aristotle, each unmoved mover continuously contemplates its own contemplation, thus captivated, their tireless performance is the result of their own desire. This is one way, they must have no sensory perception whatsoever on account of Aristotle's theory of cognition: were any form of sense perception to intrude upon their thoughts, in that instant they would cease to be themselves, because actual self-reflection is their singular essence, their whole being. Like the heavenly bodies in their unadorned pursuit, so the wise look, with affection, toward the star.
In the Metaphysics, Aristotle discusses actuality and potentiality. The former is perfection, fullness of being; the latter is the determinable principle. The unmoved movers are actual, Actus Purus, because they are unchanging, immaterial substance. All material beings have some potentiality. The Physics introduces matter and form and the four causes: material, formal, efficient, and final. For example, to explain a statue, one can offer: the material cause, that out of which the statue is made, is the marble or bronze; the formal cause, that according to which the statue is made, is the shape that the sculptor has learned to sculpt; the efficient cause, or agent, is the sculptor; the final cause is that for the sake of which the statue is made. Contrary to the so-called "traditional" view of prime matter, Aristotle asserts that there can be no pure potentiality without any actuality whatsoever. All material substances have unactualized potentials. Aristotle argues that, although motion is eternal, there cannot be an infinite series of movers and of things moved.
Therefore, there must be some, who are not the first in such a series, that inspire the eternal motion without themselves being moved "as the soul is moved by beauty". Because the planetary spheres each move unfalteringly for all eternity in uniform circular motion with a given rotational period relative to the supreme diurnal motion of the sphere of fixed stars, they must each love and desire to mimic different unmoved movers corresponding to the given periods; because they eternally inspire uniform motion in the celestial spheres, the unmoved movers must themselves be eternal and unchanging. Because they are eternal, they have had an infinite amount of time in which to actualize any potentialities and therefore cannot be a composition of matter and form, or potentiality and actuality, they must always be actual, thus immaterial, because at all times in history they have existed an infinite amount of time, things that do not come to fruition given unlimited opportunities to do so cannot do so.
The life of the unmoved mover is self-contemplative thought. According to Aristotle, the gods cannot be distracted from this eternal self-contemplation because, in that instant, they would cease to exist. John Burnet noted that the Neoplatonists were quite justified in regarding themselves as the spiritual heirs of Pythagoras, and that this tendency was at work all along. Aristotle might seem to be an exception. In later days, Apollonios of Tyana showed in practice what this sort of thing must lead to; the theurgy and thaumaturgy of the late Greek schools were only the fruit of the seed sown by the generation which preceded the Persian War. Aristotle's principles of being influenced Anselm's view of God, whom he called "that than which nothing greater can be conceived." Anselm thought that God did not feel emotions such as anger or love, but appeared to do so through our imperfect understanding. The incongruity of judging "being" against something that might not exist may have led Anselm to his famous ontological argument for God's existence.
Many medieval philosophers made use of the idea of approaching a knowledge of God through negative attributes. For example, we should not say that God exists in the usual sense of the term, all we can safely say is that God is not nonexistent. We should not say that God is wise. We should not say that God is One, but we can stat
In monotheistic thought, God is conceived of as the supreme being, creator deity, and principal object of faith. The conceptions of God, as described by theologians, commonly include the attributes of omniscience and omnipresence, and of having an eternal and necessary existence. Depending on one's kind of theism, these attributes are used either by way of analogy or in a literal sense as distinct properties. God is most often held to be incorporeal. Incorporeality and corporeality of God are related to conceptions of transcendence and immanence of God, with positions of synthesis such as "immanent transcendence". The psychoanalyst Carl Jung equated religious ideas of God with transcendental aspects of consciousness in his interpretation. Some religions describe God without reference to gender, while others or their translations use sex-specific terminology. Judaism attributes only a grammatical gender to God, using terms such as "Him" or "Father" for convenience. God has been conceived as either personal or impersonal. In theism, God is the creator and sustainer of the universe, while in deism, God is the creator, but not the sustainer, of the universe.
In pantheism, God is the universe itself. In atheism, there is an absence of belief in God. In agnosticism, the existence of God is deemed unknowable. God has been conceived as the source of all moral obligation, the "greatest conceivable existent". Many notable philosophers have developed arguments against the existence of God. Monotheists refer to their gods using names prescribed by their respective religions, with some of these names referring to certain cultural ideas about their god's identity and attributes. In the ancient Egyptian era of Atenism, the earliest recorded monotheistic religion, this deity was called Aten, premised on being the one "true" Supreme Being and creator of the universe. In the Hebrew Bible and Judaism, Adonai, YHWH and other names are used as the names of God. Yahweh and Jehovah, possible vocalizations of YHWH, are used in Christianity. In the Christian doctrine of the Trinity, one God coexists in three "persons", called the Father, the Son, and the Holy Spirit. In Islam, the name Allah is used, while Muslims have a multitude of titular names for God.
In Hinduism, Brahman is considered a monistic concept of God. In Chinese religion, Shangdi is conceived as the progenitor of the universe, intrinsic to it and bringing order to it. Other religions have names for the concept, for instance, Baha in the Bahá'í Faith, Waheguru in Sikhism, Sang Hyang Widhi Wasa in Balinese Hinduism, Ahura Mazda in Zoroastrianism; the many different conceptions of God, competing claims as to God's characteristics and actions, have led to the development of ideas of omnitheism, pandeism, or a perennial philosophy, which postulates that there is one underlying theological truth, of which all religions express a partial understanding, as to which "the devout in the various great world religions are in fact worshipping that one God, but through different, overlapping concepts". The earliest written form of the Germanic word God comes from the 6th-century Christian Codex Argenteus; the English word itself is derived from the Proto-Germanic * ǥuđan. The reconstructed Proto-Indo-European form * ǵhu-tó-m was based on the root * ǵhau-, which meant either "to call" or "to invoke".
The Germanic words for God were neuter (applying to both genders), but during the process of the Christianization of the Germanic peoples from their indigenous Germanic paganism, the words became a masculine syntactic form. In the English language, capitalization is used for names by which a god is known, including 'God'. The capitalized form God is not used for multiple gods or when referring to the generic idea of a deity. The English word God and its counterparts in other languages are used for any and all conceptions and, in spite of significant differences between religions, the term remains an English translation common to all. The same holds for Hebrew El, but in Judaism, God is given a proper name, the tetragrammaton YHWH, in origin the name of an Edomite or Midianite deity, Yahweh. In many translations of the Bible, when the word LORD is in all capitals, it signifies that the word represents the tetragrammaton. Allāh is the Arabic term with no plural used by Muslims and Arabic-speaking Christians and Jews meaning "The God", while "ʾilāh" is the term used for a deity or a god in general.
God may be given a proper name in monotheistic currents of Hinduism which emphasize the personal nature of God, with early references to his name as Krishna-Vasudeva in Bhagavata or Vishnu and Hari. Ahura Mazda is the name for God used in Zoroastrianism. "Mazda", or rather the Avestan stem-form Mazdā-, nominative Mazdå, reflects Proto-Iranian *Mazdāh. It is taken to be the proper name of the spirit, like its Sanskrit cognate medhā, means "intelligence" or "wisdom". Both the Avestan and Sanskrit words reflect Proto-Indo-Iranian *mazdhā-, from Proto-Indo-European mn̩sdʰeh1 meaning "placing one's mind", hence "wise". Waheguru is a term most used in Sikhism to refer to God, it means "Wonderful Teacher" in the Punjabi language. Vāhi means "wonderful" and guru is a term denoting "teacher". Waheguru is described by some as an experience of ecstasy, beyond all descriptions; the most common usage of the word "Waheguru" is in the greeting Sikhs use with each other: Baha, the "greates
God in Judaism
In Judaism, God has been conceived in a variety of ways. Traditionally, Judaism holds that YHWH, the God of Abraham and Jacob and the national god of the Israelites, delivered the Israelites from slavery in Egypt, gave them the Law of Moses at biblical Mount Sinai as described in the Torah. According to the rationalist stream of Judaism articulated by Maimonides, which came to dominate much of official traditional Jewish thought, God is understood as the absolute one and incomparable being, the ultimate cause of all existence. Traditional interpretations of Judaism emphasize that God is personal yet transcendent, while some modern interpretations of Judaism emphasize that God is a force or ideal; the names of God used most in the Hebrew Bible are the Tetragrammaton and Elohim. Other names of God in traditional Judaism include El Shekhinah; the name of God used most in the Hebrew Bible is the Tetragrammaton. Jews traditionally do not pronounce it, instead refer to God as HaShem "the Name". In prayer the Tetragrammaton is substituted with the pronunciation Adonai, meaning "My Master".
The national god of the Iron Age kingdoms of Israel and Judah was Yahweh. The precise origins of this god are disputed, although they reach back to the early Iron Age and the Late Bronze Age; the name may have begun as an epithet of El, head of the Bronze Age Canaanite pantheon, but earlier mentions are in Egyptian texts that place God among the nomads of the southern Transjordan. After evolving from its monolatristic roots, Judaism became monotheistic. No consensus has been reached by academics on the origins of monotheism in ancient Israel, but Yahweh "clearly came out of the world of the gods of the Ancient Near East." The worship of multiple gods and the concept of God having multiple persons are unimaginable in Judaism. The idea of God as a duality or trinity is heretical in Judaism – it is considered akin to polytheism. God, the Cause of all, is one; this does not mean one as in one of a series, nor one like a species, nor one as in an object made up of many elements, nor as a single simple object that is infinitely divisible.
Rather, God is a unity unlike any other possible unity. Since, according to the mystical conception, all of existence emanates from God, whose ultimate existence is not dependent on anything else, some Jewish sages perceived God as interpenetrating the universe, which itself has been thought to be a manifestation of God's existence. According to this line of theological speculation, Judaism can be regarded as being compatible with panentheism, while always affirming genuine monotheism. Kabbalistic tradition takes this line of thought further; it has been described as a strand of Judaism which may seem at odds with Jewish commitments to strict monotheism, but Kabbalists have emphasized that their traditions are monotheistic. Any belief that an intermediary between humanity and God could be used, whether necessary or optional, has traditionally been considered heretical. Maimonides writes that God is the only one we may serve and praise.... We may not act in this way toward anything beneath God, whether it be an angel, a star, or one of the elements.....
There are no intermediaries between God and humanity. All our prayers should be directed towards God; some rabbinic authorities disagreed with this view. Notably, Nachmanides was of the opinion that it is permitted to ask the angels to beseech God on our behalf; this argument manifests notably in the Selichot prayer called "Machnisay Rachamim", a request to the angels to intercede with God. Godhead refers to the substratum of God that lies behind God's actions or properties. In the philosophy of Maimonides and other Jewish-rationalistic philosophers, there is little which can be known about the Godhead other than its existence, and even this can only be asserted equivocally. How can a relation be represented between God and what is other than God when there is no notion comprising in any respect both of the two, inasmuch as existence is, in our opinion, affirmed of God, may God be exalted, and of what is other than God by way of absolute equivocation? There is, in truth, no relation in any respect between God and any of God's creatures. In Kabbalistic thought, the term "Godhead" refers to the concept of Ein Sof, the aspect of God that lies beyond the emanations.
The "knowability" of the Godhead in Kabbalistic thought is no better that what is conceived by rationalist thinkers. As Jacobs puts it, "Of God as God is in Godself—Ein Sof—nothing can be said at all, no thought can reach there". Ein Sof is a place to and oblivion pertain. Why? Because concerning all the sefirot, one can search out their reality from the depth of supernal wisdom. From there it is possible to understand one thing from another. However, concerning Ein Sof, there is no aspect anywhere to probe. In modern articulations of traditional Judaism, God has been speculated to be the eternal and omniscient creator of the universe, the source of morality. God has the power to intervene in the world. Maimonides describes God in this fashion: "The foundation of all foundations and the pillar of wisdom is to know that there is a Primary Being who brought into being all existence. All the beings of the heavens, the earth, what is betwe
In natural theology and philosophy, a cosmological argument is an argument in which the existence of a unique being, generally seen as some kind of god, is deduced or inferred from facts or alleged facts concerning causation, motion, contingency, or finitude in respect of the universe as a whole or processes within it. It is traditionally known as an argument from universal causation, an argument from first cause, or the causal argument, and is more precisely a cosmogonical argument. Whichever term is employed, there are three basic variants of the argument, each with subtle yet important distinctions: the arguments in causa, in esse, and in fieri; the basic premise of all of these is the concept of causality. The conclusion of these arguments is that a first cause exists, subsequently deemed to be God; the history of this argument goes back to Aristotle or earlier, was developed in Neoplatonism and early Christianity and later in medieval Islamic theology during the 9th to 12th centuries, and was re-introduced to medieval Christian theology in the 13th century by Thomas Aquinas.
The cosmological argument is related to the principle of sufficient reason as addressed by Gottfried Leibniz and Samuel Clarke, itself a modern exposition of the claim that "nothing comes from nothing", attributed to Parmenides. Contemporary defenders of cosmological arguments include William Lane Craig, Robert Koons, Alexander Pruss, and William L. Rowe. Plato and Aristotle both posited first cause arguments. In The Laws, Plato posited that all movement in the world and the Cosmos was "imparted motion"; this required a "self-originated motion" to maintain it. In Timaeus, Plato posited a "demiurge" of supreme wisdom and intelligence as the creator of the Cosmos. Aristotle argued against the idea of a first cause, often confused with the idea of a "prime mover" or "unmoved mover", in his Physics and Metaphysics. Aristotle argued in favor of the idea of several unmoved movers, one powering each celestial sphere, which he believed lived beyond the sphere of the fixed stars, and explained why motion in the universe had continued for an infinite period of time.
Aristotle argued the atomist's assertion of a non-eternal universe would require a first uncaused cause – in his terminology, an efficient first cause – an idea he considered a nonsensical flaw in the reasoning of the atomists. Like Plato, Aristotle believed in an eternal cosmos with no end. In what he called "first philosophy" or metaphysics, Aristotle did intend a theological correspondence between the prime mover and deity. According to his theses, immaterial unmoved movers are eternal unchangeable beings that think about thinking, but being immaterial, they are incapable of interacting with the cosmos and have no knowledge of what transpires therein. From an "aspiration or desire", the celestial spheres, imitate that purely intellectual activity as best they can, by uniform circular motion; the unmoved movers inspiring the planetary spheres are no different in kind from the prime mover, they suffer a dependency of relation to the prime mover. Correspondingly, the motions of the planets are subordinate to the motion inspired by the prime mover in the sphere of fixed stars.
Aristotle's natural theology admitted no creation or capriciousness from the immortal pantheon, but maintained a defense against dangerous charges of impiety. Plotinus, a third-century Platonist, taught that the One transcendent absolute caused the universe to exist as a consequence of its existence; his disciple Proclus stated "The One is God". Centuries later, the Islamic philosopher Avicenna inquired into the question of being, in which he distinguished between essence and existence. He argued that the fact of existence could not be inferred from or accounted for by the essence of existing things, and that form and matter by themselves could not originate and interact with the movement of the Universe or the progressive actualization of existing things. Thus, he reasoned that existence must be due to an agent cause that necessitates, gives, or adds existence to an essence. To do so, the cause must be an existing thing. Steven Duncan writes that it "was first formulated by a Greek-speaking Syriac Christian neo-Platonist, John Philoponus, who claims to find a contradiction between the Greek pagan insistence on the eternity of the world and the Aristotelian rejection of the existence of any actual infinite".
Referring to the argument as the "'Kalam' cosmological argument", Duncan asserts that it "received its fullest articulation at the hands of Muslim and Jewish exponents of Kalam". Thomas Aquinas adapted and enhanced the argument he found in his reading of Aristotle and Avicenna to form one of the most influential versions of the cosmological argument. His conception of First Cause was the idea that the Universe must be caused by something that is itself uncaused, which he claimed is that which we call God: "The second way is from the nature of the efficient cause. In the world of sense we find there is an order of efficient causes. There is no case known (neither is it, indeed, possible) in which a thing is found to be the efficient cause of itself; for so it would be prior to itself, which is impossible."
In the Platonic, Middle Platonic, Neoplatonic schools of philosophy, the demiurge is an artisan-like figure responsible for fashioning and maintaining the physical universe. The Gnostics adopted the term "demiurge". Although a fashioner, the demiurge is not the same as the creator figure in the monotheistic sense, because the demiurge itself and the material from which the demiurge fashions the universe are both considered to be consequences of something else. Depending on the system, they may be considered to be either uncreated and eternal or the product of some other entity; the word "demiurge" is an English word derived from demiurgus, a Latinized form of the Greek δημιουργός or dēmiourgos. It was a common noun meaning "craftsman" or "artisan", but came to mean "producer", "creator"; the philosophical usage and the proper noun derive from Plato's Timaeus, written c. 360 BC, where the demiurge is presented as the creator of the universe. The demiurge is described as a creator in the Platonic and Middle Platonic philosophical traditions.
In the various branches of the Neoplatonic school, the demiurge is the fashioner of the real, perceptible world after the model of the Ideas, but is still not itself "the One". In the arch-dualist ideology of the various Gnostic systems, the material universe is evil, while the non-material world is good. According to some strains of Gnosticism, the demiurge is malevolent, as it is linked to the material world. In others, including the teaching of Valentinus, the demiurge is ignorant or misguided. Plato, as the speaker Timaeus, refers to the Demiurge in the Socratic dialogue Timaeus, c. 360 BC. The main character refers to the Demiurge as the entity who "fashioned and shaped" the material world. Timaeus describes the Demiurge as unreservedly benevolent, so it desires a world as good as possible; the world remains imperfect, because the Demiurge created the world out of a chaotic, indeterminate non-being. Plato's work Timaeus is a philosophical reconciliation of Hesiod's cosmology in his Theogony, syncretically reconciling Hesiod to Homer.
In Numenius's Neo-Pythagorean and Middle Platonist cosmogony, the Demiurge is second God as the nous or thought of intelligibles and sensibles. Plotinus and the Platonists worked to clarify the Demiurge. To Plotinus, the second emanation represents an uncreated second cause. Plotinus sought to reconcile Aristotle's energeia with Plato's Demiurge, which, as Demiurge and mind, is a critical component in the ontological construct of human consciousness used to explain and clarify substance theory within Platonic realism. In order to reconcile Aristotelian with Platonian philosophy, Plotinus metaphorically identified the demiurge within the pantheon of the Greek Gods as Zeus; the first and highest aspect of God is described by Plato as the source, or the Monad. This is the God above the Demiurge, manifests through the actions of the Demiurge; the Monad emanated the demiurge or Nous from its "indeterminate" vitality due to the monad being so abundant that it overflowed back onto itself, causing self-reflection.
This self-reflection of the indeterminate vitality was referred to by Plotinus as the "Demiurge" or creator. The second principle is organization in its reflection of the nonsentient force or dynamis, called the one or the Monad; the dyad is energeia emanated by the one, and is the work, process or activity called nous, mind, or consciousness, that organizes the indeterminate vitality into the experience called the material world or cosmos. Plotinus elucidates the equation of matter with nothing or non-being in The Enneads; this is more properly to express the concept of idealism, or the view that there is not anything or anywhere outside of the "mind" or nous. Plotinus' form of Platonic idealism is to treat the Demiurge, or nous, as the contemplative faculty within man which orders the force into conscious reality. In this, he claimed to reveal Plato's true meaning: a doctrine he learned from Platonic tradition that did not appear outside the academy or in Plato's text. This tradition of the creator God as nous can be validated in the works of pre-Plotinus philosophers such as Numenius, as well as in a connection between Hebrew and Platonic cosmology.
The Demiurge of Neoplatonism is the Nous, and is one of the three ordering principles: Arche – the source of all things, Logos – the underlying order hidden beneath appearances, and Harmonia – the numerical ratios in mathematics. Before Numenius of Apamea and Plotinus' Enneads, no Platonic works ontologically clarified the Demiurge from the allegory in Plato's Timaeus. The idea of the Demiurge was, however, addressed before Plotinus in the works of the Christian writer Justin Martyr, who built his understanding of the Demiurge on the works of Numenius. The Neoplatonist Iamblichus changed the role of the "One", altering the role of the Demiurge as second cause or dyad, which was one of the reasons that Iamblichus and his teacher Porphyry came into conflict. The figure of the Demiurge emerges in the theoretic of Iamblichus, which conjoins the transcendent, incommunicable "One", or Source. Here, at the summit of this system, the Source and Demiurge coexist via the process of henosis. Iamblichus describes the One as a monad whose first principle or emanation is intellect, while among "the many" that follow it there is a second, super-existent "One".
1 Patterns of Inheritance in Maize written by J. D. Hendrix Learning Objectives Upon completing the exercise, each student should be able to define the following terms gene, allele, genotype, phenotype, homozygous, heterozygous, dominant, recessive, monohybrid cross, dihybrid cross, epistasis; to explain how to set up a monohybrid cross and a dihybrid cross; to determine expected genotypic and phenotypic frequencies in monohybrid and dihybrid crosses; to determine expected genotypic and phenotypic frequencies in crosses involving epistasis; to test hypotheses based on expected frequencies using the chi-square test. to write a formal laboratory report based on this laboratory exercise. Background Gregor Mendel discovered the laws of random segregation and independent assortment by performing crosses in the pea plant. In this exercise, you will learn about these principles by analyzing results from crosses in the maize plant. In the maize plant, the male gametes (pollen) are formed in organs called anthers located at the tops of the maize stalks. The female gametes (ovules) are formed on the maize ears located along the sides of the stalks. When a pollen grain fertilizes an ovule, the fertilized ovule develops into a kernel that contains an embryonic maize plant. There are several genetic traits in the kernels that can be easily observed, including the kernel color and shape. A. Random Segregation Mendel s law of random segregation Diploid germ-line cells of sexually reproducing species contain two copies of almost every chromosomal gene. The two copies are located on members of a homologous chromosome pair. During meiosis, the two copies separate, so that a gamete receives only one copy of each gene. Random segregation can be demonstrated by a monohybrid cross. In a monohybrid cross, a parental cross is made between two individuals that differ in the genotype of one gene. The offspring of the parental generation is called the F (first filial) generation. The F generation can be allowed to interbreed or self-fertilize (inter se cross, or selfing ) to produce the F (second filial) generation. For example, consider a monohybrid cross involving kernel color in maize. There are several genes that control seed color in maize. One gene for seed color is designated by the letter R and has two alleles. Seed color gene with two alleles R = purple (or red) allele (dominant allele) r = yellow (or white) allele (recessive allele) Possible genotypes and phenotypes R R = purple phenotype R r = purple phenotype r r = yellow phenotype
Consider the following parental cross in which the silk (female flower) of a homozygous purple plant is fertilized with the pollen from a homozygous yellow plant. (The symbol ♀ represents female, and ♂ represents male.)

Parental (P) generation: ♀ RR (Purple) × ♂ rr (Yellow)

According to the law of random segregation, each ovule or female gamete receives one copy of the R allele when it is formed during meiosis. Each pollen grain or male gamete receives the r allele. Therefore, the offspring of this cross (first filial or F1 generation) must be heterozygous Rr and will display the purple phenotype.

F1 generation: all Rr, Purple

The formation of the F1 generation is shown in the following diagram. The F1 kernels are planted and allowed to grow to maturity. When the F1 plants make pollen and ovules (gametes), each gamete will receive either R or r during meiosis. The separation of the alleles during meiosis is called segregation. Since segregation is a random process, gametes with R or r are present in approximately equal numbers.

F1 pollen: R or r (in equal proportions)
F1 ovules: R or r (in equal proportions)
Using these probabilities, we can predict the phenotypic ratio we should see in the F2 generation.

Probability of F1 pollen × Probability of F1 ovule = Probability of F2 genotype
1/2 R × 1/2 R = 1/4 RR
1/2 R × 1/2 r = 1/4 Rr
1/2 r × 1/2 R = 1/4 rR (the same genotype as Rr)
1/2 r × 1/2 r = 1/4 rr

Note that there are two ways of producing Rr in the F2 generation. We can add these together to get a probability of 1/4 + 1/4 = 1/2 for Rr. Therefore, the F2 generation should have the following genotypes and phenotypes.

F2 generation: 1/4 RR, 1/2 Rr, 1/4 rr
Phenotypes: 1/4 + 1/2 = 3/4 Purple; 1/4 Yellow

We expect a ratio of 3 dominant : 1 recessive in the F2 generation. The F1 × F1 cross to produce the F2 generation is summarized in the following diagram.
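As a supplementary illustration (not part of the original handout), the F2 expectations above can be reproduced by enumerating gamete combinations in a few lines of Python. This is a minimal sketch assuming the 1/2 : 1/2 gamete frequencies stated above; the variable names are my own.

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# F1 plants are Rr, so each gamete carries R or r with probability 1/2.
gametes = {"R": Fraction(1, 2), "r": Fraction(1, 2)}

# Combine every pollen gamete with every ovule gamete (Punnett-square logic).
genotypes = Counter()
for (pollen, p_pollen), (ovule, p_ovule) in product(gametes.items(), repeat=2):
    genotype = "".join(sorted(pollen + ovule))   # "Rr" and "rR" are the same genotype
    genotypes[genotype] += p_pollen * p_ovule

# R is dominant, so any genotype containing R gives purple kernels.
phenotypes = Counter()
for genotype, prob in genotypes.items():
    phenotypes["Purple" if "R" in genotype else "Yellow"] += prob

print(dict(genotypes))   # RR 1/4, Rr 1/2, rr 1/4
print(dict(phenotypes))  # Purple 3/4, Yellow 1/4
```

Running the sketch prints the 1 : 2 : 1 genotypic and 3 : 1 phenotypic expectations derived above.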
Another way to analyze the outcome of a monohybrid cross is the Punnett square. The Punnett square method uses a simple grid to match all of the possible combinations of gametes in a cross. The gamete genotypes of one parent are listed along the top of the grid, and the gamete genotypes of the other parent are listed along the side of the grid. By matching each allele from the top with each allele from the side, the different possible combinations of genotypes in the offspring are found in the grid. The following diagram shows a Punnett square analysis for the monohybrid cross that we just covered.

Parental (P) generation: RR (Purple) × rr (Yellow)
Punnett square analysis for the parental cross
Punnett square analysis for the F1 cross
Expected F1 genotype: all Rr
Expected F1 phenotype: all purple

The F1 plants are allowed to self-fertilize: Rr × Rr
Expected F2 genotypic frequencies: 1/4 RR, 1/4 + 1/4 = 1/2 Rr, 1/4 rr
Expected F2 genotypic ratio: 1 RR : 2 Rr : 1 rr
Expected F2 phenotypic frequencies: 1/4 + 1/4 + 1/4 = 3/4 purple, 1/4 yellow
Expected F2 phenotypic ratio: 3 purple : 1 yellow
B. Independent Assortment

Mendel's Law of Independent Assortment: When the alleles of two different genes separate during meiosis, they do so independently of one another unless the genes are located on the same chromosome (linked). This is the principle of independent assortment. Mendel discovered independent assortment by performing dihybrid crosses in the pea plant. We will examine dihybrid crosses in maize. Consider the genes for kernel color and kernel composition in maize.

Seed color gene: R allele, dominant, for purple kernels; r allele, for yellow kernels
Seed composition gene: Su allele, dominant, for smooth (starchy) kernels; su allele, for wrinkled (sweet) kernels

In the P generation, a homozygous plant with purple, smooth kernels was crossed with a plant having yellow, wrinkled kernels. The F1 plants were allowed to fertilize themselves. What genotypes and phenotypes are expected in the F2, and in what frequencies or ratio? To answer this question, you should begin by outlining the P and F1 generations.

P: RR SuSu × rr susu
F1: all Rr Susu, heterozygous purple smooth kernels
Rr Susu × Rr Susu

According to the principle of independent assortment, the color gene and the seed shape gene should not affect one another; that is, they should behave independently. This means that there are four possible classes of pollen and ovules, in equal frequencies.

F1 pollen: R Su, R su, r Su, r su
F1 ovules: R Su, R su, r Su, r su

From this, we can calculate the probability of each genotype in the F2 generation, as shown in the following table.
F2 genotype probabilities (color probability × composition probability):

F2 genotype   Probability               Phenotype
R R Su Su     (1/4)(1/4) = 1/16         Purple Smooth
R r Su Su     (1/2)(1/4) = 2/16 = 1/8   Purple Smooth
R R Su su     (1/4)(1/2) = 2/16 = 1/8   Purple Smooth
R r Su su     (1/2)(1/2) = 4/16 = 1/4   Purple Smooth
R R su su     (1/4)(1/4) = 1/16         Purple Wrinkled
R r su su     (1/2)(1/4) = 2/16 = 1/8   Purple Wrinkled
r r Su Su     (1/4)(1/4) = 1/16         Yellow Smooth
r r Su su     (1/4)(1/2) = 2/16 = 1/8   Yellow Smooth
r r su su     (1/4)(1/4) = 1/16         Yellow Wrinkled

The genotypes tell us that the expected phenotypic ratio of the F2 generation is 9 : 3 : 3 : 1.
Purple Smooth  = 1/16 + 2/16 + 2/16 + 4/16 = 9/16
Purple Wrinkled = 1/16 + 2/16 = 3/16
Yellow Smooth  = 1/16 + 2/16 = 3/16
Yellow Wrinkled = 1/16
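For readers who want to check this table programmatically, here is a small Python sketch (my own illustration, not part of the handout) that multiplies the per-gene F2 genotype frequencies under the assumption of independent assortment; only the allele symbols come from the exercise, the variable names are mine:

```python
from fractions import Fraction

# Per-gene F2 genotype distributions from the F1 x F1 cross (Rr x Rr and Susu x Susu).
color_genotypes = {"R R": Fraction(1, 4), "R r": Fraction(1, 2), "r r": Fraction(1, 4)}
shape_genotypes = {"Su Su": Fraction(1, 4), "Su su": Fraction(1, 2), "su su": Fraction(1, 4)}

# Independent assortment: the combined probability is the product of the per-gene probabilities.
for cg, pc in color_genotypes.items():
    for sg, ps in shape_genotypes.items():
        phenotype = ("Purple" if "R" in cg else "Yellow") + " " + \
                    ("Smooth" if "Su" in sg else "Wrinkled")
        print(f"{cg} {sg}".ljust(12), str(pc * ps).ljust(5), phenotype)
```

The nine printed rows match the table above, including the 9/16 : 3/16 : 3/16 : 1/16 phenotype split.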
It is also possible to analyze the dihybrid cross with a Punnett square. For a dihybrid cross, we'll need a 4 × 4 grid because there are four genotypes in the F1 gametes. Here is the Punnett square analysis of the F1 cross from the above example. F1: all Rr Susu, heterozygous purple smooth kernels. Rr Susu × Rr Susu. Punnett square analysis for the F1 cross. From the Punnett square, you can obtain the same genotypic and phenotypic frequencies as shown in the previous table.
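The 4 × 4 grid itself can also be generated mechanically. The sketch below is my own illustration (not from the handout): it lists the four gamete types for each parent and fills in every cell of the square.

```python
from itertools import product

# F1 parents are Rr Susu; with independent assortment each gamete carries one
# allele of each gene, so there are four equally likely gamete types per parent.
gametes = [("R", "Su"), ("R", "su"), ("r", "Su"), ("r", "su")]

# Build the 4 x 4 Punnett square: rows are ovule gametes, columns are pollen gametes.
header = ["ovule \\ pollen"] + [" ".join(g) for g in gametes]
print("".join(cell.ljust(16) for cell in header))
for ovule in gametes:
    row = [" ".join(ovule)]
    for pollen in gametes:
        color = "".join(sorted(ovule[0] + pollen[0]))    # single-letter alleles: R, r
        shape = "".join(sorted([ovule[1], pollen[1]]))   # two-letter alleles: Su, su
        row.append(color + " " + shape)
    print("".join(cell.ljust(16) for cell in row))
```

Counting the 16 cells gives the same 9 : 3 : 3 : 1 phenotypic expectation as the probability table.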
C. A Short-cut to Genetic Ratios

There is a fast way to determine genetic ratios based on independent assortment. Since the genes behave independently, the probability of a genotype amounts to independent events occurring together. Therefore, we can consider each trait separately and then multiply the probabilities. This method will allow you to solve most of the complex heredity problems you will face on tests, so you need to learn it and learn it well. In the same dihybrid cross outlined above with seed color and composition in maize, we expect a 3 : 1 ratio for each trait in the F2.

F2 generation
Seed color: 3/4 Purple, 1/4 Yellow
Seed composition: 3/4 Smooth, 1/4 Wrinkled

Combining the phenotypes for color and composition, we obtain the following.
3/4 × 3/4 = 9/16 Purple Smooth
3/4 × 1/4 = 3/16 Purple Wrinkled
1/4 × 3/4 = 3/16 Yellow Smooth
1/4 × 1/4 = 1/16 Yellow Wrinkled

Note that this method could work just as easily for determining the genotypic ratio. Starting out with the F2 genotypic frequencies for each individual gene, see if you can figure out the combined F2 genotypic frequencies.

F2 generation
Seed color: 1/4 RR, 1/2 Rr, 1/4 rr
Seed composition: 1/4 SuSu, 1/2 Susu, 1/4 susu

Combining the genotypes for color and composition, we obtain the following. ???????????????? Work it out for yourself.

In your formal report for this laboratory, you will determine expected genotypic and phenotypic frequencies by both the Punnett square method and the probability computation method (short-cut method). However, you should note that the short-cut method is much quicker on exams; Punnett squares should be avoided on exams. Mastery of the short-cut method is one of the single greatest factors determining how well students succeed on heredity problems.
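The same multiplication can be written directly in Python. This is a minimal sketch of the short-cut method for the phenotypic ratio (my own illustration, not from the handout; the dictionary names are mine):

```python
from fractions import Fraction

# Short-cut method: treat each gene separately, then multiply the probabilities.
# For the F1 x F1 cross (Rr Susu x Rr Susu), each gene gives a 3:1 phenotype split.
color = {"Purple": Fraction(3, 4), "Yellow": Fraction(1, 4)}
shape = {"Smooth": Fraction(3, 4), "Wrinkled": Fraction(1, 4)}

combined = {f"{c} {s}": pc * ps
            for c, pc in color.items()
            for s, ps in shape.items()}

for phenotype, prob in combined.items():
    print(phenotype, prob)      # 9/16, 3/16, 3/16, 1/16
```

Swapping the phenotype dictionaries for the per-gene genotype frequencies (1/4, 1/2, 1/4) applies the same idea to the genotypic ratio you are asked to work out yourself.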
D. Epistasis

Most phenotypic traits are influenced by the action of several genes. For example, the synthesis of a pigment might require an enzymatic pathway with multiple steps. Each step is catalyzed by its own enzyme, and each enzyme is encoded by a different gene. If an individual is lacking the functional enzyme for any step in the pathway (for example, in a homozygous recessive mutant that lacks an allele to encode the functional enzyme), then the pathway is blocked and the pigment will not be produced, regardless of the genotype of any other genes in the individual.

Epistasis: Epistasis occurs when the genotype of one gene hides or masks the phenotypic effect of a second gene, regardless of the genotype of the second gene. Do not say that the first gene is dominant over the second gene; say that the first gene epistatically masks the second gene. Genes can interact epistatically if each encodes a different enzyme in a multi-step pathway. They can also interact epistatically if one gene encodes a regulatory protein that regulates the transcription of the other gene.

The purple color in the maize kernels is due to the presence of a pigment called anthocyanin. There are several genes that encode proteins to regulate anthocyanin production. Each of the genes has a large number of known alleles. The genes interact epistatically with each other, depending on the genotype of the individual. We need to consider two of these genes.

The R gene. This gene has two major alleles: R and r.
The presence of at least one copy of R (homozygous RR or heterozygous Rr) is required for purple kernels. The allele R is dominant to r.
The genotype rr gives yellow kernels, regardless of the genotype of any other genes. The allele r is recessive to R. The genotype rr epistatically masks the genotypes of other kernel genes and gives a yellow phenotype.

The C gene. This gene has three major alleles: C-I, C, and c.
The presence of at least one copy of C (homozygous CC or heterozygous Cc) is required for purple kernels. The allele C is dominant to c. The allele C is recessive to C-I.
The genotype cc gives yellow kernels, regardless of the genotype of any other genes. The allele c is recessive to C. The genotype cc epistatically masks the genotypes of other kernel genes and gives a yellow phenotype.
C-I is called the dominant color inhibitor allele of the C gene. The presence of at least one copy of C-I (homozygous C-I C-I, heterozygous C-I C, or heterozygous C-I c) inhibits the production of purple pigment and causes the yellow phenotype. The allele C-I is dominant to C and c. Genotypes C-I C-I, C-I C, or C-I c epistatically mask the genotypes of other kernel genes and give a yellow phenotype.
The R and C genes are located on different chromosomes, so they assort independently of each other. Consider each of the following parental crosses. In each case, what outcome do you expect to see in the F1 and F2 generations?

Parental cross a: R R C C x r r C C
Parental cross b: R R C C x R R C C
Parental cross c: r r C C x R R c c
Parental cross d: R R C C x r r C C

Here is the analysis of cross a.
P: RR CC × rr CC
F1: all Rr CC
Rr CC × Rr CC

F2 expected frequencies:
R genotype: 1/4 RR, 1/2 Rr, 1/4 rr
C genotype: all CC
Combined: 1/4 RR CC (Purple), 1/2 Rr CC (Purple), 1/4 rr CC (Yellow)
Phenotypes: 3/4 Purple, 1/4 Yellow

On the worksheet, you will complete a similar analysis for the other crosses. You will receive a numbered unknown ear of maize representing the F2 generation from one of these four possible crosses. You will use the χ² test to determine which cross your ear represents.

E. References
http://www.iita.org/crop/maize.htm
http://www.maizegdb.org/
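To make the epistasis rules concrete, here is a small Python sketch (my own illustration, not part of the handout). The dominant inhibitor allele is written "C_I" in the code purely as a stand-in label, and the example reproduces the expected 3/4 purple : 1/4 yellow outcome for cross a:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

def kernel_phenotype(r_alleles, c_alleles):
    """r_alleles and c_alleles are pairs of allele names, e.g. ("R", "r"), ("C", "c")."""
    if "C_I" in c_alleles:      # dominant inhibitor allele: always yellow
        return "Yellow"
    if "R" not in r_alleles:    # rr epistatically masks the C gene
        return "Yellow"
    if "C" not in c_alleles:    # cc epistatically masks the R gene
        return "Yellow"
    return "Purple"

# Cross (a): P is RR CC x rr CC, so the F1 is Rr CC and the F2 comes from Rr CC x Rr CC.
r_gametes = ["R", "r"]          # from Rr, each with probability 1/2
c_gametes = ["C"]               # from CC, only C gametes are possible

phenotypes = Counter()
for r_ovule, r_pollen, c_ovule, c_pollen in product(r_gametes, r_gametes, c_gametes, c_gametes):
    prob = Fraction(1, len(r_gametes)) ** 2 * Fraction(1, len(c_gametes)) ** 2
    phenotypes[kernel_phenotype((r_ovule, r_pollen), (c_ovule, c_pollen))] += prob

print(dict(phenotypes))         # expected: Purple 3/4, Yellow 1/4
```

Changing the parental gamete lists lets you sketch the expectations for the other crosses once their full genotypes (including any inhibitor alleles) are known.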
11 Gene Allele Genotype Phenotype Homozygous and Heterozygous Dominant and Recessive Codominance Incomplete Dominance Epistasis Sex chromosome and Autosome X-linkage Definitions Classical definition A unit of inheritance; a factor transmitted during reproduction and responsible for the appearance of a given trait. Contemporary understanding A segment on a DNA molecule, usually at a specific location (locus) on a chromosome, characterized by its nucleotide sequence. Genes play three notable roles to encode the amino acid sequences of proteins, to encode the nucleotide sequences of trna or rrna, and to regulate the expression of other genes. Variant forms of a gene found within a population. Alleles of a gene usually have small differences in their nucleotide sequences. The differences can affect the trait for which the gene is responsible. Most genes have more than one allele. The genetic makeup of an individual with reference to one or more specific traits. A genotype is designated by using symbols to represent the alleles of the gene. The appearance or discernible characteristics of a trait in an individual. Phenotypes can be determined by a combination of genetic and environmental factors. In a diploid species, each individual carries two copies of each gene (with some exceptions). The two copies are located on different members of a homologous chromosome pair. If the two copies of the gene are identical alleles, then the individual is homozygous for the gene. If the two copies are different, then the individual is heterozygous. A dominant allele is expressed over a recessive allele in a heterozygous individual. This means that a heterozygous individual and a homozygous dominant individual have identical phenotypes. Often, a dominant allele encodes a functional protein, such as an enzyme. The recessive allele is a mutation that no longer has the information for the correct amino acid sequence; therefore, its protein product in nonfunctional. In the heterozygote, the dominant allele encodes sufficient production of the protein to produce the dominant phenotype. This is also called complete dominance. Two alleles are codominant if each encodes a different but functional protein product. In the heterozygote, the presence of two different functional proteins means that the phenotype of the heterozygote is different from either homozygous dominant or homozygous recessive. An incompletely dominant allele produces a functional protein product. However, in the heterozygote, there is insufficient protein production from the allele to produce the same phenotype as homozygous dominant. Therefore, the phenotype of the heterozygote is different from either homozygous dominant or homozygous recessive. Most phenotypic traits are formed by the interactions of several genes. Epistasis is one type of gene interaction. In epistasis, the genotype of one gene masks the expression of the genotype of another gene. Many species that exhibit sexual dimorphism have a sex chromosome system to determine the sex of an individual. In mammalian species, Drosophila, and certain other species, the sex chromosomes are designated as X and Y. Females in these species have two X chromosomes, and males have one X and one Y chromosome. Chromosomes other than sex chromosomes are called autosomes. When a gene is located on the X chromosome, it is X-linked. Since males have only one X chromosome, they have only one copy of each X-linked gene. Instead of homozygous or heterozygous, males are hemizygous for X-linked traits.
Table of χ² critical values, arranged by degrees of freedom (df) and P value, where the P value is the probability that the difference is due to chance and is not significant.
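Since this table is what you compare your statistic against, a short Python sketch of the same calculation may help (my own illustration, not part of the handout). The 5% critical values below are the standard χ² values for df = 1, 2, and 3, and the example kernel counts are hypothetical:

```python
# chi^2 = sum((O - E)^2 / E), compared against the 5% critical value for the
# appropriate degrees of freedom (standard chi-square table values).
CRITICAL_5_PERCENT = {1: 3.841, 2: 5.991, 3: 7.815}   # df -> critical chi^2

def chi_square(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def test_ratio(observed_counts, expected_ratio):
    """observed_counts: list of kernel counts; expected_ratio: e.g. [3, 1] or [9, 3, 3, 1]."""
    total = sum(observed_counts)
    expected = [total * part / sum(expected_ratio) for part in expected_ratio]
    stat = chi_square(observed_counts, expected)
    df = len(observed_counts) - 1
    return stat, df, stat > CRITICAL_5_PERCENT[df]

# Hypothetical counts for illustration only; substitute your own kernel counts.
stat, df, significant = test_ratio([381, 119], [3, 1])
print(f"chi^2 = {stat:.3f}, df = {df}, significant at 5%: {significant}")
```

If the statistic exceeds the critical value for your df, the deviation from the expected ratio is significant at the 5% level and the data do not support the hypothesis.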
Patterns of Inheritance in Maize – Laboratory Report Sheet
Name:                Lab Partners:

For your grade on the maize lab, you will write a formal lab report that summarizes the principles, methods, analysis, results, and conclusion that you reached. Details on the format for the lab report are given at the end of the worksheet section. Completing the worksheet will help you to write your laboratory report. You should pay careful attention to collecting complete and accurate counts of the maize kernels in order to do well on your results, analysis, and conclusion.

A. Random segregation and independent assortment

Your group will receive an ear of maize representing the F2 generation of the following cross.
P: RR SuSu × rr susu
F1: all Rr Susu
Rr Susu × Rr Susu → F2 (your ear of maize)

Consider the following hypothesis: The R and r alleles of maize segregate randomly during meiosis. Based on this hypothesis, what genotypic and phenotypic frequencies for the R gene and kernel color do you expect in the F2 ear?

Considering kernel color only, count the purple and yellow kernels on the ear and use the χ² method to test the hypothesis that the R and r alleles segregate randomly. You should count at least 500 kernels to get a valid statistical sample.

F2 Phenotype        Purple      Yellow      Total
# obtained (O)
# expected (E)
O − E
(O − E)²
(O − E)²/E

χ² = Σ (O − E)²/E =            df =
____ < χ² < ____        ____ > P > ____        P ____ 0.05 (< or >)
At a 5% level of significance, the deviation is ____________ (significant or not significant).
Therefore, the data ____________ (support or do not support) the hypothesis.
Consider the following hypothesis: The Su and su alleles of maize segregate randomly during meiosis. Based on this hypothesis, what genotypic and phenotypic frequencies for the Su gene and kernel shape do you expect in the F2 ear?

Considering kernel shape only, count the number of smooth and wrinkled seeds. Use the χ² method to test the hypothesis that the Su and su alleles segregate randomly. You should count at least 500 kernels to get a valid statistical sample.

F2 Phenotype        Smooth      Wrinkled      Total
# obtained (O)
# expected (E)
O − E
(O − E)²
(O − E)²/E

χ² = Σ (O − E)²/E =            df =
____ < χ² < ____        ____ > P > ____        P ____ 0.05 (< or >)
At a 5% level of significance, the deviation is ____________ (significant or not significant).
Therefore, the data ____________ (support or do not support) the hypothesis.
Consider the following hypothesis: The R and Su genes of maize assort independently during meiosis. Based on this hypothesis, what genotypic and phenotypic frequencies for these genes do you expect in the F2 ear?

Consider kernel color and kernel shape together. Count the number of purple smooth, purple wrinkled, yellow smooth, and yellow wrinkled kernels. You should count at least 500 kernels to get a valid statistical sample. Use the χ² method to test the hypothesis that the genes for kernel color and shape assort independently.

F2 Phenotype        Purple Smooth    Purple Wrinkled    Yellow Smooth    Yellow Wrinkled    Total
# obtained (O)
# expected (E)
O − E
(O − E)²
(O − E)²/E

χ² = Σ (O − E)²/E =            df =
____ < χ² < ____        ____ > P > ____        P ____ 0.05 (< or >)
At a 5% level of significance, the deviation is ____________ (significant or not significant).
Therefore, the data ____________ (support or do not support) the hypothesis.
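For the four-class test above, the only changes from the earlier two-class tests are the 9 : 3 : 3 : 1 expected ratio and df = 3. A self-contained Python sketch (my own illustration; the counts below are hypothetical placeholders, not real data):

```python
# Chi-square test of the 9:3:3:1 hypothesis for the dihybrid F2 ear.
# The kernel counts below are hypothetical placeholders; substitute your own counts.
observed = {"Purple Smooth": 290, "Purple Wrinkled": 95,
            "Yellow Smooth": 88, "Yellow Wrinkled": 27}
ratio = {"Purple Smooth": 9, "Purple Wrinkled": 3,
         "Yellow Smooth": 3, "Yellow Wrinkled": 1}

total = sum(observed.values())
expected = {k: total * r / 16 for k, r in ratio.items()}
chi_sq = sum((observed[k] - expected[k]) ** 2 / expected[k] for k in observed)
df = len(observed) - 1          # 4 phenotype classes -> df = 3
critical = 7.815                # standard chi-square critical value at P = 0.05, df = 3

print(f"chi^2 = {chi_sq:.3f} (df = {df})")
print("deviation significant at 5%" if chi_sq > critical else "deviation not significant at 5%")
```

The same structure applies to the epistasis crosses in part B, with two phenotype classes (purple, yellow) and the expected ratio taken from your analysis of each cross.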
16 6 B. Epistasis Cross a You will receive a numbered unknown ear of maize representing the F generation from a cross involving one or more seed color genes. Four possible parental genotypes for the cross are given below. For each cross, determine the expected phenotypic frequency in the F generation. Count the number of purple and yellow kernels on your unknown ear (you should count at least 800 kernels, or as many kernels as there are on your ear, to get a valid statistical sample). Then, use the χ method to determine which cross your ear represents. P R R C C x r r C C Phenotype Phenotype F F Genotype Phenotype List the expected genotypes and phenotypes and their frequencies. Data and analysis F Phenotype Purple Yellow # obtained (O) # expected (E) O E ( O E) Total χ = ( O E) E df = < χ < > P > P 0.05 (< or >) At a 5% level of significance, the deviation is. (significant or not significant) Therefore, the data (support or do not support) the hypothesis that the unknown ear is the F progeny of cross (a).
17 7 Cross b P R R C C x R R C C Phenotype Phenotype F F Genotype Phenotype List the expected genotypes and phenotypes and their frequencies. Data and analysis F Phenotype Purple Yellow # obtained (O) # expected (E) O E ( O E) Total χ = ( O E) E df = < χ < > P > P 0.05 (< or >) At a 5% level of significance, the deviation is. (significant or not significant) Therefore, the data (support or do not support) the hypothesis that the unknown ear is the F progeny of cross (b).
18 8 Cross c P r r C C x R R c c Phenotype Phenotype F F Genotype Phenotype List the expected genotypes and phenotypes and their frequencies. Data and analysis F Phenotype Purple Yellow # obtained (O) # expected (E) O E ( O E) Total χ = ( O E) E df = < χ < > P > P 0.05 (< or >) At a 5% level of significance, the deviation is. (significant or not significant) Therefore, the data (support or do not support) the hypothesis that the unknown ear is the F progeny of cross (c).
19 9 Cross d P R R C C x r r C C Phenotype Phenotype F F Genotype Phenotype List the expected genotypes and phenotypes and their frequencies. Data and analysis F Phenotype Purple Yellow # obtained (O) # expected (E) O E ( O E) Total χ = ( O E) E df = < χ < > P > P 0.05 (< or >) At a 5% level of significance, the deviation is. (significant or not significant) Therefore, the data (support or do not support) the hypothesis that the unknown ear is the F progeny of cross (d).
20 0 Conclusion. List the number of your unknown.. On the basis of your analysis, write a short paragraph explaining the genetic basis of the kernel color in your unknown ear. In writing your conclusion, you should clearly summarize your statistical analysis, and you should explain how epistasis has played a role in determining the kernel color in your unknown ear.
21 C. Formal Lab Report for the Maize Lab For your grade on the maize lab, you will write a formal lab report that summarizes the principles, methods, analysis, results, and conclusion that you reached. The report will be graded according to the rubric shown on the next page. The report should contain the following five sections Introduction, Methods, Results, Analysis, and Conclusion. Basic instructions on the lab report format, as well as specific instructions for each section, are given below. Basic Format Each individual student must write his or her own individual laboratory report. Any direct quotation from another work should be properly cited and referenced. The laboratory report should be typed (preferred) or neatly printed, with. All typographic conventions (Greek letters, special symbols, italics, mathematical formulas, superscripts and subscripts, etc.) must be correctly used. There should be no spelling or grammatical errors. Diagrams and Punnett squares should be neatly drawn. Each section of the laboratory report should be clearly indicated with a boldfaced label. Introduction The Introduction should consist of a brief two or three paragraph background on the origin of Mendel s Laws of Random Segregation and Independent Assortment; clear statements of each of the questions to be answered in this lab exercise; clear statements of each hypothesis to be tested (, with the hypotheses based on Mendel s Laws. Methods The Methods section should begin with an introductory paragraph stating that the hypotheses were tested by determining the phenotypes of kernels on F maize ears produced by mating experiments; contain a detailed description of how the crosses were performed to obtain the F ears counted in the exercise; contain mating diagrams showing the crosses to aid in the description of the crosses. Results The Results section should begin with an introductory paragraph stating each set of data that was obtained in the experiment; have a separate table for the data from each kernel counting experiment; have each table clearly and accurately labeled.
22 Analysis The Analysis section should begin with an introductory paragraph stating that the results were compared with expected values determined from Mendel s Laws, and compared using the χ test; show the derivation of expected frequencies of genotypes and phenotypes for each hypothesis, using both the probability method ( short-cut method) and the Punnett square method; list the expected genotypes and phenotypes both as frequencies (fractions) and as ratios; show the calculation and the evaluation of the χ test for each hypothesis. Conclusion The Conclusion section should begin with an introductory paragraph that briefly restates the questions that were answered in the lab exercise; summarize the essential data for each hypothesis tested and derive conclusions based on the data; clearly indicated how the statistical analysis (χ test) was used to interpret whether the data supported or did not support a hypothesis; indicate how the results illustrate the genetics principles covered in the exercise; have an explanation of any sources of error, if there were any errors or discrepancies in the counts.
23 Introduction (section ) Methods (section ) Results (section ) Analysis (section ) Conclusions (section 5) Format Lab Report Rubric for Lab I. Patterns of Inheritance in Maize (0 pts) Highest pts Lower pts pts Professor I Evaluation The questions to be answered during the lab are stated The hypothesis is stated based on Mendel s laws A description of how the experiment was performed Mating diagrams are drawn to aid the description Results and data are well organized in Tables Tables are complete Lists of genotypes and phenotypes are correct and complete The genotype(s) with the highest ratio are listed first. The one with the lowest ratio is listed last. Χ calculation is correct Summarizes the essential data and derives conclusions based on the data Conclusions are supported by the data Hypothesis is rejected or accepted based on the data If there was an error, explain the possible source of the error Few spelling and grammar errors Neat, with few smudges Punnett square is well drawn as squares Each section is clearly highlighted One or more questions are missing Hypothesis is incorrect Description is unclear Diagrams are drawn incorrectly Wrong data entry Tables are incomplete The lists are incorrect and incomplete Write the list randomly, or in the wrong order Calculation is incorrect Conclusions are unclear Data does not support the conclusions Wrong hypothesis is accepted No explanations Multiple spelling and grammar errors Not neat, multiple smudges Angles not 90, lines not straight, sides of unequal length Sections unclear Professor II Evaluation |
What this handout is about
This handout discusses common types of philosophy assignments and strategies and resources that will help you write your philosophy papers.
What is philosophy, and why do we study it?
Philosophy is the practice of making and assessing arguments. An argument is a set of statements (called premises) that work together to support another statement (the conclusion).
Making and assessing arguments can help us get closer to understanding the truth. At the very least, the process helps make us aware of our reasons for believing what we believe, and it enables us to use reason when we discuss our beliefs with other people. Your philosophy teacher wants to help you learn to make strong arguments and to assess the arguments other people make.
Elements of philosophy papers
A philosophy paper may require several kinds of tasks, including:
- Argument reconstruction
- Objections and replies
- Original argument
- Thought experiments
Let’s examine these elements one at a time.
To reconstruct an argument, you’ll need to present it in a way that someone unfamiliar with the material will understand. Often, this requires you to say a lot more than the philosopher whose work you are writing about did!
There are two main ways to reconstruct an argument: in regular prose or as a formal series of numbered steps. Unless your professor or TA has told you otherwise, you should probably use regular prose. In either case, keep these points in mind:
- Keep your ideas separate from the author’s. Your purpose is to make the author’s argument clear, not to tell what you think of it.
- Be charitable. Give the best version of the argument you can, even if you don’t agree with the conclusion.
- Define important terms.
- Organize your ideas so that the reader can proceed logically from premises to conclusion, step by step.
- Explain each premise.
Let’s walk through an argument reconstruction. Here is a passage by 18th-century British philosopher David Hume:
- Take any action allowed to be vicious: Willful murder, for instance. Examine it in all lights, and see if you can find that matter of fact, or real existence, which you call vice. In whichever way you take it, you find only certain passions, motives, volitions and thoughts. There is no other matter of fact in the case. The vice entirely escapes you, as long as you consider the object. You never can find it, till you turn your reflection into your own breast, and find a sentiment of disapprobation, which arises in you, towards this action. Here is a matter of fact, but it is the object of feeling, not of reason. It lies in yourself, not in the object. So that when you pronounce any action or character to be vicious, you mean nothing, but that from the constitution of your nature you have a feeling or sentiment of blame from the contemplation of it. (David Hume, A Treatise of Human Nature).
Step 1: Reread the passage a few times, stopping to look up any unfamiliar words—”disapprobation,” maybe. Be sure you understand the important terms, like “vicious.” (By “vicious,” Hume seems to mean “wicked, depraved, or immoral,” which probably isn’t the way you use the word in everyday speech.)
Step 2: Identify the conclusion. Sometimes your teacher will identify it for you, but even if she didn’t, you can find it. (Caution: It won’t always be the first or the last sentence in the passage; it may not even be explicitly stated.) In this case, Hume’s conclusion is something like this: The viciousness of an action is a feeling of disapprobation in the person who considers it, not a property of the action itself.
Step 3: Identify the premises. Consider the conclusion and ask yourself what the author needs to do to prove it. Hume’s conclusion here seems to have two parts:
When we call an action vicious, we mean that our “nature” causes us to feel blame when we contemplate that action.
There is nothing else that we could mean when we call an action “vicious.”
Step 4: Identify the evidence. Hume considers an example, murder, and points out that when we consider why we say that murder is vicious, two things happen:
- We realize that when we contemplate murder, we feel “a sentiment of disapprobation” in ourselves.
- No matter how hard we look, we don’t see any other “matter of fact” that could be called “vice”—all we see “in the object” (the murder) are “certain passions, motives, volitions, and thoughts.”
Step 5: Identify unspoken assumptions. Hume assumes that murder is a representative case of “viciousness.” He also assumes that if there were “viciousness” in the “object” (the murder), we would be able to “see” it—it isn’t somehow hidden from us. Depending on how important you think these assumptions are, you may want to make them explicit in your reconstruction.
Step 6: Sketch out a formal reconstruction of the argument as a series of steps.
- If we examine a vicious action like murder, we see passions, motives, volitions, and thoughts.
- We don’t see anything else.
- So we don’t see any property or “matter of fact” called “viciousness.”
- Assumption: What we don’t see is not there.
- When we examine our feelings about murder, we see a “sentiment of disapprobation.”
- Unstated premise: This feeling of disapprobation is the only thing all the acts we think are vicious have in common, and we feel it whenever we confront a vicious act—that is, all and only vicious acts produce the feeling of disapprobation.
- Conclusion: So the viciousness of a bad action is a feeling of disapprobation in the person who considers it, not a factual property of the action itself.
Step 7: Summarize the argument, explaining the premises and how they work together. Here’s how such a prose reconstruction might go:
To understand what we mean when we call an action “vicious,” by which he means “wrong,” Hume examines the case of murder. He finds that whenever we consider a murder itself, all we see are the “passions, motives, volitions, and thoughts” of the people involved. For example, we might see that the murderer feels the passion of anger and is motivated by a desire to make his victim suffer, and that the victim feels the passion of fear and is thinking about how to escape. But no matter how hard we look, we don’t see “viciousness” or wrongness—we see an action taking place, and people with motives and feelings are involved in that action, but none of these things seem to be what we mean by “viciousness” or wrongness. Hume next turns his inquiry inward, and considers what is happening inside a person who calls a murder “vicious.” The person who thinks or says that murder is wrong always seems to be feeling a certain “sentiment of disapprobation.” That is, the person disapproves of the action and blames the murderer. When we say “murder is wrong,” we usually think that we are saying something about murder itself, that we are describing a property (wrongness) that the action of murder has. But Hume thinks what we are in fact describing is a feeling in us, not a property of murder—the “viciousness” of a vicious action is just an emotion in the person who is thinking about or observing that action, rather than a property of the action itself.
Objections and replies
Often, after you reconstruct an argument, you’ll be asked to tell whether it is a good or a bad argument and whether you agree or disagree with it.
Thinking of objections and examining their consequences is a way that philosophers check to see if an argument is a good one. When you consider an objection, you test the argument to see if it can overcome the objection. To object to an argument, you must give reasons why it is flawed:
- The premises don’t support the conclusion.
- One or more of the premises is false.
- The argument articulates a principle that makes sense in this case but would have undesirable consequences in other cases.
- The argument slides from one meaning of a term to another.
- The argument makes a comparison that doesn’t really hold.
Here are some questions you can ask to make sure your objections are strong:
- Have I made clear what part of the argument I object to?
- Have I explained why I object to that part of the argument?
- Have I assessed the severity of my objection? (Do I simply point out where the philosopher needs to do more work, or is it something more devastating, something that the philosopher cannot answer?)
- Have I thought about and discussed how the philosopher might respond to my objection?
- Have I focused on the argument itself, rather than just talking about the general issues the conclusion raises?
- Have I discussed at least one objection thoroughly rather than many objections superficially?
Let’s look at our example again. What objections might you make to Hume’s argument about murder? Here are some possible objections:
- You might object to premises 2 and 3, and argue that wrong actions do have a property that makes us call them wrong. For example, maybe we call actions wrong because of their motives—because the actions are motivated by cruelty, for example. So perhaps Hume is right that we don’t see a property called “viciousness,” but wrong that “viciousness” is thus only a feeling in us. Maybe the viciousness is one of the motives or passions.
- You might also object to premise 5, and say that we sometimes judge actions to be wrong even though we don’t feel any “sentiment” of disapproval for them. For example, if vigilantes killed a serial murderer, we might say that what they did was wrong, even if we shared their anger at the murderer and were pleased that they had killed him.
Often you’ll be asked to consider how a philosopher might reply to objections. After all, not every objection is a good objection; the author might be able to come up with a very convincing reply! Use what you know about the author’s general position to construct a reply that is consistent with other things the author has said, as well as with the author’s original argument.
So how might Hume, or someone defending Hume, reply to the objections above? Here are some possible replies:
- To the first, Hume might reply that there is no one motive that all “vicious” actions have in common. Are all wrong actions motivated by cruelty? No—theft, for example, might be motivated by hunger. So the only thing all “vicious” actions have in common is that we disapprove of them.
- To the second, Hume might reply that when we call the actions of vigilantes wrong, even though we are pleased by them, we must still be feeling at least some disapproval.
Sometimes you will be asked to summarize an author’s argument and apply that position to a new case. Considering how the author would think about a different case helps you understand the author’s reasoning and see how the argument is relevant. Imagine that your instructor has given you this prompt:
- “Apply Hume’s views on the nature of vice to the following case: Mr. Smith has an advanced form of cancer. He asks Dr. Jones what she thinks his prognosis is. Dr. Jones is certain Mr. Smith will die within the month, but she tells him he may survive for a year or longer, that his cancer may not be fatal. Dr. Jones wants to give Mr. Smith hope and spare him the painful truth. How should we think about whether what Dr. Jones did is wrong?”
Consider what you know about Hume’s views. Hume has not given a list of actions that are right or wrong, nor has he said how we should judge whether an action is right or wrong. All he has told us is that if an action is wrong, the wrongness is a sentiment in the people considering the action rather than a property of the action itself. So Hume would probably say that what matters is how we feel about Dr. Jones’s action—do we feel disapproval? If we feel disapproval, then we are likely to call the action “wrong.”
This test case probably raises all kinds of questions for you about Hume’s views. You might be thinking, “Who cares whether we call the action wrong—I want to know whether it actually is wrong!” Or you might say to yourself, “Some people will feel disapproval of the doctor’s action, but others will approve, so how should we decide whether the action is wrong or not?” These are exactly the kinds of questions your instructor wants to get you thinking about.
When you go back to read and discuss Hume, you will begin to see how he might answer such questions, and you will have a deeper understanding of his position. In your paper, though, you should probably focus on one or two main points and reserve the rest of your speculation for your conclusion.
Original argument/taking a position
Sometimes an assignment will ask you to stake out a position (i.e., to take sides in a philosophical debate) or to make an original argument. These assignments are basically persuasive essays, a kind of writing you are probably familiar with. If you need help, see our handouts on argument and thesis statements, among others.
Remember: Think about your audience, and use arguments that are likely to convince people who aren’t like you. For example, you might think the death penalty is wrong because your parents taught you so. But other people have no special reason to care what your parents think. Try to give reasons that will be interesting and compelling to most people.
If scientists want to test a theory or principle, they design an experiment.
In philosophy, we often test our ideas by conducting thought experiments. We construct imaginary cases that allow us to focus on the issue or principle we are most interested in. Often the cases aren’t especially realistic, just as the conditions in a scientific laboratory are different from those in the outside world.
When you are asked to write about a thought experiment, don’t worry about whether it is something that is ever likely to happen; instead, focus on the principle being tested. Suppose that your bioethics teacher has given you this thought experiment to consider:
- An elderly, unconscious patient needs a heart transplant. It is very unlikely that a donor heart will become available before the patient dies. The doctor’s other option is to try a new and risky procedure that involves transplanting the heart of a genetically engineered chimpanzee into the patient. This will require killing the chimp. What should the doctor recommend?
This scenario may be unrealistic, but your instructor has created it to get you to think about what considerations matter morally (not just medically) when making a life-or-death decision. Who should make such decisions—doctors, families, or patients? Is it acceptable to kill another intelligent primate in order to provide a heart for a human? Does it matter that the patient is elderly? Unconscious? So instead of focusing on whether or not the scenario is likely to happen, you should make an argument about these issues. Again, see our handouts on argument and thesis statements for help in crafting your position.
Other things to keep in mind
- Be consistent. For example, if I begin my paper by arguing that Marquis is right about abortion, I shouldn’t say later that Thomson’s argument (which contradicts Marquis’s) is also correct.
- Avoid overstatement. Watch out for words like “all,” “every,” “always,” “no,” “none,” and “never”; supporting a claim that uses these words could be difficult. For example, it would be much harder to prove that lying is always wrong than to prove that lying is usually or sometimes wrong.
- Avoid the pitfalls of “seeing both sides.” Suppose you think Kant’s argument is pretty strong, but you still disagree with his conclusion. You might be tempted to say “Kant’s argument is a good one. I disagree with it.” This appears contradictory. If an argument really is good and you can’t find any weaknesses in it, it seems rational to think that you should agree with the argument. If you disagree with it, there must be something wrong with it, and your job is to figure out what that is and point it out.
- Avoid personal attacks and excessive praise. Neither “Mill was obviously a bad person who didn’t care about morality at all” nor “Kant is the greatest philosopher of all time” adds to our understanding of Mill’s or Kant’s arguments.
- Avoid grandiose introductions and conclusions. Your instructor is not likely to appreciate introductions that start with sentences like “Since the dawn of time, human beings have wondered about morality.” Your introduction can place your issue in context, explain why it’s philosophically important, and perhaps preview the structure of your paper or argument. Ask your instructor for further guidance about introductions and conclusions.
- Stay focused. You may be asked to concentrate closely on a small piece of text or a very particular question; if so, stick to it, rather than writing a general report on a “topic.”
- Be careful about appealing to faith, authority, or tradition. While you may believe something because it is a part of your religion, because someone you trust told you about it, or because it is the way things have always been done, be careful about basing your arguments or objections on these sorts of foundations. Remember that your reader may not share your assumptions and beliefs, and try to construct your argument so that it will be persuasive even to someone who is quite different from you.
- Be careful about definitions. Rather than breaking out Webster’s Dictionary, concentrate on the definitions the philosophers you are reading have carefully constructed for the terms they are using. Defining terms is an important part of all philosophical work, and part of your job in writing a philosophy paper will often be thinking about how different people have defined a term.
- Consider reading the Writing Center’s handout on fallacies. Fallacies are common errors in arguments; knowing about them may help you critique philosophers’ arguments and make stronger arguments yourself.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 2.5 License.
You may reproduce it for non-commercial use if you use the entire handout (just click print) and attribute the source: The Writing Center, University of North Carolina at Chapel Hill
If you enjoy using our handouts, we appreciate contributions of acknowledgement. |
In order to understand infrared, it is important to understand something about light. The human eye can detect only a tiny part of the electromagnetic spectrum, called visible light. But there are other forms of light around us such as ultraviolet and infrared. Infrared (IR) light is also a very small part of the entire electromagnetic spectrum and requires a specific device or technology to see it with our eyes.
The 3 Categories of Infrared Light
Infrared light can be split into three categories: near-infrared (near-IR), mid-infrared (mid-IR) and thermal-infrared (thermal-IR). The key difference between thermal-IR and the other two is that thermal-IR is emitted by an object rather than reflected off it. Infrared imaging works in two different ways depending on the device or technology used: image enhancement and thermal imaging.
Image enhancement is what most people think of when you talk about night vision. This technology works by collecting tiny amounts of visible light, including the lower portion of the infrared light spectrum. This light, too dim for our eyes to detect on its own, is amplified by the night vision device.
How Thermal Imaging Works
Thermal imaging works by capturing the upper portion of the infrared light spectrum, which is emitted as heat by objects. Warm objects, such as human bodies, emit more of this light than cooler objects like trees or buildings. Thermal imaging devices capture this heat and translate it into an image on a monitor. When viewed in grayscale, hotter things appear white and cooler things appear black (a small illustrative sketch of this mapping follows the list below). A thermal imaging device transforms thermal energy into visible light in five basic steps:
- A special lens focuses the infrared light emitted by all of the objects in view.
- Infrared detectors are then used to scan this focused radiation. The detectors create what is called a thermogram or temperature map.
- The thermogram is then translated into electric impulses.
- The electric impulses are then sent to a signal-processing unit, a tiny chip embedded on a circuit board, where they are translated into usable display data.
- Once translated, the signal-processing unit sends the data to the display where it then becomes visible to the viewer.
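The grayscale mapping described above can be pictured with a short, illustrative script. This is only a sketch of the idea, not a description of how any particular camera's signal-processing chip works; it assumes NumPy is available, and the temperature values, array shape and 0-255 grayscale range are hypothetical stand-ins.

```python
import numpy as np

def thermogram_to_grayscale(temps_c):
    """Map a 2D array of temperatures (deg C) to an 8-bit grayscale image.

    Hotter pixels map toward white (255), cooler pixels toward black (0),
    mimicking the white-hot display convention described above.
    """
    temps = np.asarray(temps_c, dtype=float)
    t_min, t_max = temps.min(), temps.max()
    if t_max == t_min:                      # avoid division by zero on a flat scene
        return np.zeros_like(temps, dtype=np.uint8)
    normalized = (temps - t_min) / (t_max - t_min)   # 0.0 (coolest) .. 1.0 (hottest)
    return (normalized * 255).astype(np.uint8)

# Hypothetical 2x3 scene: a warm body (about 37 C) against cooler background objects
scene = [[20.0, 21.0, 37.0],
         [19.5, 36.5, 22.0]]
print(thermogram_to_grayscale(scene))
```

Running it on the tiny example scene prints high values for the warm-body pixels and low values for the cooler background, matching the white-hot convention above.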
How Night Vision and Image Enhancement Work
The objective lens (1) of a night vision device collects ambient light (visible and near-IR) that is too dim to be seen with the naked eye and focuses it on the image intensifier (7). The power supply (4) for the image-intensifier tube receives power from two “AA” batteries. Inside the image intensifier, a photocathode (2) absorbs this light energy and converts it to electrons. These electrons are then drawn toward a phosphor screen (5). In 2nd and 3rd generation intensifiers, the electrons first pass through a micro-channel plate (3) that further multiplies them thousands of times. When this highly intensified electron image strikes the phosphor screen (5), it causes the screen to emit visible light. Since the phosphor screen emits this light in exactly the same pattern and contrast as collected by the objective lens, the bright nighttime image seen through the eyepiece (6) corresponds precisely to the observed scene. These phosphors create the green image on the screen that has come to characterize night vision.
Night Vision Generation 1
Typically uses an S-20 photocathode and electron acceleration to achieve gain. Night vision generation 1 devices perform best when ambient light (moonlight or starlight) or sufficient IR illumination is available. Geometric distortion (fish-eye effect) is inherent in all Gen 1 devices. Life span of a Gen 1 tube (image intensifier) is approximately 1500 hours of continuous operation.
Night Vision Generation 2
Usually uses an S-25 photocathode (with extended response into the red end of the electromagnetic spectrum) plus a microchannel plate to achieve gain. Night vision generation 2 devices provide better-than-satisfactory performance at low light levels and exhibit very low distortion. Life span of a Gen 2 tube is approximately 2500-3000 hours of continuous operation.
Night Vision Generation 3
The most advanced level of night vision technology, night vision generation 3 devices use a gallium arsenide (GaAs) photocathode (sensitive well into the near-infrared region of the electromagnetic spectrum) and a micro-channel plate for gain. The microchannel plate is also coated with an ion barrier film to prolong tube life. Gen 3 provides very good-to-excellent performance in extremely low light levels. Recent Military Specification quality tubes have no perceptible distortion. Life span of a Gen 3 tube is 10,000+ hours of continuous operation.
At present, US Armed Forces are issued night vision devices with expanded sensitivity into the deeper IR range. On a limited basis, these technologies are beginning to become commercially available to civilians.
JAGER PRO has access to this equipment to introduce infrared technology into the hog control and shooting industry.
Important JAGER PRO Product Note:
Manufacturer data sheets which guarantee a minimum resolution of 64 lp/mm are included with JAGER PRO night vision devices configured with Gen 3 US SELECT “A” image intensifiers.
Users have difficult choices to make among Night Vision Generations of technology (Gen 1, Gen 2 or Gen 3) or among competing options within a given generation.
Evaluation of night vision equipment revolves around four major areas of consideration:
- Image Quality: Clarity of a night vision device image under varying light conditions. Performance is a product of image intensifier photosensitivity, signal-to-noise ratio, system gain and resolution.
- Ease of Use: Issues such as ease of operation, size, weight, technique of employment and use of necessary or optional accessories are critical.
- Application: Selecting the right night vision device for the right application. Important considerations are versatility, adaptability, field of view, magnification, weather resistance and ruggedness of the system.
- Overall Cost of Ownership: Users should consider such issues as optional accessories, expected tube life, warranty coverage, ease and likelihood of repair, susceptibility to bright light exposure and availability of batteries.
Contact JAGER PRO for assistance in evaluating night vision equipment.
Image Tube Grades
When buying night vision equipment, you should be advised of the different grades of image tubes. You should not buy a 2nd or 3rd Generation night vision device without knowing the grade and resolution of the image tube. The lower the grade, the lower the resolution and/or the greater the blemishes.
Black spots are cosmetic blemishes in the image intensifier which do not affect the performance or reliability of a night vision device. Some number of spots of varying sizes is inherent in the manufacturing process.
2D – Gen 2 US Image Intensifier – Minimum resolution 28 lp/mm (32 lp/mm typical); noticeable blemishes on screen.
2ST – Gen 2 Standard US Image Intensifier – Minimum resolution 28 lp/mm.
2MS – Gen 2 Military Spec US Image Intensifier – Minimum resolution 28-38 lp/mm (32 lp/mm typical); Mil-Spec, comes with tube data sheet.
2HD – Gen 2 US Image Intensifier – Minimum resolution 51-70 lp/mm; above Mil-Spec, comes with tube data sheet.
3ST – Gen 3 Standard US Image Intensifier – Minimum resolution 51-64 lp/mm.
3A – Gen 3 Advanced US Image Intensifier – Minimum resolution 64-72 lp/mm; comes with tube data sheet.
Thermal Terminology (A-E)
Automatic Brightness Control (ABC): An electronic feature that automatically reduces voltages to the micro-channel plate to keep the image intensifier’s brightness within optimal limits and protect the tube.
The effect of this can be seen when rapidly changing from low-light to high-light conditions; the image gets brighter and then, after a momentary delay, suddenly dims to a constant level.
Auto-Gated Power Supply: When the power supply is “auto-gated,” the system is turning itself on and off at a very rapid rate. This, combined with a thin film attached to the micro-channel plate (an ion barrier), reduces blooming. While blooming can be noticeably less on systems with a thin film layer, systems with thicker film layers can be perfectly acceptable depending on the end user’s application. Deciding which night vision goggle is better should not be based solely on blooming.
Black Spots: Common blemishes in the image intensifier of the NVD; they can also be dirt or debris between the lenses of the NVG. Black spots in the image intensifier do not affect the performance or reliability of a night vision device and are inherent in the manufacturing processes. Every night vision image intensifier tube is different.
Bright Spots: Defects in the image area produced by the NVG. This condition is caused by a flaw in the film on the micro-channel plate. A bright spot is a small, non-uniform, bright area that may flicker or appear constant. Bright spots usually go away when the light is blocked out and are cosmetic blemishes that are signal induced.
Biocular: Viewing a single image source with both eyes.
Binocular: Viewing a scene through two channels, i.e. one channel per eye.
Blooming: Loss of the entire night vision image, parts of it, or small parts of it, due to intensifier tube overloading by a bright light source. Also known as a “halo” effect, since the viewer sees a halo around visible light sources. When such a bright light source comes into the night vision device’s view, the entire night vision scene, or parts of it, becomes much brighter, “whiting out” objects within the field of view. Blooming is common in Generation 0 and 1 devices.
Bright Source Protection (BSP): An electronic function that reduces the voltage to the photocathode when the night vision device is exposed to bright light sources such as room lights or car lights. BSP protects the image tube from damage and enhances its life; however, it also has the effect of lowering resolution while functioning.
Boresight: The alignment of a weapon aiming device to the bore of the weapon. See also Zeroing.
A standard still and video camera lens thread size for mounting to the body of a camera. Usually 1/2″ or 3/4″ in diameter.
A term used to describe image tube quality, testing and inspection done by the original equipment manufacturer (OEM).
Chicken Wire: An irregular pattern of dark thin lines in the field of view, either throughout the image area or in parts of it. Under the worst-case condition, these lines will form hexagonal or square wave-shaped lines.
Daylight Training Filter: Usually made of soft plastic or rubber with a pinhole that allows a small amount of light to enter the objective lens of a night vision device. This should be used for training purposes only and is not recommended for extended periods.
Daylight Filter: A glass filter assembly designed to fit over the objective lens of a night vision device. The filter reduces light input to a safe (night-time) level, allowing safe extended daytime use of the night vision device.
Diopter: The unit of measure used to define eye correction or the refractive power of a lens. Usually, adjustments to an optical eyepiece accommodate for differences in individual eyesight. Most ITT systems provide a +2 to -6 diopter range.
There are two types of distortion found in night vision systems. One type is caused by the design of the optics, or image intensifier tube, and is classical optical distortion. The other type is associated with manufacturing flaws in the fiber optics used in the image intensifier tube.
Classical Optical Distortion:
Classical optical distortion occurs when the design of the optics or image intensifier tube causes straight lines at the edge of the field of view to curve inward or outward. This curving of straight lines at the edge will cause a square grid pattern to start to look like a pincushion or barrel. This distortion is the same for all systems with the same model number. Good optical design normally makes this distortion so low that the typical user will not see the curving of the lines.
Fiber Optics Manufacturing Distortions:
Two types of fiber optics distortions are most significant to night vision devices: S-distortion and shear distortion:
- S-Distortion: Results from the twisting operation in manufacturing fiber-optic inverters. Usually S-distortion is very small and is difficult to detect with the unaided eye.
- Shear Distortion: Can occur in any image tube that uses fiber-optic bundles for the phosphor screen. It appears as a cleavage or dislocation in a straight line viewed in the image area, as though the line were “sheared”.
Equivalent Background Illumination (EBI): The amount of light you see through a night vision device when an image tube is turned on but no light is on the photocathode. EBI is affected by temperature; the warmer the night vision device, the brighter the background illumination. EBI is measured in lumens per square centimeter (lm/cm²); the lower the value, the better. The EBI level determines the lowest light level at which an image can be detected. Below this light level, objects will be masked by the EBI.
Edge Glow: A defect in the image area of the NVG. Edge glow is a bright area (sometimes sparkling) in the outer portion of the viewing area.
Emission Point: A steady or fluctuating pinpoint of bright light in the image area that does not go away when all light is blocked from the objective lens. The position of an emission point within the field of view will not move. If an emission point disappears or is only faintly visible when viewing under brighter nighttime conditions, it is not indicative of a problem. If the emission point remains bright under all lighting conditions, the system needs to be repaired. Do not confuse an emission point with a point light source in the scene being viewed.
Eye Relief: The distance a person’s eyes must be from the last element of an eyepiece in order to achieve the optimal image area.
Thermal Terminology (F-G)
Field of View (FOV): The diameter of the imaged area when viewed through an optic.
Figure of Merit (FOM): Image intensifier tube specification designation, calculated as resolution (line pairs per mm) multiplied by signal-to-noise ratio. For example, a tube resolving 64 lp/mm with a signal-to-noise ratio of 25 has a FOM of 1,600.
Fixed-Pattern Noise (Honeycomb): A faint hexagonal (honeycomb) pattern throughout the image area that most often occurs under high-light conditions. This pattern is inherent in the structure of the micro-channel plate and can be seen in virtually all Gen 2 and Gen 3 systems if the light level is high enough.
FootLambert (fL): A unit of brightness equal to one foot candle at a distance of one foot.
Gain: Also called brightness gain or luminance gain. This is the number of times a night vision device amplifies light input. It is usually measured as tube gain and system gain. Tube gain is measured as the light output (in footLamberts) divided by the light input (in foot-candles). This figure is usually expressed in values of tens of thousands. If tube gain is pushed too high, the tube will be “noisier” and the signal-to-noise ratio may go down. U.S. military Gen 3 image tubes operate at gains of between 20,000 and 45,000. On the other hand, system gain is measured as the light output (fL) divided by the light input (also in fL) and is what the user actually sees. System gain is usually seen in the thousands; U.S. military systems operate at 2,000 to 3,000. In any night vision system, the tube gain is reduced by the system’s lenses and is affected by the quality of the optics or any filters. Therefore, system gain is a more important measurement to the user.
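Since tube gain and system gain divide by different input units (foot-candles versus footLamberts), a side-by-side calculation can make the distinction concrete. The sketch below uses invented readings chosen only to land inside the ranges quoted above; it illustrates the definitions rather than describing any real tube.

```python
def tube_gain(output_fl, input_fc):
    """Tube gain: light output in footLamberts divided by light input in foot-candles."""
    return output_fl / input_fc

def system_gain(output_fl, input_fl):
    """System gain: light output in footLamberts divided by light input in footLamberts."""
    return output_fl / input_fl

# Hypothetical readings, for illustration only
print(tube_gain(output_fl=3.0, input_fc=0.0001))     # 30,000 -- inside the 20,000-45,000 Gen 3 tube-gain range
print(system_gain(output_fl=0.25, input_fl=0.0001))  # 2,500  -- inside the 2,000-3,000 system-gain range quoted
```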
Gallium Arsenide (GaAs): The semiconductor material used in manufacturing the Gen 3 photocathode. GaAs photocathodes have a very high photosensitivity in the spectral region of about 450 to 950 nanometers (visible and near-infrared region).
Two technologies are referenced as night vision: image intensification and thermal imaging (see definitions). Because of cost, and because image intensifier scenes are easier to interpret than thermal (thermal images show targets as black or white – depending upon temperature – making it more difficult to recognize objects), the most widely used night vision aid in law enforcement is image intensification (I²) equipment. To date, there have been four generations of I² devices, identified as Gen 0, Gen 1, Gen 2, and Gen 3. Developmental laboratory work is on-going, and the U.S. military may designate the resulting technology as Gen 4. However, no definition for Gen 4 presently exists.
The first night vision aids (also called Generation Zero or Gen 0) were sniper scopes that came into use during World War II and the Korean conflict. These were not true image intensifiers, but rather image converters, which required a source of invisible infrared (IR) light mounted on or near the device to illuminate the target area.
The “starlight scopes” developed during the early 1960s for use in Vietnam were the first generation (Gen 1) of image intensifier devices. In Gen 1 night vision units, three image intensifiers were connected in a series, making the units longer and heavier than future night vision units would be. Gen 1 equipment produced an image that was clear in the center of the field of view but suffered from large optical distortion around the periphery. Gen 1 equipment was also subject to “blooming”. Most low-cost imported night vision units use Gen 1 technology, though often under the guise of a higher “generation”.
The development of the micro-channel plate, or MCP, in the late 1960s brought on the second generation (Gen 2) in I² night vision. The MCP accelerated and multiplied electrons which provided the gain previously supplied by coupling three image intensifiers together (Gen 1). The introduction of the MCP significantly reduced size and weight for image intensifier tubes, enabling design of smaller night vision goggles and hand-held devices. The MCP also provided much more robust operation when bright lights entered the field of view. The Gen 2 tubes used the same tri-alkali photocathode as the Gen 1 devices. This generation was implemented to reflect the change in how the light was amplified (MCP versus three-stage coupling).
Third-generation (Gen 3) image intensifiers were developed in the mid-1970s and became available during the early 1980s. Gen 3 introduced two major technological improvements: the gallium arsenide (GaAs) photocathode and the ion barrier coating to the microchannel plate. The GaAs photocathode increases the tube’s sensitivity to light from the near-infrared range of the spectrum, enables it to function at greater detection distances, and improves system performance under low-light conditions. Application of a metal-oxide ion barrier to the MCP increases the life of the image tube. The operational life of Gen 3 tubes is in excess of 10,000 hours, compared to that of Gen 2 tubes which is about 2,000 to 4,000 hours. This generation was implemented to reflect the change in the photocathode (tri-alkali replaced with GaAs).
Gated Filmless Technology
Gated filmless technology was created in 1998, but without the reliability required for military delivery. By removing the ion barrier film and “gating” the system power supply, the technology demonstrated substantial increases in target detection range and resolution. In the process, however, ITT discovered that the same performance results could be achieved using a Generation 3 tube with a thinner ion barrier film and an auto-gated power supply, without sacrificing the reliability and life span of the intensifier tube.
Thermal Terminology (H-R)
Highlight Shut-Off: An image intensifier protection feature incorporating a sensor, microprocessor and circuit breaker. This feature will turn the system off during periods of extremely bright light conditions.
Interpupillary Adjustment: The adjustment of binocular optics to match the distance between the user’s eyes (pupils), accounting for differences between individuals. Improperly adjusted binoculars will display a scene that appears egg-shaped or as a reclining figure-8.
Interpupillary Distance (IPD): The distance between the user’s pupils (eyeball centres). The 95th percentile of US military personnel falls within the 55 to 72 mm range of IPD.
IR Illuminator: Many night vision devices incorporate a built-in infrared (IR) diode that emits invisible light, or an illuminator can be mounted on the device as a separate component. IR light cannot be seen by the unaided eye; therefore, a night vision device is necessary to see it. IR illuminators provide supplemental infrared illumination of an appropriate wavelength, typically in a range of wavelengths (e.g. 730 nm, 830 nm, 920 nm). They eliminate the variability of available ambient light and also allow the observer to illuminate only specific areas of interest while eliminating shadows and enhancing image contrast.
Regardless of generation, all image intensifiers require some light to function. In situations where ambient light is insufficient, infrared (IR) illuminators facilitate night operations by providing an independent source of light. Since IR illuminators operate in the near-infrared range of 700 to 900 nanometers (nm), they are invisible to the naked eye.
IR Laser: High-power devices providing long-range illumination capability. Ranges of several thousand meters are common. Most are not eye-safe and are restricted in use. Each IR laser should be marked with a warning label. Consult FDA CFR Title 21 for specific details and restrictions.
Image Intensifier: Collects and intensifies the available light in the visible and near-infrared spectrum, offering a clear, distinguishable image under low-light conditions.
IR (Infrared): Area outside the visible spectrum that cannot be seen by the human eye (between 700 nanometers and 1 millimeter). The visible spectrum is between 400 and 700 nanometers.
Lp/mm (Line Pairs per Millimeter): Units used to measure image intensifier resolution. Usually determined from a 1951 U.S. Air Force Resolving Power Test Target. The target is a series of different-sized patterns composed of three horizontal and three vertical lines. A user must be able to distinguish all the horizontal and vertical lines and the spaces between them. Typically, the higher the line pair, the better the image resolution. Generation 3 tubes generally have a range of 64-72 lp/mm, although line pair measurement does not indicate the generation of the tube. Some Generation 2+ tubes measure 28-38 lp/mm, while a Generation 1+ tube may measure around 40 lp/mm.
Lumen: Denotes the photons perceptible by the human eye in one second.
Monocular: A single channel optical device.
mA/W (Milliamperes per Watt): The measure of electrical current (mA) produced by a photocathode when exposed to a specified wavelength of light at a given radiant power (watt).
Microchannel Plate (MCP): A metal-coated glass disk that multiplies the electrons produced by the photocathode. An MCP is found only in Gen 2 or Gen 3 systems. MCPs eliminate the distortion characteristic of Gen 0 and Gen 1 systems. The number of holes (channels) in an MCP is a major factor in determining resolution. ITT Industries’ MCPs have 10.6 million holes or channels compared to the previous standard of 3.14 million.
Near-Infrared: The shortest wavelengths of the infrared region, nominally 750 to 2,500 nanometers.
Photocathode: The input surface of an image intensifier tube that absorbs light energy (photons) and in turn releases electrical energy (electrons) in the form of an image. The type of material used is a distinguishing characteristic of the different generations.
Photocathode Sensitivity: A measure of how well the image intensifier tube converts light into an electronic signal so it can be amplified, measured in microamperes per lumen (µA/lm). This criterion specifies the number of electrons released by the photocathode (PC). PC response is always measured in isolation with no amplification stage or ion barrier (film). Therefore, tube data sheets (which always carry this “raw” figure) do not reflect the fact that over 50% of those electrons are lost in the ion barrier. While the photoresponse of most of the latest 3rd-generation image intensifiers is around 1800 µA/lm (2000 µA/lm for the latest Omni VI Pinnacle tubes), the number actually delivered is more like 900 µA/lm.
Resolution: The ability of an image intensifier or night vision system to distinguish between objects close together. Image intensifier resolution is measured in line pairs per millimetre (lp/mm), while system resolution is measured in cycles per milliradian. For any particular night vision system, the image intensifier resolution remains constant while the system resolution can be affected by altering the objective or eyepiece optics or by adding magnification or relay lenses. Often the resolution of the same night vision device is very different when measured at the centre of the image and at the periphery; this is especially important for devices selected for photography or video, where resolution across the entire image matters.
Reticle: An adjustable aiming point or pattern (i.e. crosshair) located within an optical weapon sight.
Thermal Terminology (S-Z)
Signal-to-Noise Ratio (SNR): A measure of the light signal reaching the eye divided by the perceived noise as seen by the eye. A tube’s SNR determines the low-light resolution of the image tube; therefore, the higher the SNR, the better the ability of the tube to resolve objects with good contrast under low-light conditions. Because SNR is directly related to the photocathode’s sensitivity and also accounts for phosphor efficiency and MCP operating voltage, it is the best single indicator of an image intensifier’s performance.
Scintillation: Also known as electronic noise. A faint, random, sparkling effect throughout the image area. Scintillation is a normal characteristic of micro-channel plate image intensifiers and is more pronounced under low-light-level conditions.
Screen: The image tube output surface that produces the viewable image. Phosphor (P) is used on the inside surface of the screen to produce the glow, thus producing the picture. Different phosphors are used in image intensifier tubes, depending on manufacturer and tube generation. P-20 phosphor is used in the systems offered in this catalogue.
When two views or photographs are taken through one device. One view/photograph represents the left eye, and the other the right eye. When the two photographs are viewed in a stereoscopic apparatus, they combine to create a single image with depth and relief. Sometimes this gives two perspectives. However, it is usually not an issue because the object of focus is far enough away for the perspectives to blend into one.
System Gain: Equal to tube gain minus losses induced by system components such as lenses, beam splitters and filters.
Variable Gain Control: Allows the user to manually adjust the gain (basically like a dimmer control) in varying light conditions. This feature sets the PVS-14 apart from other popular monoculars that do not offer it.
Weaver Rail: A US weapon mounting system used for attaching sighting devices to weapons. A Weaver rail is a weapon-unique notched metal rail designed to receive a mating throw-lever or Weaver squeezer attached to the sighting device.
Zeroing: A method of bore sighting an aiming device to a weapon and adjusting to compensate for projectile characteristics at known distances. |
Summary
Students practice human-centered design by imagining, designing and prototyping a product to improve classroom accessibility for the visually impaired. To begin, they wear low-vision simulation goggles (or blindfolds) and walk with canes to navigate through a classroom in order to experience what it feels like to be visually impaired. Student teams follow the steps of the engineering design process to formulate their ideas, draw them by hand and using free, online Tinkercad software, and then 3D-print (or construct with foam core board and hot glue) a 1:20-scale model of the classroom that includes the product idea and selected furniture items. Teams use a morphological chart and an evaluation matrix to quantitatively compare and evaluate possible design solutions, narrowing their ideas into one final solution to pursue. To conclude, teams make posters that summarize their projects.
Human-centered design is an approach used by engineers and industrial designers to develop solutions to problems faced by a specific segment of the population. The approach requires the designers to develop empathy by deeply understanding the essence of the problem as well as becoming familiar with the particular behaviors and psychology of the individuals affected by the problem. By designing for the visually impaired, students get the experience of having a “real customer” and understand how their design solutions could have a significant impact on other people’s lives.
In addition, students make scale models of their design solutions, which is great practice in developing prototypes—a typical step for engineers who are troubleshooting early designs and/or presenting their design ideas to others.
Each TeachEngineering lesson or activity is correlated to one or more K-12 science, technology, engineering or math (STEM) educational standards.
All 100,000+ K-12 STEM standards covered in TeachEngineering are collected, maintained and packaged by the Achievement Standards Network (ASN), a project of D2L (www.achievementstandards.org).
In the ASN, standards are hierarchically structured: first by source; e.g., by state; within source by type; e.g., science or mathematics; within type by subtype, then by grade, etc.
- Reason quantitatively and use units to solve problems. (Grades 9 - 12)
- Apply geometric concepts in modeling situations. (Grades 9 - 12)
- Students will develop abilities to apply the design process. (Grades K - 12)
- Make two-dimensional and three-dimensional representations of the designed solution. (Grades 6 - 8)
- The design process includes defining a problem, brainstorming, researching and generating ideas, identifying criteria and specifying constraints, exploring possibilities, selecting an approach, developing a design proposal, making a model or prototype, testing and evaluating the design using specifications, refining the design, creating or making it, and communicating processes and results. (Grades 9 - 12)
- Engineering design is influenced by personal characteristics, such as creativity, resourcefulness, and the ability to visualize and think abstractly. (Grades 9 - 12)
- A prototype is a working model used to test a design concept by making actual observations and necessary adjustments. (Grades 9 - 12)
- Refine a design by using prototypes and modeling to ensure quality, efficiency, and productivity of the final product. (Grades 9 - 12)
- Evaluate final solutions and communicate observation, processes, and results of the entire design process, using verbal, graphic, quantitative, virtual, and written means, in addition to three-dimensional models. (Grades 9 - 12)
A familiarity with the concepts of scale and measurement, technical drawings (isometric and orthographic), and the steps of the engineering design process. Additionally, basic experience using 3D CAD software to make simple geometric shapes and at least one class period completing the introductory Tinkercad tutorial at www.tinkercad.com.
After this activity, students should be able to:
- Follow the steps of the engineering design process to design a solution to a problem that impacts people.
- Collect measurements of the dimensions of an object and then accurately draw the object using CAD software.
- Print an object to scale on a 3D printer.
- Construct a 1:20 scale model of a room.
Each group needs:
- 1 stack of Post-it® Notes, ~60 sheets
- Evaluation Matrix Template, one per student
- Morphological Chart Template, one per student
- colored pencils or markers
- orthographic paper
- isometric paper
- drawing paper
- 1 measuring tape
- 1 foam core board sheet, 20 x 30-inches
- 1 utility knife
For the entire class to share:
- low-vision simulation goggles, such as those available at http://www.lowvisionsimulators.com/; alternatively, use blindfolds
- (optional) white canes, such as the “canes for the blind” for $20+ at http://www.maxiaids.com/; or borrow a cane
- metric rulers
- hot glue guns and hot glue sticks
- 3D printer and ABS filament, such as the FlashForge Creator Pro and http://www.flashforge-usa.com/shop/filaments/abs-filament.html; alternatively, provide foam core board and hot glue for students to cut, construct and paint their product prototypes and scaled furniture items
- computers with access to CAD software such as Tinkercad, a free 3D design tool and easy-to-learn online app available at www.tinkercad.com/; requires an email address to open a free account; must be 13 years of age or older
- (optional) computer with projector and Internet access, to show students images that depict how various types of visually impaired people see; such as Figure 3 and the images provided at the National Eye Institute’s Eye Diseases and Vision Disorders Flickr album
Imagine you have a visual impairment. What types of things in your daily life might be more difficult to do? What types of tools might help you? Now picture yourself moving to a new city and attending a new high school. What would be the hardest thing to do when you arrived at your new school? What would you find difficult to do in your new classroom?
In this activity, you are going to work your way through the steps of the engineering design process in order to design a product that makes a classroom more accessible for a new student who is visually impaired. Think about the activities you usually do in your classroom. What type of things would be more difficult to do if you were visually impaired? What type of product would help you accomplish some of those routines? (Some possible ideas include: Textured floors, multi-outlet charging stations, desk organizers, bright colors to find stations, redesigned supply closets, etc.)
In your groups, you will follow the steps of the engineering design process (see Figure 1). Do you remember the steps? The steps are: 1) ask: identify the need and constraints, 2) research the problem, 3) imagine: develop possible solutions, 4) plan: select a promising solution, 5) create: build a prototype, 6) test and evaluate prototype, and 7) improve: redesign as needed.
In order to help us better understand the problem, we are going to practice what engineers and designers often do themselves when they really want to more deeply understand a problem from a user’s point of view—we call it human-centered design. We do this by closely observing how the target audience interacts with a product or environment, conducting interviews to learn about the users’ experiences with a product, and using the proposed product ourselves in its intended environment so we have our own first-hand experiences. In this latter approach, we are going to experience what it would be like to have a visual impairment by wearing low-vision simulation goggles (or blindfolds) as we navigate through the classroom. The lenses in the goggles have been coated to simulate low-vision conditions such as tunnel vision, macular degeneration, cataracts or glaucoma.
It is a common misconception that blind people cannot see anything at all. In reality, only 18% of people who are visually impaired are classified as being totally blind, while most of them can differentiate between light and dark (AFB 2016). Most blind people have a condition that limits their vision below a certain threshold and therefore have “low vision” (see Figure 2). Eye conditions like glaucoma, cataracts and retinitis pigmentosa can occur due to health disorders, eye injuries or birth defects. (If possible, use a computer/projector with Internet connection to show the class the Figure 3 National Eye Institute images at the URL in the Materials List.)
After you develop your design solution, you will 3D print your design and place it in a scale model of the classroom—also built by you! Your scale model will be 1/20th the size of a real room, so you must carefully measure the room dimensions along with a few selected pieces of furniture. The outer shell of the room—its walls—will be cut from foam core board sheets. The furniture items and your final design—of a product that helps a visually impaired person in a classroom—will be a 3D-printed prototype (or, alternatively, made of cut, glued and painted foam core board) to demonstrate it to others.
20/20 vision: A fraction from the Snellen visual acuity system used in the U.S. to characterize “normal” vision. The 20 on top corresponds to a person standing 20 feet away from a chart as s/he tries to identify the smallest row of letters on the chart. The smallest letters on the chart are what a person with “normal vision” can see at a distance of 20 feet (the 20 on the bottom).
brainstorming: A group problem solving method in which all members quickly and spontaneously contribute as many ideas as possible.
design: The development of a well thought-out plan to solve a problem. Usually the plan is well documented and includes graphics and an evaluation of possible ideas.
engineer: A person who applies his/her understanding of science and math to design solutions for specific problems that impact humanity and our world.
engineering design process: A series of decision-making steps used by engineering teams to guide them as they develop new solutions, products or systems. The process is iterative and may begin at, and return to, any step. See Figure 1.
human-centered design: A design process in which significance is placed on the user, and human perspective is considered at every step in the process.
legal blindness: Refers to a person whose vision is characterized as 20/200 when wearing the best possible corrective lens or having a visual field no greater than 20 degrees. In the first case, the smallest letters identified on a chart at a distance of 20 feet are the same size as what a person with “normal vision” can see at a distance of 200 feet. The “legal” refers to meeting a government-determined level of visual impairment that makes a person eligible for benefits.
model: A representation of something for imitation, comparison or analysis, sometimes on a different scale.
prototype: A first attempt or early model of a new product or creation. May be revised many times.
vision impairment: A limitation of the functions of the eye that affects sharpness, clarity or normal range of vision.
Before the Activity
- Gather materials and make copies of the Evaluation Matrix Template and Morphological Chart Template.
- As an alternative, or in addition to redesigning a classroom for the visually impaired, you may want to permit groups to do the same for dorm rooms, bathrooms, offices, etc.
- For the low-vision simulation on Day 1, it is recommended that you arrange for students to navigate a classroom that is unfamiliar to them.
- If possible, arrange for students to interview at least one visually impaired person to better understand the problem and issues they face, as well as to get feedback on their product ideas. Suggest students take advantage of the resource of a visually impaired person at numerous stages of the process.
- To be ready for Days 4-5, prepare computers with the online version of Tinkercad software. If students have never used Tinkercad, then devote at least one extra class period to completing its online introductory tutorial in advance.
With the Students
- Day 1: Present to the class the Introduction/Motivation section, which includes setting up the engineering challenge and showing some National Institutes of Health images.
- Divide the class into groups of four students each.
- Direct the groups to perform the low-vision simulation with goggles (or blindfolds) and walking canes. Give students enough time to each walk around with the goggles and canes in a new environment. Make sure they get a sense of walking through entry ways, opening doors, turning on light switches, navigating desks and tables, and doing other tasks they typically do in their own classroom.
- Provide measuring tapes for groups to measure the room dimensions as well as selected furniture items. They will use the dimensions to draw the items on paper and in Tinkercad in preparation for making 3D-printed scale models. Suggest that they measure in centimeters to make it easier to convert the measurements for scaled drawings.
- Administer the pre-activity reflection assessment as a homework writing assignment. Using the prompt provided in the Assessment section, ask students to describe their experiences during the low vision simulation.
- Day 2: Give each student a stack of Post-it® Notes and direct them to brainstorm in their groups to come up with possible ideas for products that would help visually impaired students gain more accessibility in the classroom (or chosen room). Give them 30-40 minutes.
- Require them to write down 60 possible ideas on the Post-it® Notes. Example ideas include: Textured floors, multi-outlet charging stations, desk organizers, shower caddies, etc.
- As necessary, remind them of how brainstorming works: At this stage, there are no bad ideas! Listen to every idea and record it. Even wild and crazy ideas. Build off others’ ideas.
- Often, students run out of ideas after 15 or 20. When that occurs, interject some ideas to guide them to new avenues of thought that they have not yet considered. What difficulties did they encounter during the low-vision simulation exercise?
- Have students categorize their ideas into common concepts or activities such as navigation, organization, showering, morning routines, etc.
- Day 3: Hand out the Morphological Chart Template and explain that a morphological chart is a visual tool used to quickly generate alternative solutions given key elements or components. Direct groups to each pick a favorite concept or activity and then expand it into three fully developed solutions using the morph chart. In the first column, agree on and write down the five key attributes that their idea must include. For example: color contrast, texture, body layout, unit shape, and location. Then they fill in the rest of the chart by drawing five pictures of their creative ideas for each attribute. The pictures must be in color and contain no words. Then, students use a circle to highlight one attribute from each row representing the team’s first proposed solution; then students use a triangle and square to identify the team’s second and third proposed solutions. Once the morph chart is completed, direct each group to write down three proposed concepts by including one idea from each attribute.
- Next, hand out the Evaluation Matrix Template and explain that an evaluation matrix is a weighted objectives method that guides teams to converge on one solution based on scores assigned to each design objective. Direct groups to use the matrix to narrow their three proposed concept ideas into one final solution per group. In the first column, choose five key functions that their idea must try to meet. For example: portability, safety, accessibility, intuitive to the touch, and simplifies a common task. Then assign a weight percentage to each function to reflect how important they view it; the percentages must add up to 100%. Then, agree on evaluation scores from 1-10—with 10 as the highest—to rate how well each concept idea meets the five functions. Once the chart is filled in, add up the evaluation scores. The concept idea with the highest number becomes the final solution the group will pursue to develop as a prototype and include in its scale model.
- For homework, assign students to each sketch their team’s proposed design solution on paper as orthographic and isometric drawings. Require the drawings to clearly indicate a scale in centimeters.
- Days 4-5: In a computer lab, have students use the online version of Tinkercad software to draw their proposed design solutions and selected furniture pieces. Require each group member to be responsible for completing at least one of the drawings. Remind students to save their completed files with STL extensions so they can be 3D printed. To keep everything to scale, printed items must have dimensions that are 1/20th of their real-world sizes. If a 3D printer is not available, direct groups to construct their model pieces from foam core board and hot glue.
- Days 6-7: Direct students to:
- Cut pieces of foam board to scale (1/20th of the original dimensions) to make a classroom shell, leaving the ceiling and one wall missing so the room can be viewed easily from the outside (see Figure 4).
- Use hot glue to join together the walls. After the outer walls dry, use hot glue to adhere the proposed solution and all furniture items to the walls and floor. Figure 4 shows an example finished model.
- As groups finish their scale models, have them critique each other’s design solutions and then write summary self-evaluations of their design solutions, including future improvements to make.
- Days 8-10: To conclude, have teams prepare poster presentations as described in the Assessment section.
- Since low-vision goggles (or blindfolds) reduce visual acuity, make sure students hold the elbow of a group member when navigating through the classroom.
- Be cautious when using utility knives to cut foam core board since they are extremely sharp. Use a cutting board on the surface where you cut the material and make sure to position the blade down and away from yourself and other people.
- Hot glue guns get very hot and can burn skin so avoid touching the nozzle or hot glue.
Depending on the number of groups involved, the 3D-printing process can be very time consuming. Students tend to group all objects that need to be printed into one STL file, which creates problems as they try to print. Insist that students turn in STL files with only one object per file.
Make sure students verify the dimensions in their 3D-print (STL) files before turning them in; Tinkercad and most slicer programs work in millimeters. Suggest they double-check the scaled-down furniture dimensions by holding a metric ruler next to the walls of their classroom models.
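One quick way to catch scaling mistakes is a small conversion script from real-world centimeters to 1:20-scale millimeters (the units Tinkercad and most slicer software expect). This is only a minimal sketch; the furniture names and dimensions below are hypothetical examples, not measurements of any particular classroom.

```python
SCALE = 1 / 20          # 1:20 scale model
CM_TO_MM = 10           # Tinkercad/STL dimensions are typically in millimeters

def scaled_mm(real_cm):
    """Convert a real-world dimension in cm to its 1:20-scale size in mm."""
    return real_cm * CM_TO_MM * SCALE

# Hypothetical measurements (cm): width, depth, height
furniture = {
    "student desk": (60, 45, 75),
    "door": (90, 4, 210),
}

for name, dims in furniture.items():
    model_dims = tuple(round(scaled_mm(d), 1) for d in dims)
    print(f"{name}: real {dims} cm -> model {model_dims} mm")
```

For example, a 75 cm tall desk becomes a 37.5 mm tall model piece, which students can check directly with a metric ruler.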
Low-Vision Simulation Reflection: At the end of Day 1, as a homework assignment, have students each write at least 100 words to answer the following prompt. If possible, have them post their descriptions on an online discussion board and then respond to at least two other classmates’ posts.
- Describe your experience today wearing the low-vision simulation goggles and using a walking cane. What did you learn about dealing with a visual impairment? Did this activity help you rethink any misconceptions you might have held before today? Explain.
Activity Embedded Assessment
Morph Chart: A morphological chart is a visual tool used by designers to quickly generate alternative solutions given key elements or components (Cross 2008). To keep groups’ creative brainstorming process flowing, direct them to draw their ideas (no words permitted) to fill in the Morphological Chart Template, and then follow its instructions to narrow their many ideas into three possible solutions.
Evaluation Matrix: An evaluation matrix is a weighted objectives method that guides teams to converge on one solution based on scores assigned to each design objective (Cross 2008). Using the Evaluation Matrix Template, have each group discuss what functions are important to the design problem and then assign a weight of importance to each function. Then each group evaluates its designs based on the functions, giving scores from 1-10, with 10 being the highest.
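If groups want to double-check their arithmetic, the weighted-objectives calculation can be reproduced in a few lines. The functions, weights and scores below are invented for illustration; the only constraints carried over from the template are that the weights sum to 100% and scores run from 1 to 10, and multiplying each score by its weight is one common way to combine them.

```python
# Hypothetical design functions and importance weights (must sum to 1.0, i.e., 100%)
weights = {"portability": 0.15, "safety": 0.30, "accessibility": 0.25,
           "intuitive to the touch": 0.15, "simplifies a common task": 0.15}

# Hypothetical 1-10 scores for three proposed concepts
scores = {
    "textured floor path": {"portability": 3, "safety": 9, "accessibility": 8,
                            "intuitive to the touch": 9, "simplifies a common task": 7},
    "desk organizer":      {"portability": 8, "safety": 7, "accessibility": 6,
                            "intuitive to the touch": 7, "simplifies a common task": 8},
    "charging station":    {"portability": 5, "safety": 6, "accessibility": 7,
                            "intuitive to the touch": 6, "simplifies a common task": 9},
}

assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must total 100%

def weighted_total(concept_scores):
    """Sum of (weight x score) across all design functions."""
    return sum(weights[f] * s for f, s in concept_scores.items())

totals = {concept: weighted_total(s) for concept, s in scores.items()}
for concept, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{concept}: {total:.2f}")
print("Final solution to pursue:", max(totals, key=totals.get))
```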
Technical Drawings: Once each group converges on its chosen solution, have each team member individually draw on paper orthographic and isometric representations of the proposed solution. The following day, the group meets and decides how to proceed with its design solution. If a 3D printer is available, they draw it on Tinkercad, along with other key classroom furniture items.
Group Poster Presentations: Have teams prepare posters that detail how they used the design process to converge on their final solutions. Review their final models and posters to gauge their depth of comprehension of the activity subject matter and learning objectives.
- Remind them that posters need to tell a story. The poster content explains the team’s logic and steps towards creating a final solution for people who don’t know anything about it.
- Require groups to: explain what was learned through the low-vision simulation, walk the audience through the design process, and explain how the scale model helps to demonstrate the design in its environment.
- Suggested poster sub-title headings include: Define the Problem, Low-Vision Simulation Reflection, Brainstorming Ideas, Choosing a Solution, Prototype Development, Self-Evaluation, Future Improvements, and Conclusion.
- Also have teams display nearby their final constructed scale models that include their proposed product solutions.
Assign groups to each think about the many private and public spaces in their communities and brainstorm and/or research ways that the spaces could be redesigned to be safer and more accessible for the visually impaired. Then have students create scale models of their final designs and showcase them in their school’s display cases and/or share them on their school’s blog. See many ideas for environmental adaptations on the American Foundation for the Blind website (AFB 2017).
(AFB 2016) “Learning about Blindness.” American Foundation for the Blind. New York, NY. Accessed August 16, 2016. (Covers topics such as: What is blindness or low vision? How do I interact with a blind person? Myths about blindness.) http://www.afb.org/info/living-with-vision-loss/for-job-seekers/for-employers/visual-impairment-and-your-current-workforce/learning-about-blindness/12345
(AFB 2017) “Creating a Comfortable Environment for People with Low Vision.” American Foundation for the Blind. New York, NY. Accessed January 10, 2017. (Great suggestions for environmental adaptations/modifications that enhance functioning related to lighting, furniture, hazard elimination, use of color contrast, hallways and stairways, signs, telephones) http://www.afb.org/info/low-vision/living-with-low-vision/creating-a-comfortable-environment-for-people-with-low-vision/235
(Cross 2008) Cross, Nigel. Engineering Design Methods: Strategies for Product Design. Fourth edition. West Sussex, England: John Wiley & Sons Ltd, 2008.
Copyright© 2017 by Regents of the University of Colorado; original © 2016 The College of New Jersey
Supporting Program: Department of Technological Studies, School of Engineering, The College of New Jersey
This activity was made possible through a collaboration between the Department of Technological Studies and the Center for Complex and Sensory Disabilities at The College of New Jersey. The CCSD provided the low-vision simulation goggles, white canes and instructional support.
Last modified: February 17, 2017
Islamic Golden Age
The Islamic Golden Age was a period of cultural, economic and scientific flourishing in the history of Islam, traditionally dated from the 8th to the 14th century. This period is traditionally understood to have begun during the reign of the Abbasid caliph Harun al-Rashid (786 to 809) with the inauguration of the House of Wisdom in Baghdad, where scholars from various parts of the world with different cultural backgrounds were mandated to gather and translate all of the world's classical knowledge into Arabic and Persian. This period is traditionally said to have ended with the collapse of the Abbasid caliphate due to Mongol invasions and the Siege of Baghdad in 1258. A few scholars date the end of the golden age around 1350, while several modern historians and scholars place the end of the Islamic Golden Age as late as the end of the 15th or even the 16th century. (The medieval period of Islam is very similar, if not the same, with one source defining it as 900–1300 CE.)
History of the concepts
The metaphor of a golden age began to be applied in 19th-century literature about Islamic history, in the context of the western aesthetic fashion known as Orientalism. The author of a Handbook for Travelers in Syria and Palestine in 1868 observed that the most beautiful mosques of Damascus were "like Mohammedanism itself, now rapidly decaying" and relics of "the golden age of Islam".
There is no unambiguous definition of the term, and depending on whether it is used with a focus on cultural or on military achievement, it may be taken to refer to rather disparate time spans. Thus, one 19th century author would have it extend to the duration of the caliphate, or to "six and a half centuries", while another would have it end after only a few decades of Rashidun conquests, with the death of Umar and the First Fitna.
During the early 20th century, the term was used only occasionally, and often referred to the early military successes of the Rashidun caliphs. It was only in the second half of the 20th century that the term came to be used with any frequency, now mostly referring to the cultural flourishing of science and mathematics under the caliphates during the 9th to 11th centuries (between the establishment of organised scholarship in the House of Wisdom and the beginning of the crusades), but often extended to include part of the late 8th or the 12th to early 13th centuries. Definitions may still vary considerably. Equating the end of the golden age with the end of the caliphates is a convenient cut-off point based on a historical landmark, but it can be argued that Islamic culture had entered a gradual decline much earlier; thus, Khan (2003) identifies the proper golden age as being the two centuries between 750–950, arguing that the loss of territories that began under Harun al-Rashid worsened after the death of al-Ma'mun in 833, and that the crusades in the 12th century resulted in a weakening of the Islamic empire from which it never recovered.
The various Quranic injunctions and Hadith, which place values on education and emphasize the importance of acquiring knowledge, played a vital role in influencing the Muslims of this age in their search for knowledge and the development of the body of science.
The Islamic Empire heavily patronized scholars. The money spent on the Translation Movement for some translations is estimated to be equivalent to about twice the annual research budget of the United Kingdom's Medical Research Council. The best scholars and notable translators, such as Hunayn ibn Ishaq, had salaries estimated to be equivalent to those of professional athletes today. The House of Wisdom was a library established in Abbasid-era Baghdad, Iraq by Caliph al-Mansur.
During this period, the Muslims showed a strong interest in assimilating the scientific knowledge of the civilizations that had been conquered. Many classic works of antiquity that might otherwise have been lost were translated from Greek, Persian, Indian, Chinese, Egyptian, and Phoenician civilizations into Arabic and Persian, and later in turn translated into Turkish, Hebrew, and Latin.
Christians, especially the adherents of the Church of the East (Nestorians), contributed to Islamic civilization during the reign of the Ummayads and the Abbasids by translating works of Greek philosophers and ancient science to Syriac and afterwards to Arabic. They also excelled in many fields, in particular philosophy, science (such as Hunayn ibn Ishaq, Thabit Ibn Qurra, Yusuf Al-Khuri, Al Himsi, Qusta ibn Luqa, Masawaiyh, Patriarch Eutychius, and Jabril ibn Bukhtishu) and theology. For a long period of time the personal physicians of the Abbasid Caliphs were often Assyrian Christians. Among the most prominent Christian families to serve as physicians to the caliphs were the Bukhtishu dynasty.
Throughout the 4th to 7th centuries, Christian scholarly work in the Greek and Syriac languages was either newly translated or had been preserved since the Hellenistic period. Among the prominent centers of learning and transmission of classical wisdom were Christian colleges such as the School of Nisibis and the School of Edessa, the pagan University of Harran and the renowned hospital and medical academy of Jundishapur, which was the intellectual, theological and scientific center of the Church of the East. The House of Wisdom was founded in Baghdad in 825, modelled after the Academy of Gondishapur. It was led by Christian physician Hunayn ibn Ishaq, with the support of Byzantine medicine. Many of the most important philosophical and scientific works of the ancient world were translated, including the work of Galen, Hippocrates, Plato, Aristotle, Ptolemy and Archimedes. Many scholars of the House of Wisdom were of Christian background.
Among the various countries and cultures conquered through successive Islamic conquests, a remarkable number of scientists originated from Persia, who contributed immensely to the scientific flourishing of the Islamic Golden Age. According to Bernard Lewis: "Culturally, politically, and most remarkable of all even religiously, the Persian contribution to this new Islamic civilization is of immense importance. The work of Iranians can be seen in every field of cultural endeavor, including Arabic poetry, to which poets of Iranian origin composing their poems in Arabic made a very significant contribution." Science, medicine, philosophy and technology in the newly Islamized Iranian society was influenced by and based on the scientific model of the major pre-Islamic Iranian universities in the Sassanian Empire. During this period hundreds of scholars and scientists vastly contributed to technology, science and medicine, later influencing the rise of European science during the Renaissance.
As Ibn Khaldun observed in the Muqaddimah: "Most of the ḥadîth scholars who preserved traditions for the Muslims also were Persians, or Persian in language and upbringing, because the discipline was widely cultivated in the 'Irâq and the regions beyond. Furthermore, all the scholars who worked in the science of the principles of jurisprudence were Persians. The same applies to speculative theologians and to most Qur'ân commentators. Only the Persians engaged in the task of preserving knowledge and writing systematic scholarly works. Thus, the truth of the following statement by the Prophet becomes apparent: 'If scholarship hung suspended in the highest parts of heaven, the Persians would attain it.'"
With a new and easier writing system, and the introduction of paper, information was democratized to the extent that, for probably the first time in history, it became possible to make a living from only writing and selling books. The use of paper spread from China into Muslim regions in the eighth century, arriving in Al-Andalus on the Iberian peninsula (modern Spain and Portugal) in the 10th century. It was easier to manufacture than parchment, less likely to crack than papyrus, and could absorb ink, making it difficult to erase and ideal for keeping records. Islamic paper makers devised assembly-line methods of hand-copying manuscripts to turn out editions far larger than any available in Europe for centuries. It was from these countries that the rest of the world learned to make paper from linen.
The centrality of scripture and its study in the Islamic tradition helped to make education a central pillar of the religion in virtually all times and places in the history of Islam. The importance of learning in the Islamic tradition is reflected in a number of hadiths attributed to Muhammad, including one that instructs the faithful to "seek knowledge, even in China". This injunction was seen to apply particularly to scholars, but also to some extent to the wider Muslim public, as exemplified by the dictum of al-Zarnuji, "learning is prescribed for us all". While it is impossible to calculate literacy rates in pre-modern Islamic societies, it is almost certain that they were relatively high, at least in comparison to their European counterparts.
Education would begin at a young age with study of Arabic and the Quran, either at home or in a primary school, which was often attached to a mosque. Some students would then proceed to training in tafsir (Quranic exegesis) and fiqh (Islamic jurisprudence), which was seen as particularly important. Education focused on memorization, but also trained the more advanced students to participate as readers and writers in the tradition of commentary on the studied texts. It also involved a process of socialization of aspiring scholars, who came from virtually all social backgrounds, into the ranks of the ulema.
For the first few centuries of Islam, educational settings were entirely informal, but beginning in the 11th and 12th centuries, the ruling elites began to establish institutions of higher religious learning known as madrasas in an effort to secure support and cooperation of the ulema. Madrasas soon multiplied throughout the Islamic world, which helped to spread Islamic learning beyond urban centers and to unite diverse Islamic communities in a shared cultural project. Nonetheless, instruction remained focused on individual relationships between students and their teacher. The formal attestation of educational attainment, ijaza, was granted by a particular scholar rather than the institution, and it placed its holder within a genealogy of scholars, which was the only recognized hierarchy in the educational system. While formal studies in madrasas were open only to men, women of prominent urban families were commonly educated in private settings and many of them received and later issued ijazas in hadith studies, calligraphy and poetry recitation. Working women learned religious texts and practical skills primarily from each other, though they also received some instruction together with men in mosques and private homes.
Madrasas were devoted principally to study of law, but they also offered other subjects such as theology, medicine, and mathematics. The madrasa complex usually consisted of a mosque, boarding house, and a library. It was maintained by a waqf (charitable endowment), which paid salaries of professors, stipends of students, and defrayed the costs of construction and maintenance. The madrasa was unlike a modern college in that it lacked a standardized curriculum or institutionalized system of certification.
Muslims distinguished disciplines inherited from pre-Islamic civilizations, such as philosophy and medicine, which they called "sciences of the ancients" or "rational sciences", from Islamic religious sciences. Sciences of the former type flourished for several centuries, and their transmission formed part of the educational framework in classical and medieval Islam. In some cases, they were supported by institutions such as the House of Wisdom in Baghdad, but more often they were transmitted informally from teacher to student.
The University of Al Karaouine, founded in 859 AD, is listed in The Guinness Book Of Records as the world's oldest degree-granting university. The Al-Azhar University was another early university (madrasa). The madrasa is one of the relics of the Fatimid caliphate. The Fatimids traced their descent to Muhammad's daughter Fatimah and named the institution using a variant of her honorific title Al-Zahra (the brilliant). Organized instruction in the Al-Azhar Mosque began in 978.
Juristic thought gradually developed in study circles, where independent scholars met to learn from a local master and discuss religious topics. At first, these circles were fluid in their membership, but with time distinct regional legal schools crystallized around shared sets of methodological principles. As the boundaries of the schools became clearly delineated, the authority of their doctrinal tenets came to be vested in a master jurist from earlier times, who was henceforth identified as the school's founder. In the course of the first three centuries of Islam, all legal schools came to accept the broad outlines of classical legal theory, according to which Islamic law had to be firmly rooted in the Quran and hadith.
The classical theory of Islamic jurisprudence elaborates how scriptures should be interpreted from the standpoint of linguistics and rhetoric. It also comprises methods for establishing authenticity of hadith and for determining when the legal force of a scriptural passage is abrogated by a passage revealed at a later date. In addition to the Quran and sunnah, the classical theory of Sunni fiqh recognizes two other sources of law: juristic consensus (ijmaʿ) and analogical reasoning (qiyas). It therefore studies the application and limits of analogy, as well as the value and limits of consensus, along with other methodological principles, some of which are accepted by only certain legal schools. This interpretive apparatus is brought together under the rubric of ijtihad, which refers to a jurist's exertion in an attempt to arrive at a ruling on a particular question. The theory of Twelver Shia jurisprudence parallels that of Sunni schools with some differences, such as recognition of reason (ʿaql) as a source of law in place of qiyas and extension of the notion of sunnah to include traditions of the imams.
The body of substantive Islamic law was created by independent jurists (muftis). Their legal opinions (fatwas) were taken into account by ruler-appointed judges who presided over qāḍī's courts, and by maẓālim courts, which were controlled by the ruler's council and administered criminal law.
Classical Islamic theology emerged from an early doctrinal controversy which pitted the ahl al-hadith movement, led by Ahmad ibn Hanbal, who considered the Quran and authentic hadith to be the only acceptable authority in matters of faith, against Mu'tazilites and other theological currents, who developed theological doctrines using rationalistic methods. In 833 the caliph al-Ma'mun tried to impose Mu'tazilite theology on all religious scholars and instituted an inquisition (mihna), but the attempts to impose a caliphal writ in matters of religious orthodoxy ultimately failed. This controversy persisted until al-Ash'ari (874–936) found a middle ground between Mu'tazilite rationalism and Hanbalite literalism, using the rationalistic methods championed by Mu'tazilites to defend most substantive tenets maintained by ahl al-hadith. A rival compromise between rationalism and literalism emerged from the work of al-Maturidi (d. c. 944), and, although a minority of scholars remained faithful to the early ahl al-hadith creed, Ash'ari and Maturidi theology came to dominate Sunni Islam from the 10th century on.
Ibn Sina (Avicenna) and Ibn Rushd (Averroes) played a major role in interpreting the works of Aristotle, whose ideas came to dominate the non-religious thought of the Christian and Muslim worlds. According to the Stanford Encyclopedia of Philosophy, translation of philosophical texts from Arabic to Latin in Western Europe "led to the transformation of almost all philosophical disciplines in the medieval Latin world". The influence of Islamic philosophers in Europe was particularly strong in natural philosophy, psychology and metaphysics, though it also influenced the study of logic and ethics.
Avicenna proposed his "Floating Man" thought experiment concerning self-awareness, in which a man deprived of sense experience, blindfolded and falling freely through the air, would still be aware of his own existence.
In epistemology, Ibn Tufail wrote the novel Hayy ibn Yaqdhan, and in response Ibn al-Nafis wrote the novel Theologus Autodidactus. Both concern autodidacticism, as illuminated through the life of a feral child spontaneously generated in a cave on a desert island.
Persian mathematician Muḥammad ibn Mūsā al-Khwārizmī played a significant role in the development of algebra, arithmetic and Hindu-Arabic numerals. He has been described as the father or founder of algebra.
Another Persian mathematician, Omar Khayyam, is credited with identifying the foundations of algebraic geometry. Omar Khayyam found the general geometric solution of the cubic equation. His book Treatise on Demonstrations of Problems of Algebra (1070), which laid down the principles of algebra, is part of the body of Persian mathematics that was eventually transmitted to Europe.
Islamic art makes use of geometric patterns and symmetries in many of its art forms, notably in girih tilings. These are formed using a set of five tile shapes, namely a regular decagon, an elongated hexagon, a bow tie, a rhombus, and a regular pentagon. All the sides of these tiles have the same length; and all their angles are multiples of 36° (π/5 radians), offering fivefold and tenfold symmetries. The tiles are decorated with strapwork lines (girih), generally more visible than the tile boundaries. In 2007, the physicists Peter Lu and Paul Steinhardt argued that girih from the 15th century resembled quasicrystalline Penrose tilings. Elaborate geometric zellige tilework is a distinctive element in Moroccan architecture. Muqarnas vaults are three-dimensional but were designed in two dimensions with drawings of geometrical cells.
Ibn Muʿādh al-Jayyānī is one of several Islamic mathematicians to whom the law of sines is attributed; he wrote his The Book of Unknown Arcs of a Sphere in the 11th century. This formula relates the lengths of the sides of any triangle, rather than only right triangles, to the sines of its angles. According to the law,

a / sin(A) = b / sin(B) = c / sin(C),

where a, b, and c are the lengths of the sides of a triangle, and A, B, and C are the angles opposite those sides.
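As a quick numerical illustration of the law (the side length and angles below are arbitrary example values, not drawn from al-Jayyānī's text):

```python
import math

# Solve a triangle with the law of sines: a / sin(A) = b / sin(B) = c / sin(C).
# Given one side and two angles (arbitrary example values), find the remaining sides.
A, B = math.radians(40), math.radians(60)   # two known angles
C = math.pi - A - B                         # angles of a triangle sum to pi (here, 80 degrees)
a = 5.0                                     # known side, opposite angle A

ratio = a / math.sin(A)                     # the common ratio shared by all three sides
b = ratio * math.sin(B)
c = ratio * math.sin(C)

print(f"b = {b:.2f}, c = {c:.2f}")          # b is about 6.74, c is about 7.66
```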
Alhazen discovered the sum formula for the fourth power, using a method that could be generally used to determine the sum for any integral power. He used this to find the volume of a paraboloid. He could find the integral formula for any polynomial without having developed a general formula.
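In modern notation, the closed form such a method produces for fourth powers is 1^4 + 2^4 + ... + n^4 = n(n+1)(2n+1)(3n^2 + 3n − 1)/30. The short sketch below merely verifies this identity numerically; it is a modern check, not Alhazen's own derivation.

```python
# Verify the closed-form sum of fourth powers against a direct summation.
def sum_fourth_powers_closed(n: int) -> int:
    # n(n+1)(2n+1)(3n^2 + 3n - 1) / 30, which is always an exact integer
    return n * (n + 1) * (2 * n + 1) * (3 * n * n + 3 * n - 1) // 30

for n in (1, 5, 10, 100):
    direct = sum(k ** 4 for k in range(1, n + 1))
    assert direct == sum_fourth_powers_closed(n)
    print(n, direct)
```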
Ibn al-Haytham (Alhazen) was a significant figure in the history of scientific method, particularly in his approach to experimentation, and has been described as the "world's first true scientist".
Avicenna laid down rules for testing the effectiveness of drugs, including the requirement that the effect produced by the experimental drug be observed consistently, or after many repetitions, before it could be counted. The physician Rhazes was an early proponent of experimental medicine and recommended using a control group for clinical research. He said: "If you want to study the effect of bloodletting on a condition, divide the patients into two groups, perform bloodletting only on one group, watch both, and compare the results."
In about 964 AD, the Persian astronomer Abd al-Rahman al-Sufi, writing in his Book of Fixed Stars, described a "nebulous spot" in the Andromeda constellation, the first definitive reference to what we now know is the Andromeda Galaxy, the nearest spiral galaxy to our own. Nasir al-Din al-Tusi invented a geometrical technique called the Tusi couple, which generates linear motion from the sum of two circular motions, to replace Ptolemy's problematic equant. The Tusi couple was later employed in Ibn al-Shatir's geocentric model and in Nicolaus Copernicus' heliocentric model, although it is not known whether there was an intermediary or whether Copernicus rediscovered the technique independently.
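A rough numerical sketch of the Tusi-couple idea follows: a point on a circle of radius r rolling inside a circle of radius 2r traces a straight line segment. The radius and sample angles are arbitrary, and this is only a modern illustration of the geometry, not a reconstruction of al-Tusi's own presentation.

```python
import math

# Tusi couple: a small circle of radius r rolls inside a fixed circle of radius 2r.
# A marked point on the rolling circle traces a straight line (a diameter of the big circle).
r = 1.0
for i in range(9):
    theta = 2 * math.pi * i / 8                    # rotation angle of the rolling circle's center
    # Standard hypocycloid equations with R = 2r: the two circular motions cancel in y
    # and add in x, so the traced point stays on the x-axis.
    x = r * math.cos(theta) + r * math.cos(theta)  # equals 2r * cos(theta)
    y = r * math.sin(theta) - r * math.sin(theta)  # equals 0 for every theta
    print(f"theta = {theta:4.2f} rad  ->  point = ({x:+.3f}, {y:.3f})")
```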
Alhazen played a role in the development of optics. One of the prevailing theories of vision in his time and place was the emission theory supported by Euclid and Ptolemy, where sight worked by the eye emitting rays of light, and the other was the Aristotelean theory that sight worked when the essence of objects flows into the eyes. Alhazen correctly argued that vision occurred when light, traveling in straight lines, reflects off an object into the eyes. Al-Biruni wrote of his insights into light, stating that its velocity must be immense when compared to the speed of sound.
In the cardiovascular system, Ibn al-Nafis in his Commentary on Anatomy in Avicenna's Canon was the first known scholar to contradict the contention of the Galen School that blood could pass between the ventricles in the heart through the cardiac inter-ventricular septum that separates them, saying that there is no passage between the ventricles at this point. Instead, he correctly argued that all the blood that reached the left ventricle did so after passing through the lung. He also stated that there must be small communications, or pores, between the pulmonary artery and pulmonary vein, a prediction that preceded the discovery of the pulmonary capillaries of Marcello Malpighi by 400 years. The Commentary was rediscovered in the twentieth century in the Prussian State Library in Berlin; whether its view of the pulmonary circulation influenced scientists such as Michael Servetus is unclear.
In the nervous system, Rhazes stated that nerves had motor or sensory functions, describing 7 cranial and 31 spinal cord nerves. He assigned a numerical order to the cranial nerves from the optic to the hypoglossal nerves. He classified the spinal nerves into 8 cervical, 12 thoracic, 5 lumbar, 3 sacral, and 3 coccygeal nerves. He used this to link clinical signs of injury to the corresponding location of lesions in the nervous system.
Modern commentators have likened medieval accounts of the "struggle for existence" in the animal kingdom to the framework of the theory of evolution. Thus, in his survey of the history of the ideas which led to the theory of natural selection, Conway Zirkle noted that al-Jahiz was one of those who discussed a "struggle for existence", in his Kitāb al-Hayawān (Book of Animals), written in the 9th century. In the 13th century, Nasir al-Din al-Tusi believed that humans were derived from advanced animals, saying, "Such humans [probably anthropoid apes] live in the Western Sudan and other distant corners of the world. They are close to animals by their habits, deeds and behavior." In 1377, Ibn Khaldun in his Muqaddimah stated, "The animal kingdom was developed, its species multiplied, and in the gradual process of Creation, it ended in man and arising from the world of the monkeys."
The Banū Mūsā brothers, in their Book of Ingenious Devices, describe an automatic flute player which may have been the first programmable machine. The flute sounds were produced through hot steam and the user could adjust the device to various patterns so that they could get various sounds from it.
Archiving was a respected occupation during this period in the Islamic world, though most of the governing documents have since been lost. However, surviving correspondence and documentation give a hint of the social climate and show that the archives of the time were detailed and vast. All letters that were received or sent on behalf of the governing bodies were copied, archived and noted for filing. The position of archivist was seen as one requiring a high level of devotion, as archivists held the records of all pertinent transactions.
The earliest known Islamic hospital was built in 805 in Baghdad by order of Harun Al-Rashid, and the most important of Baghdad's hospitals was established in 982 by the Buyid ruler 'Adud al-Dawla. The best documented early Islamic hospitals are the great Syro-Egyptian establishments of the 12th and 13th centuries. By the tenth century, Baghdad had five more hospitals, while Damascus had six hospitals by the 15th century and Córdoba alone had 50 major hospitals, many exclusively for the military.
The typical hospital was divided into departments such as systemic diseases, surgery, and orthopedics, with larger hospitals having more diverse specialties. "Systemic diseases" was the rough equivalent of today's internal medicine and was further divided into sections such as fever, infections and digestive issues. Every department had an officer-in-charge, a presiding officer and a supervising specialist. The hospitals also had lecture theaters and libraries. Hospital staff included sanitary inspectors, who regulated cleanliness, as well as accountants and other administrative staff. Hospitals were typically run by a three-member board comprising a non-medical administrator, the chief pharmacist (called the shaykh saydalani) and the chief physician, who served as mutwalli (dean); the chief pharmacist was equal in rank to the chief physician. Medical facilities traditionally closed each night, but by the 10th century laws were passed to keep hospitals open 24 hours a day.
For less serious cases, physicians staffed outpatient clinics. Cities also had first aid centers staffed by physicians for emergencies that were often located in busy public places, such as big gatherings for Friday prayers. The region also had mobile units staffed by doctors and pharmacists who were supposed to meet the need of remote communities. Baghdad was also known to have a separate hospital for convicts since the early 10th century after the vizier ‘Ali ibn Isa ibn Jarah ibn Thabit wrote to Baghdad’s chief medical officer that "prisons must have their own doctors who should examine them every day". The first hospital built in Egypt, in Cairo's Southwestern quarter, was the first documented facility to care for mental illnesses. In Aleppo's Arghun Hospital, care for mental illness included abundant light, fresh air, running water and music.
Medical students would accompany physicians and participate in patient care. Hospitals in this era were the first to require medical diplomas to license doctors. The licensing test was administered by the region's government-appointed chief medical officer. The test had two steps: the first was to write a treatise on the subject in which the candidate wished to obtain a certificate, either original research or a commentary on existing texts, which candidates were encouraged to scrutinize for errors. The second step was to answer questions in an interview with the chief medical officer. Physicians worked fixed hours, and medical staff salaries were fixed by law. To regulate the quality of care and arbitrate cases, it is related that if a patient died, the family would present the doctor's prescriptions to the chief physician, who would judge whether the death was natural or the result of negligence, in which case the family would be entitled to compensation from the doctor. Hospitals had male and female quarters; some hospitals saw only men, while other hospitals, staffed by women physicians, saw only women. While women physicians practiced medicine, many largely focused on obstetrics.
Hospitals were forbidden by law to turn away patients who were unable to pay. Eventually, charitable foundations called waqfs were formed to support hospitals, as well as schools. Part of the state budget also went towards maintaining hospitals. While the services of the hospital were free for all citizens, and patients were sometimes given a small stipend to support recovery upon discharge, individual physicians occasionally charged fees. In a notable endowment, a 13th-century governor of Egypt, Al-Mansur Qalawun, ordained a foundation for the Qalawun hospital that would contain a mosque and a chapel, separate wards for different diseases, a library for doctors and a pharmacy; the hospital is still used today for ophthalmology. The Qalawun hospital was based in a former Fatimid palace which had accommodation for 8,000 people – "it served 4,000 patients daily." The waqf stated,
"...The hospital shall keep all patients, men and women, until they are completely recovered. All costs are to be borne by the hospital whether the people come from afar or near, whether they are residents or foreigners, strong or weak, low or high, rich or poor, employed or unemployed, blind or sighted, physically or mentally ill, learned or illiterate. There are no conditions of consideration and payment, none is objected to or even indirectly hinted at for non-payment."
By the ninth century, there was a rapid expansion of private pharmacies in many Muslim cities. Initially, these were unregulated and managed by personnel of inconsistent quality. Decrees by Caliphs Al-Ma'mun and Al-Mu'tasim required examinations to license pharmacists and pharmacy students were trained in a combination of classroom exercises coupled with day-to-day practical experiences with drugs. To avoid conflicts of interest, doctors were banned from owning or sharing ownership in a pharmacy. Pharmacies were periodically inspected by government inspectors called muhtasib, who checked to see that the medicines were mixed properly, not diluted and kept in clean jars. Violators were fined or beaten.
The theory of Humorism was largely dominant during this time. Arab physician Ibn Zuhr provided proof that scabies is caused by the itch mite and that it can be cured by removing the parasite without the need for purging, bleeding or other treatments called for by humorism, making a break with the humorism of Galen and Ibn Sina. Rhazes differentiated through careful observation the two diseases smallpox and measles, which were previously lumped together as a single disease that caused rashes. This was based on location and the time of the appearance of the symptoms and he also scaled the degree of severity and prognosis of infections according to the color and location of rashes. Al-Zahrawi was the first physician to describe an ectopic pregnancy, and the first physician to identify the hereditary nature of haemophilia.
On hygienic practices, Rhazes, who was once asked to choose the site for a new hospital in Baghdad, suspended pieces of meat at various points around the city, and recommended building the hospital at the location where the meat putrefied the slowest.
For Islamic scholars, Indian and Greek physicians and medical researchers Sushruta, Galen, Mankah, Atreya, Hippocrates, Charaka, and Agnivesa were pre-eminent authorities. In order to make the Indian and Greek tradition more accessible, understandable, and teachable, Islamic scholars ordered and systematized the vast Indian and Greco-Roman medical knowledge by writing encyclopedias and summaries. Sometimes past scholars were criticized: Rhazes, for example, criticized and refuted Galen's revered theories, most notably the Theory of Humors, and was thus accused of ignorance. It was through 12th-century Arabic translations that medieval Europe rediscovered Hellenic medicine, including the works of Galen and Hippocrates, and discovered ancient Indian medicine, including the works of Sushruta and Charaka. Works such as Avicenna's The Canon of Medicine were translated into Latin and disseminated throughout Europe. During the 15th and 16th centuries alone, The Canon of Medicine was published more than thirty-five times. It was used as a standard medical textbook in Europe through the 18th century.
Al-Zahrawi was a tenth century Arab physician. He is sometimes referred to as the "Father of surgery". He describes what is thought to be the first attempt at reduction mammaplasty for the management of gynaecomastia and the first mastectomy to treat breast cancer. He is credited with the performance of the first thyroidectomy.
Commerce and travel
Apart from the Nile, Tigris, and Euphrates, navigable rivers were uncommon in the Middle East, so transport by sea was very important. Navigational sciences were highly developed, making use of a rudimentary sextant (known as a kamal). When combined with detailed maps of the period, sailors were able to sail across oceans rather than skirt along the coast. Muslim sailors were also responsible for reintroducing large, three-masted merchant vessels to the Mediterranean. The name caravel may derive from an earlier Arab boat known as the qārib.
Many Muslims went to China to trade, and these Muslims began to have a great economic influence on the country. Muslims virtually dominated the import/export industry by the time of the Sung dynasty (960–1279).
Arts and culture
Literature and poetry
Manuscript illumination was an important art, and Persian miniature painting flourished in the Persianate world. Calligraphy, an essential aspect of written Arabic, developed in manuscripts and architectural decoration.
The ninth and tenth centuries saw a flowering of Arabic music. The philosopher and aesthete Al-Farabi, at the end of the ninth century, established the foundations of modern Arabic music theory, based on the maqamat, or musical modes. His work was based on the music of Ziryab, the court musician of Andalusia. Ziryab was a renowned polymath, whose contributions to western civilization included formal dining, haircuts, chess, and more, in addition to his dominance of the world musical scene of the ninth century.
The Great Mosque of Kairouan (in Tunisia), the ancestor of all the mosques in the western Islamic world excluding Turkey and the Balkans, is one of the best preserved and most significant examples of early great mosques. Founded in 670, it dates in its present form largely from the 9th century. The Great Mosque of Kairouan comprises a three-tiered square minaret, a large courtyard surrounded by colonnaded porticos, and a huge hypostyle prayer hall covered on its axis by two cupolas.
The beginning of construction of the Great Mosque at Cordoba in 785 marked the beginning of Islamic architecture in Spain and Northern Africa. The mosque is noted for its striking interior arches. Moorish architecture reached its peak with the construction of the Alhambra, the magnificent palace/fortress of Granada, with its open and breezy interior spaces adorned in red, blue, and gold. The walls are decorated with stylized foliage motifs, Arabic inscriptions, and arabesque design work, with walls covered in geometrically patterned glazed tiles.
In 1206, Genghis Khan established a powerful dynasty among the Mongols of central Asia. During the 13th century, this Mongol Empire conquered most of the Eurasian land mass, including China in the east and much of the old Islamic caliphate (as well as Kievan Rus') in the west. The destruction of Baghdad and the House of Wisdom by Hulagu Khan in 1258 has been seen by some as the end of the Islamic Golden Age.
The Ottoman conquest of the Arabic-speaking Middle East in 1516–17 placed the traditional heart of the Islamic world under Ottoman Turkish control. The rational sciences continued to flourish in the Middle East during the Ottoman period.
To account for the decline of Islamic science, it has been argued that the Sunni Revival in the 11th and 12th centuries produced a series of institutional changes that decreased the relative payoff to producing scientific works. With the spread of madrasas and the greater influence of religious leaders, it became more lucrative to produce religious knowledge.
Ahmad Y. al-Hassan has rejected the thesis that lack of creative thinking was a cause, arguing that science was always kept separate from religious argument; he instead analyzes the decline in terms of economic and political factors, drawing on the work of the 14th-century writer Ibn Khaldun. Al-Hassan extended the golden age up to the 16th century, noting that scientific activity continued to flourish up until then. Several other contemporary scholars have also extended it to around the 16th to 17th centuries, and analysed the decline in terms of political and economic factors. More recent research has challenged the notion that it underwent decline even at that time, citing a revival of works produced on rational scientific topics during the seventeenth century.
Current research has led to the conclusion that "the available evidence is consistent with the hypothesis that an increase in the political power of these elites caused the observed decline in scientific output."
Economic historian Joel Mokyr has argued that Islamic philosopher al-Ghazali (1058–1111) "was a key figure in the decline in Islamic science", as his works contributed to rising mysticism and occasionalism in the Islamic world. Against this view, Saliba (2007) has given a number of examples especially of astronomical research flourishing after the time of al-Ghazali.
- Baghdad School of art
- Christian influences in Islam
- Dutch Golden Age
- Emirate of Sicily
- Golden age of Jewish culture in Spain
- Ibn Sina Academy of Medieval Medicine and Sciences
- Islamic astronomy
- Islamic studies
- List of Iranian scientists
- Ophthalmology in medieval Islam
- Timeline of Islamic science and technology
- "...regarded by some Westerners as the true father of historiography and sociology".
- "Ibn Khaldun has been claimed the forerunner of a great number of European thinkers, mostly sociologists, historians, and philosophers".(Boulakia 1971)
- "The founding father of Eastern Sociology".
- "This grand scheme to find a new science of society makes him the forerunner of many of the eighteenth and nineteenth centuries system-builders such as Vico, Comte and Marx." "As one of the early founders of the social sciences...".
- "He is considered by some as a father of modern economics, or at least a major forerunner. The Western world recognizes Khaldun as the father of sociology but hesitates in recognizing him as a great economist who laid its very foundations. He was the first to systematically analyze the functioning of an economy, the importance of technology, specialization and foreign trade in economic surplus and the role of government and its stabilization policies to increase output and employment. Moreover, he dealt with the problem of optimum taxation, minimum government services, incentives, institutional framework, law and order, expectations, production, and the theory of value".Cosma, Sorinel (2009). "Ibn Khaldun's Economic Thinking". Ovidius University Annals of Economics (Ovidius University Press) XIV:52–57
- George Saliba (1994), A History of Arabic Astronomy: Planetary Theories During the Golden Age of Islam, pp. 245, 250, 256–57. New York University Press, ISBN 0-8147-8023-7.
- King, David A. (1983). "The Astronomy of the Mamluks". Isis. 74 (4): 531–55. doi:10.1086/353360.
- Hassan, Ahmad Y (1996). "Factors Behind the Decline of Islamic Science After the Sixteenth Century". In Sharifah Shifa Al-Attas (ed.). Islam and the Challenge of Modernity, Proceedings of the Inaugural Symposium on Islam and the Challenge of Modernity: Historical and Contemporary Contexts, Kuala Lumpur, August 1–5, 1994. International Institute of Islamic Thought and Civilization (ISTAC). pp. 351–99. Archived from the original on 2 April 2015.
- Medieval India, NCERT, ISBN 81-7450-395-1
- Vartan Gregorian, "Islam: A Mosaic, Not a Monolith", Brookings Institution Press, 2003, pp. 26–38 ISBN 0-8157-3283-X
- Islamic Radicalism and Multicultural Politics. Taylor & Francis. 2011-03-01. p. 9. ISBN 978-1-136-95960-8. Retrieved 26 August 2012.
- "Science and technology in Medieval Islam" (PDF). History of Science Museum. Retrieved 31 October 2019.
- Barlow, Glenna. "Arts of the Islamic World: the Medieval Period". Khan Academy. Retrieved 31 October 2019.
- Josias Leslie Porter, A Handbook for Travelers in Syria and Palestine, 1868, p. 49.
- "For six centuries and a half, through the golden age of Islam, lasted this Caliphate, till extinguished by the Osmanli sultans and in the death of the last of the blood of the house of Mahomet. The true Caliphate ended with the fall of Bagdad". New Outlook, Volume 45, 1892, p. 370.
- "the golden age of Islam, as Mr. Gilman points out, ended with Omar, the second of the Kalifs." The Literary World, Volume 36, 1887, p. 308.
- "The Ninth, Tenth and Eleventh centuries were the golden age of Islam" Life magazine, 9 May 1955, .
- so Linda S. George, The Golden Age of Islam, 1998: "from the last years of the eighth century to the thirteenth century."
- Arshad Khan, Islam, Muslims, and America: Understanding the Basis of Their Conflict, 2003, p. 19.
- Groth, Hans, ed. (2012). Population Dynamics in Muslim Countries: Assembling the Jigsaw. Springer Science & Business Media. p. 45. ISBN 978-3-642-27881-5.
- Rafiabadi, Hamid Naseem, ed. (2007). Challenges to Religions and Islam: A Study of Muslim Movements, Personalities, Issues and Trends, Part 1. Sarup & Sons. p. 1141. ISBN 978-81-7625-732-9.
- Salam, Abdus (1994). Renaissance of Sciences in Islamic Countries. p. 9. ISBN 978-9971-5-0946-0.
- "In Our Time – Al-Kindi, James Montgomery". bbcnews.com. 28 June 2012. Archived from the original on 2014-01-14. Retrieved May 18, 2013.
- Brentjes, Sonja; Robert G. Morrison (2010). "The Sciences in Islamic societies". The New Cambridge History of Islam. 4. Cambridge: Cambridge University Press. p. 569.
- Hill, Donald. Islamic Science and Engineering. 1993. Edinburgh Univ. Press. ISBN 0-7486-0455-3, p. 4
- "Nestorian – Christian sect". Archived from the original on 2016-10-28. Retrieved 2016-11-05.
- Rashed, Roshdi (2015). Classical Mathematics from Al-Khwarizmi to Descartes. Routledge. p. 33. ISBN 978-0-415-83388-2.
- "Hunayn ibn Ishaq – Arab scholar". Archived from the original on 2016-05-31. Retrieved 2016-07-12.
- Hussein, Askary. "Baghdad 767–1258 A.D.:Melting Pot for a Universal Renaissance". Executive Intelligence Review. Archived from the original on 2017-08-24.
- O'Leary, Delacy (1949). How Greek Science Passed On To The Arabs. Nature. 163. p. 748. Bibcode:1949Natur.163Q.748T. doi:10.1038/163748c0. ISBN 978-1-317-84748-9.
- Sarton, George. "History of Islamic Science". Archived from the original on 2016-08-12.
- Nancy G. Siraisi, Medicine and the Italian Universities, 1250–1600 (Brill Academic Publishers, 2001), p 134.
- Beeston, Alfred Felix Landon (1983). Arabic literature to the end of the Umayyad period. Cambridge University Press. p. 501. ISBN 978-0-521-24015-4. Retrieved 20 January 2011.
- "Compendium of Medical Texts by Mesue, with Additional Writings by Various Authors". World Digital Library. Archived from the original on 2014-03-04. Retrieved 2014-03-01.
- Griffith, Sidney H. (15 December 1998). "Eutychius of Alexandria". Encyclopædia Iranica. Archived from the original on 2017-01-02. Retrieved 2011-02-07.
- Anna Contadini, 'A Bestiary Tale: Text and Image of the Unicorn in the Kitāb naʿt al-hayawān (British Library, or. 2784)', Muqarnas, 20 (2003), 17–33 (p. 17), JSTOR 1523325.
- Bonner, Bonner; Ener, Mine; Singer, Amy (2003). Poverty and charity in Middle Eastern contexts. SUNY Press. p. 97. ISBN 978-0-7914-5737-5.
- Ruano, Eloy Benito; Burgos, Manuel Espadas (1992). 17e Congrès international des sciences historiques: Madrid, du 26 août au 2 septembre 1990. Comité international des sciences historiques. p. 527. ISBN 978-84-600-8154-8.
- Rémi Brague, Assyrians contributions to the Islamic civilization Archived 2013-09-27 at the Wayback Machine
- Britannica, Nestorian Archived 2014-03-30 at the Wayback Machine
- Foster, John (1939). The Church of the T'ang Dynasty. Great Britain: Society for Promoting Christian Knowledge. p. 31. "The school was twice closed, in 431 and 489."
- The School of Edessa Archived 2016-09-02 at the Wayback Machine, Nestorian.org.
- Frew, Donald. "Harran: Last Refuge of Classical Paganism". The Pomegranate: The International Journal of Pagan Studies. 13 (9): 17–29. doi:10.1558/pome.v13i9.17.
- "Harran University". Archived from the original on 2018-01-27.
- University of Tehran Overview/Historical Events Archived 2011-02-03 at the Wayback Machine
- Kaser, Karl The Balkans and the Near East: Introduction to a Shared History p. 135.
- Yazberdiyev, Dr. Almaz Libraries of Ancient Merv Archived 2016-03-04 at the Wayback Machine Dr. Yazberdiyev is Director of the Library of the Academy of Sciences of Turkmenistan, Ashgabat.
- Hyman and Walsh, Philosophy in the Middle Ages, Indianapolis, 1973, p. 204; Meri, Josef W. and Jere L. Bacharach, Editors, Medieval Islamic Civilization Vol. 1, A–K, Index, 2006, p. 304.
- Lewis, Bernard (2004). From Babel to Dragomans: Interpreting the Middle East. Oxford University Press. p. 44.
- Kühnel E., in Zeitschrift der deutschen morgenländischen Gesell, Vol. CVI (1956)
- Khaldun, Ibn (1981), Muqaddimah, 1, translated by Rosenthal, Franz, Princeton University Press, pp. 429–430
- "In Our Time – Al-Kindi, Hugh Kennedy". bbcnews.com. 28 June 2012. Archived from the original on 2014-01-14. Retrieved May 18, 2013.
- "Islam's Gift of Paper to the West". Web.utk.edu. 2001-12-29. Archived from the original on 2015-05-03. Retrieved 2014-04-11.
- Kevin M. Dunn, Caveman chemistry : 28 projects, from the creation of fire to the production of plastics. Universal-Publishers. 2003. p. 166. ISBN 978-1-58112-566-5. Retrieved 2014-04-11.
- Jonathan Berkey (2004). "Education". In Richard C. Martin (ed.). Encyclopedia of Islam and the Muslim World. MacMillan Reference USA.
- Lapidus, Ira M. (2014). A History of Islamic Societies. Cambridge University Press (Kindle edition). p. 210. ISBN 978-0-521-51430-9.
- Berkey, Jonathan Porter (2003). The Formation of Islam: Religion and Society in the Near East, 600–1800. Cambridge University Press. p. 227.
- Lapidus, Ira M. (2014). A History of Islamic Societies. Cambridge University Press (Kindle edition). p. 217. ISBN 978-0-521-51430-9.
- Hallaq, Wael B. (2009). An Introduction to Islamic Law. Cambridge University Press. p. 50.
- The Guinness Book Of Records, Published 1998, ISBN 0-553-57895-2, p. 242
- Halm, Heinz. The Fatimids and their Traditions of Learning. London: The Institute of Ismaili Studies and I.B. Tauris. 1997.
- Donald Malcolm Reid (2009). "Al-Azhar". In John L. Esposito (ed.). The Oxford Encyclopedia of the Islamic World. Oxford: Oxford University Press. doi:10.1093/acref/9780195305135.001.0001. ISBN 978-0-19-530513-5.
- Lapidus, Ira M. (2014). A History of Islamic Societies. Cambridge University Press (Kindle edition). p. 125. ISBN 978-0-521-51430-9.
- Hallaq, Wael B. (2009). An Introduction to Islamic Law. Cambridge University Press. pp. 31–35.
- Vikør, Knut S. (2014). "Sharīʿah". In Emad El-Din Shahin (ed.). The Oxford Encyclopedia of Islam and Politics. Oxford University Press. Archived from the original on 2017-02-02. Retrieved 2017-07-30.
- Lapidus, Ira M. (2014). A History of Islamic Societies. Cambridge University Press (Kindle edition). p. 130. ISBN 978-0-521-51430-9.
- Calder, Norman (2009). "Law. Legal Thought and Jurisprudence". In John L. Esposito (ed.). The Oxford Encyclopedia of the Islamic World. Oxford: Oxford University Press. Archived from the original on 2017-07-31. Retrieved 2017-07-30.
- Ziadeh, Farhat J. (2009). "Uṣūl al-fiqh". In John L. Esposito (ed.). The Oxford Encyclopedia of the Islamic World. Oxford: Oxford University Press. doi:10.1093/acref/9780195305135.001.0001. ISBN 978-0-19-530513-5.
- Kamali, Mohammad Hashim (1999). John Esposito (ed.). Law and Society. The Oxford History of Islam. Oxford University Press (Kindle edition). pp. 121–22.
- Lapidus, Ira M. (2014). A History of Islamic Societies. Cambridge University Press (Kindle edition). pp. 130–31. ISBN 978-0-521-51430-9.
- Blankinship, Khalid (2008). Tim Winter (ed.). The early creed. The Cambridge Companion to Classical Islamic Theology. Cambridge University Press (Kindle edition). p. 53.
- Tamara Sonn (2009). "Tawḥīd". In John L. Esposito (ed.). The Oxford Encyclopedia of the Islamic World. Oxford: Oxford University Press. doi:10.1093/acref/9780195305135.001.0001. ISBN 978-0-19-530513-5.
- Dag Nikolaus Hasse (2014). "Influence of Arabic and Islamic Philosophy on the Latin West". Stanford Encyclopedia of Philosophy. Archived from the original on 2017-10-20. Retrieved 2017-07-31.
- "In Our Time: Existence". bbcnews.com. 8 November 2007. Archived from the original on 2013-10-17. Retrieved 27 March 2013.
- Boyer, Carl B., 1985. A History of Mathematics, p. 252. Princeton University Press.
- S Gandz, The sources of al-Khwarizmi's algebra, Osiris, i (1936), 263–277
- "The first true algebra text which is still extant is the work on al-jabr and al-muqabala by Mohammad ibn Musa al-Khwarizmi, written in Baghdad around 825." https://eclass.uoa.gr/modules/document/file.php/MATH104/20010-11/HistoryOfAlgebra.pdf [permanent dead link]
- Esposito, John L. (2000-04-06). The Oxford History of Islam. Oxford University Press. p. 188. ISBN 978-0-19-988041-6.
- Mathematical Masterpieces: Further Chronicles by the Explorers, p. 92
- O'Connor, John J.; Robertson, Edmund F., "Sharaf al-Din al-Muzaffar al-Tusi", MacTutor History of Mathematics archive, University of St Andrews.
- Katz, Victor J.; Barton, Bill (October 2007). "Stages in the History of Algebra with Implications for Teaching". Educational Studies in Mathematics. 66 (2): 185–201. doi:10.1007/s10649-006-9023-7.
- Peter J. Lu; Paul J. Steinhardt (2007). "Decagonal and Quasi-crystalline Tilings in Medieval Islamic Architecture". Science. 315 (5815): 1106–10. Bibcode:2007Sci...315.1106L. doi:10.1126/science.1135491. PMID 17322056.
- "Advanced geometry of Islamic art". bbcnews.com. 23 February 2007. Archived from the original on 2013-02-19. Retrieved July 26, 2013.
- Ball, Philip (22 February 2007). "Islamic tiles reveal sophisticated maths". News@nature. doi:10.1038/news070219-9. Archived from the original on 2013-08-01. Retrieved July 26, 2013. "Although they were probably unaware of the mathematical properties and consequences of the construction rule they devised, they did end up with something that would lead to what we understand today to be a quasi-crystal."
- "Nobel goes to scientist who knocked down 'Berlin Wall' of chemistry". cnn.com. 16 October 2011. Archived from the original on 2014-04-13. Retrieved July 26, 2013.
- Castera, Jean Marc; Peuriot, Francoise (1999). Arabesques. Decorative Art in Morocco. Art Creation Realisation. ISBN 978-2-86770-124-5.
- van den Hoeven, Saskia; van der Veen, Maartje (2010). "Muqarnas-Mathematics in Islamic Arts" (PDF). Retrieved 21 May 2019.
- "Abu Abd Allah Muhammad ibn Muadh Al-Jayyani". University of St.Andrews. Archived from the original on 2017-01-02. Retrieved 27 July 2013.
- Katz, Victor J. (1995). "Ideas of Calculus in Islam and India". Mathematics Magazine. 68 (3): 163–74 [165–69, 173–74]. Bibcode:1975MathM..48...12G. doi:10.2307/2691411. JSTOR 2691411.
- El-Bizri, Nader, "A Philosophical Perspective on Ibn al-Haytham's Optics", Arabic Sciences and Philosophy 15 (2005-08-05), 189–218
- Haq, Syed (2009). "Science in Islam". Oxford Dictionary of the Middle Ages. ISSN 1703-7603. Retrieved 2014-10-22.
- Sabra, A.I. (1989). The Optics of Ibn al-Haytham. Books I–II–III: On Direct Vision. London: The Warburg Institute, University of London. pp. 25–29. ISBN 0-85481-072-2.
- Toomer, G.J. (1964). "Review: Ibn al-Haythams Weg zur Physik by Matthias Schramm". Isis. 55 (4): 463–65. doi:10.1086/349914.
- Al-Khalili, Jim (2009-01-04). "BBC News". BBC News. Archived from the original on 2015-05-03. Retrieved 2014-04-11.
- "The Islamic roots of modern pharmacy". aramcoworld.com. Archived from the original on 2016-05-18. Retrieved 2016-05-28.[better source needed]
- Hajar, R (2013). "The Air of History (Part IV): Great Muslim Physicians Al Rhazes". Heart Views. 14 (2): 93–95. doi:10.4103/1995-705X.115499. PMC 3752886. PMID 23983918.
- Henbest, N.; Couper, H. (1994). The guide to the galaxy. p. 31. ISBN 978-0-521-45882-5.
- Craig G. Fraser, 'The cosmos: a historical perspective', Greenwood Publishing Group, 2006 p. 39
- George Saliba, 'Revisiting the Astronomical Contacts Between the World of Islam and Renaissance Europe: The Byzantine Connection', 'The occult sciences in Byzantium', 2006, p. 368
- J J O'Connor; E F Robertson (1999). "Abu Arrayhan Muhammad ibn Ahmad al-Biruni". MacTutor History of Mathematics archive. University of St Andrews. Archived from the original on 21 November 2016. Retrieved 17 July 2017.
- Felix Klein-Frank (2001) Al-Kindi. In Oliver Leaman & Hossein Nasr. History of Islamic Philosophy. London: Routledge. page 174
- Pingree, David (1985). "Bīrūnī, Abū Rayḥān iv. Geography". Encyclopaedia Iranica. Columbia University. ISBN 978-1-56859-050-9.
- West, John (2008). "Ibn al-Nafis, the pulmonary circulation, and the Islamic Golden Age". Journal of Applied Physiology. 105 (6): 1877–80. doi:10.1152/japplphysiol.91171.2008. PMC 2612469. PMID 18845773. Archived from the original on 2014-09-06. Retrieved 28 May 2014.
- Souayah, N; Greenstein, JI (2005). "Insights into neurologic localization by Rhazes, a medieval Islamic physician". Neurology. 65 (1): 125–28. doi:10.1212/01.wnl.0000167603.94026.ee. PMID 16009898.
- Zirkle, Conway (25 April 1941). "Natural Selection before the "Origin of Species"". Proceedings of the American Philosophical Society. 84 (1): 71–123. JSTOR 984852.
- Farid Alakbarov (Summer 2001). A 13th-Century Darwin? Tusi's Views on Evolution Archived 2010-12-13 at the Wayback Machine, Azerbaijan International 9 (2).
- "Rediscovering Arabic Science". Saudi Aramco Magazine. Archived from the original on 2014-10-30. Retrieved 13 July 2016.
- Koetsier, Teun (2001), "On the prehistory of programmable machines: musical automata, looms, calculators", Mechanism and Machine Theory, 36 (5): 589–603, doi:10.1016/S0094-114X(01)00005-2.
- Banu Musa (authors), Donald Routledge Hill (translator) (1979), The book of ingenious devices (Kitāb al-ḥiyal), Springer, pp. 76–77, ISBN 978-90-277-0833-5
- Spengler, Joseph J. (1964). "Economic Thought of Islam: Ibn Khaldun". Comparative Studies in Society and History. 6 (3): 268–306. doi:10.1017/s0010417500002164. JSTOR 177577.
- Boulakia, Jean David C. (1971). "Ibn Khaldûn: A Fourteenth-Century Economist". Journal of Political Economy. 79 (5): 1105–18. doi:10.1086/259818. JSTOR 1830276.
- Posner, Ernest (1972). "Archives in Medieval Islam". American Archivist. 35: 291–315. Retrieved 19 September 2019.
- Savage-Smith, Emilie; Klein-Franke, F.; Zhu, Ming (2012). "Ṭibb". In P. Bearman; Th. Bianquis; C.E. Bosworth; E. van Donzel; W.P. Heinrichs (eds.). Encyclopaedia of Islam (2nd ed.). Brill. doi:10.1163/1573-3912_islam_COM_1216.
- "The Islamic Roots of the Modern Hospital". aramcoworld.com. Archived from the original on 2017-03-21. Retrieved 20 March 2017.[better source needed]
- Rise and spread of Islam. Gale. 2002. p. 419. ISBN 978-0-7876-4503-8.
- Alatas, Syed Farid (2006). "From Jami'ah to University: Multiculturalism and Christian–Muslim Dialogue" (PDF). Current Sociology. 54 (1): 112–32. doi:10.1177/0011392106058837.
- "Pioneer Muslim Physicians". aramcoworld.com. Archived from the original on 2017-03-21. Retrieved 20 March 2017.[better source needed]
- Philip Adler; Randall Pouwels (2007). World Civilizations. Cengage Learning. p. 198. ISBN 978-1-111-81056-6. Retrieved 1 June 2014.
- Bedi N. Şehsuvaroǧlu (2012-04-24). "Bīmāristān". In P. Bearman; Th. Bianquis; C.E. Bosworth; et al. (eds.). Encyclopaedia of Islam (2nd ed.). Archived from the original on 2016-09-20. Retrieved 5 June 2014.
- Mohammad Amin Rodini (7 July 2012). "Medical Care in Islamic Tradition During the Middle Ages" (PDF). International Journal of Medicine and Molecular Medicine. Archived (PDF) from the original on 2013-10-25. Retrieved 9 June 2014.
- "Abu Bakr Mohammad Ibn Zakariya al-Razi (Rhazes) (c. 865-925)". sciencemuseum.org.uk. Archived from the original on 2015-05-06. Retrieved May 31, 2015.
- "Rhazes Diagnostic Differentiation of Smallpox and Measles". ircmj.com. Archived from the original on August 15, 2015. Retrieved May 31, 2015.
- Cosman, Madeleine Pelner; Jones, Linda Gale (2008). Handbook to Life in the Medieval World. Handbook to Life Series. 2. Infobase Publishing. pp. 528–30. ISBN 978-0-8160-4887-8.
- Cyril Elgood, A Medical History of Persia and the Eastern Caliphate, (Cambridge University Press, 1951), p. 3.
- K. Mangathayaru (2013). Pharmacognosy: An Indian perspective. Pearson education. p. 54. ISBN 978-93-325-2026-4.
- Lock, Stephen (2001). The Oxford Illustrated Companion to Medicine. Oxford University Press. p. 607. ISBN 978-0-19-262950-0.
- A.C. Brown, Jonathan (2014). Misquoting Muhammad: The Challenge and Choices of Interpreting the Prophet's Legacy. Oneworld Publications. p. 12. ISBN 978-1-78074-420-9.
- Ahmad, Z. (St Thomas' Hospital) (2007), "Al-Zahrawi – The Father of Surgery", ANZ Journal of Surgery, 77 (Suppl. 1): A83, doi:10.1111/j.1445-2197.2007.04130_8.x
- Ignjatovic M: Overview of the history of thyroid surgery. Acta Chir Iugosl 2003; 50: 9–36.
- "History of the caravel". Nautarch.tamu.edu. Archived from the original on 2015-05-03. Retrieved 2011-04-13.
- "Islam in China". bbcnews.com. 2 October 2002. Archived from the original on 2016-01-06. Retrieved 13 July 2016.
- Haviland, Charles (2007-09-30). "The roar of Rumi – 800 years on". BBC News. Archived from the original on 2012-07-30. Retrieved 2011-08-10.
- "Islam: Jalaluddin Rumi". BBC. 2009-09-01. Archived from the original on 2011-01-23. Retrieved 2011-08-10.
- Amber Haque (2004), "Psychology from Islamic Perspective: Contributions of Early Muslim Scholars and Challenges to Contemporary Muslim Psychologists", Journal of Religion and Health 43 (4): 357–377 .
- Epstein, Joel, The Language of the Heart (2019, Juwal Publishing, ISBN 978-1070100906)
- John Stothoff Badeau and John Richard Hayes, The Genius of Arab civilization: source of Renaissance. Taylor & Francis. 1983-01-01. p. 104. ISBN 978-0-262-08136-8. Retrieved 2014-04-11.
- "Great Mosque of Kairouan (Qantara mediterranean heritage)". Qantara-med.org. Archived from the original on 2015-02-09. Retrieved 2014-04-11.
- Cooper, William W.; Yue, Piyu (2008). Challenges of the Muslim world: present, future and past. Emerald Group Publishing. ISBN 978-0-444-53243-5. Retrieved 2014-04-11.
- El-Rouhayeb, Khaled (2015). Islamic Intellectual History in the Seventeenth Century: Scholarly Currents in the Ottoman Empire and the Maghreb. Cambridge: Cambridge University Press. pp. 1–10. ISBN 978-1-107-04296-4.
- El-Rouayheb, Khaled (2008). "The Myth of "The Triumph of Fanaticism" in the Seventeenth-Century Ottoman Empire". Die Welt des Islams. 48: 196–221. doi:10.1163/157006008x335930.
- El-Rouayheb, Khaled (2006). "Opening the Gate of Verification: The Forgotten Arab-Islamic Florescence of the 17th Century". International Journal of Middle East Studies. 38: 263–81. doi:10.1017/s0020743806412344.
- "Religion and the Rise and Fall of Islamic Science". scholar.harvard.edu. Archived from the original on 2015-12-22. Retrieved 2015-12-20.
- "Mokyr, J.: A Culture of Growth: The Origins of the Modern Economy. (eBook and Hardcover)". press.princeton.edu. p. 67. Archived from the original on 2017-03-24. Retrieved 2017-03-09.
- "The Fountain Magazine – Issue – Did al-Ghazali Kill the Science in Islam?". www.fountainmagazine.com. Archived from the original on 2015-04-30. Retrieved 2018-03-08.
- Gates, Warren E. (1967). "The Spread of Ibn Khaldûn's Ideas on Climate and Culture". Journal of the History of Ideas. 28 (3): 415–22. Bibcode:1961JHI....22..215C. doi:10.2307/2708627. JSTOR 2708627.
- Dhaouadi, M. (1 September 1990). "Ibn Khaldun: The Founding Father of Eastern Sociology". International Sociology. 5 (3): 319–35. doi:10.1177/026858090005003007.
- Haddad, L. (1 May 1977). "A Fourteenth-Century Theory of Economic Growth and Development". Kyklos. 30 (2): 195–213. doi:10.1111/j.1467-6435.1977.tb02006.x.
- George Makdisi "Scholasticism and Humanism in Classical Islam and the Christian West". Journal of the American Oriental Society 109, no.2 (1982)
- Josef W. Meri (2005). Medieval Islamic Civilization: An Encyclopedia. Routledge. ISBN 0-415-96690-6. p. 1088.
- Tamara Sonn: Islam: A Brief History. Wiley 2011, ISBN 978-1-4443-5898-8, pp. 39–79 (online copy, p. 39, at Google Books)
- Maurice Lombard: The Golden Age of Islam. American Elsevier 1975
- George Nicholas Atiyeh; John Richard Hayes (1992). The Genius of Arab Civilization. New York University Press. ISBN 0-8147-3485-5, 978-0-8147-3485-8. p. 306.
- Falagas, M. E.; Zarkadoulia, Effie A.; Samonis, George (1 August 2006). "Arab science in the golden age (750–1258 C.E.) and today". The FASEB Journal. 20 (10): 1581–86. doi:10.1096/fj.06-0803ufm. PMID 16873881.
- Starr, S. Frederick (2015). Lost Enlightenment: Central Asia's Golden Age from the Arab Conquest to Tamerlane. Princeton University. ISBN 978-0-691-16585-1.
- Allsen, Thomas T. (2004). Culture and Conquest in Mongol Eurasia. Cambridge University Press. ISBN 978-0-521-60270-9.
- Dario Fernandez-Morera (2015) The Myth of the Andalusian Paradise. Muslims, Christians, and Jews under Islamic Rule in Medieval Spain. ISI Books ISBN 978-1-61017-095-6 (hardback)
|Wikimedia Commons has media related to Islamic Golden Age.|
- Islamicweb.com: History of the Golden Age
- Khamush.com: Baghdad: Metropolis of the Abbasid Caliphate – Chapter 5, by Gaston Wiet.
- U.S. Library of Congress.gov: The Kirkor Minassian Collection – 'contains examples of Islamic book bindings. |
Students answer questions about the underlined words in each sentence, identify the type of predicate, and rewrite sentences as directed. This is a collection of printable worksheets for the topic Subject and Predicate in the Sentence Structure chapter of the Grammar section, with free sheets to kick-start practice. The materials are prepared for students of grade 3, grade 4, and grade 5, are designed to be compatible with 3rd grade Common Core Standards for Language, and may also be used in other grades, in class, at home, or for distance learning. Passage topics include plastic water bottles at national parks, rescuing American toads, manta rays, and plastic pollution, and one set offers 15 pages of communicative activity ideas. A brief description accompanies each worksheet; click on the images to view, download, or print them.
A complete sentence must have two things: a subject and a predicate. The subject is the person, thing, or place that is spoken about in a sentence; it is always a noun or a pronoun, and it is the "do-er" of the sentence. The predicate tells us what the subject is or does and expands on the subject: the complete predicate is the verb together with all of its modifiers and phrases, while the simple predicate is the main verb (or verb phrase) alone, showing a physical or mental action or describing a state of being. Likewise, the simple subject is the main word in the complete subject. When a sentence names two or more do-ers joined by a conjunction, it has a compound subject; when it contains two or more verbs, it has a compound predicate. In the example "Lois and Jim went swimming on Monday afternoon," the complete subject is "Lois and Jim" and the complete predicate is "went swimming on Monday afternoon." Other practice sentences include "The pretty girl was wearing a blue frock," "The athlete won three gold medals," "My younger brother serves in the army," "My mother and my aunt are trained classical dancers," "Doctor Sullivan and his talking parrot arrived at the party," "Both of them finished the list of chores," "The dogs were barking," and "In other places around the world, they call 'soccer' by the name of 'football.'"
Typical activities: underline the simple subject and circle the simple predicate in each sentence; highlight the simple subject in yellow and the simple predicate in blue; write the simple subject on the line provided; choose a subject, or supply the missing subject or predicate, to complete sentence fragments; join two single subjects into one compound subject with a conjunction, or break a compound subject into two single subjects; combine single verbs into a compound predicate; match sentence tiles to build complete sentences; match each sentence to the correct description (simple subject/simple predicate, complete subject/complete predicate, compound subject/compound predicate); and write three complete sentences of your own, each containing both a subject and a predicate.
as BN or BM to BO; and, by conversion and alternation, DA to MO as AB to MB. Hence the corollary is manifest; therefore, if the radius be supposed to be divided into any given number of equal parts, the sine, versed sine, tangent, and secant of any given angle, will each contain a given number of these parts; and, by trigonometrical tables, the length of the sine, versed sine, tangent, and secant of any angle may be found in parts of which the radius contains a given number; and, vice versa, a number expressing the length of the sine, versed sine, tangent, and secant being given, the angle of which it is the sine, versed sine, tangent and secant, may be found.
IX. The difference between any angle and a right angle, is called
the complement of that angle. Thus, if BH be drawn perpendicular to AB, the angle CBH will be the complement of the acute angle ABC, or of the obtuse angle CBF. In like manner, the difference between any arch and a quadrant is called the complement of that arch. Thus HC is the complement of the arch AC, or of the arch FC.
X. Let HK be the tangent, CL or DB, which is equal to it, the
sine, and BK the secant of CBH, the complement of ABC, according to def. 5, 7, 8. HK is called the cotangent, BD
the cosine, and BK the cosecant of the angle ABC. Cor. 1. The radius is a mean proportional between the tangent and cotangent of any angle ABC. For, since HK, BA are parallel, the angles HKB, ABC are
equal, and KHB, BAE are right angles; therefore the triangles BAE, KHB are similar, and therefore AE is to AB,
as BH or BA to HK. Cor. 2. The radius is a mean proportional between the cosine
and secant of any angle ABC. Since CD, AE are parallel, BD is to BC or BA, as BA to
BE. Note 1.–For the sake of brevity, certain signs and characters,
borrowed from arithmetic, and some obvious contractions are often used in trigonometrical investigations. Thus, if a and b denote any two numbers, their sum is denoted by a+b; their difference by a—b, or arb; their product by axb, or a.b; their quotient by; their squares by aand
b b>; their square roots by va and vb; the square root of the sum of their squares by vla: +62); the product of
their sum into the sum of any other numbers c and d, by (a+b) x (c+d), or (a +6).(c+d). The mark =denotes the equality of the quantities between which it is written: thus, a=b denotes that a is equal to b; and in the statement of analogies, a:b::c:d, or a:b=c:d, denotes that a is to 6 as c is to d, or that the ratio of a to b is the same with that of c to d. Thus also rad or R is used for radius, sin for sine, tan for tangent, sec for secant, cos for cosine, cot for cotangent, cosec for cosecant; and sin?, cos?, tan”, rad, &c. for the squares of the sine, cosine, tangent, radius, &c. re
spectively. Note 2. In a right angled triangle, the side subtending the
right angle is called the hypotenuse ; and the other two sides which contain the right angle are called the legs; one of the legs is also called the perpendicular, and the other the base, according to their position.
* PROP. I. Fig. 5.
In a right angled plane triangle, the hypotenuse is to either of the legs as the radius to the sine of the angle opposite to that leg, and either of the legs is to the other leg as the radius to the tangent of the angle adjacent to the former leg,
Let ABC be a right angled plane triangle, of which AC is the hypotenuse; assume AG as the tabular radius; from the centre A with the radius AG describe the arch DG, draw DE perpendicular to AG, and from G draw GF touching the circle in G and meeting AC in F; then is DE the sine, and FG the tangent of the arch DG, or of the angle A.
The triangles AED, ABC are equiangular, because the angles AED, ABC are right angles, and the angle A is common; therefore AC is to CB as AD to DE; but AD is the radius, and DE the sine of the angle A; consequently AC : CB :: rad : sin A.
Again, because FG touches the circle in G, AGF is a right angle, and therefore equal to the angle B, and the angle A is common to the two triangles ABC, AGF; these triangles are therefore equiangular; consequently, AB is to BC as AG to GF; but AG is the radius, and FG the tangent of the angle A; therefore AB : BC :: rad : tan A.
Cor. 1. Since AF is the secant of the angle A, (def. 8.), and the triangles AFG, ACB are equiangular, BA is to AC as GA to AF; that is, BA : AC :: rad : sec A.
Cor. 2. In a right angled plane triangle, if the hypotenuse be made radius, the sides become the sines of their opposite angles ; and if either leg be made radius, the other leg becomes the tangent of its opposite angle, and the hypotenuse the secant of the same angle.
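In modern notation Prop. I says sin A = BC/AC and tan A = BC/AB when the radius is taken as 1. A quick numerical check (Python; the 3–4–5 triangle is only an assumed example, not part of the original text):

```python
import math

# Assumed right triangle: right angle at B, legs AB = 4 and BC = 3, hypotenuse AC = 5.
AB, BC = 4.0, 3.0
AC = math.hypot(AB, BC)
A = math.atan2(BC, AB)      # the angle at A, opposite the leg BC

rad = 1.0                   # the tabular radius taken as 1
# Prop. I: AC : CB :: rad : sin A, and AB : BC :: rad : tan A
assert math.isclose(BC / AC, math.sin(A) / rad)
assert math.isclose(BC / AB, math.tan(A) / rad)
# Cor. 1: BA : AC :: rad : sec A
assert math.isclose(AC / AB, (1.0 / math.cos(A)) / rad)
```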
PROP. II. Fig. 6, 7.
The sides of a plane triangle are to one another as the sines of the angles opposite to them.
In right angled triangles, this Proposition is manifest from Prop. 1; for if the hypotenuse be made radius, the sides are the sines of the angles opposite to them, and the radius is the sine of a right angle (cor. to def. 4.) which is opposite to the hypotenuse.
In any oblique angled triangle ABC, any two sides AB, AC will be to one another as the sines of the angles ACB, ABC, which are opposite to them.
From C, B draw CE, BD perpendicular upon the opposite sides AB, AC produced, if need be. Since CEB, CDB are right angles, BC being radius, CE is the sine of the angle CBA, and BD the sine of the angle ACB; but the two triangles CAE, DAB have each a right angle at D and E; and likewise the common angle CAB; therefore they are similar, and consequently, CA is to AB, as CE to DB; that is, the sides are as the sines of the angles opposite to them.
Cor. Hence of two sides, and two angles opposite to them, in a plane triangle, any three being given, the fourth is also given.
* Otherwise. Fig. 16, 17. From A, draw AD perpendicular to BC; then, by Prop. 1.
BA : AD :: rad : sin B, and AD : AC :: sin C : rad; therefore, ex aequo inversely,
BA : AC :: sin C : sin B.
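Prop. II is the law of sines in modern terms; the sketch below (Python, with an arbitrary triangle given by assumed coordinates) checks the proportion numerically rather than proving it:

```python
import math

def side(p, q):
    return math.dist(p, q)

def angle(p, q, r):
    # Interior angle at vertex q of triangle pqr, by the law of cosines.
    a, b, c = side(q, p), side(q, r), side(p, r)
    return math.acos((a * a + b * b - c * c) / (2 * a * b))

# Assumed triangle.
A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
a, b, c = side(B, C), side(C, A), side(A, B)   # sides opposite A, B, C

# The sides are to one another as the sines of the opposite angles.
ratio = a / math.sin(angle(B, A, C))
assert math.isclose(ratio, b / math.sin(angle(A, B, C)))
assert math.isclose(ratio, c / math.sin(angle(A, C, B)))
```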
If there be two unequal magnitudes, half their difference added to half their sum is equal to the greater, and half their difference taken from half their sum is equal to the less.
Let AC and CB be two unequal magnitudes, of which AC is the greater, and AB the sum. Bisect AB in D; and to AD, DB, which are equal, let DC be added; then AC will be equal to BD and DC together; that is, to BC and twice DC; consequently twice DC is the difference, and DC half that difference; but AC the greater is equal to AD, DC; that is, to half the sum added to half the difference, and BC the less is equal to the excess of BD, half the sum, above DC half the difference. Therefore, &c. Q. E. D.
Cor. Hence, if the sum and difference of two magnitudes be given, the magnitudes themselves may be found; for to half the sum add half the difference, and it will give the greater; from half the sum subtract half the difference, and it will give the less.
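The lemma and its corollary amount to the identities greater = (sum + difference)/2 and less = (sum − difference)/2; a brief illustration (Python, with assumed values):

```python
s, d = 14.0, 6.0                          # assumed sum and difference of two magnitudes
greater, less = (s + d) / 2, (s - d) / 2  # 10.0 and 4.0
assert greater + less == s and greater - less == d
```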
PROP. III. Fig. 8.
In a plane triangle, the sum of any two sides is to their difference, as the tangent of half the sum of the angles at the base, to the tangent of half their difference.
Let ABC be a plane triangle, the sum of any two sides AB, AC will be to their difference as the tangent of half the sum of the angles at the base ABC, ACB, to the tangent of half their difference.
About A as a centre, with AB the greater side for a distance, let a circle be described, meeting AC produced in E, F, and BC in D; join DA, EB, FB: and draw FG parallel to BC, meeting EB in G.
The angle EAB (32. 1.) is equal to the sum of the angles at the base, and the angle EFB at the circumference is equal to the half of EAB at the centre, (20. 3.); therefore EFB is half the sum of the angles at the base; but the angle ACB (32. 1.) is equal to the angles CAD and ADC, or ABC together; therefore FAD is the difference of the angles at the base, and FBD at the circumference, or BFG, on account of the parallels FG, BD, is the half of that difference; but since the angle EBF in a semicircle is a right angle, (def. 7.) FB being radius, BE, BG are the tangents of the angles EFB, BFG; but it is manifest that EC is the sum of the sides BA, AC, and CF their difference; and since BC, FG are parallel, (2. 6.) EC is to CF, as EB to BG; that is, the sum of the sides is to their difference, as the tangent of half the sum of the angles at the base to the tangent of half their difference.
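Prop. III is what is now called the law of tangents. A minimal numerical check (Python; the base angles are assumed, and the sides are taken proportional to the sines of their opposite angles, which is all the proportion requires):

```python
import math

B, C = math.radians(50), math.radians(65)   # assumed angles at the base
AB, AC = math.sin(C), math.sin(B)           # sides opposite C and B, up to a common scale

lhs = (AB + AC) / (AB - AC)                           # sum of the sides : their difference
rhs = math.tan((B + C) / 2) / math.tan((C - B) / 2)   # tan half-sum : tan half-difference
assert math.isclose(lhs, rhs)
```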
PROP. IV. Fig. 8.
In a plane triangle, the cosine of half the difference of any two angles is to the cosine of half their sum, as the sum of the opposite sides to the third side; and the sine of half the difference of any two angles is to the sine of half their sum, as the difference of the opposite sides to the third side.
Let ABC be a plane triangle; then, cos ½(C−B) : cos ½(C+B) :: BA+AC : BC, and sin ½(C−B) : sin ½(C+B) :: BA−AC : BC.
For, in the preceding proposition, it was shown that EFB is equal to ½(C+B), and that CBF is equal to ½(C−B); and since EBF is a right angle, CBE is the complement of CBF, and E the complement of BFE. Now, in the triangle CBE, sin CBE : sin E :: CE : BC; that is,
cos ½(C−B) : cos ½(C+B) :: AB+AC : BC. Again, in the triangle CBF, sin CBF : sin CFB :: CF : BC; that is, sin ½(C−B) : sin ½(C+B) :: AB−AC : BC. Therefore, &c. Q. E. D.
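Prop. IV gives what are now usually called Mollweide's formulas. The same kind of numerical check (Python; the angles are assumed, and the sides are built from the law of sines with a common scale of 1):

```python
import math

B, C = math.radians(50), math.radians(65)            # assumed angles at B and C
A = math.pi - B - C
BC, AC, AB = math.sin(A), math.sin(B), math.sin(C)   # sides opposite A, B, C

# cos ½(C−B) : cos ½(C+B) :: AB + AC : BC
assert math.isclose(math.cos((C - B) / 2) / math.cos((C + B) / 2), (AB + AC) / BC)
# sin ½(C−B) : sin ½(C+B) :: AB − AC : BC
assert math.isclose(math.sin((C - B) / 2) / math.sin((C + B) / 2), (AB - AC) / BC)
```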
PROP. V. Fig. 18.
In any plane triangle BAC, whose two sides are BA, AC, and base BC, the less of the two sides, which let be BA, is to the greater AC as the radius is to the tangent of an angle, and the radius is to the tangent of the excess of this angle above half a right angle as the tangent of half the sum of the angles B and C at the base, is to the tangent of half their difference,
At the point A, draw the straight line EAD perpendicular to BA; make AE, AF, each equal to AB, and AD to AC; join BE, BF, BD, and from D, draw DG perpendicular to BF. And because BA is at right angles to EF, and EA, AB, AF are equal, each of the angles EBA, ABF is half a right angle, and the whole EBF is a right angle; also (4. 1. El.) EB is equal to BF. And since EBF, FGD are right angles, EB is parallel to GD, and the triangles EBF, FGD are similar; therefore EB is to BF, as DG to GF, and EB being equal to BF, FG must be equal to GD. And because BAD |
Wikipedia articles of interest
i. Scattered disc (featured).
“The scattered disc (or scattered disk) is a distant region of the Solar System that is sparsely populated by icy minor planets, a subset of the broader family of trans-Neptunian objects. The scattered-disc objects (SDOs) have orbital eccentricities ranging as high as 0.8, inclinations as high as 40°, and perihelia greater than 30 astronomical units (4.5×10⁹ km; 2.8×10⁹ mi). These extreme orbits are believed to be the result of gravitational “scattering” by the gas giants, and the objects continue to be subject to perturbation by the planet Neptune. While the nearest distance to the Sun approached by scattered objects is about 30–35 AU, their orbits can extend well beyond 100 AU. This makes scattered objects “among the most distant and cold objects in the Solar System”. The innermost portion of the scattered disc overlaps with a torus-shaped region of orbiting objects traditionally called the Kuiper belt, but its outer limits reach much farther away from the Sun and farther above and below the ecliptic than the belt proper.
Because of its unstable nature, astronomers now consider the scattered disc to be the place of origin for most periodic comets observed in the Solar System, with the centaurs, a population of icy bodies between Jupiter and Neptune, being the intermediate stage in an object’s migration from the disc to the inner Solar System. Eventually, perturbations from the giant planets send such objects towards the Sun, transforming them into periodic comets. Many Oort-cloud objects are also believed to have originated in the scattered disc. […]
The Kuiper belt is a relatively thick torus (or “doughnut”) of space, extending from about 30 to 50 AU comprising two main populations of Kuiper belt objects (KBOs): the classical Kuiper-belt objects (or “cubewanos”), which lie in orbits untouched by Neptune, and the resonant Kuiper-belt objects; those which Neptune has locked into a precise orbital ratio such as 3:2 (the object goes around twice for every three Neptune orbits) and 2:1 (the object goes around once for every two Neptune orbits). These ratios, called orbital resonances, allow KBOs to persist in regions which Neptune’s gravitational influence would otherwise have cleared out over the age of the Solar System, since the objects are never close enough to Neptune to be scattered by its gravity. Those in 3:2 resonances are known as “plutinos“, because Pluto is the largest member of their group, whereas those in 2:1 resonances are known as “twotinos“.
In contrast to the Kuiper belt, the scattered-disc population can be disturbed by Neptune. […] The MPC […] makes a clear distinction between the Kuiper belt and the scattered disc; separating those objects in stable orbits (the Kuiper belt) from those in scattered orbits (the scattered disc and the centaurs). However, the difference between the Kuiper belt and the scattered disc is not clearcut, and many astronomers see the scattered disc not as a separate population but as an outward region of the Kuiper belt.”
ii. Bobcat (featured).
“The bobcat (Lynx rufus) is a North American mammal of the cat family Felidae, appearing during the Irvingtonian stage of around 1.8 million years ago (AEO). With 12 recognized subspecies, it ranges from southern Canada to northern Mexico, including most of the continental United States. The bobcat is an adaptable predator that inhabits wooded areas, as well as semidesert, urban edge, forest edges, and swampland environments. It persists in much of its original range, and populations are healthy.
With a gray to brown coat, whiskered face, and black-tufted ears, the bobcat resembles the other species of the mid-sized Lynx genus. It is smaller on average than the Canada lynx, with which it shares parts of its range, but is about twice as large as the domestic cat. It has distinctive black bars on its forelegs and a black-tipped, stubby tail, from which it derives its name.
Though the bobcat prefers rabbits and hares, it will hunt anything from insects, chickens, and small rodents to deer. Prey selection depends on location and habitat, season, and abundance. Like most cats, the bobcat is territorial and largely solitary […]
The bobcat is believed to have evolved from the Eurasian lynx, which crossed into North America by way of the Bering Land Bridge during the Pleistocene, with progenitors arriving as early as 2.6 mya. The first wave moved into the southern portion of North America, which was soon cut off from the north by glaciers. This population evolved into modern bobcats around 20,000 years ago. A second population arrived from Asia and settled in the north, developing into the modern Canada lynx. Hybridization between the bobcat and the Canada lynx may sometimes occur […]
The bobcat has long been valued both for fur and sport; it has been hunted and trapped by humans, but has maintained a high population, even in the southern United States, where it is extensively hunted. Indirectly, kittens are most vulnerable to hunting given their dependence on an adult female for the first few months of life. […] The IUCN lists it as a species of “least concern“, noting it is relatively widespread and abundant”
iii. Luis Walter Alvarez (good article). A remarkable man who lived a remarkable life:
After receiving his PhD from the University of Chicago in 1936, Alvarez went to work for Ernest Lawrence at the Radiation Laboratory at the University of California, Berkeley. Alvarez devised a set of experiments to observe K-electron capture in radioactive nuclei, predicted by the beta decay theory but never observed. He produced ³H using the cyclotron and measured its lifetime. In collaboration with Felix Bloch, he measured the magnetic moment of the neutron.
In 1940 Alvarez joined the MIT Radiation Laboratory, where he contributed to a number of World War II radar projects […] Alvarez spent a few months at the University of Chicago working on nuclear reactors for Enrico Fermi before coming to Los Alamos to work for Robert Oppenheimer on the Manhattan project. Alvarez worked on the design of explosive lenses, and the development of exploding-bridgewire detonators. As a member of Project Alberta, he observed the Trinity nuclear test from a B-29 Superfortress, and later the bombing of Hiroshima from the B-29 The Great Artiste. […]
After the war Alvarez was involved in the design of a liquid hydrogen bubble chamber that allowed his team to take millions of photographs of particle interactions, develop complex computer systems to measure and analyze these interactions, and discover entire families of new particles and resonance states. This work resulted in his being awarded the Nobel Prize in 1968. He was involved in a project to x-ray the Egyptian pyramids to search for unknown chambers. He analyzed film footage of the Kennedy assassination, and with his son, geologist Walter Alvarez, developed the Alvarez hypothesis which proposes that the extinction event that wiped out the dinosaurs was the result of an asteroid impact. […]
As a result of his radar work and the few months spent with Fermi, Alvarez arrived at Los Alamos in the spring of 1944, later than many of his contemporaries. The work on the “Little Boy” (a uranium bomb) was far along so Alvarez became involved in the design of the “Fat Man” (a plutonium bomb). The technique used for uranium, that of forcing the two sub-critical masses together using a type of gun, would not work with plutonium because the high level of background spontaneous neutrons would cause fissions as soon as the two parts approached each other, so heat and expansion would force the system apart before much energy has been released. It was decided to use a nearly critical sphere of plutonium and compress it quickly by explosives into a much smaller and denser core, a technical challenge at the time.
To create the symmetrical implosion required to compress the plutonium core to the required density, thirty two explosive charges were to be simultaneously detonated around the spherical core. Using conventional explosive techniques with blasting caps, progress towards achieving simultaneity to within a small fraction of a microsecond was discouraging. Alvarez directed his graduate student, Lawrence H. Johnston, to use a large capacitor to deliver a high voltage charge directly to each explosive lens, replacing blasting caps with exploding-bridgewire detonators. The exploding wire detonated the thirty two charges to within a few tenths of a microsecond. The invention was critical to the success of the implosion-type nuclear weapon.”
iv. Nuclear binding energy. The ‘main article’ about binding energy is less detailed, but if you’re interested in this stuff you may want to check that one out too. It’s clearly still ‘a work in progress’, but there’s some good stuff here. From the article:
“Nuclear binding energy is the energy required to split a nucleus of an atom into its component parts. The component parts are neutrons and protons, which are collectively called nucleons. The binding energy of nuclei is always a positive number, since all nuclei require net energy to separate them into individual protons and neutrons. Thus, the mass of an atom’s nucleus is always less than the sum of the individual masses of the constituent protons and neutrons when separated. This notable difference is a measure of the nuclear binding energy, which is a result of forces that hold the nucleus together. Because these forces result in the removal of energy when the nucleus is formed, and this energy has mass, mass is removed from the total mass of the original particles, and the mass is missing in the resulting nucleus. This missing mass is known as the mass defect, and represents the energy released when the nucleus is formed.
The term nuclear binding energy may also refer to the energy balance in processes in which the nucleus splits into fragments composed of more than one nucleon, and in this case the binding energies for the fragments, as compared to the whole, may be either positive or negative, depending on where the parent nucleus and the daughter fragments fall on the nuclear binding energy curve. If new binding energy is available when light nuclei fuse, or when heavy nuclei split, either of these processes result in releases of the binding energy. This energy, available as nuclear energy, can be used to produce electricity (nuclear power) or as a nuclear weapon. When a large nucleus splits into pieces, excess energy is emitted as photons (gamma rays) and as kinetic energy of a number of different ejected particles (nuclear fission products).
Total mass is conserved throughout all such processes, so long as the system is isolated. During each nuclear transmutation, the “mass defect” mass is relocated to, or carried away by, other particles that are no longer a part of the original nucleus.
The mass defect of a nucleus represents the mass of the energy of binding of the nucleus, and is the difference between the mass of a nucleus and the sum of the masses of the nucleons of which it is composed. […]
Small nuclei that are larger than hydrogen can combine into bigger ones and release energy, but in combining such nuclei, the amount of energy released is much smaller compared to hydrogen fusion. The reason is that while the overall process releases energy from letting the nuclear attraction do its work, energy must first be injected to force together positively charged protons, which also repel each other with their electric charge.
For elements that weigh more than iron (a nucleus with 26 protons), the fusion process no longer releases energy. In even heavier nuclei energy is consumed, not released, by combining similar sized nuclei. With such large nuclei, overcoming the electric repulsion (which affects all protons in the nucleus) requires more energy than what is released by the nuclear attraction (which is effective mainly between close neighbors). […]
Nuclei heavier than uranium spontaneously break up too quickly to appear in nature, though they can be produced artificially. Generally, the heavier the nuclei are, the faster they spontaneously decay.
Iron nuclei are the most stable nuclei (in particular iron-56), and the best sources of energy are therefore nuclei whose weights are as far removed from iron as possible. One can combine the lightest ones—nuclei of hydrogen (protons)—to form nuclei of helium, and that is how the Sun generates its energy. Or else one can break up the heaviest ones—nuclei of uranium—into smaller fragments, and that is what nuclear power reactors do.”
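To make the mass-defect arithmetic concrete, here is a rough back-of-the-envelope computation for helium-4 (Python; the masses and conversion factor are approximate textbook values, so treat the numbers as illustrative only):

```python
# Approximate rest masses in unified atomic mass units (u).
m_proton = 1.007276
m_neutron = 1.008665
m_he4_nucleus = 4.001506
u_to_MeV = 931.494            # energy equivalent of 1 u

# Mass defect: free constituents minus the bound nucleus.
defect = 2 * m_proton + 2 * m_neutron - m_he4_nucleus
binding_energy = defect * u_to_MeV
print(f"mass defect ≈ {defect:.6f} u, binding energy ≈ {binding_energy:.1f} MeV")
# roughly 28.3 MeV in total, i.e. about 7.1 MeV per nucleon
```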
v. Surrender of Japan (featured). Lots of good stuff here I did not know.
vi. Spinal cord injury.
“A spinal cord injury (SCI) refers to any injury to the spinal cord that is caused by trauma instead of disease. Depending on where the spinal cord and nerve roots are damaged, the symptoms can vary widely, from pain to paralysis to incontinence. Spinal cord injuries are described at various levels of “incomplete”, which can vary from having no effect on the patient to a “complete” injury which means a total loss of function.
Treatment of spinal cord injuries starts with restraining the spine and controlling inflammation to prevent further damage. The actual treatment can vary widely depending on the location and extent of the injury. In many cases, spinal cord injuries require substantial physical therapy and rehabilitation, especially if the patient’s injury interferes with activities of daily life.
Spinal cord injuries have many causes, but are typically associated with major trauma from motor vehicle accidents, falls, sports injuries, and violence. Research into treatments for spinal cord injuries includes controlled hypothermia and stem cells, though many treatments have not been studied thoroughly and very little new research has been implemented in standard care. […]
In a “complete” spinal injury, all function below the injured area are lost. In an “incomplete” injury, some or all of the functions below the injured area may be unaffected. If the patient has the ability to contract the anal sphincter voluntarily or to feel a pinprick or touch around the anus, the injury is considered to be incomplete. The nerves in this area are connected to the very lowest region of the spine, the sacral region, and retaining sensation and function in these parts of the body indicates that the spinal cord is only partially damaged. An incomplete spinal cord injury involves preservation of motor or sensory function below the level of injury in the spinal cord. […]
Spinal cord injuries frequently result in at least some incurable impairment even with the best possible treatment. In general, patients with complete injuries recover very little lost function and patients with incomplete injuries have more hope of recovery. Some patients that are initially assessed as having complete injuries are later reclassified as having incomplete injuries.
The place of the injury determines which parts of the body are affected. The severity of the injury determines how much the body will be affected. Consequently, a person with a mild, incomplete injury at the T5 vertebrae will have a much better chance of using his or her legs than a person with a severe, complete injury at exactly the same place in the spine.
Recovery is typically quickest during the first six months, with very few patients experiencing any substantial recovery more than nine months after the injury. […]
In the United States, the incidence of spinal cord injury has been estimated to be about 40 cases (per 1 million people) per year or around 12,000 cases per year. The most common causes of spinal cord injury are motor vehicle accidents, falls, violence and sports injuries. The average age at the time of injury has slowly increased from a reported 29 years of age in the mid-1970s to a current average of around 40. Over 80% of the spinal injuries reported to a major national database occurred in males. In the United States there are around 250,000 individuals living with spinal cord injuries.”
vii. Rhabdomyolysis (featured).
“Rhabdomyolysis /ˌræbdɵmaɪˈɒlɨsɪs/ is a condition in which damaged skeletal muscle tissue (Greek: ῥαβδω rhabdo- striped μυς myo- muscle) breaks down (Greek: λύσις –lysis) rapidly. Breakdown products of damaged muscle cells are released into the bloodstream; some of these, such as the protein myoglobin, are harmful to the kidneys and may lead to kidney failure. The severity of the symptoms, which may include muscle pains, vomiting and confusion, depends on the extent of muscle damage and whether kidney failure develops. The muscle damage may be caused by physical factors (e.g. crush injury, strenuous exercise), medications, drug abuse, and infections. Some people have a hereditary muscle condition that increases the risk of rhabdomyolysis. The diagnosis is usually made with blood tests and urinalysis. The mainstay of treatment is generous quantities of intravenous fluids, but may include dialysis or hemofiltration in more severe cases.
Rhabdomyolysis and its complications are significant problems for those injured in disasters such as earthquakes and bombings. […]
Damage to skeletal muscle may take various forms. Crush injuries and other physical causes damage muscle cells directly or interfere with their blood supply, while non-physical causes interfere with muscle cell metabolism. When damaged, muscle tissue rapidly fills with fluid from the bloodstream, including sodium ions. The swelling itself may lead to destruction of muscle cells, but those cells that survive are subject to various disruptions that lead to rise in intracellular calcium ions; the accumulation of calcium in the sarcoplasmic reticulum leads to continuous muscle contraction and depletion of ATP, the main carrier of energy in the cell. ATP depletion can itself lead to uncontrolled calcium influx. The persistent contraction of the muscle cell leads to breakdown of intracellular proteins and disintegration of the cell.
Neutrophil granulocytes—the most abundant type of white blood cell—enter the muscle tissue, producing an inflammatory reaction and releasing reactive oxygen species, particularly after crush injury. Crush syndrome may also cause reperfusion injury when blood flow to decompressed muscle is suddenly restored.
The swollen, inflamed muscle may directly compress structures in the same fascial compartment, causing compartment syndrome. The swelling may also further compromise blood supply into the area. Finally, destroyed muscle cells release potassium ions, phosphate ions, the heme-containing protein myoglobin, the enzyme creatine kinase and uric acid (a breakdown product of purines from DNA) into the blood. Activation of the coagulation system may precipitate disseminated intravascular coagulation. High potassium levels may lead to potentially fatal disruptions in heart rhythm. Phosphate binds to calcium from the circulation, leading to low calcium levels in the blood.
The prognosis depends on the underlying cause and whether any complications occur. Rhabdomyolysis complicated by acute kidney impairment in patients with traumatic injury may have a mortality rate of 20%. Admission to the intensive care unit is associated with a mortality of 22% in the absence of acute kidney injury, and 59% if renal impairment occurs. Most people who have sustained renal impairment due to rhabdomyolysis fully recover their renal function. […]
Up to 85% of people with major traumatic injuries will experience some degree of rhabdomyolysis. Of those with rhabdomyolysis, 10–50% develop acute kidney injury. […] Rhabdomyolysis accounts for 7–10% of all cases of acute kidney injury in the U.S.“
Welcome to the gradient calculator, where you'll have the opportunity to learn how to calculate the gradient of a line going through two points. "What is the gradient?" you may ask. Well, have you ever looked at a mountain and said to yourself, "Wow, that mountain is quite steep, but not as steep as the one next to it!"? And if that kind of question has left you wondering how their steepness compares, you've come to the right place! Keep reading to know the gradient definition.
If you want to find the gradient of a non-linear function, we recommend checking the average rate of change calculator.
What is the gradient?
Before we look at what the gradient is, let's return to our mountain scene and the absolutely crucial question of steepness.
Let's say you're skiing down a slope when The Big Question hits you. You stop and think about it before going any further. As we've mentioned above, all you need is two points to find the gradient, so why not be a little self-centered and choose yourself as the... well, center, that is, the point (x₁,y₁) = (0,0) on the plane.
Now we're left with finding a second point, (x₂,y₂), up or down the slope. You look around to find some particularly bushy tree or a pretty young skier. Or an old smelly one, for that matter; I'm not judging.
Tell the tree or the skier to stand still while you use your handy ruler (that you always carry around with you, of course) to count how much higher/lower they are from you (that will be y₂) and how far they are from you (that will be x₂). Remember to count the distance between you two horizontally, not parallel to the slope. And there you have it! The ratio of y₂ / x₂ is your gradient or the steepness of the mountain at that point.
For sticking around while you perform your quick experiment, go and buy that skier some hot chocolate or hug the tree. They deserve as much.
An informal definition of the gradient is as follows: it is a mathematical way of measuring how fast a line rises or falls. Think of it as a number you assign to a hill, a road, a path, etc., that tells you how much effort you have to put into cycling it (related to the calories burned by biking). If you're going uphill, you must struggle to reach the peak, so the energy needed (i.e., the gradient) is large. If you're going downhill, you don't even have to pedal to pick up speed, so the effort is, in fact, negative. And if you're on flat ground, it neither helps nor makes it harder, so it is neutral or has a gradient of zero.
And what if you're facing a vertical slope? Well, it's not always clear if you want to fall down it (which is effortless) or go scrambling up it. Therefore, in this case, the gradient is undefined.
We calculate the gradient the same way we calculate the slope. We find two points and denote them with the cartesian coordinates (x₁,y₁) and (x₂,y₂), respectively. This is also the notation used in the calculator. Note that we used the same symbols in the real-life example. We want to see how they relate to each other, that is, what is the rise over run ratio between them. It is described by the gradient formula:
gradient = rise / run
with rise = y₂ − y₁ and run = x₂ − x₁. The rise is how much higher/lower the second point is from the first, and the run is how far (horizontally) they are from each other. We talk more about it in the dedicated rise over run calculator.
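As a sketch, the gradient formula translates directly into a couple of lines of code (Python); the points fed in below are the same ones used in the worked example further down:

```python
def gradient(x1, y1, x2, y2):
    """Rise over run between two points; undefined for a vertical line."""
    rise = y2 - y1
    run = x2 - x1
    if run == 0:
        raise ValueError("vertical line: the gradient is undefined")
    return rise / run

print(gradient(-2, 1, 3, 11))   # 2.0
```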
How to use this gradient calculator
Now that we know the gradient definition, it's time to see the gradient calculator in action and go through how to use it together, step by step:
Find two arbitrary points on the line you want to study and find their cartesian coordinates. Let's say we want to calculate the gradient of a line going through points (-2,1) and (3,11).
Take the first point's coordinates and put them in the calculator as x₁ and y₁.
Do the same with the second point, this time as x₂ and y₂.
The calculator will automatically use the gradient formula and count it to be (11 − 1) / (3 − (-2)) = 2.
Enjoy the knowledge of how steep the slope of your line is, and go tell all your friends about it!
Common misconceptions and mistakes
You may ask yourself, "Hold on, I think I've seen this elsewhere. Doesn't something similar happen when you count the slope or the rise over run?" You're absolutely right. All three concepts: gradient, slope, and rise over run, describe the same thing, so don't worry, as there is no difference between them.
You may also wonder how steep is steep; that is, what does the 2 in the above example tell us. Is it a lot, or is it not? Is the pretty skier going to be impressed by this number? Well, it's all a matter of perspective, and some may say one thing, while others will say the opposite. As a point of reference, you should remember that having a line parallel to the horizon is considered neutral here, as the gradient equals zero. When it rises (or falls), it becomes more and more like a line perpendicular to the horizon, where the slope goes to infinity when it rises (or minus infinity when it falls).
How do I calculate gradient?
To determine the gradient of two points (x₁,y₁) and (x₂,y₂):
- Calculate rise as y₂ − y₁.
- Calculate run as x₂ − x₁.
- To find gradient, perform the division
rise / run.
- Don't hesitate to verify your result with an online gradient calculator.
What does a 1/10 gradient mean?
A gradient of 1/10 means that the height changes by 1 meter for every 10 meters of horizontal (forward) distance. This slope can also be expressed as the ratio 1:10 or as 10%.
What is the rise if gradient is 2 and run is 10?
The answer is 20. This is because the gradient is defined as rise over run:
gradient = rise / run, and so
rise = gradient × run. For
gradient = 2 and
run = 10, we obtain
rise = 2 × 10 = 20.
What is the run if gradient is 20% and rise is 2?
The answer is 10. To get this result, recall the formula
gradient = rise / run and transform it to
run = rise / gradient. Plugging in the data, we obtain
run = 2 / 0.2 = 10. |
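The last two answers are just the gradient formula solved for a different unknown; a minimal illustration (Python):

```python
def rise_from(gradient, run):
    return gradient * run          # rise = gradient × run

def run_from(gradient, rise):
    return rise / gradient         # run = rise / gradient

print(rise_from(2, 10))    # 20
print(run_from(0.20, 2))   # 10.0
```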
About This Chapter
WEST Math: Discrete Probability Distributions - Chapter Summary
This chapter covers the topics related to discrete probability distributions that you need to know for the WEST Math certification assessment. Short, entertaining video lessons break down the following concepts:
- How to calculate the expected value of a random variable
- Discrete probability distributions
- Binomial experiments and binomial probabilities
- Binomial random variables
Use the video transcripts to hunt down key terms and ideas. Lesson quizzes offer you the opportunity to test your understanding of discrete probability distributions and pinpoint areas where additional review would be helpful. If you need any assistance, ask our instructors for help.
Objectives of the WEST Math: Discrete Probability Distributions Chapter
Use the resources in this chapter to master discrete probability distributions as you prepare for the WEST Math certification exam. Lesson quizzes offer the opportunity to assess your exam pace - questions are styled after the WEST format. You will have four hours and fifteen minutes to complete all 150 multiple-choice questions on the assessment.
Questions about statistics, probability and discrete mathematics account for 19% of your assessment score. The four remaining content domains are weighted as follows: mathematical processes and number sense (19%); patterns, algebra and functions (24%); measurement and geometry (19%); and trigonometry and calculus (19%).
1. Random Variables: Definition, Types & Examples
This lesson defines the term random variables in the context of probability. You'll learn about certain properties of random variables and the different types of random variables.
2. Finding & Interpreting the Expected Value of a Discrete Random Variable
Discrete random variables appear in your life a lot more than you think. You can use the probabilities of a discrete random variable to find its expected value, as shown in this lesson.
3. Developing Discrete Probability Distributions Theoretically & Finding Expected Values
In this lesson, we will look at generating a theoretical probability distribution for a discrete random variable and introduce the concept of expected value.
4. Developing Discrete Probability Distributions Empirically & Finding Expected Values
In this lesson, we will look at creating a discrete probability distribution given a set of discrete data. We will also look at determining the expected value of the distribution.
5. Binomial Experiments: Definition, Characteristics & Examples
Binomial experiments happen in your everyday life far more often than you might think. In this lesson, you will learn the characteristics of binomial experiments that will help you identify them.
6. Finding Binomial Probabilities Using Formulas: Process & Examples
You can find the probability of getting a certain number of successes when conducting a binomial experiment. In this lesson, you will learn how to find this information using the binomial probabilities formula.
7. Finding Binomial Probabilities Using Tables
A binomial probability table can look intimidating to use. However, it can make your life a lot easier when trying to figure out binomial probabilities. This lesson will teach you how to read those tables.
8. Mean & Standard Deviation of a Binomial Random Variable: Formula & Example
When working with binomial random variables and experiments, it is important to understand the mean and standard deviation. In this lesson, you will learn how to analyze binomial experiments using the mean and standard deviation of a binomial random variable.
9. Solving Problems with Binomial Experiments: Steps & Example
Sometimes when conducting research you will need to use binomial experiments to solve problems. In this lesson, you will learn about binomial experiments and how to use probability to solve problems.
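Lessons 5 through 9 above revolve around the binomial setting. As an illustrative aside (not part of the WEST materials), the sketch below applies the standard binomial probability formula and the mean and standard deviation formulas; the coin and quiz numbers are made up for the example.

from math import comb, sqrt

def binomial_probability(n, k, p):
    # P(exactly k successes in n independent trials, each with success probability p)
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(binomial_probability(5, 3, 0.5))   # exactly 3 heads in 5 fair coin tosses: 0.3125

# Mean and standard deviation of a binomial random variable
n, p = 20, 0.25                          # e.g. guessing on 20 four-choice questions
print(n * p, sqrt(n * p * (1 - p)))      # mean 5.0, standard deviation about 1.94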
Other chapters within the NES Mathematics - WEST (304): Practice & Study Guide course
- WEST Math: Properties of Real Numbers
- WEST Math: Fractions
- WEST Math: Decimals & Percents
- WEST Math: Ratios & Proportions
- WEST Math: Units of Measure & Conversions
- WEST Math: Logic
- WEST Math: Reasoning
- WEST Math: Vector Operations
- WEST Math: Matrix Operations & Determinants
- WEST Math: Exponents & Exponential Expressions
- WEST Math: Algebraic Expressions
- WEST Math: Linear Equations
- WEST Math: Inequalities
- WEST Math: Absolute Value
- WEST Math: Quadratic Equations
- WEST Math: Polynomials
- WEST Math: Rational Expressions
- WEST Math: Radical Expressions
- WEST Math: Systems of Equations
- WEST Math: Complex Numbers
- WEST Math: Functions
- WEST Math: Piecewise Functions
- WEST Math: Exponential & Logarithmic Functions
- WEST Math: Continuity of a Function
- WEST Math: Limits
- WEST Math: Rate of Change
- WEST Math: Derivative Rules
- WEST Math: Graphing Derivatives
- WEST Math: Applications of Derivatives
- WEST Math: Area Under the Curve & Integrals
- WEST Math: Integration Techniques
- WEST Math: Applications of Integration
- WEST Math: Foundations of Geometry
- WEST Math: Geometric Figures
- WEST Math: Properties of Triangles
- WEST Math: Triangle Theorems & Proofs
- WEST Math: Parallel Lines & Polygons
- WEST Math: Quadrilaterals
- WEST Math: Circles & Arc of a Circle
- WEST Math: Conic Sections
- WEST Math: Geometric Solids
- WEST Math: Analytical Geometry
- WEST Math: Trigonometric Functions
- WEST Math: Trigonometric Graphs
- WEST Math: Solving Trigonometric Equations
- WEST Math: Trigonometric Identities
- WEST Math: Sequences & Series
- WEST Math: Set Theory
- WEST Math: Statistics Overview
- WEST Math: Summarizing Data
- WEST Math: Tables, Plots & Graphs
- WEST Math: Probability
- WEST Math: Continuous Probability Distributions
- WEST Math: Sampling
- NES Mathematics WEST Flashcards |
1: An Introduction to Algebra II
Professor Sellers explains the topics covered in the course, the importance of algebra, and how you can get the most out of these lessons. You then launch into the fundamentals of algebra by reviewing the order of operations and trying your hand at several problems.
3: Solving Equations Involving Absolute Values
Taking your knowledge of linear equations a step further, look at examples involving absolute values, which can be thought of as a distance on a number line, always expressed as a positive value. Use your critical-thinking skills to recognize absolute value problems that have limited or no solutions.
4: Linear Equations and Functions
Moving into the visual realm, learn how linear equations are represented as straight lines on graphs using either the slope-intercept or point-slope forms of the function. Next, investigate parallel and perpendicular lines and how to identify them by the value of their slopes.
5: Graphing Essentials
Reversing the procedure from the previous lesson, start with an equation and draw the line that corresponds to it. Then test your knowledge by matching four linear equations to their graphs. Finally, learn how to rewrite an equation to move its graph up, down, left, or right-or flip it entirely.
6: Functions-Introduction, Examples, Terminology
Functions are crucially important not only for algebra, but for precalculus, calculus, and higher mathematics. Learn the definition of a function, the notation, and associated concepts such as domain and range. Then try out the vertical line test for determining whether a given curve is a graph of a function.
8: Systems of 2 Linear Equations, Part 2
Explore two other techniques for solving systems of two linear equations. First, the method of substitution solves one of the equations and substitutes the result into the other. Second, the method of elimination adds or subtracts the equations to see if a variable can be eliminated.
9: Systems of 3 Linear Equations
As the number of variables increases, it becomes unwieldy to solve systems of linear equations by graphing. Learn that these problems are not as hard as they look and that systems of three linear equations often yield to the strategy of successively eliminating variables.
11: An Introduction to Quadratic Functions
Begin your investigation of quadratic functions by visualizing what these functions look like when graphed. They always form a U-shaped curve called a parabola, whose location on the coordinate plane can be predicted based on the individual terms of the equation.
12: Quadratic Equations-Factoring
One of the most important skills related to quadratics is factoring. Review the basics of factoring, and learn to recognize a very useful special case known as the difference of two squares. Close by working on a word problem that translates into a quadratic equation.
13: Quadratic Equations-Square Roots
The square root approach to solving quadratic equations works not just for perfect squares, such as 3 × 3 = 9, but also for values that don't seem to involve squares at all. Probe the idea behind this technique, and also venture into the strange world of complex numbers.
14: Completing the Square
Turn a quadratic equation into an easily solvable form that includes a perfect square-a technique called completing the square. An important benefit of this approach is that the rewritten form gives the coordinates for the vertex of the parabola represented by the equation.
15: Using the Quadratic Formula
When other approaches fail, one tool can solve every quadratic equation: the quadratic formula. Practice this formula on a wide range of problems, learning how a special expression called the discriminant immediately tells how many real-number solutions the equation has....
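As an illustrative aside (not part of the course), here is a small Python sketch of the quadratic formula that also reports what the sign of the discriminant says about the solutions; the coefficients in the example are arbitrary.

import cmath

def solve_quadratic(a, b, c):
    # Apply the quadratic formula to a*x^2 + b*x + c = 0.
    discriminant = b**2 - 4*a*c
    if discriminant > 0:
        kind = "two real solutions"
    elif discriminant == 0:
        kind = "one repeated real solution"
    else:
        kind = "two imaginary (complex) solutions"
    root = cmath.sqrt(discriminant)            # handles negative discriminants too
    return kind, (-b + root) / (2*a), (-b - root) / (2*a)

print(solve_quadratic(1, -6, 5))   # discriminant 16 > 0: ('two real solutions', (5+0j), (1+0j))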
16: Solving Quadratic Inequalities
Extending the exercises on inequalities from lecture 10, step into the realm of quadratic inequalities, where the boundary graph is not a straight line but a parabola. Use your skills analyzing quadratic expressions to sketch graphs quickly and solve systems of quadratic inequalities.
17: Conic Sections-Parabolas and Hyperbolas
Delve into the algebra of conic sections, which are the cross-sectional shapes produced by slicing a cone at different angles. In this lesson, study parabolas and hyperbolas, which differ in how many variable terms are squared in each. Also learn how to sketch a hyperbola from its equation.
18: Conic Sections-Circles and Ellipses
Investigate the algebraic properties of the other two conic sections: ellipses and circles. Ellipses resemble stretched circles and are defined by their major and minor axes, whose ratio determines the ellipse's eccentricity. Circles are ellipses whose eccentricity = 0, with the major and minor axes equal.
19: An Introduction to Polynomials
Pause to examine the nature of polynomials-a class of algebraic expressions that you've been working with since the beginning of the course. Professor Sellers introduces several useful concepts, such as the standard form of polynomials and their degree, domain, range, and leading coefficients.
20: Graphing Polynomial Functions
Deepen your insight into polynomial functions by graphing them to see how they differ from non-polynomials. Then learn how the general shape of the graph can be predicted from the highest exponent of the polynomial, known as its degree. Finally, explore how other terms in the function also affect the graph.
21: Combining Polynomials
Switch from graphs to the algebraic side of polynomial functions, learning how to combine them in many different ways, including addition, subtraction, multiplication, and even long division, which is easier than it seems. Discover which of these operations produce new polynomials and which do not.
22: Solving Special Polynomial Equations
Learn how to solve polynomial equations where the degree is greater than two by turning them into expressions you already know how to handle. Your "toolbox" includes techniques called the difference of two squares, the difference of two cubes, and the sum of two cubes.
23: Rational Roots of Polynomial Equations
Going beyond the approaches you've learned so far, discover how to solve polynomial equations by applying two powerful tools for finding rational roots: the rational roots theorem and the factor theorem. Both will prove very useful in succeeding lessons.
24: The Fundamental Theorem of Algebra
Explore two additional tools for identifying the roots of polynomial equations: Descartes' rule of signs, which narrows down the number of possible positive and negative real roots; and the fundamental theorem of algebra, which gives the total of all roots for a given polynomial.
25: Roots and Radical Expressions
Shift gears away from polynomials to focus on expressions involving roots, including square roots, cube roots, and roots of higher degrees-all known as radical expressions. Practice multiplying, dividing, adding, and subtracting a wide variety of radical expressions.
26: Solving Equations Involving Radicals
Drawing on your experience with roots and radicals from the previous lesson, try your hand at solving equations with these expressions. Begin by learning how to manipulate rational, or fractional, exponents. Then practice with simple equations, while being on the lookout for extraneous, or "imposter," solutions.
27: Graphing Power, Radical, and Root Functions
Using graph paper, experiment with curves formed by simple radical functions. First, determine the domain of the function, which tells you the general location of the graph on the coordinate plane. Then, investigate how different terms in the function alter the graph in predictable ways.
28: An Introduction to Rational Functions
Shift your focus to graphs of rational functions-functions that are the ratio of two polynomials. These graphs are more complicated than those from the previous lesson, but their general characteristics can be quickly determined by calculating the domain, the x- and y-intercepts, and the vertical and horizontal asymptotes....
29: The Algebra of Rational Functions
Combine rational functions using addition, subtraction, multiplication, division, and composition. The trick is to start each problem by putting the expressions in factored form, which makes the calculations go more smoothly. Leaving the answer in factored form also allows other operations, such as graphing, to be easily performed.
30: Partial Fractions
Now that you know how to add rational expressions, try the opposite procedure of splitting a more complicated rational expression into its component parts. Called partial fraction decomposition, this approach is a topic in introductory calculus and is used for solving a wide range of more advanced math problems.
31: An Introduction to Exponential Functions
Exponential functions are important in real-world applications involving growth and decay rates, such as compound interest and depreciation. Experiment with simple exponential functions, exploring such concepts as the base, growth factor, and decay factor, and how different values for these terms affect the graph of the function.
32: An Introduction to Logarithmic Functions
Plot a logarithmic function on the coordinate plane to see how it is the mirror image of a corresponding exponential function. Just like a mirror image, logarithms can be disorienting at first; but by studying their properties you will discover how they make certain calculations much simpler.
33: Uses of Exponential and Logarithmic Functions
Delve deeper into exponential and logarithmic functions with the goal of solving a typical financial investment problem using the "Pert" formula. To prepare, study the change of base formula for logarithms and the special base called e....
34: The Binomial Theorem
Pascal's triangle is a famous triangular array of numbers that corresponds to the coefficients of binomials of different powers. In a lesson connecting a branch of mathematics called combinatorics with algebra, investigate the formula for each value in Pascal's triangle, the factorial function, and the binomial theorem.
35: Permutations and Combinations
Continue your study of the link between combinatorics and algebra by using the factorial function to solve problems in permutations and combinations. For example, what are all the permutations of the letters a, b, c? And how many combinations of four books are possible when you have six to choose from?...
36: Elementary Probability
After a short introduction to probability, celebrate your completion of the course with a deck of cards. Can you use the principles of probability, permutations, and combinations to calculate the probability of being dealt different hands? As with the rest of algebra, once you know the rules, it's simplicity itself!
If you are shaky on basic math facts, algebra will be harder for you than it needs to be. Spend every day reviewing flashcards of math facts, and you will be surprised at how much better at math you are! |
"Planck is helping to reveal hidden material between galaxy clusters that we couldn't see clearly before," said James Bartlett of NASA's Jet Propulsion Laboratory, Pasadena, Calif., a member of the U.S. Planck science team. Planck is a European Space Agency mission with significant participation from NASA.
The mission's primary task is to capture the most ancient light of the cosmos, the cosmic microwave background. As this faint light traverses the universe, it encounters different types of structure, including galaxies and galaxy clusters -- assemblies of hundreds to thousands of galaxies bound together by gravity.
If the cosmic microwave background light interacts with the hot gas permeating these huge cosmic structures, its energy distribution is modified in a characteristic way, a phenomenon known as the Sunyaev-Zel'dovich effect, after the scientists who discovered it.
Astronomers using Planck and the Sunyaev-Zel'dovich effect were able to discover a bridge of hot gas connecting the clusters Abell 399 and Abell 401, each containing hundreds of galaxies.
The presence of hot gas between the clusters, which are billions of light years away, was first hinted at in X-ray data from ESA's XMM-Newton, and the new Planck data confirm the observation.
Read the full story from the European Space Agency at http://www.esa.int/SPECIALS/Planck/SEMRT791M9H_0.html .
NASA's Planck Project Office is based at JPL. JPL contributed mission-enabling technology for both of Planck's science instruments. European, Canadian and U.S. Planck scientists work together to analyze the Planck data. More information is online at http://www.nasa.gov/planck and http://www.esa.int/planck .
Media Contact: Whitney Clavin 818-354-4673
Jet Propulsion Laboratory, Pasadena, Calif. |
Have you ever studied banana plants? Take a look at this dilemma.
Mr. Thomas' class is having a discussion on the rainforest. The students have been doing some research, and now they are ready to talk about their findings.
“The vegetation of the rainforest was very interesting,” Carmen commented in Mr. Thomas’ class.
“There were so many different things growing,” Mark agreed.
The students began having a discussion about the things that had intrigued them about the plant life of the rainforest. One of the points that was brought up is that there are plants in the rainforest that can’t be found anywhere else in the world. Mr. Thomas spotted the opportunity and wrote the following problem.
You buy a banana tree that is 8 inches tall. It grows 4 inches per day. Its height (in inches) is a function of time (in days).
You can express this function as an equation. This Concept will show you how to write linear equations.
The slope-intercept form of an equation, y = mx + b, was most useful in rapidly identifying both the slope, m, and the y-intercept, b. In fact, if we know what the slope of an equation is and we know its y-intercept, then we can just as easily write the equation. All you have to do is plug in the slope for m in the form y = mx + b and the y-intercept for b.
Take a look at this situation.
We know that we are going to use the slope-intercept form of the equation, y = mx + b, so we can substitute the given slope and y-intercept values into that form and write the equation.
This is the answer. The key is to always watch for negative signs and be sure to include them when you write your equation.
Sometimes, we can also be given the slope and a point that the line crosses through. We can also use this information to write an equation of a line.
The slope is given, and the line passes through the point (0, -3).
With this example, we know the slope, so that can be easily substituted into the slope-intercept form. The point has 0 for its x value, which means we have been given the coordinate of the y-intercept: b = -3.
This is the answer.
What if you only know two points and you don’t know the slope? It’s a similar operation that we can use in order to write the equation.
Do you recall that the slope formula is m = (y₂ − y₁) / (x₂ − x₁)? In other words, given any two points (x₁, y₁) and (x₂, y₂), we can use the slope formula to calculate the slope of the line that passes through those points. Even when you only know two points, finding the slope is just a matter of using the formula. But then what?
We will use the idea below.
If m = (y₂ − y₁) / (x₂ − x₁), then m = (y − y₁) / (x − x₁), because the slope is the same on any part of a line. In other words, we can use a formula similar to the slope formula for finding the equation. This time, however, we will leave x and y as variables, because the relationship is true for any values of x and y on that line.
Write the equation of the line that passes through two given points. First, find the slope using the slope formula.
Then plug the slope and the coordinates of one known point into m = (y − y₁) / (x − x₁).
Do you see that we have a proportion? This can be solved by cross multiplying.
This is the answer.
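If you want to check this kind of problem with a quick computation, here is a small Python sketch; the two points used are chosen purely for illustration, since the lesson's own numbers were lost in formatting.

def line_through(point1, point2):
    # Return the slope m and y-intercept b of the line y = mx + b through two points.
    (x1, y1), (x2, y2) = point1, point2
    m = (y2 - y1) / (x2 - x1)   # slope formula
    b = y1 - m * x1             # solve y1 = m*x1 + b for b
    return m, b

print(line_through((1, 2), (3, 8)))   # (3.0, -1.0), so the line is y = 3x - 1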
What about when you have been given a table of values? Well, there is a way to figure out the equation quite simply when you have a table of values. Take a look.
First, notice that the y-intercept is the y value that goes with an x value of 0. With an x value of 0, we know that the y-intercept is 5.
Now we need to figure out the slope. Look at the y values in the table. Can you see a pattern? If you look carefully, you will see that the y values jump by +2 each time. This is the slope. Think about how the line would move when graphed: the pattern of the y values represents the slope of the line, so the equation is y = 2x + 5.
This is the answer.
In science, an independent variable is a parameter that is manipulated or chosen by a scientist while the dependent variable is a parameter that is measured. Scientists oftentimes look for a correlation between an independent variable and a dependent variable—they want to know if the dependent variable depends on the independent variable. For example, a scientist might measure the speed at which a car is moving and the force upon impact when the car hits a wall. The scientist can manipulate the speed of the car—she can make the car move slower or faster. She would then measure the force of impact related to the given speed. Then, a conclusion can be drawn about their relatedness and cars, in this case, might be designed based on that relation. The independent variable will be shown in the left column of a t-table and on the x-axis of a graph. The dependent variable will be shown in the right column of a t-table and on the y-axis of a graph.
Here we know that the function of x is dependent on 4 times that value, plus one.
Write a linear equation by using the given information.
Now let's go back to the dilemma from the beginning of the Concept.
First, we need to write an equation. We can use h to represent the height of the banana tree and d to represent the number of days. The tree grows 4 inches each day, and the 8 is the height that the tree started with. Here is our equation: h = 4d + 8.
This is our answer.
- Independent Variable
- a value that is not dependent on another value. It is the x value in a table.
- Dependent Variable
- a value that is dependent on the equation. It is the y value in a table.
- Function Notation
- an equation where the value of y is dependent on the equation involving x.
Here is one for you to try on your own.
Write an equation in slope-intercept form with a slope of -4 and a y-intercept of 13.
To do this, we can take the slope-intercept form, y = mx + b, and substitute in the given values: y = -4x + 13.
The -4 represents the slope.
The 13 represents the y-intercept.
This is our answer.
Directions: Write the equation of a line with the following slopes and y-intercepts.
Directions: Write the following horizontal or vertical line equations.
- A horizontal line with a y value of 7.
- A horizontal line with a y value of -4.
- A vertical line with an x value of 2
- A vertical line with an x value of -5
Directions: Write the equation of a line that passes through the following points.
- (3, -3) and (-3, 1)
- (2, 3) and (0, -3) |
9 Answers
There are several approaches in answering the question that all lead to the same answer. I have always tried to teach students that in order to make a decimal into a percent, you multiply by 100 because percent means "of 100." If we know that the term "of" means to multiply, then this process becomes easier. This would mean that .89 as a percent would be 89% because we multiply .89 by 100. The other way of thinking about this is when we multiply, we move the decimal two places to the right. Bearing this in mind, the opposite is true. When we write a percent as a decimal we are actually dividing by 100 and thus moving the decimal two places to the left. This means that 115.9% as a decimal becomes 1.159.
All you have to do here is to move the decimal point over two places to the left. You can do that with any percentage to turn it into a decimal. So, in your example, you move the decimal over two places to the left and you get 1.159.
This makes sense because 100% would be 1. And so 115.9% is 1 plus another 15.9%.
As I say, you can do this with any percentage. If your percentage is less than 100, you can still do it. For example 5% is .05.
115.9% or 115.9 % means 115.9 divided by 100.
This could be done like 115.9*10/(100*10) = 1159/1000 = 1.159. Therefore, a practical rule worth remembering is that dividing a decimal number by 100 is equivalent to shifting the decimal point 2 digits to the left. To divide a decimal number by 1000, shift the decimal point 3 digits to the left - and so on.
Hope this is helpful.
Percentage is a number expressed as a fraction of 100, i.e. a number divided into 100 parts. The term percent means per 100. Therefore, in order to convert the above given percentage to a decimal, we need to divide it by 100, i.e. 115.9/100 = 1.159.
You can do this without using a calculator; all you have to do is move the decimal point 2 places (since 100 has 2 zeros) to the left.
% is simply a short form of saying 1 over 100.
115.9% is 115.9 x (1/100) = 1.159
For these kinds of questions, you have to know that 100%=1.0
So, to put it simply, you have to get rid of the % sign and then move the decimal points 2 times to the left.
115.9% is equal to 100%+15.9%
so, as mentioned before, 100% is equal to 1, 115.9% is 1+15.9%
15.9%, if the decimal point is moved twice to the left, is equal to 0.159 in decimal form.
To total up, 1 + 0.159 = 1.159.
To do it another way, just divide the whole percentage by 100.
So, this way, it will be 115.9/100 = 1.159.
You can do a couple things...
1. You can move the decimal two places to the left. You can do this because percent means "out of 100" and two place values represent the "hundredths" place value. Therefore, your answer would be "1.159"
2. You can ALWAYS change ANY percent to a fraction. Percent means "out of 100" so you can write "115.9 over 100" (in fraction form). Fractions are division problems. When you divide 115.9 by 100 (on a calculator or in your head) you'll get a decimal every time.
To convert any percentage value to decimal one has to divide it by 100.
For example: if an X% value has to be converted into a decimal, then it will be X/100.
Similarly, for 115.9% its decimal will be 115.9/100 = 1.159
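For anyone who wants to check such conversions quickly, here is a tiny Python sketch of the divide-by-100 rule (purely illustrative, not part of the original answers).

def percent_to_decimal(percent):
    # A percent is "per 100", so dividing by 100 gives the decimal form.
    return percent / 100

print(percent_to_decimal(115.9))   # 1.159
print(percent_to_decimal(5))       # 0.05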
To write a percent as a decimal, divide by 100; that is, move the decimal point two places to the left, using as many places to the right and left of the decimal point as needed.
If we know that 1=100/100, then x% = x/100
So, for example 12% =0.12, because 12/100=0.12
If 100% =1, then 115% = 1.15 and,
115.9% = 1.159
Percents and decimals are not as hard as you think once you get the hang of it. Just remember that a percent is out of 100, which corresponds to two decimal places, the hundredths place.
The universe formed some
13.8 billion years ago. Much is speculation, but somehow from a tiny speck
everything including space, time, matter and energy unfolded into something
that became recognizable as an early version of the universe we see about
us today. Initially, the temperature was too intense to allow matter to
condense from energy. All of the energy was in the form of fierce gamma radiation.
After expanding for many
thousands of years, the temperature of the Universe had cooled to the
point where gamma radiation could form neutrons, protons and electrons.
Almost all of this matter was in the form of hydrogen (91+%) and helium
(8%) and less than a percent lithium, an isotope of hydrogen (deuterium)
and an isotope of helium (helium 3). Almost no other elements were created
at this time. It is a matter of debate whether primordial
black holes were also created. The forces were enough so that dense
knots of matter could create black holes. These black holes may be the
"seed" around which galaxies formed.
The Universe Today
Today large clouds of
gas exist throughout the universe. Most of it is simple atoms, but some
of these clouds contain simple molecules and dust. Most of it has collected
in and around galaxies. While we see star formation throughout the universe,
and we see the absorption of smaller galaxies when they encounter larger
galaxies, we no longer see the formation of new galaxies.
The nature of the interstellar
gas is very different today from the original gas. While hydrogen and
helium still abound, other elements can be found in densities as high
as 7%. This has profound consequences for the type of stellar systems
that can form. Most importantly, heavy elements allow rocky planets such
as the Earth to form. These new elements came from the transmutation of
elements in the hearts of first generation stars. Elements up to iron
in weight can be formed in normal stars and ejected into space as solar
winds and exploding shells when the stars reach the red giant stage. Elements
heavier than iron are created and distributed by a much more dramatic
process - supernovas.
Edwin Hubble assigned a naming convention to galaxies which remains in
use today. Galaxies come in three main forms, irregular galaxies with
shapes that are amorphous, elliptical galaxies with a large core and almost
no disk, and spiral galaxies which come in two forms, those with a large
central cylinder of stars (barred) and those where the spirals go all
the way to the core.
The shapes of galaxies
appear to start as spirals of one sort or the other. Over time galaxies
pass near or actually through each other. Currently, a small galaxy [the
Sagittarius Galaxy] is colliding with our Milky Way. When they do this
the smaller galaxy loses many of its stars to the larger galaxy. If the
smaller galaxy comes into too close a contact it may be simply swallowed
by the greater galaxy. If it is somewhat farther away it may escape badly
tattered as an irregular galaxy. This is what seems to have happened to
the Large and Small Magellanic Clouds. After swallowing enough smaller
galaxies, the spiral shape disappears and the galaxy assumes an ever more elliptical form.
The Great Nebula in Andromeda is a classic spiral galaxy. At the leading
edge of each rotating arm, a wave of gas compression occurs triggering
areas of star formation. Andromeda is nearly edge on in this image but
it would look like M74 if we saw it face on.
Virgo A, also known as M87, is many hundreds of times the mass of our own galaxy.
Trillions of stars are believed to populate this great object. Virgo A
is the largest of the galaxies in the so called Realm of the Galaxies
which spans parts of Virgo, Coma Berenices and Leo. This area has more
galaxies to be seen than it has stars visible to your eye. While it looks
rather like a globular cluster, it is billions of times larger.
NGC1300 is a clear example
of a barred spiral. Unlike a traditional spiral, the swirling arms start
at the ends of a cylinder of stars which extends for many tens of thousands
of light years from the center of the barred spiral.
Open clusters of stars are formed in a common stellar
nursery. In time birthing grounds like M42 and the Omega Nebula in the
southern hemisphere will drive out gas which has not been included in
the newly formed stars. In some open clusters like the Plieades traces
of the gas still can be seen in photographs sensitive to blue and ultraviolet
light. [This gauzy gas is not visible to the human eye]. When the stars
were formed they were densely packed. However they each had their own
motions which over time cause them to disperse.
Globular clusters are effectively satellites of the
galaxy in which they reside. They travel as units into the central areas
of the galaxy on orbits somewhat like comets around the Sun. Stars in
globular clusters are quite unlike stars in the main disk of the galaxy.
Main disk stars like the Sun carry a great deal of heavier elements (metals
to astronomers no matter what the chemists call them). Sun like stars
are called Population I stars. Stars found in globular clusters lack more
than a tiny percentage of heavy elements and form the Population II stars.
The thin halo of stars outside the plane of the galaxy are also Population
II stars. Population II stars formed before there had been many supernovae
to create heavier elements. They are first generation stars for the most
part. Population I stars form in the wake of supernovae and contain the heavier elements those supernovae create.
As the globular clusters
orbit about the central core, they tend to be densest near the core and
more sparse farther out. In our galaxy, the great concentration of globular
clusters in Sagittarius confirms that this constellation harbors the center
of the Milky Way. Further tests have shown that the exact center is a
small volume in Sagittarius with a mass of at least a few million Suns. This
dense area is believed to be a black hole and is called Sagittarius A*.
An emission nebula is gas excited by ultraviolet
radiation from fierce new blue violet stars. These are the same conditions
which occur in fluorescent and neon lamps. This type of nebula is typified
by M42, the Great Nebula in Orion. The Trapezium as well as many unseen
embedded stars provide the sources of ultraviolet radiation.
An absorption nebula is a mixture of gas and particularly
dust dense enough to absorb, redden and even blot out light. These cause
dark nebulae like the Horsehead and the Coal Sack. They cause dark areas
like Sagittarius and the Sombrero. However they do not include "empty
lanes" in the Milky Way.
Sometimes a nebula can
have regions which emit while other regions absorb. This is the case in
the Horsehead Nebula shown here. The dark regions are areas where light
has been absorbed so heavily that the area looks like a dark cloud. The
bright regions are where hydrogen gas is fluorescing, emitting a reddish frequency
called the hydrogen alpha line. Near the edges of the dark regions, areas
absorb much but not all of the light and it is possible to use a spectrograph
to determine the cloud's chemical makeup.
When light from a foreground star shines on background clouds, a reflection
nebula is formed. In some cases, the foreground star can be hidden
by an absorbing nebula. Reflection nebula can sometimes be precisely mapped
and measured by timing the pulses of light from a variable star or a supernova.
Planetary nebulae arise when an aging star sheds shells of gas as the fusing
of hydrogen leaves the core and moves towards the star's surface. Large
explosions (but not as large as supernovae), progressively strip the star
of material. If the star manages to shed enough material, then it will
end up as a white dwarf with a ring about it which grows year by year.
Eventually these rings become so large and thin that they are no longer
illuminated by the hot central white dwarf.
Stars are continually
being formed from the huge reservoirs of hydrogen gas that fill the galaxies.
It was once thought that gravity played the role of "gas compressor",
but we now know that there hasn't been time since the formation of the
universe to have had many clouds compress naturally into stars. A triggering
event is required. The two principal events are density waves and supernovae.
The center of every
galaxy appears to contain a black hole. This is by no means certain, but
something large and dense exists there. Lines of magnetic force stream
outwards and are bent along the leading edges of the galactic arms. This
creates a density wave which sweeps up and compresses hydrogen and helium
along with any other elements which may be in the region. Although we
cannot look down on the Milky Way to see such areas, we can see similar
areas in thousands of other galaxies. Along the leading edge of their
arms, young fierce glowing blue white stars abound, a sure sign of star formation.
Role of Supernovae
"We are such things as dreams are made on" said Shakespeare. I wonder
what he would have said if he realized that it is also quite literally
true that once our very elements were forged in the hearts of the largest
stars. Look at the Crab Nebula as the explosion which tore it apart sends
material through space. However, the material which pours out of a supernova
is not just the hydrogen and helium which formed the star but nitrogen,
oxygen, carbon, silicon, sulfur, magnesium, neon, iron and in fact to
some degree or other every element in the natural world.
One role of a supernova
is to create the elements from which Population I (metal rich) stars are
formed. These are the stars that can have rocky, watery worlds where life
can form. The other crucial role that supernovae play is as another source
of gas compression and the triggering of new stars. Like density waves,
the bow wave of a supernova explosion pushes everything before it and
compresses gas until its own gravity can take over, forming a new set of stars.
Hertzsprung-Russell Diagram
The luminosity (the total emitted energy) of a star is directly proportional
to the fourth power of its mass. To maintain this power output, the star
must consume its fuel at a rate proportional to the fourth power of its mass as well. If one
main sequence star is 3 times as massive as another star, it will shine
81 times as brightly. It also fuses its fuel 81 times as rapidly. As stars
leave the main sequence this relationship is disrupted.
The term luminosity is
the preferred way to describe the brightness of a star. For historical reasons,
the portion of a star's spectrum that lies in the visual range is measured
by a magnitude scale. Stars of the first magnitude seem to be twice as
bright as those of the second magnitude which in turn seem to be twice
as bright as those of the third magnitude. In fact, a closer relationship
is that every five magnitudes in brightness represent a 100-fold
change in luminosity. Luminosity is measured directly. Magnitude is measured
on an inverse logarithmic scale. Larger magnitudes mean dimmer stars which
is counterintuitive. Larger luminosities mean brighter stars exactly as
you would think.
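The five-magnitudes-equals-100-times rule can be written as a formula: a magnitude difference of m corresponds to a brightness ratio of 100^(m/5). The short Python sketch below is an illustration added for clarity, not part of the original notes.

def brightness_ratio(magnitude_difference):
    # Every 5 magnitudes corresponds to a factor of 100 in brightness.
    return 100 ** (magnitude_difference / 5)

print(brightness_ratio(5))   # 100.0
print(brightness_ratio(1))   # about 2.512, so one magnitude is roughly a 2.5x change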
Do not confuse apparent
luminosities (or magnitudes) with absolute luminosities (or magnitudes).
Apparent brightness depends on how a star looks to us on Earth. Absolute
brightness depends on how bright a star would be at the standard distance
of ten parsecs (32.6 light-years).
The time that a star spends on the main sequence is INVERSELY proportional
to the cube of its mass. This is a direct result of the luminosity relationship
we just discussed. Since a star's luminosity (and hence its rate of fuel
consumption) is proportional to the fourth power of the mass but its mass
is only the first power, stars have a lifetime which is proportional to
M/M⁴, or simply M⁻³.
Large stars have very
short lifetimes. A maximal sized star of about 100 solar masses will live
1 MILLIONTH as long as the Sun. A minimal sized star of 0.08 solar masses
will live 1950 times as long as the Sun. Since the Sun will live about
10 billion years, the largest stars burn out in just about 10 THOUSAND
years but smallest stars will live 19.5 TRILLION years.
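Those figures follow directly from the M⁻³ rule. Here is a minimal Python sketch of the scaling, added for clarity and assuming the 10-billion-year solar lifetime stated in the notes.

def main_sequence_lifetime(mass_in_suns, sun_lifetime_years=10e9):
    # Lifetime scales as fuel / burn rate, i.e. M / M^4 = M^-3.
    return sun_lifetime_years * mass_in_suns ** -3

print(main_sequence_lifetime(100))    # 10000.0 years for a 100-solar-mass star
print(main_sequence_lifetime(0.08))   # about 1.95e13 years for a 0.08-solar-mass star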
When stars coalesce
from interstellar gas clouds, their temperature and pressure rise from
frictional heating and gravity. Once nuclear processes begin gas already
falling in from the spinning disk collides with gas expanding from nuclear
fusion. One way that Herbig-Haro stars relieve this problem is to eject
mass at the poles of the new star.
Young stars have yet
to achieve hydrostatic balance between the rate of energy production and
the size of the star. As much as ten times the material that will eventually
form the finished star exists in the new stellar system. This material
must be driven back into the interstellar medium. Stars in this stage
of development are called T-Tauri stars.
Brown dwarfs weighing
between 0.01 and 0.08 stellar masses are neither true stars nor planets
but intermediate objects. They radiate in the infrared. Most of their
heat comes from gravitational contraction. However, sometimes their central
cores are hot enough to fuse deuterium, lithium or beryllium. These elements
fuse at a temperature several million degrees cooler than the minimum
required for hydrogen fusion. However, there are so few of these atoms,
that they are unlikely to encounter each other in a core that is largely
hydrogen and helium. When these elements do fuse, they expand the core
cooling it enough to shut down the reactions.
Once a body of hydrogen
reaches 0.08 solar masses, it has enough material so that gravitational
contraction will raise the central core to 15 million degrees. Hydrogen
begins to fuse. A true star is born.
When the new star has
a mass between 0.08 and 0.4 solar masses, it forms a small dim red dwarf
star. Of the 100 nearest stars 92 are red dwarfs. They form in great numbers
but their total luminosity is so low that galaxies seem blue white. Indeed,
Proxima Centauri, the nearest star to the solar system, is about 11th magnitude
- far too faint for the naked eye, much like dim little Pluto.
Most normal sized stars
are the so called main sequence dwarfs. They are in the spectral classes
K, G, F and A with masses between 0.4 and 3.3 solar masses. The term "dwarf"
is unfortunate because it seems to imply a star of small dimensions. In
fact they are much larger and brighter than an average star. For example
the Sun is a yellow G2 dwarf, yet of the 100 nearest stars only 3 are
a bit larger and another is just a bit smaller. 95 stars have diameters
which are less than 60% of the Sun and masses which are less than 40%
of the Sun. No nearby star is really large, although Sirius is almost
twice the mass of the Sun.
Some orange, yellow,
white (green) stars fall into a category of sub-giants. Sub-giants are
large stars which are in the process of leaving the main sequence. These
stars swell as the hydrogen fusion shell approaches the surface. Most
of these stars are variables.
The largest main sequence
stars are the blue giants. They are between 3.3 and 100 solar masses.
While they are called blue giants, they can be blue, violet or even ultraviolet
in color. These stars are extremely bright and short lived. Of the roughly
6000 stars that can be seen by the human eye, all but 50 are either red
or blue giants. Blue giants of necessity are all very young stars. Some
of these blue giants become unstable - like Dschubba and Gamma Cassiopeia
- throwing off huge shells of gas and briefly becoming very bright. A
few actually become supernovae without first becoming red giants.
Red giants posed a paradox
to early astronomers. They were very red (hence they were cool) and they
were very bright (which seemed impossible - because the black body laws
[which we shall learn about in the Physics Section] say the red objects
emit light dimly). Finally, astronomers realized that a star with a very
low brightness per square meter could actually put out a huge luminosity
if its surface area was enormous.
Red giants have HUGE
volumes although they have low density. A typical red giant like Antares
or Betelgeuse will have a volume as large as the orbit of Mars. The largest
known red giant VV Cephei is calculated to have a diameter as large
as the orbit of Saturn.
Red giants are aging
stars which have converted a large portion of their hydrogen to helium
(typically 40-50%). As the core fills up with helium "ashes" the fusion
zone approaches the surface. However at some point the gas above the star
has too little remaining mass and the star stops being stable and begins
to swell. The swollen star emits more light than before, cooling its surface as it
settles into a new, less stable stage. Red giants with lower mass (such as the Sun
will become) will eventually simply become white dwarfs. High mass red
giants are rapidly on their way to becoming supernovae.
Eclipsing binaries are
binary stars that have the plane of their orbit edge on to the solar system.
As the stars revolve around their barycenter they will regularly pass
in front of one another. Since at least some of the total surface area
is masked, the luminosity will drop. If one star is much brighter than
its companion, there will be alternating large and small dips in the luminosity.
By timing the dips precisely and determining the stars' masses and velocities
by applying Newton's laws of gravitation, it is possible to determine
the diameters of the stars very accurately.
Flare stars appear to
change more profoundly than they really do. All main sequence stars appear
to emit flares. Against a bright star such as the Sun, Sirius or Rigel,
a flare is lost in the overall brightness of the star. Against a dim red
dwarf however, the flare can actually be brighter than the rest of the
star's surface. All stars have flares where a pocket of overheated gas
erupts at the surface. Momentarily, the star emits radiation of shorter
wavelengths (blue, violet, ultra violet and x-rays). On a moderate star
like the Sun, a flare tends to fade into surface brightness. Flares are
unnoticeable on large blue stars. However, on a small red dwarf, a flare
can actually be brighter than the star itself. For periods of a few minutes
to a few hours the star may brighten by several magnitudes. Some amateurs
watch a collection of red dwarfs looking for these flares.
Certain yellow orange
sub-giants (called Cepheid variables) pulsate in a very regular manner.
It is possible to determine exactly how far these stars are from the solar
system by timing the pulse rate. What makes these Cepheid variables unusually
useful is that they are bright enough to be seen in distant galaxies.
Hydrostatic balance is
the balance between the expanding forces from the heat produced by fusion
and the compressive force from gravity. Imbalances between the expansion
and compression can cause pulsations. These stars expand when they are
hottest, emit radiation more rapidly when they are inflated, cool and
contract in a cycle. Cepheid variables are examples of pulsating stars.
Supernovae are the deaths
of very large stars. Stars which start out at least 10 times the mass
of the Sun cannot shed enough mass by ejecting shells by the time their
core reaches 1.4 solar masses (Chandrasekhar's limit) [details to follow
in Physics]. This results in an enormous explosion in which all the elements
of the periodic table beyond the first groups are produced. Supernovae
can outshine their galaxy (billions and even trillions of star power)
for a few weeks. Even this most titanic of nuclear explosions does not
totally destroy the star. A core of compressed material remains. If the
core is less than 1.4 solar masses it creates a white dwarf. If it is
between 1.4 and 3 solar masses it forms a neutron star. More than 3 solar
masses results in a black hole.
White dwarfs can result from supernovae, but they also are the end product
of stars which go through the red giant stage without going supernova.
The sun will someday become a white dwarf after it swells into a red giant
stage. You can see a white dwarf at the center of the Cat's Eye nebula.
White dwarfs no longer
fuse hydrogen into helium. The core is composed of helium or some heavier
element (usually, carbon, oxygen, neon, silicon, magnesium or sulfur).
Since there is no steady source of fusion energy, white dwarfs slowly
cool down, eventually becoming cold, inert black dwarfs [a process taking hundreds
of trillions of years]. No white dwarf is believed to have entered the black dwarf stage yet.
Astronomers used to think
that novae and supernovae were differing degrees of the same thing - stellar
explosions. However, they are really quite dissimilar. Supernovae are
titanic explosions which rip stars apart scattering elements into the
universe. Novae are recurring small explosions which leave their "star" intact.
Novae are white dwarfs
or neutron stars in close orbit around a main sequence star. The fierce
gravity of the burnt out star strips the outer layers of hydrogen from
the main sequence star. When enough accumulates on the burnt out star,
a hydrogen bomb type explosion takes place.
We have already seen
that neutron stars are supernova remnants where the core left behind is between 1.4 and
3 solar masses. These objects are very odd things indeed. In "normal"
white dwarfs, the elements left after the supernova explosion are left
as a plasma (sort of a gas where the electrons have been stripped away).
The white dwarf does not have fusion energy to hold the star up from collapse,
but the "electron pressure" (like charges repel) keeps the white dwarf
steady at about the size of the planet Earth in diameter.
All this changes in neutron
stars. Once the mass reaches 1.4 Sol, the gravity becomes so intense that
the electrons are dragged kicking and screaming into the core. They get
squished into the protons (positively charged nuclear particles) neutralizing
them and becoming neutrons (uncharged nuclear particles). The star loses
the pressure of the "degenerate electrons" and it collapses into a ball
about 10 miles in diameter spinning at hundreds and thousands of times
per second. The surface of a neutron star spins very near the speed of
Effectively, this neutron
star is a single giant (fiercely radioactive) atom. It is very nearly
the densest object in the universe. A sugar cube chunk of this stuff would
weigh more than Mount Everest.
Pulsars (a type of neutron
star) spin extremely rapidly. Near their poles, they emit charged particles
at very near the speed of light. Think of them as swizzle sticks spinning
around blindingly fast. The swizzle sticks of charged particles sweep
up and stir around the gas in the system they reside in causing a form
of electromagnetic radiation. Some of this is in the radio frequencies
and the rest in higher frequencies up to visible flashes. If the beam
of charged particles is lined up in the direction of the Solar system,
the electromagnetic radiation will flash on us. When these very regular
flashes were first detected many astronomers suspected they were artificially
produced by alien species.
Supernova remnants greater
than 3 solar masses cannot remain stable at the neutron star stage. They become
the most exotic of all stellar objects - black holes. Gravity again begins
its relentless pull. The gravity reaches a point where no particles, not
even light can escape because they would have to travel above the speed
of light (the universal maximum) to leave the ex-star. [There is an odd
form of radiation (Hawking radiation) which can leave the event horizon
of a black hole through quantum mechanical processes, but we will not discuss it here.]
For our purposes, the
event horizon marks the point where anything that enters the black hole
cannot leave. There is a false belief that black holes are all powerful
vacuums which slurp anything and everything into their maw. This is not
so. For example if you squeezed the Earth into a black hole (an event
horizon about the size of a marble), and stood at a distance of 6,400
kilometers from it (our current distance from the center of the Earth)
the gravity would be exactly 1 G. The field would only become great as
we came very close to the black hole.
Quasars are extremely
bright objects which can be seen across the universe. The only known source
of such power would be a huge black hole swallowing the gas from stars
unfortunate enough to get too near the black hole. The light is not emitted
by the black hole itself, but a disk of material spiraling into the black
hole. Quasars normally have long jets of material shooting out at nearly
the speed of light. This jet can be luminous for light-years.
Active galactic nuclei
are suspected of containing black holes in their centers. In fact some
theories say that all galaxies arose around a central black hole formed
at the big bang [a pure guess so far]. Those which are so suspected have
something very energetic at the core.
Seyfert galaxies have
centers which seem very much like quasars without their jets. They appear
to be ex-quasars or quasars which no longer have enough nearby gas to
power the quasar. The heart of our galaxy is a black hole located at the
object we call Sagittarius A*. It looks as if the Milky Way is (or at
least was) a Seyfert type galaxy. The central black hole has a mass of
some 2-3 million solar masses. We can see stars orbiting Sagittarius A*
complete their orbits in as little as a couple of decades.
- Asteroids (sometimes
called planetoids) are planetesimals which orbit a star. Ideally, all
asteroids would be planetesimals, however some larger asteroids are
actually worlds. The dividing line is an arbitrary 1000 km.
- Dwarf Stars
- Dwarfs are regular
stars like the Sun which have modest masses and modest volumes. Stars
which are not some sort of "giant" are called dwarfs no matter what
their size. A super dense star is called a white or a black dwarf.
- Giant Stars
- Giant Stars have
volumes many thousands of times that of the Sun. Some "sub-giants" and
"blue giants" have masses much greater than the Sun, but volumes which
are not radically larger than the Sun.
- Main Sequence Stars
- Main Sequence Stars
are huge bodies which derive the vast majority of their energy
from fusing hydrogen to helium. Main sequence stars are in hydrostatic
balance between the forces of gravity and nuclear fusion. Stars
too young to have achieved this balance throw off huge amounts of material
via jets and fierce solar winds. Stars that have used up their hydrogen
fuel supply swell enormously.
- Planets are full
sized spherical worlds which orbit a star. See rogues and asteroids.
- Planetesimals are
bodies which are too small to attain spherical shape simply through their
own gravity. A planetesimal melted by passing too close to a star and
becoming spherical due to surface tension (a result of electromagnetic
force) does not count, because the forming was not done primarily by gravity.
- Rogues are suspected
(but unproven) worlds like planets that do not orbit stars. These are
believed to be ejected from star systems as the systems grow older.
- Satellites (often
called moons) are either worlds or planetesimals which orbit a planet.
- Worlds are bodies
large enough to be pulled into roughly spherical shape by their own
gravity. All stars fall within this definition, as do major planets and larger satellites. |
Multi-Step Equation Worksheets
A huge collection of multi-step equation worksheets involving integers, fractions and decimals as coefficients is given here for abundant practice. Solving and verifying equations, applications in geometry and MCQs are included in this section for students.
Solving equations involving integers: Level 1
In these 'Level 1' worksheets, solve each multi-step equation to find the value of the unknown variable. Six exclusive worksheets are here for practice.
Solving equations involving integers: Level 2
These 'Level 2' multi-step equations may involve a few more steps to arrive at the solution than Level 1. Plenty of practice worksheets are available here.
Solving equations involving fractions
These worksheets have equations whose coefficients are fractions and integers. Solve each multi-step equation. Eight questions are given per worksheet.
Solving equations involving decimals
In these worksheets, perform the basic arithmetic operation and solve the multi-step equations having decimal numbers as coefficients.
Solving equations: Mixed Review
A combination of integer, fraction and decimal coefficients accompanies the variable in these mixed review worksheets. Practice them all.
Solve and verify the solution
In these worksheets, solve the multi-step equations and verify your solution by substituting the value of the unknown variable into the equation.
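To see what "solve and verify" means in practice, here is a small Python sketch; the equation 3(x - 2) + 4 = 16 is made up purely for illustration.

x = (16 - 4) / 3 + 2            # undo the +4, the x3, and the -2 in reverse order
check = 3 * (x - 2) + 4         # substitute the solution back into the left-hand side
print(x, check == 16)           # 6.0 True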
Translating multi-step equation
Translate the given phrases to algebraic equations. Three exclusive practice sheets are available here for students.
Equations in geometry: Type 1
Nine geometric shapes are shown in each worksheet. Their sides are given in the form of expressions. Solve them to find the unknown variable.
Area and perimeter - Shapes: Type 2
In these worksheets, the area and perimeter of nine shapes are given. Use the given expressions and apply the area and perimeter formula to solve them.
Equations in geometry: Type 3
A collection of word problems involving properties of shapes are given. Set up the equation and solve each multi-step equation.
Related Equation Worksheets |
A loop is a control structure which allows a block of instructions, the loop body, to be executed repeatedly in succession. In this article we investigate loop control structures and how they are constructed in assembly language. There are two categories of loops:
1) Counted Loops - Loops in which the iteration number, the number of times the loop should be executed, is known before the loop is entered.
2) Conditional Loops - Loops which will be continually iterated until a prescribed condition occurs.
Creating Counted Loops
The most commonly used instruction for creating a counted loop is the Branch On Count instruction, which has mnemonic BCT. The iteration number is first loaded into a register which is subsequently manipulated by BCT. Each time the BCT is executed, the iteration number in the register is decremented by one. After the register is decremented, the register is tested. If the result in the register is not zero, a branch is taken to the address specified in Operand 2. If the test reveals that the result in the register is zero, the loop is terminated and execution continues with the instruction following the BCT. Here is a sample loop that would be iterated 10 times.
LA R5,10 PUT THE ITERATION NO. IN R5
LOOPTOP EQU *
...LOOP BODY GOES HERE
BCT R5,LOOPTOP DECREMENT R5,IF NOT 0, BRANCH BACK
In the code above, the register that will control the loop is initialized at 10 and then the statements in the loop body are executed. At the end of the first iteration, BCT subtracts 1 from register 5, leaving it set to 9. The register is tested and since it does not contain zero, a branch is taken back to LOOPTOP. The loop body is executed a second time, and again BCT subtracts 1 from register 5, leaving it set at 8. BCT tests the content of R5, which is now 8, and not finding it zero, branches back to LOOPTOP. This process continues through 10 iterations of the loop body. On the tenth iteration, BCT decrements the register and the result becomes zero. Testing R5 reveals it to be zero. This time the branch is not taken and control returns to the statement following the branch on count instruction.
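For readers more comfortable with a high-level language, here is a minimal Python sketch (not assembler, and not part of the original text) of the control flow that BCT implements: decrement the count, then branch back while the result is non-zero. The register name r5 is kept as a variable name for comparison.

r5 = 10                # LA R5,10 -- put the iteration count in R5
iterations = 0
while True:
    iterations += 1    # ...loop body goes here
    r5 -= 1            # BCT decrements the register...
    if r5 == 0:        # ...and falls through when it reaches zero
        break
print(iterations)      # prints 10 -- the loop body ran ten times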
There are two other instructions which are occasionally used to implement a counted loop. The first instruction that we consider is called “Branch on Index Less Than or Equal”. The mnemonic for this instruction is BXLE. When coded it has three operands:
1) Operand 1 is a register containing a count or an address.
2) Operand 2 is typically an even register of an even-odd consecutive pair of registers. The even register contains a value which is used to increment or decrement the value in Operand 1.
The odd register, which is not coded in the instruction, contains a limit or address against which Operand 1 is compared.
3) Operand 3 represents an address to which the instruction will branch if the Operand 1 value is less than or equal to the limit in the odd register.
For example, consider the following instruction.
BXLE R3,R4,LOOPTOP
The diagram below illustrates the relationships among the registers used by the instruction.
Each time the instruction is executed, the value in the even register, R4, is added to the value in the Operand 1 register, R3. If the result is less than or equal to the value in the odd register, R5, a branch occurs to the address in Operand 3, LOOPTOP. The code listed below uses BXLE to implement a loop.
SR R3,R3 PUT 0 IN R3
LA R4,10 PUT INCREMENT INTO EVEN REG
LA R5,100 PUT LIMIT VALUE IN ODD REG
LOOPTOP EQU *
BXLE R3,R4,LOOPTOP EXECUTE THE LOOP
First, the count in R3 is initialized to 0, the increment is set to 10, and the limit is set at 100. Each time the BXLE is executed, the increment in R4 is added to the count in R3 and the result is compared to the limit in R5. If the new count is less than or equal to the limit, a branch occurs back to LOOPTOP. The effect of the statement is to execute the loop 11 times. (The loop is executed once on the equal condition.)
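The same loop, modeled as a rough Python sketch (an illustration, not the assembler itself), makes it easy to see why the body runs 11 times: the branch back is still taken when the new count equals the limit.

r3, r4, r5 = 0, 10, 100    # count, increment, limit
iterations = 0
while True:
    iterations += 1        # loop body
    r3 += r4               # BXLE adds the increment to the count...
    if r3 > r5:            # ...and branches back only while r3 <= r5
        break
print(iterations)          # prints 11 -- one extra pass on the equal case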
BXLE has a companion instruction called “Branch on Index High”. The mnemonic is BXH and the instruction is similar in execution to BXLE except the branch occurs on a “high” condition instead of “less than or equal”. The following code gives an example of this instruction.
LA R3,5 SET THE COUNT TO 5
L R4,=F'-1' SET THE DECREMENT TO -1
SR R5,R5 SET THE LIMIT VALUE AT 0
LOOPTOP EQU *
BXH R3,R4,LOOPTOP EXECUTE THE LOOP
In the example above, the count is set at 5, the decrement is -1, and the limit is 0. Each time the BXH is executed, the decrement is added to the count, reducing it by 1. As long as the result is higher than the limit 0, a branch occurs to LOOPTOP. The effect is to execute the loop body 5 times.
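A corresponding Python sketch of the BXH example (again just the control flow, with the register names kept as variables):

r3, r4, r5 = 5, -1, 0    # count, decrement, limit
iterations = 0
while True:
    iterations += 1      # loop body
    r3 += r4             # BXH adds the (negative) increment to the count...
    if r3 <= r5:         # ...and branches back only while the count is high
        break
print(iterations)        # prints 5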
The main drawback to using BXLE and BXH is that both instructions require 3 registers. Since registers are usually at a premium in most programs, loops are often implemented using BCT.
While the instructions discussed above were specifically designed for the construction of loops, comparisons of any type, as well as other instructions that set the condition code, can be used to create “home-made” loop structures. In the code below we use a packed decimal instruction in combination with a branch to implement a counted loop.
ZAP COUNT,=P'100' LOOP 100 TIMES
LOOPTOP EQU *
SP COUNT,=P'1' DECREMENT ON EACH ITERATION
BNZ LOOPTOP BRANCH IF NOT ZERO
In this case, we have decided to loop 100 times. Each time through the loop the count is decremented by 1. We take advantage of the fact that SP sets the condition code based on the result of the subtraction. If the result is not zero, we loop back to LOOPTOP.
Creating Conditional Loops
Conditional loops are characterized by the property that we cannot predetermine the number of times the loop will be iterated. In other words, the loop will continue to be executed until a prescribed condition occurs. The condition is tested “by hand” using one of the compare instructions: CP for packed data, CLC or CLI for character data, and C or CH for binary data. The following code implements a conditional loop that continues to be executed until a field, called “FLAG”, is equal to “Y”.
LOOPTOP EQU *
CLI FLAG,C'Y' IS THE FLAG SET?
BNE LOOPTOP NO... BRANCH BACK
When creating loops of this type, the loop body must contain logic that will eventually cause termination of the loop. Otherwise an infinite loop results.
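As a hedged high-level sketch, the conditional loop corresponds to a bottom-tested loop in Python; the flag value and the logic that eventually sets it are made up here purely for illustration.

flag = 'N'
steps = 0
while True:
    steps += 1            # ...loop body, which must eventually set the flag
    if steps == 3:        # illustrative termination logic
        flag = 'Y'
    if flag == 'Y':       # CLI FLAG,C'Y' / BNE LOOPTOP
        break
print(steps)              # prints 3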
One common use of a conditional loop is for processing all the records in a sequential file. This is illustrated with the code below.
GETREC EQU *
GET MYFILE,MYRECORD READ A RECORD
...PROCESS THE RECORD
B GETREC LOOP BACK FOR NEXT RECORD
NEXTSTEP EQU *
At first glance, the loop above appears to be an “infinite” loop. In other words, there does not appear to be logic present that would allow the program to escape the loop body once it is entered. This problem is resolved when we execute the GET macro and there are no more records in the file. At this point, a branch would occur to NEXTSTEP if we have specified EODAD=NEXTSTEP in the DCB of MYFILE. The EODAD parameter causes an unconditional branch to the address we specify in the parameter when “end of file” is detected. So, in fact, the loop above is conditional, and continues to execute until “end of file” occurs.
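In Python terms (a sketch only; the records are made up for illustration), the same read-until-end-of-file pattern is a loop that ends when the input is exhausted, with execution then continuing at the code that plays the role of NEXTSTEP.

import io

def process(record):
    pass                                    # ...process the record

myfile = io.StringIO("REC1\nREC2\nREC3\n")  # stand-in for MYFILE (hypothetical data)
for record in myfile:                       # GET MYFILE,MYRECORD
    process(record)
# control continues here once the input is exhausted, i.e. at NEXTSTEP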
Occasionally you may code an infinite loop by mistake. When this happens, your program will continue to execute the loop until it has used up the time allocated to the job by the operating system. At that point, the program will be interrupted with a “322” abend. |
Illustration courtesy Caltech/NASA
Photograph by Lance Hayashida/Caltech
Bethany Ehlmann is a participating scientist on the NASA Mars Rover Curiosity mission, a research scientist at the Jet Propulsion Laboratory, and assistant professor of planetary science at Caltech. She explores our solar system, seeking to understand its history over billions of years of geologic time and searching for habitable environments for life.
At 9:45 each Mars morning, a car-size rover loaded with scientific instruments wakes up, looks toward Earth, and asks, “What do I do today?” Bethany Ehlmann is one of the scientists who answers that question. As a geologist on the NASA Mars Rover Curiosity mission, she helps direct the rover and analyzes the minerals and geochemistry of Martian rocks for clues about the planet’s environment over billions of years.
The history of Mars is written on rocks. Particularly the history of water—the most crucial clue of all in the search for past life. “We know liquid water shaped Mars’s surface,” Ehlmann explains. “Lakes, rivers, and hot springs were widespread enough to form minerals on Mars three billion to four billion years ago.” Today, Ehlmann analyzes those ancient minerals by zapping Martian rocks with the ChemCam laser spectrometer aboard the rover. The laser vaporizes a tiny amount of rock, producing a glowing cloud of plasma. Light from the plasma creates a “fingerprint” of emission lines, revealing particular chemical elements that compose the rock, allowing Ehlmann and the team to determine the chemistries of waters that formed it. “The grand slam home run of the mission would be detecting preserved organic matter relating to biology in some of the sediments,” she says. “Another huge finding will be evidence for what sort of environment with water existed, and how climate allowed liquid water on Mars.”
“Advances in robotics let us be virtual explorers,” Ehlmann notes. “Rovers are a proxy for us, taking samples and measurements on the surface of another planet. This technology has only been available in the last decade, and it’s very exciting to be on the forefront of using it.”
Ehlmann says Mars and Earth have valuable news for each other. The rock record of life’s origins on Earth is very limited; less than one percent remains from our beginnings 3.5 billion years ago, due to recycling of rocks by plate tectonics. In contrast, about 50 percent of Mars’s surface dates from those ancient days. “This gives us insight into the early history of our solar system at a time when meteorites bombarded terrestrial planets and more active volcanoes belched out gases to the atmosphere,” Ehlmann says. “Studying the first billion years of Mars’s history helps answer questions about how Earth evolved to sustain and maintain environments good for life.”
In turn, exploring remote corners on Earth can help inform how to best tackle exploration on Mars. “The places we are likely to find life on other planets are far colder, dryer, hotter, or more acidic than anyplace on Earth,” Ehlmann points out. “So I travel to some extreme spots to find geologic features and environmental conditions that most closely resemble the surfaces of distant planets.” Scientific quests have taken her from the deserts of California to Oman. Iceland and Hawaii are both particularly fertile proving grounds for testing techniques and instruments destined for Mars. Both places are basaltic lava flows, not the dominant continental crust on Earth, but exactly what composes most of the surface of Mars. “We start by running experiments with typical geological lab instruments and then try performing the same tasks with technology designed to fly on orbiters or attach to rovers,” she says. “For example, we thought one of the instruments orbiting Mars showed evidence of water reacting at high temperatures with basalt to form clay minerals in a hydrothermal environment. So we took a backpack-size version of that instrument into the field in Iceland, measured data on basaltic areas there, and brought rocks back to the lab to see if our conclusions were correct. Testing instruments this way really pushes technology ahead before we move outward to other planets.”
While Curiosity sleeps, mission scientists and engineers work through the Mars night to review and analyze the daily downlink of photographs and data and plan where the rover should drive, dig, zap, and sniff the next day. The painstaking work involves multiple daily meetings of scientists and engineers; rapid integration and data processing of spectra, images, and thermal data; and precise computer codes. But then, there are transcendent moments. “Something will snap me back and remind me of the big picture. A perfect photo taken during Martian sunset. A neat rock that looks just like one I saw on Earth. An amazing color palette of infrared channels that’s not only scientifically interesting but just beautiful,” Ehlmann explains. “I step back, pause, and say this is a really great endeavor to be part of, understanding the generation of life-sustaining environments and extending our human knowledge and presence to other worlds.”
Bethany Ehlmann is interested in the geologic history of Mars and the causes of environmental change on that planet.
Astronomers have found more evidence that Mars was wet and warm in the ancient past, but the discovery comes with a twist: The water may have flowed below the Martian surface, rather than on top of it.
From a control room in Pasadena, California, Ehlmann blows holes in rocks with a laser on the Mars rover, creating clouds of atoms that could hold evidence of water.
Rocks hold the secret to life’s origins and history here on Earth and may do the same on remote planets like Mars.
Research scientist Bethany Ehlmann and mechanical designer Scott McGinley explain some of the scientific instruments aboard the Mars rover Curiosity.
Meteoritic and volcanic particles may have promoted origin of life reactions
Precursors of the molecules needed for the origin of life may have been generated by chemical reactions promoted by iron-rich particles from meteors or volcanic eruptions on Earth approximately 4.4 billion years ago, according to a study published in Scientific Reports.
Previous research has suggested that the precursors of organic molecules—hydrocarbons, aldehydes and alcohols—may have been delivered by asteroids and comets or produced by reactions in the early Earth's atmosphere and oceans. These reactions may have been promoted by energy from lightning, volcanic activity, or impacts. However a lack of data has meant that it is unclear what the predominant mechanism that produced these precursors was.
Oliver Trapp and colleagues investigated whether meteorite or ash particles deposited on volcanic islands could have promoted the conversion of atmospheric carbon dioxide to the precursors of organic molecules on the early Earth. They simulated a range of conditions that previous research has suggested may have been present on the early Earth by placing carbon dioxide gas in a heated and pressurized system (an autoclave) under pressures ranging between nine and 45 bars and temperatures ranging between 150°C and 300°C.
They also simulated wet and dry climate conditions by adding either hydrogen gas or water to the system. They mimicked the depositing of meteorite or ash particles on volcanic islands by adding different combinations of crushed samples of iron meteorites, stony meteorites, or volcanic ash into the system, as well as minerals that may have been present in the early Earth and are found in either the Earth's crust, meteorites, or asteroids.
The authors found that the iron-rich particles from meteorites and volcanic ash promoted the conversion of carbon dioxide into hydrocarbons, aldehydes and alcohols across a range of atmosphere and climate conditions that may have been present in the early Earth. They observed that aldehydes and alcohols formed at lower temperatures while hydrocarbons formed at 300°C. The authors suggest that as the early Earth's atmosphere cooled over time, the production of alcohols and aldehydes may have increased.
These compounds may then have participated in further reactions that could have led to the formation of carbohydrates, lipids, sugars, amino acids, DNA, and RNA. By calculating the rate of the reactions they observed and using data from previous research on the conditions of the early Earth, the authors estimate that their proposed mechanism could have synthesized up to 600,000 tons of organic precursors per year across the early Earth.
The authors propose that their mechanism may have contributed to the origins of life on Earth, in combination with other reactions in the early Earth's atmosphere and oceans.
More information: Oliver Trapp, Synthesis of prebiotic organics from CO2 by catalysis with meteoritic and volcanic particles, Scientific Reports (2023). DOI: 10.1038/s41598-023-33741-8. www.nature.com/articles/s41598-023-33741-8
Journal information: Scientific Reports
Provided by Nature Publishing Group |
Published at Tuesday, August 25th 2020, 14:43:17 PM by Nannette Gilbert. Kindergarten Worksheets. Play a magnetic fish game with cardboard fish with a paper-clip and a piece of dowel and string with a magnet on the end as a fishing rod. Count the fish in the pond. When one gets caught, use subtraction: how many are left? Division can be as simple as a sharing exercise. "There are 4 people here and I have 8 counters. Let us see how many we will get each". Use play dough or counters or blocks to make groups of items. Talk about what happens when you put groups together (multiplication). Make the terminology you use simple. This age group needs simple language instead of mathematical terms. These activities are laying the foundations for further learning.
Published at Tuesday, August 25th 2020, 19:09:55 PM by Nathaly Fournier. Kindergarten Worksheets. Kindergarten Worksheets present an interesting way for kindergarten children to learn and reinforce basic concepts. Since children learn best by doing and since children get bored very easily, giving them well-designed, illustrated worksheets to do makes it easier and more fun for them to learn. Completing a worksheet also gives children a great sense of fulfillment. How to use worksheets for best effect: Give children worksheets appropriate to their level. Give an easy worksheet for a concept immediately after you teach that concept.
Published at Thursday, May 07th 2020, 03:19:10 AM. Kindergarten Worksheets By Robinetta Carlier. Once you have a scope and sequence book, make a list of each area in math that he needs to work on for the school year. For example for grades three and four, by the end of the year in subtraction, your child should be able to: Solve vertical and horizontal computation problems, Review subtraction of 2 numbers whose sums would be 18 or less, subtract 1- or 2-digit number from a 2-digit number with/without renaming, subtract 1-, 2-, or 3-digit numbers from 3- and 4-digit number with/without renaming, Subtract 1-, 2-, 3-, 4-, or 5-digit number from a 5-digit number. When you have this list, begin searching online for free math worksheets that fit your child's scope and sequence for the year and the goals you have set for your child.
Published at Wednesday, May 06th 2020, 02:21:24 AM. Kindergarten Worksheets By Darcell Barbier. The math software is undeniably a valuable tool for discovering a students weaknesses or accomplishments. This bundle is appropriate for elementary math students as well as middle school math students, high school math students, who need to learn or re-learn the basics of arithmetic. Many students slip through their early elementary math years with holes in their elementary math education. Older learners will feel the pride of accomplishing math skills they thought they would never learn. However, this software is not only feasible for young learners but for adults as well, who needs to polish and review again their mathematical skills. Teachers on their part find the program valuable as the math tests are scored and stored by the computer for evaluation of progress. The process is also simple because after taking the test, a personalized score sheet is printed along with an evaluation of topics requiring further study. The student can then return to the body of the program and practice those sections which were identified as weak areas. The use of the tests is flexible as the student may take Test A as a pre test and Test B as a post test or Test A may be used for one student and Test B for another. As a whole, a math software is a modern blessing for both learners and teachers who would enjoy studying the numbers instead of dreading them.
Published at Tuesday, May 05th 2020, 02:30:55 AM. Kindergarten Worksheets By Hanriette Joly. Just because your child will be playing fun online games does not mean the same value will not be there. Create a comfortable study zone. Turn off outside distractions such as cell phones, radios or TVs. Make sure your child is not tired or hungry so that he or she can focus all attention on learning. Also try to keep the lessons consistent with what is being learned in school. A quick chat with the teacher or signing up for an online newsletter from the classroom are ways to keep tabs on the lesson plans. Since 3rd grade math relies on the concepts that were learned during kindergarten, first and second grades, do not be afraid to start your child at a lower level. With adaptive learning, the programs will not move on to the next level until your child has a firm grasp on the current material. The online games will be a wonderful way for your child to catch up on basic arithmetic concepts and be comfortable using them across applications.
Published at Monday, May 04th 2020, 02:57:37 AM. Kindergarten Worksheets By Laverne Mercier. A comprehensive set of worksheets covering a variety of subjects can be used to expand your child's learning experience. A worksheet about shapes can be used as part of a game to find shapes around the house, counting worksheets can be used to count things you see in the grocery store and so on. Almost everything you do with your child can be turned into an opportunity to learn - and worksheets can give you the guidance you need to find those opportunities.
Published at Wednesday, April 15th 2020, 03:58:53 AM. Kindergarten Worksheets By Vilma Dyngeland. Clear your doubts thoroughly and memorize formulas for their right implementation. Understanding math formulas is not enough to score well in exams. Students should know their right implementation and hence, they can achieve their learning goal. Take learning help from online tutors at your convenient time. Online tutoring is a proven method to get requisite learning help whenever required. This innovative tutoring process does not have any time and geographical restriction. Students from any part of the world can access this learning session especially for math by using their computer and internet connection. Most importantly, the beneficial tools like the white board and attached chat box which are used in this process make the entire session interactive and similar to live sessions. Hence, it enhances students' confidence and meets their overall educational demands in the best possible manner. Students can get help on steps to improve their grades in Maths, and they can also work on different grades like 7th grade math and with online Math help students can work on different math related topics.
Published at Tuesday, April 14th 2020, 05:14:52 AM. Kindergarten Worksheets By Violetta Fleury. By the age of three, your child is ready to move on to mathematics worksheets. This does not mean that you should stop playing counting and number games with your child; it just adds another tool to your toolbox. Worksheets help to bring some structure into a child's education using a systematic teaching method, particularly important with math, which follows a natural progression. Learning about numbers includes recognizing written numbers as well as the quantity those numbers represent. Mathematics worksheets should provide a variety of fun activities that teach your child both numbers and quantity. Look for a variety of different ways to present the same concepts. This aids understanding and prevents boredom. Color-by-Numbers pictures are a fun way to learn about numbers and colors too.
Published at Saturday, April 11th 2020, 06:32:06 AM. Kindergarten Worksheets By Aceline Perez. In a growing move amongst home-schoolers to look at online courses, one subject area lends itself towards a bit more hesitation from the group. Home-schoolers want to like online courses because of the flexibility of them, but with regard to math, they are just not so sure about the validity of online math. There is reason for this, but many students are having good success with online math programs, and slowly but surely, the homeschooling community is coming around. Home-schoolers tend to shy away from online math due to the perception that math is better learned with a real person giving instruction and students following along in their textbooks. Many students learn well this way, but online math courses operate on a different philosophy. They presume that students can learn to understand material with information, practice, and feedback, and in essence, can become their own teachers. This is a far more effective method of instruction in the long run, and while it does take some adjustment, many programs make this method very viable for students of all abilities. |
Isosceles Triangles. Sec: 4.6 Sol: G.5. Properties of Isosceles Triangles. An isosceles triangle is a triangle with two congruent sides. The congruent sides are called legs and the third side is called the base. (Figure labels: legs, base, vertex, base angles, vertices A, B, C.)
An isosceles triangle is a triangle with two congruent sides.
The congruent sides are called legs and the third side is called the base.
If two sides of a triangle are congruent, then the angles opposite those sides (base angles) are congruent.
What is the measure of
3x - 7
If two angles of a triangle are congruent, then the sides opposite those angles are congruent.
Find the value of x.
Since two angles are congruent, the
sides opposite these angles must be congruent.
3x – 7 = x + 15
2x = 22
x = 11
Lesson 3-2: Isosceles Triangle
4.3 A triangle is equilateral if and only if it is equiangular.
4.4 Each angle of an equilateral triangle measures 60°.
Find the measure of angle 1 and 2:
Solve for X:
Classwork: WB pg 51 1-6
Homework: Worksheet 4-6 |
MATH 223: Different Levels of Generality
One challenge when doing Mathematics is to choose the right level of generality. In the context of Linear Algebra there are at least three levels of generality. We began the course with 2 × 2 matrices

A = [ a  b ]
    [ c  d ]

and a number of results were proven by considering the 4 entries individually. We then proved a number of results using matrices. Finally we also proved results by considering linear transformations, a special class of functions which can be associated with a matrix. Thus we have three levels of generality:
1. entries of the matrix, e.g. a, b, c, d
2. matrices, e.g. A, B, A−1
3. linear transformations, e.g. f, f ◦ g, f −1
Our first proof of the product rule for determinants for 2 × 2 matrices (det(AB) = det(A) det(B)) considered the entries individually. We generalized to n × n matrices by obtaining a matrix proof that any invertible matrix is a product of elementary matrices and that the product rule works for elementary matrices. Our proof of the associative rule for matrix multiplication was done using linear transformations, using the easy fact that function composition is associative. Our proof that A−1 is both a left and right inverse is best done using the idea that for a function f, the compositional inverse f −1 is unique.
When faced with a new problem it is not clear which level of generality to use; the good news is that you have three possibilities to find the proof. Often there are many ways to get to the answer. Typically, if you can get the proof to work, it is preferable to use the higher level of generality. But there are more challenges than just this. We can view a matrix in many ways. It can be viewed as a linear transformation. If the matrix is invertible it can be viewed as a change of coordinate matrix. We might view the matrix as a set of column vectors. Or we might view it as a set of row vectors. And at least for one lecture the matrix encodes a block design. This would be only a small sampling of the many possible interpretations of a matrix. Thus when faced with a problem in Linear Algebra, it might be quite difficult to choose the ‘right’ level of generality or the ‘right’ matrix interpretation. |
Sep 28, 2011
A recent close encounter with Titan uncovered more surface anomalies on the haze-shrouded moon.
On October 15, 1997 NASA launched the Cassini-Huygens spacecraft atop a Titan IV-Centaur rocket. The six ton payload was the largest deep space mission ever deployed, requiring a seven year journey to Saturn. Gravitational assists from Venus, Earth, and Jupiter were needed because Cassini could not carry enough fuel for a straight route to Saturn.
Cassini-Huygens entered orbit around Saturn on June 30, 2004. Its name has been changed twice since then. The Cassini Equinox Mission was a two-year extension which began on July 1, 2008, following the completion of its Prime Mission that lasted from July 1, 2004 to June 30, 2008. Subsequently, its name was changed to the Cassini-Solstice Mission, named for the summer solstice on Saturn that will take place in May 2017.
On September 12, 2011 Cassini completed close flyby number 78 of Titan, coming within 5800 kilometers of the giant moon’s cloud tops. Titan has long been a mystery to planetary scientists. Perhaps the most perplexing find is that methane gas continuously escapes from Titan’s low-gravity environment. Sunlight is also dissociating methane molecules in its upper atmosphere, changing them back into their carbon and hydrogen constituents. Since consensus theories propose Titan’s age to be in the billions of years, how has its dense atmosphere survived for those countless eons?
NASA insists that Titan’s atmosphere is somehow constantly replenished, because so much of it is destroyed by sunlight and leaks away to space. Current theories about the formation of the Solar System imply that Titan is old, billions of years old. With so much loss at such a rapid rate, Titan’s atmosphere should have evaporated a long time ago. Astrophysicists can only imagine oceans of liquid methane beneath the cloud cover as a source of replenishment.
When the Huygens lander touched down on a flat, rocky plain, the idea that Titan is “wet” with hydrocarbons suffered a serious blow. No methane was falling from the sky and no methane puddles were visible. Rather, in keeping with images sent from orbit, a vast dry area covered with “sand dunes” was seen.
Based on years of analysis, Electric Universe advocates think that the Solar System was once the scene of devastating encounters between charged bodies in the recent past. Giant clouds of plasma rife with electric arc discharges disrupted the orbital arrangement of planets and moons, as well as adding new objects. Large bodies like Titan all the way down to the small particles that make up Saturn’s rings might have recently come into existence.
If Titan is a new addition to Saturn’s family of 60 moons and counting, then its dense atmosphere does not presuppose replenishment, but juvenescence. Titan is not losing an ancient atmosphere, its atmosphere is new.
The Electric Universe hypothesis paints a more complete picture when data from space probes and telescopes are inserted. It is not considered a viable model by mainstream researchers because of the time element involved. Its opponents presume that the Solar System has remained the same since its formation billions of years ago. A 10,000 year time span for planets and moons to be altered or begotten is blasphemous to the consensus opinion.
However, when the time for change in a paradigm arrives, change is inevitable. The growing interest and adherence to the Electric Universe paradigm means that changes to human thought are coming soon. |
Special Relativity with Geometric Algebra - Spacetime Algebra
Paths of objects
There are different ways to understand and formalize the paths objects take and how they move over time.
Position over time
Figure 1 - Path of an object with constant velocity. X-axis: time. Y-axis: space.
In physics we often draw the path an object takes over time in a diagram where time is on the x-axis and space is on the y-axis. Figure 1 shows such a diagram. For an object with a constant velocity of one half meters per second we have the following equations
Position over time - flipped space and time axes
In relativity, the space and time axes are usually flipped, so that space is on the x-axis and time is on the y-axis. We will also follow this standard practice. Doing this our diagram will look like this
Figure 2 - Path of an object with constant velocity, X-axis: space. Y-axis: time.
Since we are using geometric algebra, we will use vectors to formulate paths of objects instead. We will have the usual spatial basis vectors, but why not introduce a basis vector for time too? After all, in our diagram these don't really look any different. If we do this we have four basis vectors in total and instead of just space we now have spacetime.
Parameterized paths with vectors
Figure 3 - Light-blue: Orthonormal basis vectors for time and space. Blue: Vector path of an object parameterized by a path parameter.
For our previous example, for every step in the space direction we take two steps in the time direction. So an unnormalized direction vector for the path is given by one step along the space basis vector plus two steps along the time basis vector. We can now introduce a parameter that sweeps out our path.
We can also calculate a path velocity by taking the derivative with respect to our path parameter
The path velocity is always tangent to the path.
Note that the parameterization for our path is somewhat arbitrary. We could just as well have multiplied our path by a constant and have gotten the same path. What happens to the path velocity when we multiply our path by a constant factor? The path velocity also receives the same constant factor. In order to fix this arbitrary path parameter, we choose the length of the path velocity to be the speed of light, i.e. the path velocity squares to c². The path parameter fixed in this way is called the proper time.
Our path can then be parameterized by proper time, and we introduce a short-hand notation for the derivative with respect to proper time. Because of our definition of proper time, the path velocity now squares to c². In many places the same equation but with 1 on the right-hand side is seen; this is because often the choice c = 1 is made.
Another thing we want to look at is what the points in our spacetime, such as the points on our paths, represent. A point contains a time coordinate and three space coordinates. Points in spacetime are also called events because of this.
Figure 4 - Time as another dimension and spacetime events
An event as shown in the diagram could be "I left home at 8am" with the position being home and the time being 8am. Another event could then be "I arrived at work at 9am" with position work and time 9am. We can now form difference vectors again. For this example assume home and work are 10km apart in the x direction. Then we have a difference vector with a time component of one hour and a space component of 10km in the x direction.
Does this expression make sense? The first problem we can notice is that the units don't match up. How do we add kilometers (spatial distance) and seconds (time difference)? To remedy this, we could multiply the time component by a constant speed, as that would result in a distance. Why not choose the speed of light c? We now have an expression with the correct units: the time difference multiplied by c, together with the spatial offset.
Well, we got around the unit issue, although we did not justify the multiplication by c very well yet. The true justification for it will come soon.
More spacetime paths
Let's take a look at some more types of paths in spacetime.
Figure 5 - Paths in spacetime. Blue (a): Object at rest. Green (b): Object with constant velocity. Purple (c): Accelerated object. Yellow (l): Light. Red (e): Path faster than light.
Object at rest (a)
An object at rest does not move in space over time. Its path points purely in the time direction. The path can of course still be arbitrarily offset on the space axes.
The path velocity always points in the time direction, so objects at rest will always have a path velocity proportional to the time basis vector. Paths parameterized by proper time will have a path velocity that, by definition, squares to c². This will become very important later as objects at rest play an important role in Special Relativity.
Object with constant velocity (b)
Objects with constant velocity can move in space. Their path will be a rotated straight line. The more rotated the line is towards the space dimension, the faster the object goes.
The path velocity for an object moving along the x-direction will be some mix of the time basis vector and the x basis vector, although there are some restrictions to this.
Object with acceleration (c)
An object with acceleration could trace out a curved path like (c) in the diagram. Objects with non-zero acceleration won't be covered for now.
Light always moves at the speed of light. This is the second postulate of Special Relativity. Its path can be parameterized by
(the factor of c for the time dimension, as mentioned earlier, will be fully justified soon). This will trace out a 45° angle in our diagrams.
Faster than light (e)
Because nothing can move faster than light, this means all of our paths need to be steeper than 45°. Otherwise the object would be going faster than light.
Something we have not looked at yet is what a good notion of distance in spacetime is. Squaring vectors gives us vector lengths. When we do this, we make use of the squares of basis vectors. What should our spacetime basis vectors square to? To figure this out we will perform a thought experiment involving light clocks and trains.
Light clocks and trains
First of all, there are great videos demonstrating what we're about to investigate. You might want to watch them first, or watch them if you get confused by the writing; the videos do a much better job. For example this one (although they put the device on the train instead of outside of it). We won't be using mirrors here, as we can get the same result without two trips, which also simplifies the math a bit.
Video 1 - Left: Apparatus as seen from Alice who is at rest with it. Right: Apparatus as seen from Bob who is moving relative to it.
Figure 6 - Left: Alice 'a' has a device that sends light from bottom and receives it at the top. Middle: Bob is on a moving train and looks at the device. Right: Charlie is on another moving train and looks at the device.
Setup and Alice
Consider Alice standing still on the ground with an apparatus as pictured in figure 6. Light is sent from the bottom of the apparatus to the top in a straight line, so it is received at the top at the same horizontal coordinate it was sent from. We call the time it took for the light to be sent and received the elapsed time measured by Alice.
Given that elapsed time, we know that the distance the light moved must be equal to the speed of light multiplied by the elapsed time. We also already knew that this distance is just the height of the apparatus, so the height equals the speed of light times Alice's elapsed time.
Points of view and Bob's view
Introduce Bob on a train. From Alice's point of view the train is moving with constant velocity
to the left. What exactly does "point of view" mean here? From our own point of view, we are always standing still and not moving in space. For example, for Bob it looks like he is standing still on the train while Alice, along with her apparatus, is moving with the same speed to the right.
What does the light in the apparatus look like from Bob's view? On sending, the light starts at some horizontal coordinate and is moving upwards in the apparatus. The second postulate of Special Relativity was that the speed of light is constant, so adding the speed of the train to the speed of light would not make sense. What happens in reality is that the apparatus keeps on moving, so it slides away from the light, while the light just keeps moving straight up from Bob's point of view. When the light is received, it is not received at the same horizontal coordinate anymore. If we look at the path the light traced out, it is a diagonal. We note the horizontal distance the light covered and the time it took as measured by Bob.
Let's look at the picture. Alice saw the light start and end at the same horizontal coordinate, yet Bob saw it start at one horizontal coordinate and end at a different one. How is this possible? It seems unintuitive to everyday life, but this is what actually happens. Using Pythagoras' theorem we can see that there is a relation between the three distances: (total distance the light covered)² = (height of the apparatus)² + (horizontal distance)².
Invariant distance in Spacetime
A third person, Charlie, is also on a train, but moving at a different velocity. We will get an equation identical in form to Bob's.
Solving Alice's, Bob's and Charlie's equations for the squared height of the apparatus (which requires squaring Alice's equation), we get for each observer an expression of the form (speed of light × elapsed time)² − (horizontal distance)².
All three right-hand sides must be equal. Does this look familiar? Think back to passive transformations. The coefficients of a vector expressed in a different coordinate basis might change, but the vector itself and its length does not change under passive transformations. This is exactly what happened here! There is a small but important difference in that they have a minus sign instead of a plus sign in front of the spatial offsets, so it is not just the ordinary euclidean distance we are dealing with here.
Note: Alice's part only appears to be missing because her spatial offset is zero (the light started and ended at the same horizontal coordinate).
In summary, all that happened was that the observers Alice, Bob and Charlie were using different coordinate systems, so the values they measured as expressed in their own coordinate systems did not match up, even though the thing they were measuring was fundamentally the same.
What we have discovered is the notion of distance in spacetime that we can use to measure the distance between spacetime events. With all three space dimensions it is

(c Δt)² − Δx² − Δy² − Δz²
If this quantity is preserved, then this also implies that the usual euclidean distance in spacetime is not preserved.
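A small numeric sketch in Python (with made-up numbers, not taken from the text) shows this invariance for the light-clock events: Alice sees the flash go straight up a 3 m apparatus, while a moving observer sees the same flash drift 4 m horizontally; both compute the same spacetime interval, which is zero for a light-like separation.

c = 3.0e8                                  # speed of light in m/s

def interval_sq(dt, dx, dy=0.0, dz=0.0):
    # squared spacetime interval between two events
    return (c * dt)**2 - dx**2 - dy**2 - dz**2

h = 3.0                                    # apparatus height in metres
t_alice = h / c                            # Alice: light travels straight up
d_bob = 4.0                                # horizontal drift seen by Bob
t_bob = (h**2 + d_bob**2) ** 0.5 / c       # Bob: longer, diagonal light path

print(interval_sq(t_alice, 0.0, h))        # ~0 (light-like), up to rounding
print(interval_sq(t_bob, d_bob, h))        # ~0 -- the same interval for Bob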
Using this result we can now see what changes we need to make to our 4D algebra to arrive at the correct Spacetime Algebra.
The only change we need to make is to the squares of our basis vectors. Having them all square to +1 would give us the euclidean distance where all signs in the distance are positive. However we want the spatial signs to be negative, so naturally we choose the spatial basis vectors to square to −1 while the time basis vector squares to +1.
This is usually referred to as the Spacetime Algebra. It has wide applications in physics and can be used to describe for example classical electromagnetics, most parts of the standard model of particle physics and, of course, relativity.
We can now verify that squaring a difference vector gives us the correct distance
Now we also have a justification for the factor of c in front of the time component. Furthermore the algebra has the following basis blades
Figure 7 - Basis blades of the Spacetime Algebra
In Geometric Algebra we are usually very interested in the bivectors, as we can use them for building rotors that do interesting transformations which also compose easily. For example, in ordinary Geometric Algebra the bivectors square to −1 and the resulting rotors perform ordinary rotation.
Next we will take a look at a fundamental problem that relativity solves: the addition of velocities at speeds close to the speed of light. For this we will need to take a look at our bivectors squaring to +1 and their rotors.
We started the section by looking at how we can express the paths objects take. We ended up with paths in spacetime parameterized by a path parameter. Differentiating the path yields the path velocity tangent to the path. The path parameter is called the proper time when the path velocity squares to c².
We then saw that the points in spacetime are events containing both a space and time coordinate, and that we had to multiply our time component by the speed of light for the units to make sense.
After this we turned our attention to paths again and looked at different kinds of paths of objects in spacetime. Paths of objects at rest are straight lines in the
time direction. Paths with constant velocity are straight lines in both time and space directions. Light paths are at 45° angles in our diagrams and nothing can have an angle less steep than this.
Finally we performed a thought experiment involving a light clock and different observers going at different velocities relative to it to uncover a distance metric for our spacetime. This led to the introduction of the Spacetime Algebra with
the time basis vector squaring to +1 and the spatial basis vectors squaring to −1.
- Path in spacetime:
- Path velocity:
- Path velocity of object at rest:
- Path parameterized by proper time:
- Spacetime distance / invariant interval: (c Δt)² − Δx² − Δy² − Δz²
- Spacetime Algebra:
Next we will look at a problem that arises when adding velocities close to the speed of light with ordinary addition, and we will see how the bivectors squaring to
+1 can be used to solve the problem. |
Where even rock is weaker - Between 90 and 110 kilometers below ground, Earth’s hard shell – the lithosphere – meets the more pliable asthenosphere. The boundary between the two layers is no more than 11 kilometers thick, according to a new study.
Earth’s cool, rigid upper layer, known as the lithosphere, rides on top of its warmer, more pliable neighbor, the asthenosphere, as a series of massive plates. Plates continuously shift and break, triggering earthquakes, sparking volcanic eruptions, sculpting mountains and carving trenches under the sea.
But what, exactly, divides the lithosphere and the asthenosphere? In the latest issue of Nature, a trio of geophysicists from Brown University and the Massachusetts Institute of Technology publish research that sheds new light on the nature of the boundary between these rocky regions.
Lead author Catherine Rychert, a 26-year-old graduate student in Brown’s Department of Geological Sciences, found a sharp dividing line between the lithosphere and the asthenosphere, according to data culled from seismic sensors sprinkled across the northeastern United States and southeastern Canada. Rychert and colleagues discovered that sound waves recorded by the sensors slow considerably about 90 to 110 kilometers below ground – a sign that the rock is getting weaker and that the lithosphere is giving way to the asthenosphere. Within a distance of a mere 11 kilometers – roughly 7 miles or less – the transition is complete.
Wendy Lawton | EurekAlert!
The rate law or rate equation for a chemical reaction is an equation that links the initial or forward reaction rate with the concentrations or pressures of the reactants and constant parameters (normally rate coefficients and partial reaction orders). For many reactions, the initial rate is given by a power law such as

v0 = k[A]^x[B]^y
where [A] and [B] express the concentration of the species A and B, usually in moles per liter (molarity, M). The exponents x and y are the partial orders of reaction for A and B and the overall reaction order is the sum of the exponents. These are often positive integers, but they may also be zero, fractional, or negative. The constant k is the reaction rate constant or rate coefficient of the reaction. Its value may depend on conditions such as temperature, ionic strength, surface area of an adsorbent, or light irradiation. If the reaction goes to completion, the rate equation for the reaction rate applies throughout the course of the reaction.
Elementary (single-step) reactions and reaction steps have reaction orders equal to the stoichiometric coefficients for each reactant. The overall reaction order, i.e. the sum of stoichiometric coefficients of reactants, is always equal to the molecularity of the elementary reaction. However, complex (multi-step) reactions may or may not have reaction orders equal to their stoichiometric coefficients. This implies that the order and the rate equation of a given reaction cannot be reliably deduced from the stoichiometry and must be determined experimentally, since an unknown reaction mechanism could be either elementary or complex. When the experimental rate equation has been determined, it is often of use for deduction of the reaction mechanism.
The rate equation of a reaction with an assumed multi-step mechanism can often be derived theoretically using quasi-steady state assumptions from the underlying elementary reactions, and compared with the experimental rate equation as a test of the assumed mechanism. The equation may involve a fractional order, and may depend on the concentration of an intermediate species.
A reaction can also have an undefined reaction order with respect to a reactant if the rate is not simply proportional to some power of the concentration of that reactant; for example, one cannot talk about reaction order in the rate equation for a bimolecular reaction between adsorbed molecules:
Consider a typical chemical reaction in which two reactants A and B combine to form a product C:

A + 2B → 3C

This can also be written

0 = −A − 2B + 3C

The prefactors −1, −2 and 3 (with negative signs for reactants because they are consumed) are known as stoichiometric coefficients. One molecule of A combines with two of B to form 3 of C, so if we use the symbol [X] for the number of moles of chemical X,

−Δ[A] = −(1/2)Δ[B] = (1/3)Δ[C]

If the reaction takes place in a closed system at constant temperature and volume, without a build-up of reaction intermediates, the reaction rate is defined as

v = (1/νi)·(d[Xi]/dt)
where νi is the stoichiometric coefficient for chemical Xi, with a negative sign for a reactant.
The initial reaction rate has some functional dependence on the concentrations of the reactants,

v0 = f([A], [B], …)
and this dependence is known as the rate equation or rate law. This law generally cannot be deduced from the chemical equation and must be determined by experiment.
A common form for the rate equation is a power law:

v = k[A]^x[B]^y⋯
The constant k is called the rate constant. The exponents, which can be fractional, are called partial orders of reaction and their sum is the overall order of reaction.
In a dilute solution, an elementary reaction (one having a single step with a single transition state) is empirically found to obey the law of mass action. This predicts that the rate depends only on the concentrations of the reactants, raised to the powers of their stoichiometric coefficients.
The natural logarithm of the power-law rate equation is

ln v = ln k + x ln[A] + y ln[B] + ⋯
This can be used to estimate the order of reaction of each reactant. For example, the initial rate can be measured in a series of experiments at different initial concentrations of reactant A with all other concentrations [B], [C], … kept constant, so that

ln v0 = x ln[A]0 + constant

The slope of a graph of ln v0 as a function of ln[A]0 then corresponds to the order x with respect to reactant A.
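As a minimal sketch of this method of initial rates (the data below are synthetic, generated with x = 2, and not from the text), fitting ln v0 against ln [A]0 recovers the partial order as the slope.

import numpy as np

k_eff = 0.5                              # effective constant with [B], [C], ... fixed
A0 = np.array([0.1, 0.2, 0.4, 0.8])      # initial concentrations of A
v0 = k_eff * A0**2                       # synthetic initial rates (order 2)

order_x, _ = np.polyfit(np.log(A0), np.log(v0), 1)
print(round(order_x, 3))                 # 2.0 -- recovered partial order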
However, this method is not always reliable because
The tentative rate equation determined by the method of initial rates is therefore normally verified by comparing the concentrations measured over a longer time (several half-lives) with the integrated form of the rate equation; this assumes that the reaction goes to completion.
For example, the integrated rate law for a first-order reaction is

ln[A] = −kt + ln[A]0

where [A] is the concentration at time t and [A]0 is the initial concentration at zero time. The first-order rate law is confirmed if ln[A] is in fact a linear function of time. In this case the rate constant is equal to the slope with sign reversed.
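A short sketch of this integral method on synthetic first-order data (assumed k = 0.3 per second, not from the text): if ln [A] is linear in t, the rate constant is minus the slope.

import numpy as np

k_true = 0.3
t = np.linspace(0.0, 10.0, 6)            # sample times in seconds
A = 1.0 * np.exp(-k_true * t)            # [A] = [A]0 exp(-k t) with [A]0 = 1.0

slope, intercept = np.polyfit(t, np.log(A), 1)
print(round(-slope, 3))                  # 0.3 -- recovered rate constant
print(round(np.exp(intercept), 3))       # 1.0 -- recovered [A]0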
The partial order with respect to a given reactant can be evaluated by the method of flooding (or of isolation) of Ostwald. In this method, the concentration of one reactant is measured with all other reactants in large excess so that their concentration remains essentially constant. For a reaction a·A + b·B → c·C with rate law v = k[A]^x[B]^y, the partial order x with respect to A is determined using a large excess of B. In this case

v = k'[A]^x with k' = k[B]0^y,

and x may be determined by the integral method. The order y with respect to B under the same conditions (with B in excess) is determined by a series of similar experiments with a range of initial concentration [B]0 so that the variation of k' can be measured.
For zero-order reactions, the reaction rate is independent of the concentration of a reactant, so that changing its concentration has no effect on the rate of the reaction. Thus, the concentration changes linearly with time. This may occur when there is a bottleneck which limits the number of reactant molecules that can react at the same time, for example if the reaction requires contact with an enzyme or a catalytic surface.
Many enzyme-catalyzed reactions are zero order, provided that the reactant concentration is much greater than the enzyme concentration which controls the rate, so that the enzyme is saturated. For example, the biological oxidation of ethanol to acetaldehyde by the enzyme liver alcohol dehydrogenase (LADH) is zero order in ethanol.
Similarly reactions with heterogeneous catalysis can be zero order if the catalytic surface is saturated. For example, the decomposition of phosphine (PH3) on a hot tungsten surface at high pressure is zero order in phosphine which decomposes at a constant rate.
In homogeneous catalysis zero order behavior can come about from reversible inhibition. For example, ring-opening metathesis polymerization using third-generation Grubbs catalyst exhibits zero order behavior in catalyst due to the reversible inhibition that occurs between the pyridine and the ruthenium center.
A first order reaction depends on the concentration of only one reactant (a unimolecular reaction). Other reactants can be present, but their concentration has no effect on the rate. The rate law for a first order reaction is

v = k[A]
Although not affecting the above math, the majority of first order reactions proceed via intermolecular collisions. Such collisions, which contribute the energy to the reactant, are necessarily second order. The rate of these collisions is however masked by the fact that the rate determining step remains the unimolecular breakdown of the energized reactant.
The half-life is independent of the starting concentration and is given by t1/2 = ln(2)/k.
Examples of such reactions are:
In organic chemistry, the class of SN1 (nucleophilic substitution unimolecular) reactions consists of first-order reactions. For example, in the reaction of aryldiazonium ions with nucleophiles in aqueous solution ArN2+ + X− → ArX + N2, the rate equation is v = k[ArN2+], where Ar indicates an aryl group.
A reaction is said to be second order when the overall order is two. The rate of a second-order reaction may be proportional to one concentration squared, v = k[A]^2, or (more commonly) to the product of two concentrations, v = k[A][B]. As an example of the first type, the reaction NO2 + CO → NO + CO2 is second-order in the reactant NO2 and zero order in the reactant CO. The observed rate is given by v = k[NO2]^2, and is independent of the concentration of CO.
For the rate proportional to a single concentration squared, the time dependence of the concentration is given by
1/[A] = 1/[A]0 + kt.
The time dependence for a rate proportional to two unequal concentrations is
ln([A]/[B]) = ln([A]0/[B]0) + ([A]0 − [B]0)kt;
if the concentrations are equal, they satisfy the previous equation.
The second type includes nucleophilic addition-elimination reactions, such as the alkaline hydrolysis of ethyl acetate:
CH3COOC2H5 + OH− → CH3COO− + C2H5OH
This reaction is first-order in each reactant and second-order overall: v = k[CH3COOC2H5][OH−].
If the same hydrolysis reaction is catalyzed by imidazole, the rate equation becomes v = k[imidazole][CH3COOC2H5]. The rate is first-order in one reactant (ethyl acetate), and also first-order in imidazole which as a catalyst does not appear in the overall chemical equation.
Another well-known class of second-order reactions are the SN2 (bimolecular nucleophilic substitution) reactions, such as the reaction of n-butyl bromide with sodium iodide in acetone:
CH3CH2CH2CH2Br + NaI → CH3CH2CH2CH2I + NaBr
This same compound can be made to undergo a bimolecular (E2) elimination reaction, another common type of second-order reaction, if the sodium iodide and acetone are replaced with sodium tert-butoxide as the salt and tert-butanol as the solvent:
CH3CH2CH2CH2Br + NaOC(CH3)3 → CH3CH2CH=CH2 + NaBr + HOC(CH3)3
If the concentration of a reactant remains constant (because it is a catalyst, or because it is in great excess with respect to the other reactants), its concentration can be included in the rate constant, obtaining a pseudo–first-order (or occasionally pseudo–second-order) rate equation. For a typical second-order reaction with rate equation v = k[A][B], if the concentration of reactant B is constant then v = k[A][B] = k′[A], where the pseudo–first-order rate constant is k′ = k[B]. The second-order rate equation has been reduced to a pseudo–first-order rate equation, which makes the treatment to obtain an integrated rate equation much easier.
One way to obtain a pseudo-first order reaction is to use a large excess of one reactant (say, [B] ≫ [A]) so that, as the reaction progresses, only a small fraction of the reactant in excess (B) is consumed, and its concentration can be considered to stay constant. For example, the hydrolysis of esters by dilute mineral acids follows pseudo-first order kinetics where the concentration of water is present in large excess:
CH3COOCH3 + H2O → CH3COOH + CH3OH
The hydrolysis of sucrose (C12H22O11) in acid solution is often cited as a first-order reaction with rate r = k[C12H22O11]. The true rate equation is third-order, r = k[C12H22O11][H+][H2O]; however, the concentrations of both the catalyst H+ and the solvent H2O are normally constant, so that the reaction is pseudo–first-order.
Elementary reaction steps with order 3 (called ternary reactions) are rare and unlikely to occur. However, overall reactions composed of several elementary steps can, of course, be of any (including non-integer) order.
| ||Zero order||First order||Second order||nth order (g = 1−n)|
|Rate law||v = k||v = k[A]||v = k[A]^2||v = k[A]^n|
|Integrated rate law||[A] = [A]0 − kt||[A] = [A]0 e^−kt||1/[A] = 1/[A]0 + kt||[A]^g = [A]0^g + (n − 1)kt [except first order]|
|Units of rate constant (k)||M·s−1||s−1||M−1·s−1||M^(1−n)·s−1|
|Linear plot to determine k||[A] vs. t||ln[A] vs. t||1/[A] vs. t||[A]^g vs. t [except first order]|
|Half-life (t1/2)||[A]0/(2k)||ln(2)/k||1/(k[A]0)||(2^(n−1) − 1)[A]0^g/((n − 1)k) [except first order, for which the limit n → 1 must be taken]|
Where M stands for concentration in molarity (mol · L−1), t for time, and k for the reaction rate constant. The half-life of a first order reaction is often expressed as t1/2 = 0.693/k (as ln(2)≈0.693).
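As a small worked illustration of the half-life column (the values of k and [A]0 below are invented, and k would carry different units for each order), the formulas can be evaluated directly:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double A0 = 1.0, k = 0.25;   /* hypothetical initial concentration and rate constant */
        printf("zero order   t1/2 = %.3f\n", A0 / (2.0 * k));
        printf("first order  t1/2 = %.3f\n", log(2.0) / k);
        printf("second order t1/2 = %.3f\n", 1.0 / (k * A0));
        int n = 3;                   /* third order, as an instance of the nth-order formula */
        printf("third order  t1/2 = %.3f\n",
               (pow(2.0, n - 1) - 1.0) / ((n - 1) * k * pow(A0, n - 1)));
        return 0;
    }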
In fractional order reactions, the order is a non-integer, which often indicates a chemical chain reaction or other complex reaction mechanism. For example, the pyrolysis of acetaldehyde (CH3CHO) into methane and carbon monoxide proceeds with an order of 1.5 with respect to acetaldehyde: r = k[CH3CHO]3/2. The decomposition of phosgene (COCl2) to carbon monoxide and chlorine has order 1 with respect to phosgene itself and order 0.5 with respect to chlorine: v = k[COCl2] [Cl2]1/2.
The order of a chain reaction can be rationalized using the steady state approximation for the concentration of reactive intermediates such as free radicals. For the pyrolysis of acetaldehyde, the Rice-Herzfeld mechanism is
Initiation: CH3CHO → •CH3 + •CHO (rate constant k1)
Propagation: •CH3 + CH3CHO → CH3CO• + CH4 (rate constant k2)
Propagation: CH3CO• → •CH3 + CO (rate constant k3)
Termination: 2 •CH3 → C2H6 (rate constant k4)
where • denotes a free radical. To simplify the theory, the reactions of the •CHO to form a second •CH3 are ignored.
In the steady state, the rates of formation and destruction of methyl radicals are equal, so that
k1[CH3CHO] = 2k4[•CH3]^2,
so that the concentration of methyl radical satisfies
[•CH3] = (k1/2k4)^1/2 [CH3CHO]^1/2.
The reaction rate equals the rate of the propagation steps which form the main reaction products CH4 and CO:
v = k2[•CH3][CH3CHO] = k2 (k1/2k4)^1/2 [CH3CHO]^3/2,
in agreement with the experimental order of 3/2.
More complex rate laws have been described as being mixed order if they approximate to the laws for more than one order at different concentrations of the chemical species involved. For example, a rate law of the form v = k1[A] + k2[A]^2 represents concurrent first order and second order reactions (or, more often, concurrent pseudo-first order and second order reactions), and can be described as mixed first and second order. For sufficiently large values of [A] such a reaction will approximate second order kinetics, but for smaller [A] the kinetics will approximate first order (or pseudo-first order). As the reaction progresses, the reaction can change from second order to first order as reactant is consumed.
Another type of mixed-order rate law has a denominator of two or more terms, often because the identity of the rate-determining step depends on the values of the concentrations. An example is the oxidation of an alcohol to a ketone by hexacyanoferrate (III) ion [Fe(CN)63−] with ruthenate (VI) ion (RuO42−) as catalyst. For this reaction, the rate of disappearance of hexacyanoferrate (III) is
This is zero-order with respect to hexacyanoferrate (III) at the onset of the reaction (when its concentration is high and the ruthenium catalyst is quickly regenerated), but changes to first-order when its concentration decreases and the regeneration of catalyst becomes rate-determining.
Notable mechanisms with mixed-order rate laws with two-term denominators include:
- Michaelis–Menten kinetics of enzyme-catalyzed reactions, which are first order in substrate at low substrate concentration and zero order at high (saturating) substrate concentration.
- The Lindemann mechanism for unimolecular gas-phase reactions, which are second order at low pressure and first order at high pressure.
A reaction rate can have a negative partial order with respect to a substance. For example, the conversion of ozone (O3) to oxygen follows the rate equation v = k[O3]^2 [O2]^−1 in an excess of oxygen. This corresponds to second order in ozone and order (−1) with respect to oxygen.
When a partial order is negative, the overall order is usually considered as undefined. In the above example, for instance, the reaction is not described as first order even though the sum of the partial orders is 2 + (−1) = 1, because the rate equation is more complex than that of a simple first-order reaction.
A pair of forward and reverse reactions may occur simultaneously with comparable speeds. For example, A and B react into products P and Q and vice versa (a, b, p, and q are the stoichiometric coefficients):
a A + b B ⇌ p P + q Q
The reaction rate expression for the above reactions (assuming each one is elementary) can be written as:
v = k1[A]^a[B]^b − k−1[P]^p[Q]^q
where: k1 is the rate coefficient for the reaction that consumes A and B; k−1 is the rate coefficient for the backwards reaction, which consumes P and Q and produces A and B.
The constants k1 and k−1 are related to the equilibrium coefficient for the reaction (K) by the following relationship (set v = 0 in balance):
K = k1/k−1 = [P]^p[Q]^q / ([A]^a[B]^b)
In a simple equilibrium between two species:
A ⇌ P
where the reaction starts with an initial concentration of reactant A, [A]0, and an initial concentration of 0 for product P at time t = 0.
Then the constant K at equilibrium is expressed as:
K = k1/k−1 = [P]e / [A]e
Where [A]e and [P]e are the concentrations of A and P at equilibrium, respectively.
The concentration of A at time t, [A]t, is related to the concentration of P at time t, [P]t, by the equilibrium reaction equation:
[A]t = [A]0 − [P]t
The term [P]0 is not present because, in this simple example, the initial concentration of P is 0.
This applies even when time t is at infinity; i.e., equilibrium has been reached:
[A]e = [A]0 − [P]e
then it follows, by the definition of K, that
[P]e = K/(1 + K) · [A]0 = k1/(k1 + k−1) · [A]0 and hence [A]e = [A]0/(1 + K) = k−1/(k1 + k−1) · [A]0.
These equations allow us to uncouple the system of differential equations, and allow us to solve for the concentration of A alone.
The reaction rate expression was given previously as:
v = k1[A]^a[B]^b − k−1[P]^p[Q]^q
For the simple equilibrium A ⇌ P this is simply
d[A]/dt = −k1[A]t + k−1[P]t = −k1[A]t + k−1([A]0 − [A]t)
The derivative is negative because this is the rate of the reaction going from A to P, and therefore the concentration of A is decreasing. To simplify notation, let x be [A]t, the concentration of A at time t. Let xe be [A]e, the concentration of A at equilibrium. Then:
dx/dt = −k1x + k−1([A]0 − x), with k−1[A]0 = (k1 + k−1)xe at equilibrium.
The reaction rate becomes:
dx/dt = −(k1 + k−1)(x − xe),
which results in:
ln(x − xe) = ln([A]0 − xe) − (k1 + k−1)t.
A plot of the negative natural logarithm of the concentration of A in time minus the concentration at equilibrium versus time t gives a straight line with slope k1 + k−1. By measurement of [A]e and [P]e the values of K and the two reaction rate constants will be known.
If the concentration at the time t = 0 is different from above, the simplifications above are invalid, and a system of differential equations must be solved. However, this system can also be solved exactly to yield the following generalized expressions:
[A]t = [A]e + ([A]0 − [A]e) e^−(k1+k−1)t and [P]t = [P]e + ([P]0 − [P]e) e^−(k1+k−1)t,
with [A]e = k−1([A]0 + [P]0)/(k1 + k−1) and [P]e = k1([A]0 + [P]0)/(k1 + k−1).
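A short numerical sketch of the relaxation described above (the rate constants and initial concentration are invented for illustration); a plot of ln([A] − [A]e) against t would be a straight line of slope −(k1 + k−1):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* Hypothetical constants for A <-> P with [P]0 = 0. */
        double k1 = 0.30, km1 = 0.10;        /* forward and reverse, s^-1 */
        double A0 = 1.0;                     /* mol/L */
        double Ae = km1 * A0 / (k1 + km1);   /* equilibrium concentration of A */

        /* [A](t) = [A]e + ([A]0 - [A]e) exp(-(k1 + k-1) t) */
        for (double t = 0.0; t <= 10.0; t += 2.0) {
            double At = Ae + (A0 - Ae) * exp(-(k1 + km1) * t);
            printf("t = %4.1f  [A] = %.4f  ln([A]-[A]e) = %+.4f\n",
                   t, At, log(At - Ae));
        }
        printf("expected slope = %.2f\n", -(k1 + km1));
        return 0;
    }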
When the equilibrium constant is close to unity and the reaction rates are very fast, as for instance in the conformational analysis of molecules, other methods are required for the determination of rate constants, for example by complete lineshape analysis in NMR spectroscopy.
If the rate constants for the consecutive reaction A → B → C are k1 and k2, then the rate equations are:
for reactant A: d[A]/dt = −k1[A]
for intermediate B: d[B]/dt = k1[A] − k2[B]
for product C: d[C]/dt = k2[B]
With the individual concentrations scaled by the total population of reactants to become probabilities, linear systems of differential equations such as these can be formulated as a master equation. The differential equations can be solved analytically and, for [B]0 = [C]0 = 0, the integrated rate equations are
[A] = [A]0 e^−k1t
[B] = [A]0 (k1/(k2 − k1)) (e^−k1t − e^−k2t)
[C] = [A]0 (1 + (k1 e^−k2t − k2 e^−k1t)/(k2 − k1))
The steady state approximation leads to very similar results in an easier way.
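As a numerical check of the consecutive scheme A → B → C (the rate constants and step size below are invented for illustration), a simple Euler integration of the coupled rate equations can be compared with the analytic expression for [B]:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double k1 = 1.0, k2 = 0.5;          /* hypothetical rate constants, s^-1 */
        double A = 1.0, B = 0.0, C = 0.0;   /* mol/L, with [A]0 = 1 */
        double dt = 1e-4, t = 0.0;

        /* Euler integration of d[A]/dt, d[B]/dt, d[C]/dt up to t = 2 s. */
        while (t < 2.0) {
            double dA = -k1 * A;
            double dB =  k1 * A - k2 * B;
            double dC =  k2 * B;
            A += dA * dt; B += dB * dt; C += dC * dt;
            t += dt;
        }
        /* Analytic result: [B] = [A]0 k1/(k2-k1) (exp(-k1 t) - exp(-k2 t)). */
        double Bexact = 1.0 * k1 / (k2 - k1) * (exp(-k1 * t) - exp(-k2 * t));
        printf("t = %.2f  [B]_numeric = %.4f  [B]_analytic = %.4f\n", t, B, Bexact);
        return 0;
    }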
When a substance reacts simultaneously to give two different products, a parallel or competitive reaction is said to take place.
A → B and A → C, with constants k1 and k2 and rate equations −d[A]/dt = (k1 + k2)[A]; d[B]/dt = k1[A]; and d[C]/dt = k2[A].
The integrated rate equations are then [A] = [A]0 e^−(k1+k2)t; [B] = (k1/(k1 + k2)) [A]0 (1 − e^−(k1+k2)t); and [C] = (k2/(k1 + k2)) [A]0 (1 − e^−(k1+k2)t).
One important relationship in this case is [B]/[C] = k1/k2.
This can be the case when studying a bimolecular reaction while a simultaneous hydrolysis (which can be treated as pseudo first order) takes place: the hydrolysis complicates the study of the reaction kinetics, because some reactant is being "spent" in a parallel reaction. For example, A reacts with R to give our product C, but meanwhile the hydrolysis reaction takes away an amount of A to give the byproduct B: A + R → C and A + H2O → B. The rate equations are d[C]/dt = k[A][R] and d[B]/dt = k′[A], where k′ is the pseudo first order constant.
The integrated rate equation for the main product [C] is [C] = [R]0 (1 − e^−(k/k′)[A]0(1 − e^−k′t)), which is equivalent to ln([R]0/([R]0 − [C])) = (k[A]0/k′)(1 − e^−k′t). The concentration of B is related to that of C through [B] = −(k′/k) ln(1 − [C]/[R]0).
The integrated equations were obtained analytically, but during the process it was assumed that A is consumed essentially only by the hydrolysis, i.e. [A] ≈ [A]0 e^−k′t; therefore, the previous equation for [C] can only be used for low concentrations of [C] compared with [A]0.
The most general description of a chemical reaction network considers a number N of distinct chemical species reacting via R reactions. The chemical equation of the j-th reaction can then be written in the generic form
s1j X1 + s2j X2 + … + sNj XN → r1j X1 + r2j X2 + … + rNj XN,
which is often written in the equivalent form
Σi sij Xi → Σi rij Xi.
The rate of such a reaction can be inferred by the law of mass action
fj(c) = kj Πi ci^sij,
which denotes the flux of molecules per unit time and unit volume. Here c = (c1, …, cN) is the vector of concentrations. This definition includes the elementary reactions:
- zero order reactions, for which all sij = 0, so that fj(c) = kj;
- first order reactions, for which sij = 1 for a single reactant, so that fj(c) = kj ci;
- second order reactions, either with sij = 1 for two different reactants (a bimolecular reaction, fj(c) = kj ci ci′) or with sij = 2 for a single reactant (a dimerization, fj(c) = kj ci^2).
Each of these is discussed in detail below. One can define the stoichiometric matrix
Sij = rij − sij,
denoting the net change in the number of molecules of species i in reaction j. The reaction rate equations can then be written in the general form
dci/dt = Σj Sij fj(c).
This is the product of the stoichiometric matrix and the vector of reaction rate functions. Particularly simple solutions exist in equilibrium, dc/dt = 0, for systems composed of merely reversible reactions. In this case the rates of the forward and backward reactions are equal, a principle called detailed balance. Detailed balance is a property of the stoichiometric matrix alone and does not depend on the particular form of the rate functions fj. All other cases where detailed balance is violated are commonly studied by flux balance analysis, which has been developed to understand metabolic pathways.
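As a minimal sketch of this general formulation (the species, reactions, rate constants, and concentrations below are invented for illustration), the rate equations dci/dt = Σj Sij fj(c) can be evaluated once the reactant stoichiometries, the net stoichiometric matrix, and the mass-action fluxes are written down:

    #include <stdio.h>
    #include <math.h>

    #define NSPEC 3   /* species A, B, C */
    #define NREAC 2   /* R1: A + B -> C,  R2: C -> A + B */

    int main(void)
    {
        /* Reactant (left-hand side) coefficients s[i][j] and net matrix S[i][j]. */
        double s[NSPEC][NREAC] = { {1, 0}, {1, 0}, {0, 1} };
        double S[NSPEC][NREAC] = { {-1, +1}, {-1, +1}, {+1, -1} };
        double k[NREAC] = {2.0, 0.5};        /* hypothetical rate constants */
        double c[NSPEC] = {1.0, 0.5, 0.0};   /* current concentrations */

        /* Mass-action flux of each reaction: f_j = k_j * prod_i c_i^(s[i][j]). */
        double f[NREAC];
        for (int j = 0; j < NREAC; j++) {
            f[j] = k[j];
            for (int i = 0; i < NSPEC; i++)
                f[j] *= pow(c[i], s[i][j]);
        }

        /* Rate equations: dc_i/dt = sum_j S[i][j] * f_j. */
        for (int i = 0; i < NSPEC; i++) {
            double dcdt = 0.0;
            for (int j = 0; j < NREAC; j++)
                dcdt += S[i][j] * f[j];
            printf("dc[%d]/dt = %+.3f\n", i, dcdt);
        }
        return 0;
    }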
For a general unimolecular reaction involving interconversion of N different species, whose concentrations at time t are denoted by c1(t) through cN(t), an analytic form for the time-evolution of the species can be found. Let the rate constant of conversion from species i to species j be denoted as kij, and construct a rate-constant matrix K whose entries are the kij.
Also, let c(t) = (c1(t), …, cN(t)) be the vector of concentrations as a function of time.
Let 1 be the vector of ones.
Let I be the identity matrix.
Let diag(·) be the function that takes a vector and constructs a diagonal matrix whose on-diagonal entries are those of the vector.
Let L−1 be the inverse Laplace transform from s to t.
Then the time-evolved state is given by
c(t) = L−1[(sI + diag(K·1) − K⊤)−1] c(0),
thus providing the relation between the initial conditions of the system and its state at time t.
Trees: Unlike Arrays, Linked Lists, Stacks and Queues, which are linear data structures, trees are hierarchical data structures.
Tree Vocabulary: The topmost node is called root of the tree. The elements that are directly under an element are called its children. The element directly above something is called its parent. For example, ‘a’ is a child of ‘f’, and ‘f’ is the parent of ‘a’. Finally, elements with no children are called leaves.
      tree
      ----
        j    <-- root
       / \
      f   k
     / \   \
    a   h   z  <-- leaves
1. One reason to use trees might be because you want to store information that naturally forms a hierarchy. For example, the file system on a computer:
    file system
    -----------
         /      <-- root
       /   \
     ...    home
           /    \
        ugrad   course
         /     /   |   \
       ...  cs101 cs112 cs113
2. Trees (with some ordering e.g., BST) provide moderate access/search (quicker than Linked List and slower than arrays).
3. Trees provide moderate insertion/deletion (quicker than Arrays and slower than Unordered Linked Lists).
4. Like Linked Lists and unlike Arrays, Trees don't have an upper limit on the number of nodes, as nodes are linked using pointers.
Main applications of trees include:
1. Manipulate hierarchical data.
2. Make information easy to search (see tree traversal).
3. Manipulate sorted lists of data.
4. As a workflow for compositing digital images for visual effects.
5. Router algorithms
6. As a form of multi-stage decision-making (see business chess).
Binary Tree: A tree whose elements have at most 2 children is called a binary tree. Since each element in a binary tree can have only 2 children, we typically name them the left and right child.
Binary Tree Representation in C: A tree is represented by a pointer to the topmost node in tree. If the tree is empty, then value of root is NULL.
A tree node contains the following parts:
1. Data
2. Pointer to left child
3. Pointer to right child
In C, we can represent a tree node using structures. Below is an example of a tree node with an integer data.
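A minimal sketch of such a node (the field names are illustrative):

    /* A binary tree node holding an integer. */
    struct node {
        int data;
        struct node *left;   /* pointer to left child  */
        struct node *right;  /* pointer to right child */
    };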
First Simple Tree in C
Let us create a simple tree with 4 nodes in C. The created tree would be as following.
     tree
     ----
       1    <-- root
      / \
     2   3
    /
   4
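One way such a program might look (a self-contained sketch; the newNode helper is our own naming, and freeing the nodes is omitted for brevity):

    #include <stdio.h>
    #include <stdlib.h>

    struct node {
        int data;
        struct node *left;
        struct node *right;
    };

    /* Allocate a new node with the given data and NULL children. */
    struct node *newNode(int data)
    {
        struct node *n = malloc(sizeof(struct node));
        n->data = data;
        n->left = NULL;
        n->right = NULL;
        return n;
    }

    int main(void)
    {
        /* Build the 4-node tree shown above: 1 is the root,
           2 and 3 are its children, and 4 is the left child of 2. */
        struct node *root = newNode(1);
        root->left  = newNode(2);
        root->right = newNode(3);
        root->left->left = newNode(4);

        printf("Root: %d, left child: %d, right child: %d\n",
               root->data, root->left->data, root->right->data);
        return 0;
    }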
Summary: Tree is a hierarchical data structure. Main uses of trees include maintaining hierarchical data, providing moderate access and insert/delete operations. Binary trees are special cases of tree where every node has at most two children.
In music theory, an interval is the difference between two pitches. An interval may be described as horizontal, linear, or melodic if it refers to successively sounding tones, such as two adjacent pitches in a melody, and vertical or harmonic if it pertains to simultaneously sounding tones, such as in a chord.
In Western music, intervals are most commonly differences between notes of a diatonic scale. The smallest of these intervals is a semitone. Intervals smaller than a semitone are called microtones. They can be formed using the notes of various kinds of non-diatonic scales. Some of the very smallest ones are called commas, and describe small discrepancies, observed in some tuning systems, between enharmonically equivalent notes such as C♯ and D♭. Intervals can be arbitrarily small, and even imperceptible to the human ear.
In physical terms, an interval is the ratio between two sonic frequencies. For example, any two notes an octave apart have a frequency ratio of 2:1. This means that successive increments of pitch by the same interval result in an exponential increase of frequency, even though the human ear perceives this as a linear increase in pitch. For this reason, intervals are often measured in cents, a unit derived from the logarithm of the frequency ratio.
In Western music theory, the most common naming scheme for intervals describes two properties of the interval: the quality (perfect, major, minor, augmented, diminished) and number (unison, second, third, etc.). Examples include the minor third or perfect fifth. These names describe not only the difference in semitones between the upper and lower notes, but also how the interval is spelled. The importance of spelling stems from the historical practice of differentiating the frequency ratios of enharmonic intervals such as G–G♯ and G–A♭.
The size of an interval (also known as its width or height) can be represented using two alternative and equivalently valid methods, each appropriate to a different context: frequency ratios or cents.
The size of an interval between two notes may be measured by the ratio of their frequencies. When a musical instrument is tuned using a just intonation tuning system, the size of the main intervals can be expressed by small-integer ratios, such as 1:1 (unison), 2:1 (octave), 3:2 (perfect fifth), 4:3 (perfect fourth), 5:4 (major third), 6:5 (minor third). Intervals with small-integer ratios are often called just intervals, or pure intervals.
Most commonly, however, musical instruments are nowadays tuned using a different tuning system, called 12-tone equal temperament. As a consequence, the size of most equal-tempered intervals cannot be expressed by small-integer ratios, although it is very close to the size of the corresponding just intervals. For instance, an equal-tempered fifth has a frequency ratio of 27/12:1, approximately equal to 1.498:1, or 2.997:2 (very close to 3:2). For a comparison between the size of intervals in different tuning systems, see section Size in different tuning systems.
The standard system for comparing interval sizes is with cents. The cent is a logarithmic unit of measurement. If frequency is expressed in a logarithmic scale, and along that scale the distance between a given frequency and its double (also called octave) is divided into 1200 equal parts, each of these parts is one cent. In twelve-tone equal temperament (12-TET), a tuning system in which all semitones have the same size, the size of one semitone is exactly 100 cents. Hence, in 12-TET the cent can be also defined as one hundredth of a semitone.
Mathematically, the size in cents of the interval from frequency f1 to frequency f2 is
n = 1200 · log2(f2/f1).
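As a small illustration (the function name and test ratios below are ours, not from the text), the cents formula can be evaluated directly:

    #include <stdio.h>
    #include <math.h>

    /* Size in cents of the interval from frequency f1 up to frequency f2. */
    static double cents(double f1, double f2)
    {
        return 1200.0 * log2(f2 / f1);
    }

    int main(void)
    {
        printf("octave      2:1 = %7.2f cents\n", cents(1.0, 2.0));  /* 1200.00 */
        printf("just fifth  3:2 = %7.2f cents\n", cents(2.0, 3.0));  /*  701.96 */
        printf("just third  5:4 = %7.2f cents\n", cents(4.0, 5.0));  /*  386.31 */
        return 0;
    }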
The table shows the most widely used conventional names for the intervals between the notes of a chromatic scale. A perfect unison (also known as perfect prime) is an interval formed by two identical notes. Its size is zero cents. A semitone is any interval between two adjacent notes in a chromatic scale, a whole tone is an interval spanning two semitones (for example, a major second), and a tritone is an interval spanning three tones, or six semitones (for example, an augmented fourth). Rarely, the term ditone is also used to indicate an interval spanning two whole tones (for example, a major third), or more strictly as a synonym of major third.
Intervals with different names may span the same number of semitones, and may even have the same width. For instance, the interval from D to F♯ is a major third, while that from D to G♭ is a diminished fourth. However, they both span 4 semitones. If the instrument is tuned so that the 12 notes of the chromatic scale are equally spaced (as in equal temperament), these intervals will also have the same width. Namely, all semitones will have a width of 100 cents, and all intervals spanning 4 semitones will be 400 cents wide.
The names listed here cannot be determined by counting semitones alone. The rules to determine them are explained below. Other names, determined with different naming conventions, are listed in a separate section. Intervals smaller than one semitone (commas or microtones) and larger than one octave (compound intervals) are introduced below.
|Number of semitones||Minor, major, or perfect intervals||Short||Augmented or diminished intervals||Short||Widely used alternative names||Short|
|0||Perfect unison||P1||Diminished second||d2|
|1||Minor second||m2||Augmented unison||A1||Semitone, half tone, half step||S|
|2||Major second||M2||Diminished third||d3||Tone, whole tone, whole step||T|
|3||Minor third||m3||Augmented second||A2|
|4||Major third||M3||Diminished fourth||d4|
|5||Perfect fourth||P4||Augmented third||A3|
|6|| || ||Diminished fifth||d5||Tritone||TT|
|7||Perfect fifth||P5||Diminished sixth||d6|
|8||Minor sixth||m6||Augmented fifth||A5|
|9||Major sixth||M6||Diminished seventh||d7|
|10||Minor seventh||m7||Augmented sixth||A6|
|11||Major seventh||M7||Diminished octave||d8|
|12||Perfect octave||P8||Augmented seventh||A7|
Interval number and quality
In Western music theory, an interval is named according to its number (also called diatonic number) and quality. For instance, major third (or M3) is an interval name, in which the term major (M) describes the quality of the interval, and third (3) indicates its number.
The number of an interval is the number of letter names it encompasses or staff positions it encompasses. Both lines and spaces (see figure) are counted, including the positions of both notes forming the interval. For instance, the interval C–G is a fifth (denoted P5) because the notes from C to G encompass five letter names (C, D, E, F, G) and occupy five consecutive staff positions, including the positions of C and G. The table and the figure above show intervals with numbers ranging from 1 (e.g., P1) to 8 (e.g., P8). Intervals with larger numbers are called compound intervals.
There is a one-to-one correspondence between staff positions and diatonic-scale degrees (the notes of a diatonic scale). This means that interval numbers can be also determined by counting diatonic scale degrees, rather than staff positions, provided that the two notes which form the interval are drawn from a diatonic scale. Namely, C–G is a fifth because in any diatonic scale that contains C and G, the sequence from C to G includes five notes. For instance, in the A♭-major diatonic scale, the five notes are C–D♭–E♭–F–G (see figure). This is not true for all kinds of scales. For instance, in a chromatic scale, the notes from C to G are eight (C–C♯–D–D♯–E–F–F♯–G). This is the reason interval numbers are also called diatonic numbers, and this convention is called diatonic numbering.
If one adds any accidentals to the notes that form an interval, by definition the notes do not change their staff positions. As a consequence, any interval has the same interval number as the corresponding natural interval, formed by the same notes without accidentals. For instance, the intervals C–G♯ (spanning 8 semitones) and C♯–G (spanning 6 semitones) are fifths, like the corresponding natural interval C–G (7 semitones).
Notice that interval numbers represent an inclusive count of encompassed staff positions or note names, not the difference between the endpoints. In other words, start counting the lower pitch as one, not zero. For that reason, the interval C–C, a perfect unison, is called a prime (meaning "1"), even though there's no difference between the endpoints. Continuing, the interval C–D is a second, but D is only one staff position, or diatonic-scale degree, above C. Similarly, C–E is a third, but E is only two staff positions above C, and so on. As a consequence, joining two intervals always yields an interval number one less than their sum. For instance, the intervals C–E and E–G are thirds, but joined together they form a fifth (C–G), not a sixth. Similarly, a stack of three thirds, such as C–E, E–G, and G–B, is a seventh (C–B), not a ninth.
Read the Compound intervals section to determine the diatonic numbers of intervals larger than an octave.
The name of any interval is further qualified using the terms perfect (P), major (M), minor (m), augmented (A), and diminished (d). This is called its interval quality. It is possible to have doubly diminished and doubly augmented intervals, but these are quite rare, as they occur only in chromatic contexts. The quality of a compound interval is the quality of the simple interval on which it is based.
Perfect intervals are so-called because they were traditionally considered perfectly consonant, although in Western classical music the perfect fourth was sometimes regarded as a less than perfect consonance, when its function was contrapuntal. Conversely, minor, major, augmented or diminished intervals are typically considered to be less consonant, and were traditionally classified as mediocre consonances, imperfect consonances, or dissonances.
Within a diatonic scale all unisons (P1) and octaves (P8) are perfect. Most fourths and fifths are also perfect (P4 and P5), with five and seven semitones respectively. There's one occurrence of a fourth and a fifth which are not perfect, as they both span six semitones: an augmented fourth (A4), and its inversion, a diminished fifth (d5). For instance, in a C-major scale, the A4 is between F and B, and the d5 is between B and F (see table).
By definition, the inversion of a perfect interval is also perfect. Since the inversion does not change the pitch of the two notes, it hardly affects their level of consonance (matching of their harmonics). Conversely, other kinds of intervals have the opposite quality with respect to their inversion. The inversion of a major interval is a minor interval, the inversion of an augmented interval is a diminished interval.
- Major and minor
As shown in the table, a diatonic scale defines seven intervals for each interval number, each starting from a different note (seven unisons, seven seconds, etc.). The intervals formed by the notes of a diatonic scale are called diatonic. Except for unisons and octaves, the diatonic intervals with a given interval number always occur in two sizes, which differ by one semitone. For example, six of the fifths span seven semitones. The other one spans six semitones. Four of the thirds span three semitones, the others four. If one of the two versions is a perfect interval, the other is called either diminished (i.e. narrowed by one semitone) or augmented (i.e. widened by one semitone). Otherwise, the larger version is called major, the smaller one minor. For instance, since a 7-semitone fifth is a perfect interval (P5), the 6-semitone fifth is called "diminished fifth" (d5). Conversely, since neither kind of third is perfect, the larger one is called "major third" (M3), the smaller one "minor third" (m3).
Within a diatonic scale, unisons and octaves are always qualified as perfect, fourths as either perfect or augmented, fifths as perfect or diminished, and all the other intervals (seconds, thirds, sixths, sevenths) as major or minor.
- Augmented and diminished
Augmented intervals are wider by one semitone than perfect or major intervals, while having the same interval number (i.e., encompassing the same number of staff positions). Diminished intervals are narrower by one semitone than perfect or minor intervals of the same interval number. For instance, an augmented third such as C–E♯ spans five semitones, exceeding a major third (C–E) by one semitone, while a diminished third such as C♯–E♭ spans two semitones, falling short of a minor third (C–E♭) by one semitone.
The augmented fourth (A4) and the diminished fifth (d5) are the only augmented and diminished intervals that appear in diatonic scales (see table).
Neither the number, nor the quality of an interval can be determined by counting semitones alone. As explained above, the number of staff positions must be taken into account as well.
- A♭–B♯ is a second, as it encompasses two staff positions (A, B), and it is doubly augmented, as it exceeds a major second (such as A–B) by two semitones.
- A–C♯ is a third, as it encompasses three staff positions (A, B, C), and it is major, as it spans 4 semitones.
- A–D♭ is a fourth, as it encompasses four staff positions (A, B, C, D), and it is diminished, as it falls short of a perfect fourth (such as A–D) by one semitone.
- A♯–E♭♭ is a fifth, as it encompasses five staff positions (A, B, C, D, E), and it is triply diminished, as it falls short of a perfect fifth (such as A–E) by three semitones.
|Number of semitones||Interval name||Staff positions|
|4||doubly augmented second||A♭||B♯|
|4||major third||A||C♯|
|4||diminished fourth||A||D♭|
|4||triply diminished fifth||A♯||E♭♭|
Intervals are often abbreviated with a P for perfect, m for minor, M for major, d for diminished, A for augmented, followed by the interval number. The indication M and P are often omitted. The octave is P8, and a unison is usually referred to simply as "a unison" but can be labeled P1. The tritone, an augmented fourth or diminished fifth is often TT. The interval qualities may be also abbreviated with perf, min, maj, dim, aug. Examples:
- m2 (or min2): minor second,
- M3 (or maj3): major third,
- A4 (or aug4): augmented fourth,
- d5 (or dim5): diminished fifth,
- P5 (or perf5): perfect fifth.
A simple interval (i.e., an interval smaller than or equal to an octave) may be inverted by raising the lower pitch an octave, or lowering the upper pitch an octave. For example, the fourth from a lower C to a higher F may be inverted to make a fifth, from a lower F to a higher C.
There are two rules to determine the number and quality of the inversion of any simple interval:
- The interval number and the number of its inversion always add up to nine (4 + 5 = 9, in the example just given).
- The inversion of a major interval is a minor interval, and vice versa; the inversion of a perfect interval is also perfect; the inversion of an augmented interval is a diminished interval, and vice versa; the inversion of a doubly augmented interval is a doubly diminished interval, and vice versa.
For example, the interval from C to the E♭ above it is a minor third. By the two rules just given, the interval from E♭ to the C above it must be a major sixth.
Since compound intervals are larger than an octave, "the inversion of any compound interval is always the same as the inversion of the simple interval from which it is compounded."
For intervals identified by their ratio, the inversion is determined by reversing the ratio and multiplying by 2. For example, the inversion of a 5:4 ratio is an 8:5 ratio.
For intervals identified by an integer number of semitones, the inversion is obtained by subtracting that number from 12.
Since an interval class is the lower number selected among the interval integer and its inversion, interval classes cannot be inverted.
Intervals can be described, classified, or compared with each other according to various criteria.
Melodic and harmonic
An interval can be described as
- Vertical or harmonic if the two notes sound simultaneously
- Horizontal, linear, or melodic if they sound successively.
Diatonic and chromatic
- A diatonic interval is an interval formed by two notes of a diatonic scale.
- A chromatic interval is a non-diatonic interval formed by two notes of a chromatic scale.
The table above depicts the 56 diatonic intervals formed by the notes of the C major scale (a diatonic scale). Notice that these intervals, as well as any other diatonic interval, can be also formed by the notes of a chromatic scale.
The distinction between diatonic and chromatic intervals is controversial, as it is based on the definition of diatonic scale, which is variable in the literature. For example, the interval B–E♭ (a diminished fourth, occurring in the harmonic C-minor scale) is considered diatonic if the harmonic minor scales are considered diatonic as well. Otherwise, it is considered chromatic. For further details, see the main article.
By a commonly used definition of diatonic scale (which excludes the harmonic minor and melodic minor scales), all perfect, major and minor intervals are diatonic. Conversely, no augmented or diminished interval is diatonic, except for the augmented fourth and diminished fifth.
The distinction between diatonic and chromatic intervals may be also sensitive to context. The above-mentioned 56 intervals formed by the C-major scale are sometimes called diatonic to C major. All other intervals are called chromatic to C major. For instance, the perfect fifth A♭–E♭ is chromatic to C major, because A♭ and E♭ are not contained in the C major scale. However, it is diatonic to others, such as the A♭ major scale.
Consonant and dissonant
Consonance and dissonance are relative terms that refer to the stability, or state of repose, of particular musical effects. Dissonant intervals are those that cause tension, and desire to be resolved to consonant intervals.
These terms are relative to the usage of different compositional styles.
- In 15th- and 16th-century usage, perfect fifths and octaves, and major and minor thirds and sixths were considered harmonically consonant, and all other intervals dissonant, including the perfect fourth, which by 1473 was described (by Johannes Tinctoris) as dissonant, except between the upper parts of a vertical sonority—for example, with a supporting third below ("6-3 chords"). In the common practice period, it makes more sense to speak of consonant and dissonant chords, and certain intervals previously thought to be dissonant (such as minor sevenths) became acceptable in certain contexts. However, 16th-century practice continued to be taught to beginning musicians throughout this period.
- Hermann von Helmholtz (1821–1894) defined a harmonically consonant interval as one in which the two pitches have an upper partial (an overtone) in common. This essentially defines all seconds and sevenths as dissonant, and the above thirds, fourths, fifths, and sixths as consonant.
- David Cope (1997) suggests the concept of interval strength, in which an interval's strength, consonance, or stability is determined by its approximation to a lower and stronger, or higher and weaker, position in the harmonic series. See also: Lipps–Meyer law and #Interval root
All of the above analyses refer to vertical (simultaneous) intervals.
Simple and compound
A simple interval is an interval spanning at most one octave (see Main intervals above). Intervals spanning more than one octave are called compound intervals, as they can be obtained by adding one or more octaves to a simple interval (see below for details).
Steps and skips
Linear (melodic) intervals may be described as steps or skips. A step, or conjunct motion, is a linear interval between two consecutive notes of a scale. Any larger interval is called a skip (also called a leap), or disjunct motion. In the diatonic scale, a step is either a minor second (sometimes also called half step) or major second (sometimes also called whole step), with all intervals of a minor third or larger being skips.
For example, C to D (major second) is a step, whereas C to E (major third) is a skip.
More generally, a step is a smaller or narrower interval in a musical line, and a skip is a wider or larger interval, with the categorization of intervals into steps and skips determined by the tuning system and the pitch space used.
Melodic motion in which the interval between any two consecutive pitches is no more than a step, or, less strictly, where skips are rare, is called stepwise or conjunct melodic motion, as opposed to skipwise or disjunct melodic motions, characterized by frequent skips.
Two intervals are considered to be enharmonic, or enharmonically equivalent, if they both contain the same pitches spelled in different ways; that is, if the notes in the two intervals are themselves enharmonically equivalent. Enharmonic intervals span the same number of semitones.
For example, the four intervals listed in the table below are all enharmonically equivalent, because the notes F♯ and G♭ indicate the same pitch, and the same is true for A♯ and B♭. All these intervals span four semitones.
|Number of semitones||Interval name||Staff positions|
|4||major third||F♯||A♯|
|4||major third||G♭||B♭|
|4||diminished fourth||F♯||B♭|
|4||doubly augmented second||G♭||A♯|
When played on a piano keyboard, these intervals are indistinguishable as they are all played with the same two keys, but in a musical context the diatonic function of the notes incorporated is very different.
There are also a number of minute intervals not found in the chromatic scale or labeled with a diatonic function, which have names of their own. They may be described as microtones, and some of them can be also classified as commas, as they describe small discrepancies, observed in some tuning systems, between enharmonically equivalent notes. In the following list, the interval sizes in cents are approximate.
- A Pythagorean comma is the difference between twelve justly tuned perfect fifths and seven octaves. It is expressed by the frequency ratio 531441:524288 (23.5 cents).
- A syntonic comma is the difference between four justly tuned perfect fifths and two octaves plus a major third. It is expressed by the ratio 81:80 (21.5 cents).
- A septimal comma is 64:63 (27.3 cents), and is the difference between the Pythagorean or 3-limit "7th" and the "harmonic 7th".
- A diesis is generally used to mean the difference between three justly tuned major thirds and one octave. It is expressed by the ratio 128:125 (41.1 cents). However, it has been used to mean other small intervals: see diesis for details.
- A diaschisma is the difference between three octaves and four justly tuned perfect fifths plus two justly tuned major thirds. It is expressed by the ratio 2048:2025 (19.6 cents).
- A schisma (also skhisma) is the difference between five octaves and eight justly tuned fifths plus one justly tuned major third. It is expressed by the ratio 32805:32768 (2.0 cents). It is also the difference between the Pythagorean and syntonic commas. (A schismic major third is a schisma different from a just major third, eight fifths down and five octaves up, F♭ in C.)
- A kleisma is the difference between six minor thirds and one tritave or perfect twelfth (an octave plus a perfect fifth), with a frequency ratio of 15625:15552 (8.1 cents) ( Play (help·info)).
- A septimal kleisma is six major thirds up, five fifths down and one octave up, with ratio 225:224 (7.7 cents).
- A quarter tone is half the width of a semitone, which is half the width of a whole tone. In equal temperament it is equal to exactly 50 cents.
In general, a compound interval may be defined by a sequence or "stack" of two or more simple intervals of any kind. For instance, a major tenth (two staff positions above one octave), also called compound major third, spans one octave plus one major third.
Any compound interval can be always decomposed into one or more octaves plus one simple interval. For instance, a major seventeenth can be decomposed into two octaves and one major third, and this is the reason why it is called a compound major third, even when it is built by adding up four fifths.
The diatonic number DNc of a compound interval formed from n simple intervals with diatonic numbers DN1, DN2, ..., DNn, is determined by:
DNc = 1 + (DN1 − 1) + (DN2 − 1) + ... + (DNn − 1),
which can also be written as:
DNc = DN1 + DN2 + ... + DNn − (n − 1).
The quality of a compound interval is determined by the quality of the simple interval on which it is based. For instance, a compound major third is a major tenth (1+(8–1)+(3–1) = 10), or a major seventeenth (1+(8–1)+(8–1)+(3–1) = 17), and a compound perfect fifth is a perfect twelfth (1+(8–1)+(5–1) = 12) or a perfect nineteenth (1+(8–1)+(8–1)+(5–1) = 19). Notice that two octaves are a fifteenth, not a sixteenth (1+(8–1)+(8–1) = 15). Similarly, three octaves are a twenty-second (1+3*(8–1) = 22), and so on.
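As a small illustration of the formula above (the function name and examples are ours), the diatonic number of a stack of simple intervals can be computed directly:

    #include <stdio.h>

    /* Diatonic number of a compound interval built from n simple intervals:
       DNc = 1 + (DN1 - 1) + (DN2 - 1) + ... + (DNn - 1). */
    static int compound_number(const int *dn, int n)
    {
        int result = 1;
        for (int i = 0; i < n; i++)
            result += dn[i] - 1;
        return result;
    }

    int main(void)
    {
        int octave_plus_third[] = {8, 3};          /* major tenth */
        int two_octaves_plus_third[] = {8, 8, 3};  /* major seventeenth */
        printf("octave + third      -> %d\n", compound_number(octave_plus_third, 2));
        printf("two octaves + third -> %d\n", compound_number(two_octaves_plus_third, 3));
        return 0;
    }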
Main compound intervals
|Number of semitones||Minor, major, or perfect intervals||Short||Augmented or diminished intervals||Short|
|13||Minor ninth||m9||Augmented octave||A8|
|14||Major ninth||M9||Diminished tenth||d10|
|15||Minor tenth||m10||Augmented ninth||A9|
|16||Major tenth||M10||Diminished eleventh||d11|
|17||Perfect eleventh||P11||Augmented tenth||A10|
|19||Perfect twelfth or Tritave||P12||Diminished thirteenth||d13|
|20||Minor thirteenth||m13||Augmented twelfth||A12|
|21||Major thirteenth||M13||Diminished fourteenth||d14|
|22||Minor fourteenth||m14||Augmented thirteenth||A13|
|23||Major fourteenth||M14||Diminished fifteenth||d15|
|24||Perfect fifteenth or Double octave||P15||Augmented fourteenth||A14|
It is also worth mentioning here the major seventeenth (28 semitones), an interval larger than two octaves which can be considered a multiple of a perfect fifth (7 semitones) as it can be decomposed into four perfect fifths (7 * 4 = 28 semitones), or two octaves plus a major third (12 + 12 + 4 = 28 semitones). Intervals larger than a major seventeenth seldom need to be spoken of, most often being referred to by their compound names, for example "two octaves plus a fifth" rather than "a 19th".
Intervals in chords
Chords are sets of three or more notes. They are typically defined as the combination of intervals starting from a common note called the root of the chord. For instance a major triad is a chord containing three notes defined by the root and two intervals (major third and perfect fifth). Sometimes even a single interval (dyad) is considered to be a chord. Chords are classified based on the quality and number of the intervals which define them.
Chord qualities and interval qualities
The main chord qualities are: major, minor, augmented, diminished, half-diminished, and dominant. The symbols used for chord quality are similar to those used for interval quality (see above). In addition, + or aug is used for augmented, ° or dim for diminished, ø for half diminished, and dom for dominant (the symbol − alone is not used for diminished).
Deducing component intervals from chord names and symbols
The main rules to decode chord names or symbols are summarized below. Further details are given at Rules to decode chord names and symbols.
- For 3-note chords (triads), major or minor always refer to the interval of the third above the root note, while augmented and diminished always refer to the interval of the fifth above root. The same is true for the corresponding symbols (e.g., Cm means Cm3, and C+ means C+5). Thus, the terms third and fifth and the corresponding symbols 3 and 5 are typically omitted. This rule can be generalized to all kinds of chords, provided the above-mentioned qualities appear immediately after the root note, or at the beginning of the chord name or symbol. For instance, in the chord symbols Cm and Cm7, m refers to the interval m3, and 3 is omitted. When these qualities do not appear immediately after the root note, or at the beginning of the name or symbol, they should be considered interval qualities, rather than chord qualities. For instance, in Cm/M7 (minor major seventh chord), m is the chord quality and refers to the m3 interval, while M refers to the M7 interval. When the number of an extra interval is specified immediately after chord quality, the quality of that interval may coincide with chord quality (e.g., CM7 = CM/M7). However, this is not always true (e.g., Cm6 = Cm/M6, C+7 = C+/m7, CM11 = CM/P11). See main article for further details.
- Without contrary information, a major third interval and a perfect fifth interval (major triad) are implied. For instance, a C chord is a C major triad, and the name C minor seventh (Cm7) implies a minor 3rd by rule 1, a perfect 5th by this rule, and a minor 7th by definition (see below). This rule has one exception (see next rule).
- When the fifth interval is diminished, the third must be minor. This rule overrides rule 2. For instance, Cdim7 implies a diminished 5th by rule 1, a minor 3rd by this rule, and a diminished 7th by definition (see below).
- Names and symbols which contain only a plain interval number (e.g., “Seventh chord”) or the chord root and a number (e.g., “C seventh”, or C7) are interpreted as follows:
- If the number is 2, 4, 6, etc., the chord is a major added tone chord (e.g., C6 = CM6 = Cadd6) and contains, together with the implied major triad, an extra major 2nd, perfect 4th, or major 6th (see names and symbols for added tone chords).
- If the number is 7, 9, 11, 13, etc., the chord is dominant (e.g., C7 = Cdom7) and contains, together with the implied major triad, one or more of the following extra intervals: minor 7th, major 9th, perfect 11th, and major 13th (see names and symbols for seventh and extended chords).
- If the number is 5, the chord (technically not a chord in the traditional sense, but a dyad) is a power chord. Only the root, a perfect fifth and usually an octave are played.
The table shows the intervals contained in some of the main chords (component intervals), and some of the symbols used to denote them. The interval qualities or numbers in boldface font can be deduced from chord name or symbol by applying rule 1. In symbol examples, C is used as chord root.
|Main chords||Component intervals|
|Major triad||CM, or Cmaj||maj3||perf5|
|Minor triad||Cm, or Cmin||min3||perf5|
|Augmented triad||C+, or Caug||maj3||aug5|
|Diminished triad||C°, or Cdim||min3||dim5|
|Dominant seventh chord||C7, or Cdom7||maj3||perf5||min7|
|Minor seventh chord||Cm7, or Cmin7||min3||perf5||min7|
|Major seventh chord||CM7, or Cmaj7||maj3||perf5||maj7|
|Augmented seventh chord||C+7, Caug7, C7♯5, or C7aug5||maj3||aug5||min7|
|Diminished seventh chord||C°7, or Cdim7||min3||dim5||dim7|
|Half-diminished seventh chord||Cø7, Cm7♭5, or Cmin7dim5||min3||dim5||min7|
Size of intervals used in different tuning systems
Comparison of interval width (in cents) in different tuning systems
In this table, the interval widths used in four different tuning systems are compared. To facilitate comparison, just intervals as provided by 5-limit tuning (see symmetric scale n.1) are shown in bold font, and the values in cents are rounded to integers. Notice that in each of the non-equal tuning systems, by definition the width of each type of interval (including the semitone) changes depending on the note from which the interval starts. This is the price paid for seeking just intonation. However, for the sake of simplicity, for some types of interval the table shows only one value (the most often observed one).
In 1/4-comma meantone, by definition 11 perfect fifths have a size of approximately 697 cents (700−ε cents, where ε ≈ 3.42 cents); since the average size of the 12 fifths must equal exactly 700 cents (as in equal temperament), the other one must have a size of about 738 cents (700+11ε, the wolf fifth or diminished sixth); 8 major thirds have size about 386 cents (400−4ε), 4 have size about 427 cents (400+8ε, actually diminished fourths), and their average size is 400 cents. In short, similar differences in width are observed for all interval types, except for unisons and octaves, and they are all multiples of ε (the difference between the 1/4-comma meantone fifth and the average fifth). A more detailed analysis is provided at 1/4-comma meantone Size of intervals. Note that 1/4-comma meantone was designed to produce just major thirds, but only 8 of them are just (5:4, about 386 cents).
The Pythagorean tuning is characterized by smaller differences because they are multiples of a smaller ε (ε ≈ 1.96 cents, the difference between the Pythagorean fifth and the average fifth). Notice that here the fifth is wider than 700 cents, while in most meantone temperaments, including 1/4-comma meantone, it is tempered to a size smaller than 700. A more detailed analysis is provided at Pythagorean tuning#Size of intervals.
The 5-limit tuning system uses just tones and semitones as building blocks, rather than a stack of perfect fifths, and this leads to even more varied intervals throughout the scale (each kind of interval has three or four different sizes). A more detailed analysis is provided at 5-limit tuning#Size of intervals. Note that 5-limit tuning was designed to maximize the number of just intervals, but even in this system some intervals are not just (e.g., 3 fifths, 5 major thirds and 6 minor thirds are not just; also, 3 major and 3 minor thirds are wolf intervals).
The above-mentioned symmetric scale 1, defined in the 5-limit tuning system, is not the only method to obtain just intonation. It is possible to construct juster intervals or just intervals closer to the equal-tempered equivalents, but most of the ones listed above have been used historically in equivalent contexts. In particular, the asymmetric version of the 5-limit tuning scale provides a juster value for the minor seventh (9:5, rather than 16:9). Moreover, the tritone (augmented fourth or diminished fifth), could have other just ratios; for instance, 7:5 (about 583 cents) or 17:12 (about 603 cents) are possible alternatives for the augmented fourth (the latter is fairly common, as it is closer to the equal-tempered value of 600 cents). The 7:4 interval (about 969 cents), also known as the harmonic seventh, has been a contentious issue throughout the history of music theory; it is 31 cents flatter than an equal-tempered minor seventh. Some assert the 7:4 is one of the blue notes used in jazz. For further details about reference ratios, see 5-limit tuning#The justest ratios.
Although intervals are usually designated in relation to their lower note, David Cope and Hindemith both suggest the concept of interval root. To determine an interval's root, one locates its nearest approximation in the harmonic series. The root of a perfect fourth, then, is its top note because it is an octave of the fundamental in the hypothetical harmonic series. The bottom note of every odd diatonically numbered interval is its root, as are the tops of all even-numbered intervals. The root of a collection of intervals or a chord is thus determined by the interval root of its strongest interval.
As to its usefulness, Cope provides the example of the final tonic chord of some popular music being traditionally analyzable as a "submediant six-five chord" (added sixth chords by popular terminology), or a first inversion seventh chord (possibly the dominant of the mediant V/iii). According to the interval root of the strongest interval of the chord (in first inversion, CEGA), the perfect fifth (C–G), the root is the bottom C, the tonic.
Interval cycles, "unfold [i.e., repeat] a single recurrent interval in a series that closes with a return to the initial pitch class", and are notated by George Perle using the letter "C", for cycle, with an interval-class integer to distinguish the interval. Thus the diminished-seventh chord would be C3 and the augmented triad would be C4. A superscript may be added to distinguish between transpositions, using 0–11 to indicate the lowest pitch class in the cycle.
Alternative interval naming conventions
As shown below, some of the above-mentioned intervals have alternative names, and some of them take a specific alternative name in Pythagorean tuning, five-limit tuning, or meantone temperament tuning systems such as quarter-comma meantone. All the intervals with prefix sesqui- are justly tuned, and their frequency ratio, shown in the table, is a superparticular number (or epimoric ratio). The same is true for the octave.
Typically, a comma is a diminished second, but this is not always true (for more details, see Alternative definitions of comma). For instance, in Pythagorean tuning the diminished second is a descending interval (524288:531441, or about -23.5 cents), and the Pythagorean comma is its opposite (531441:524288, or about 23.5 cents). 5-limit tuning defines four kinds of comma, three of which meet the definition of diminished second, and hence are listed in the table below. The fourth one, called syntonic comma (81:80) can neither be regarded as a diminished second, nor as its opposite. See Diminished seconds in 5-limit tuning for further details.
|Generic names||Specific names|
|Number of semitones||Quality and number||Other naming convention||Pythagorean tuning||5-limit tuning||1/4-comma meantone|
|0||perfect unison, or perfect prime||P1|
|0||diminished second||d2|| || ||lesser diesis (128:125); greater diesis (648:625)|| |
|1||augmented unison, or augmented prime||A1|
|2||major second||M2||tone, whole tone, whole step||sesquioctavum (9:8)|
|3||minor third||m3||sesquiquintum (6:5)|
|4||major third||M3||sesquiquartum (5:4)|
|5||perfect fourth||P4||sesquitertium (4:3)|
|7||perfect fifth||P5||sesquialterum (3:2)|
|12||perfect octave||P8||duplex (2:1)|
Additionally, some cultures around the world have their own names for intervals found in their music. For instance, 22 kinds of intervals, called shrutis, are canonically defined in Indian classical music.
Up to the end of the 18th century, Latin was used as an official language throughout Europe for scientific and music textbooks. In music, many English terms are derived from Latin. For instance, semitone is from Latin semitonus.
The prefix semi- is typically used herein to mean "shorter", rather than "half". Namely, a semitonus, semiditonus, semidiatessaron, semidiapente, semihexachordum, semiheptachordum, or semidiapason, is shorter by one semitone than the corresponding whole interval. For instance, a semiditonus (3 semitones, or about 300 cents) is not half of a ditonus (4 semitones, or about 400 cents), but a ditonus shortened by one semitone. Moreover, in Pythagorean tuning (the most commonly used tuning system up to the 16th century), a semitritonus (d5) is smaller than a tritonus (A4) by one Pythagorean comma (about a quarter of a semitone).
In post-tonal or atonal theory, originally developed for equal-tempered European classical music written using the twelve-tone technique or serialism, integer notation is often used, most prominently in musical set theory. In this system, intervals are named according to the number of half steps, from 0 to 11, the largest interval class being 6.
In atonal or musical set theory, there are numerous types of intervals, the first being the ordered pitch interval, the distance between two pitches upward or downward. For instance, the interval from C upward to G is 7, and the interval from G downward to C is −7. One can also measure the distance between two pitches without taking into account direction with the unordered pitch interval, somewhat similar to the interval of tonal theory.
The interval between pitch classes may be measured with ordered and unordered pitch-class intervals. The ordered one, also called directed interval, may be considered the measure upwards, which, since we are dealing with pitch classes, depends on whichever pitch is chosen as 0. For unordered pitch-class intervals, see interval class.
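A small sketch (helper names are ours) of the four interval types just described, with pitches as MIDI-style integers and pitch classes as integers mod 12:

```python
# Sketch: ordered/unordered pitch intervals and pitch-class intervals.

def ordered_pitch_interval(a, b):        # signed distance, e.g. C4 up to G4 = +7
    return b - a

def unordered_pitch_interval(a, b):      # distance ignoring direction
    return abs(b - a)

def ordered_pc_interval(a, b):           # directed interval between pitch classes
    return (b - a) % 12

def interval_class(a, b):                # unordered pitch-class interval, 0..6
    i = (b - a) % 12
    return min(i, 12 - i)

print(ordered_pitch_interval(60, 67))    # 7   (C up to G)
print(ordered_pitch_interval(67, 60))    # -7  (G down to C)
print(ordered_pc_interval(7, 0))         # 5
print(interval_class(7, 0))              # 5
```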
Generic and specific intervals
In diatonic set theory, specific and generic intervals are distinguished. Specific intervals are the interval class or number of semitones between scale steps or collection members, and generic intervals are the number of diatonic scale steps (or staff positions) between notes of a collection or scale.
Notice that staff positions, when used to determine the conventional interval number (second, third, fourth, etc.), are counted including the position of the lower note of the interval, while generic interval numbers are counted excluding that position. Thus, generic interval numbers are smaller by 1, with respect to the conventional interval numbers.
(Table: specific intervals, given as number of semitones and interval class, compared with the corresponding generic intervals and their diatonic names.)
Generalizations and non-pitch uses
The term "interval" can also be generalized to other music elements besides pitch. David Lewin's Generalized Musical Intervals and Transformations uses interval as a generic measure of distance between time points, timbres, or more abstract musical phenomena.
- Music and mathematics
- Circle of fifths
- List of pitch intervals
- List of meantone intervals
- Ear training
- Regular temperament
- Prout, Ebenezer (1903), "I-Introduction", Harmony, Its Theory And Practise (30th edition, revised and largely rewritten ed.), London: Augener; Boston: Boston Music Co., p. 1, ISBN 978-0781207836
- Lindley, Mark/Campbell, Murray/Greated, Clive. "Interval". In Macy, Laura. Grove Music Online. Oxford Music Online. Oxford University Press. (subscription required)
- Aldwell, E; Schachter, C.; Cadwallader, A., "Part 1: The Primary Materials and Procedures, Unit 1", Harmony and Voice Leading (4th ed.), Schirmer, p. 8, ISBN 978-0495189756
- Duffin, Ross W. (2007), "3. Non-keyboard tuning", How Equal Temperament Ruined Harmony (and Why You Should Care) (1st ed.), W. W. Norton, ISBN 978-0-393-33420-3
- "Prime (ii). See Unison" (from Prime. Grove Music Online. Oxford University Press. Accessed August 2013. (subscription required))
- The term Tritone is sometimes used more strictly as a synonym of augmented fourth (A4).
- The perfect and the augmented unison are also known as perfect and augmented prime.
- The minor second (m2) is sometimes called diatonic semitone, while the augmented unison (A1) is sometimes called chromatic semitone.
- The expression diatonic scale is herein strictly defined as a 7-tone scale which is either a sequence of successive natural notes (such as the C-major scale, C–D–E–F–G–A–B, or the A-minor scale, A–B–C–D–E–F–G) or any transposition thereof. In other words, a scale that can be written using seven consecutive notes without accidentals on a staff with a conventional key signature, or with no signature. This includes, for instance, the major and the natural minor scales, but does not include some other seven-tone scales, such as the melodic minor and the harmonic minor scales (see also Diatonic and chromatic).
- Definition of Perfect consonance in Godfrey Weber's General music teacher, by Godfrey Weber, 1841.
- Kostka, Stephen; Payne, Dorothy (2008). Tonal Harmony, p. 21. First Edition, 1984.
- Prout, Ebenezer (1903). Harmony: Its Theory and Practice, 16th edition. London: Augener & Co. (facsimile reprint, St. Clair Shores, Mich.: Scholarly Press, 1970), p. 10. ISBN 0-403-00326-1.
- See for example William Lovelock, The Rudiments of Music (New York: St Martin's Press; London: G. Bell, 1957):[page needed], reprinted 1966, 1970, and 1976 by G. Bell, 1971 by St Martins Press, 1981, 1984, and 1986 London: Bell & Hyman. ISBN 9780713507447 (pbk).
- Drabkin, William (2001). "Fourth". The New Grove Dictionary of Music and Musicians, second edition, edited by Stanley Sadie and John Tyrrell. London: Macmillan Publishers.
- Helmholtz, Hermann L. F. On the Sensations of Tone as a Theoretical Basis for the Theory of Music Second English Edition translated by Ellis, Alexander J. (1885) reprinted by Dover Publications with new introduction (1954) ISBN 0-486-60753-4, page 182d "Just as the coincidences of the two first upper partial tones led us to the natural consonances of the Octave and Fifth, the coincidences of higher upper partials would lead us to a further series of natural consonances."
- Cope, David (1997). Techniques of the Contemporary Composer, pp. 40–41. New York, New York: Schirmer Books. ISBN 0-02-864737-8.
- Wyatt, Keith (1998). Harmony & Theory... Hal Leonard Corporation. p. 77. ISBN 0-7935-7991-0.
- Bonds, Mark Evan (2006). A History of Music in Western Culture, p.123. 2nd ed. ISBN 0-13-193104-0.
- Aikin, Jim (2004). A Player's Guide to Chords and Harmony: Music Theory for Real-World Musicians, p. 24. ISBN 0-87930-798-6.
- Károlyi, Otto (1965), Introducing Music, p. 63. Hammondsworth (England), and New York: Penguin Books. ISBN 0-14-020659-0.
- General rule 1 achieves consistency in the interpretation of symbols such as CM7, Cm6, and C+7. Some musicians legitimately prefer to think that, in CM7, M refers to the seventh, rather than to the third. This alternative approach is legitimate, as both the third and seventh are major, yet it is inconsistent, as a similar interpretation is impossible for Cm6 and C+7 (in Cm6, m cannot possibly refer to the sixth, which is major by definition, and in C+7, + cannot refer to the seventh, which is minor). Both approaches reveal only one of the intervals (M3 or M7), and require other rules to complete the task. Whatever is the decoding method, the result is the same (e.g., CM7 is always conventionally decoded as C–E–G–B, implying M3, P5, M7). The advantage of rule 1 is that it has no exceptions, which makes it the simplest possible approach to decode chord quality.
According to the two approaches, some may format CM7 as CM7 (general rule 1: M refers to M3), and others as CM7 (alternative approach: M refers to M7). Fortunately, even CM7 becomes compatible with rule 1 if it is considered an abbreviation of CMM7, in which the first M is omitted. The omitted M is the quality of the third, and is deduced according to rule 2 (see above), consistently with the interpretation of the plain symbol C, which by the same rule stands for CM.
- All triads are tertian chords (chords defined by sequences of thirds), and a major third would produce in this case a non-tertian chord. Namely, the diminished fifth spans 6 semitones from root, thus it may be decomposed into a sequence of two minor thirds, each spanning 3 semitones (m3 + m3), compatible with the definition of tertian chord. If a major third were used (4 semitones), this would entail a sequence containing a major second (M3 + M2 = 4 + 2 semitones = 6 semitones), which would not meet the definition of tertian chord.
- Hindemith, Paul (1934). The Craft of Musical Composition. New York: Associated Music Publishers. Cited in Cope (1997), p. 40-41.
- Perle, George (1990). The Listening Composer, p. 21. California: University of California Press. ISBN 0-520-06991-9.
- Gioseffo Zarlino, Le Istitutione harmoniche ... nelle quali, oltre le materie appartenenti alla musica, si trovano dichiarati molti luoghi di Poeti, d'Historici e di Filosofi, si come nel leggerle si potrà chiaramente vedere (Venice, 1558): 162.
- J. F. Niermeyer, Mediae latinitatis lexicon minus: Lexique latin médiéval–français/anglais: A Medieval Latin–French/English Dictionary, abbreviationes et index fontium composuit C. van de Kieft, adiuvante G. S. M. M. Lake-Schoonebeek (Leiden: E. J. Brill, 1976): 955. ISBN 90-04-04794-8.
- Robert De Handlo: The Rules, and Johannes Hanboys, The Summa: A New Critical Text and Translation, edited and translated by Peter M. Lefferts. Greek & Latin Music Theory 7 (Lincoln: University of Nebraska Press, 1991): 193fn17. ISBN 0803279345.
- Roeder, John. "Interval Class". In Macy, Laura. Grove Music Online. Oxford Music Online. Oxford University Press. (subscription required)
- Lewin, David (1987). Generalized Musical Intervals and Transformations, for example sections 3.3.1 and 5.4.2. New Haven: Yale University Press. Reprinted Oxford University Press, 2007. ISBN 978-0-19-531713-8
- Ockelford, Adam (2005). Repetition in Music: Theoretical and Metatheoretical Perspectives, p. 7. ISBN 0-7546-3573-2. "Lewin posits the notion of musical 'spaces' made up of elements between which we can intuit 'intervals'....Lewin gives a number of examples of musical spaces, including the diatonic gamut of pitches arranged in scalar order; the 12 pitch classes under equal temperament; a succession of time-points pulsing at regular temporal distances one time unit apart; and a family of durations, each measuring a temporal span in time units....transformations of timbre are proposed that derive from changes in the spectrum of partials..."
- Gardner, Carl E. (1912). Essentials of Music Theory, p. 38, http://ia600309.us.archive.org/23/items/essentialsofmusi00gard/essentialsofmusi00gard.pdf
- Encyclopaedia Britannica, Interval
- Morphogenesis of chords and scales Chords and scales classification
- Lissajous Curves: Interactive simulation of graphical representations of musical intervals, beats, interference, vibrating strings
- Elements of Harmony: Vertical Intervals
- Visualisation of musical intervals interactive
- How intervals work, colored music notation. |
After studying this chapter you will be able to: explain how housing markets work and how price ceilings create housing shortages and inefficiency; explain how labor markets work and how minimum wage laws create unemployment and inefficiency; explain the effects of a tax; explain why farm prices and revenues fluctuate and how production subsidies and quotas influence farm production, costs, and prices; and explain how markets for illegal goods work.
Housing Markets and Rent Ceilings: The Market Response to a Decrease in Supply. Figure 6.1 shows the San Francisco housing market before the earthquake. The quantity of housing was 100,000 units and the rent was $16 a month at the intersection of the curves D and SS.
Housing Markets and Rent Ceilings: The earthquake decreased the supply of housing and the supply curve shifted leftward to SSA. The rent increased to $20 a month and the quantity decreased to 72,000 units.
Housing Markets and Rent Ceilings: Long-Run Adjustment. The long-run supply of housing is perfectly elastic at $16 a month. With the rent above $16 a month, new houses and apartments are built.
Housing Markets and Rent Ceilings: The building program increases supply and the supply curve shifts rightward. The quantity of housing increases and the rent falls to the pre-earthquake level (other things remaining the same).
Housing Markets and Rent Ceilings: A Regulated Housing Market. A price ceiling is a regulation that makes it illegal to charge a price higher than a specified level. When a price ceiling is applied to a housing market it is called a rent ceiling. If the rent ceiling is set above the equilibrium rent, it has no effect; the market works as if there were no ceiling. But if the rent ceiling is set below the equilibrium rent, it has powerful effects.
Housing Markets and Rent Ceilings: Figure 6.2 shows the effects of a rent ceiling that is set below the equilibrium rent. The equilibrium rent is $20 a month. A rent ceiling is set at $16 a month. So the equilibrium rent is in the illegal region.
Housing Markets and Rent Ceilings: At the rent ceiling, the quantity of housing demanded exceeds the quantity supplied and there is a housing shortage.
Housing Markets and Rent Ceilings: With a housing shortage, people are willing to pay $24 a month. Because the legal price cannot eliminate the shortage, other mechanisms operate: search activity and black markets.
Housing Markets and Rent Ceilings: Search Activity. The time spent looking for someone with whom to do business is called search activity. When a price is regulated and there is a shortage, search activity increases. Search activity is costly, and the opportunity cost of housing equals its rent (regulated) plus the opportunity cost of the search activity (unregulated). Because the quantity of housing is less than the quantity in an unregulated market, the opportunity cost of housing exceeds the unregulated rent.
Housing Markets and Rent Ceilings: Black Markets. A black market is an illegal market that operates alongside a legal market in which a price ceiling or other restriction has been imposed. A shortage of housing creates a black market in housing. Illegal arrangements are made between renters and landlords at rents above the rent ceiling, and generally above what the rent would have been in an unregulated market.
Housing Markets and Rent Ceilings: Inefficiency of Rent Ceilings. A rent ceiling leads to an inefficient use of resources. The quantity of rental housing is less than the efficient quantity, so a deadweight loss arises. Figure 6.3 illustrates.
Housing Markets and Rent Ceilings: A rent ceiling decreases the quantity of rental housing. People use resources in search activity, which decreases producer surplus and consumer surplus, and a deadweight loss arises.
Housing Markets and Rent Ceilings: Are Rent Ceilings Fair? According to the fair-rules view, a rent ceiling is unfair because it blocks voluntary exchange. According to the fair-results view, a rent ceiling is unfair because it does not generally benefit the poor. A rent ceiling decreases the quantity of housing and allocates the scarce housing using lotteries, queues, and discrimination.
Housing Markets and Rent Ceilings: A lottery gives scarce housing to the lucky. A queue gives scarce housing to those who have the greatest foresight and get their names on the list first. Discrimination gives scarce housing to friends, family members, or those of the selected race or sex. None of these methods leads to a fair outcome.
Housing Markets and Rent Ceilings: Rent Ceilings in Practice. New York, San Francisco, London, Paris, and Boston have or have had rent ceilings. Atlanta, Baltimore, Chicago, Dallas, Philadelphia, Phoenix, and Seattle have never had them. Comparing cities with and without rent ceilings, we learn that: 1. rent ceilings definitely create a housing shortage; 2. rent ceilings lower rents for the lucky few and raise them for everyone else. Winners are long-standing residents; losers are mobile newcomers.
The Labor Market and Minimum Wage: New labor-saving technologies become available every year, and they mainly replace low-skilled labor. Does the persistent decrease in the demand for low-skilled labor depress the wage rates of these workers? The immediate effect of these technological advances is a decrease in the demand for low-skilled labor, a fall in the wage rate, and a decrease in the quantity of labor supplied. Figure 6.4 on the next slide illustrates this immediate effect.
The Labor Market and Minimum Wage: A decrease in the demand for low-skilled labor is shown by a leftward shift of the demand curve. A new labor market equilibrium arises at a lower wage rate and a smaller quantity of labor employed.
The Labor Market and Minimum Wage: In the long run, people get trained to do higher-skilled jobs. The supply of low-skilled labor decreases and the short-run supply curve shifts leftward. If long-run supply is perfectly elastic, the equilibrium wage rate returns to its initial level (other things remaining the same).
The Labor Market and Minimum Wage: A Minimum Wage. A price floor is a regulation that makes it illegal to trade at a price lower than a specified level. When a price floor is applied to labor markets, it is called a minimum wage. If the minimum wage is set below the equilibrium wage rate, it has no effect; the market works as if there were no minimum wage. If the minimum wage is set above the equilibrium wage rate, it has powerful effects.
The Labor Market and Minimum Wage: If the minimum wage is set above the equilibrium wage rate, the quantity of labor supplied by workers exceeds the quantity demanded by employers, so there is a surplus of labor. Because employers cannot be forced to hire a greater quantity than they wish, the quantity of labor hired at the minimum wage is less than the quantity that would be hired in an unregulated labor market. Because the legal wage rate cannot eliminate the surplus, the minimum wage creates unemployment. Figure 6.5 on the next slide illustrates these effects.
The Labor Market and Minimum Wage: The equilibrium wage rate is $4 an hour. The minimum wage rate is set at $5 an hour. So the equilibrium wage rate is in the illegal region. The quantity of labor employed is the quantity demanded.
The Labor Market and Minimum Wage: The quantity of labor supplied exceeds the quantity demanded. Unemployment is the gap between the quantity demanded and the quantity supplied. With only 20 million hours demanded, some workers are willing to supply the last hour demanded for $3.
The Labor Market and Minimum Wage: Inefficiency of a Minimum Wage. A minimum wage leads to an inefficient use of resources. The quantity of labor employed is less than the efficient quantity and there is a deadweight loss. Figure 6.6 illustrates this loss.
The Labor Market and Minimum Wage: A minimum wage decreases the quantity of labor employed. If resources are used in job-search activity, workers' surplus and firms' surplus decrease, and a deadweight loss arises.
The Labor Market and Minimum Wage: Federal Minimum Wage and Its Effects. A minimum wage rate in the United States is set by the federal government's Fair Labor Standards Act. In 2007, the federal minimum wage rate was $5.15 an hour. Some state governments have set minimum wages above the federal minimum wage rate. Most economists believe that minimum wage laws increase the unemployment rate of low-skilled younger workers.
The Labor Market and Minimum Wage: A Living Wage. A living wage has been defined as an hourly wage rate that enables a person who works a 40-hour week to rent adequate housing for not more than 30 percent of the amount earned. Living wage laws already operate in many cities such as St. Louis, Boston, Chicago, and New York City. The effects of a living wage are similar to those of a minimum wage.
Taxes: Everything you earn and most things you buy are taxed. Who really pays these taxes? Income tax and the Employment Insurance tax are deducted from your pay, and provincial sales tax and GST are added to the price of the things you buy, so isn't it obvious that you pay these taxes? Isn't it equally obvious that your employer pays the employer's contribution to the Employment Insurance tax? You're going to discover that it isn't obvious who pays a tax and that lawmakers don't decide who will pay!
Taxes: Tax Incidence. Tax incidence is the division of the burden of a tax between the buyer and the seller. When an item is taxed, its price might rise by the full amount of the tax, by a lesser amount, or not at all. If the price rises by the full amount of the tax, the buyer pays the tax. If the price rises by a lesser amount than the tax, the buyer and seller share the burden of the tax. If the price doesn't rise at all, the seller pays the tax.
Taxes: Tax incidence doesn't depend on tax law! The law might impose a tax on the buyer or the seller, but the outcome will be the same. To see why, we look at the tax on cigarettes in New York City. On July 1, 2002, New York City raised the tax on the sale of cigarettes from almost nothing to $1.50 a pack. What are the effects of this tax?
Taxes: A Tax on Sellers. Figure 6.7 shows the effects of this tax. With no tax, the equilibrium price is $3.00 a pack. A tax on sellers of $1.50 a pack is introduced. Supply decreases and the curve S + tax on sellers shows the new supply curve.
Taxes: The market price paid by buyers rises to $4.00 a pack and the quantity bought decreases. The price received by the sellers falls to $2.50 a pack. So with the tax of $1.50 a pack, buyers pay $1.00 a pack more and sellers receive 50¢ a pack less.
Taxes: A Tax on Buyers. Again, with no tax, the equilibrium price is $3.00 a pack. A tax on buyers of $1.50 a pack is introduced. Demand decreases and the curve D − tax on buyers shows the new demand curve.
Taxes: The price received by sellers falls to $2.50 a pack and the quantity decreases. The price paid by buyers rises to $4.00 a pack. So with the tax of $1.50 a pack, buyers pay $1.00 a pack more and sellers receive 50¢ a pack less.
Taxes: So, exactly as before when the seller was taxed: the buyer pays $1.00 of the tax and the seller pays the other 50¢ of the tax. Tax incidence is the same regardless of whether the law says the seller pays or the buyer pays.
Taxes: Tax Division and Elasticity of Demand. The division of the tax between the buyer and the seller depends on the elasticities of demand and supply. To see how, we look at two extreme cases. Perfectly inelastic demand: the buyer pays the entire tax. Perfectly elastic demand: the seller pays the entire tax. The more inelastic the demand, the larger is the buyer's share of the tax.
Taxes: Demand for this good is perfectly inelastic—the demand curve is vertical. When a tax is imposed on this good, the buyer pays the entire tax.
Taxes: The demand for this good is perfectly elastic—the demand curve is horizontal. When a tax is imposed on this good, the seller pays the entire tax.
Taxes: Tax Division and Elasticity of Supply. To see the effect of the elasticity of supply on the division of the tax payment, we again look at two extreme cases. Perfectly inelastic supply: the seller pays the entire tax. Perfectly elastic supply: the buyer pays the entire tax. The more elastic the supply, the larger is the buyer's share of the tax.
Taxes: The supply of this good is perfectly inelastic—the supply curve is vertical. When a tax is imposed on this good, sellers pay the entire tax.
Taxes: The supply of this good is perfectly elastic—the supply curve is horizontal. When a tax is imposed on this good, buyers pay the entire tax.
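A small numerical sketch of these incidence results. The linear demand and supply curves below are illustrative assumptions (not the textbook's figures); the slopes stand in for elasticities, and the split of a per-unit tax follows from them.

```python
# Sketch: tax incidence with assumed linear curves.
# Demand: P = a - b*Q; supply: P = c + d*Q. A per-unit tax t on sellers
# shifts supply up by t; the relative slopes determine who bears the tax.

def incidence(a, b, c, d, t):
    q0 = (a - c) / (b + d)           # pre-tax equilibrium quantity
    p0 = a - b * q0                  # pre-tax equilibrium price
    q1 = (a - c - t) / (b + d)       # post-tax quantity
    p_buyer = a - b * q1             # price buyers pay
    p_seller = p_buyer - t           # price sellers keep
    return p0, p_buyer - p0, p0 - p_seller   # (old price, buyers' share, sellers' share)

# Steep (inelastic) demand, flat (elastic) supply: buyers bear most of the tax.
print(incidence(a=10, b=2.0, c=1, d=0.5, t=1.5))   # buyers pay 1.2, sellers 0.3
# Flat (elastic) demand, steep (inelastic) supply: sellers bear most of it.
print(incidence(a=10, b=0.5, c=1, d=2.0, t=1.5))   # buyers pay 0.3, sellers 1.2
```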
Taxes: Taxes in Practice. Taxes usually are levied on goods and services with an inelastic demand or an inelastic supply. Alcohol, tobacco, and gasoline have inelastic demand, so the buyers of these items pay most of the tax on them. Labor has a low elasticity of supply, so the seller—the worker—pays most of the income tax and most of the Social Security tax.
Taxes: Taxes and Efficiency. Except in the extreme cases of perfectly inelastic demand or perfectly inelastic supply, when the quantity remains the same, imposing a tax creates inefficiency. Figure 6.11 shows the inefficiency created by a $10 tax on CD players.
Taxes: With no tax, the market is efficient and total surplus (the sum of consumer surplus and producer surplus) is maximized. A tax shifts the supply curve, decreases the equilibrium quantity, raises the price to the buyer, and lowers the price to the seller.
Taxes: The tax revenue takes part of the consumer surplus and producer surplus. The decreased quantity creates a deadweight loss.
Markets for Illegal Goods: The U.S. government prohibits trade in some goods, such as illegal drugs. Yet markets exist for illegal goods and services. How does the market for an illegal good work? To see how, we begin by looking at a free market and then see the changes that occur when the good is made illegal.
Markets for Illegal Goods: A Free Market for a Drug. Figure 6.15 shows the market for a drug such as marijuana. Market equilibrium is at point E. The price is PC and the quantity is QC.
Markets for Illegal Goods: A Market for an Illegal Drug. Prohibiting transactions in a good or service raises the cost of such trading. If sellers and/or buyers of an illegal drug are penalized, then the cost of trading the drug increases. Figure 6.15 shows the effect of these penalties.
Markets for Illegal Goods: Penalties on Sellers. If the penalty on the seller is the amount HK, then the quantity supplied at a market price of PC is QP. Supply of the drug decreases to S + CBL. The new equilibrium is at point F. The price rises and the quantity decreases.
Markets for Illegal Goods: Penalties on Buyers. If the penalty on the buyer is the amount JH, the quantity demanded at a market price of PC is QP. Demand for the drug decreases to D − CBL. The new equilibrium is at point G. The market price falls and the quantity decreases.
Markets for Illegal Goods: But the opportunity cost of buying this illegal good rises above PC because the buyer pays the market price plus the cost of breaking the law.
Markets for Illegal Goods: Penalties on Both Sellers and Buyers. Now suppose that both buyers and sellers are penalized for trading in the illegal drug. Both the demand for the drug and the supply of the drug decrease.
Markets for Illegal Goods: The new equilibrium is at point H. The quantity decreases to QP. The market price is PC. The buyer pays PB and the seller receives PS.
Markets for Illegal Goods: Legalizing and Taxing Drugs. An illegal good can be legalized and taxed. A high enough tax rate would decrease consumption to the level that occurs when trade is illegal. Arguments that extend beyond economics surround this choice.
Common Core Standards: Math
High School: Functions
Interpreting Functions HSF-IF.A.1
1. Understand that a function from one set (called the domain) to another set (called the range) assigns to each element of the domain exactly one element of the range. If f is a function and x is an element of its domain, then f(x) denotes the output of f corresponding to the input x. The graph of f is the graph of the equation y = f(x).
A function is like any other system. What you get out of the system depends on what you put into it. Think of the human body. We put food into our digestive system and we get something very different out. (Gross.) What we put into our bodies affects what comes out and if you don't believe that, try eating beets or asparagus.
Students should understand that functions do the exact same thing, only with numbers. (Maybe not the exact same thing.) They're all about describing relationships between two sets of numbers. These two sets of numbers must have the condition that each item from the first set of numbers pairs with exactly one item from the second set of numbers.
In other words, for every input, there is exactly one output. That's like the functions' motto.
Students should know that functions can be expressed as a pair of input and output values. A relation is a set of pairs of input and output values, usually represented in ordered pairs. For instance, the ordered pair (1, 2) means that for an input value of 1, we get an output value of 2. The ordered pair (2, 3) means that if we input 2, we get 3 out. Typically, we represent this as (x, y).
The domain is the set of inputs in a relation, also called the x-coordinates of an ordered pair. The range is the set of outputs in a relation, also called the y-coordinates of an ordered pair. If students have a hard time remembering which is which, tell them to think alphabetically. Since D comes before R in the alphabet, the domain has to come before the range. If that doesn't work, the acronym "DIXROY" might. (Domain, input, x; Range, output, y.)
To start with, students can represent functions as several ordered pairs in the form of a table. One column will be the input, or x values, and the other will be the output, or y values. For instance, we can rewrite the three points of a function (-2, 3), (0, 4), and (1, -3) in the following table.
|x||y|
|-2||3|
|0||4|
|1||-3|
Here, we have our clearly defined domain (D: x = -2, 0, 1) and range (R: y = 3, 4, -3). Any table with the same x value resulting in multiple y values is not a function. Remember the functions' motto?
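A quick sketch (the helper name is ours) of checking the "one output per input" rule for a relation given as ordered pairs, using the table above:

```python
# Sketch: is a relation of (x, y) pairs a function?

def is_function(pairs):
    outputs = {}
    for x, y in pairs:
        if x in outputs and outputs[x] != y:
            return False                 # same input, two different outputs
        outputs[x] = y
    return True

relation = [(-2, 3), (0, 4), (1, -3)]
print(is_function(relation))             # True
print(is_function(relation + [(0, 7)]))  # False: x = 0 maps to both 4 and 7
print("Domain:", sorted({x for x, _ in relation}))
print("Range:",  sorted({y for _, y in relation}))
```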
When domains and ranges cover more than just a few select points, they're often written with parentheses or brackets. Parentheses indicate that the point on that end is not included, while brackets indicate that it is included. When the ∞ symbol is used, we use parentheses. Makes sense, since infinity isn't really a number and can't actually be reached.
As useful as tables are, many functions have domains and ranges that extend to positive and negative infinity. When the students' data tables start getting longer than their arms, we recommend switching to graphs. Spare a headache and a few trees, while they're at it.
A graph is a visual representation of relations. We plot the input values as x and the output values as y and treat the ordered pairs (x, y) as points on the coordinate plane. For the function above, we could plot the points as (-2, 3), (0, 4), and (1, -3).
Students should also know that functions can be represented by curves on the coordinate plane as y = f(x) where f(x) is some function of x. These are basically a bunch of points that are so close together that they form a continuous curve. For instance, these points could be part of a larger function shown by the graph below.
If students aren't sure whether they're looking at a function or not, they should perform the vertical line test: if they draw a vertical line on a graph of a relation and it intersects with the curve more than once, the relation is not a function.
The key concept to remember is that functions are systems in which one input corresponds to one output. Just like the human body is a system in which every meal corresponds to one trip to the bathroom. Or something like that.
- Evaluate a Defined Operator Numerically - Math Shack
- Find Domain: Range of Function Given - Math Shack
- Find the Domain of Line Segments on the Coordinate Plane - Math Shack
- Find the Range of Line Segments on the Coordinate Plane - Math Shack
- Range Given Linear Function and Domain - Math Shack
- Range Given Quadratic Function and Domain - Math Shack |
4.1 Introduction of Ratio
MEANING OF RATIO:
A ratio is one figure expressed in terms of another figure. It is a mathematical yardstick that measures the relationship between two figures which are related to each other and mutually interdependent. A ratio is expressed by dividing one figure by the other related figure. Thus a ratio is an expression relating one number to another. It is simply the quotient of two numbers. It can be expressed as a fraction, as a decimal, as a pure ratio, or in absolute figures as "so many times". An accounting ratio is an expression relating two figures or accounts, or two sets of account heads or groups, contained in the financial statements.
MEANING OF RATIO ANALYSIS:
Ratio analysis is a widely used tool of financial analysis. It can be used to compare the risk and return relationships of firms of different sizes. It is defined as the systematic use of ratios to interpret the financial statements so that the strengths and weaknesses of a firm, as well as its historical performance and current financial condition, can be determined. The term ratio refers to the numerical or quantitative relationship between two items/variables. This relationship can be expressed as (i) a percentage, say, net profits are 25 percent of sales (assuming net profits of Rs 25,000 and sales of Rs 1,00,000), (ii) a fraction (net profit is one-fourth of sales), and (iii) a proportion of numbers (the relationship between net profits and sales is 1:4). These alternative methods of expressing items which are related to each other are, for purposes of financial analysis, referred to as ratio analysis. It should be noted that computing the ratios does not add any information not already inherent in the above figures of profit and sales. What the ratios do is reveal the relationship in a more meaningful way, so as to enable equity investors, management, and lenders to make better investment and credit decisions.
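A small sketch using the figures quoted above (net profit Rs 25,000 and sales Rs 1,00,000) to show the three equivalent ways of expressing the same ratio:

```python
# Sketch: the same accounting ratio as a percentage, a fraction, and a proportion.
from fractions import Fraction

net_profit, sales = 25_000, 100_000

print(f"Percentage: {net_profit / sales:.0%}")          # 25%
print(f"Fraction:   {Fraction(net_profit, sales)}")     # 1/4
print(f"Proportion: 1:{sales // net_profit}")           # 1:4
```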
The rationale of ratio |
NCERT Solutions for Class 9 Maths Chapter 13 – Surface Areas and Volumes Exercise 13.4, includes step-wise solved problems from the NCERT textbook. The NCERT solutions are created by Maths subject experts along with proper geometric figures and explanations in a step-by-step procedure for good understanding. All the NCERT solutions for Science and Maths subjects are made available in PDF format, hence students can download them easily.
The NCERT solutions for class 9 maths are prepared as per the latest NCERT guidelines and syllabus. It is intended to help the students to score good marks in first and second term and competitive examinations.
Access Other Exercise Solutions of Class 9 Maths Chapter 13 – Surface Areas and Volumes
Exercise 13.1 solution (9 questions)
Exercise 13.2 solution (8 questions)
Exercise 13.3 solution (9 questions)
Exercise 13.5 solution (5 questions)
Exercise 13.6 solution (8 questions)
Exercise 13.7 solution (9 questions)
Exercise 13.8 solution (10 questions)
Exercise 13.9 solution (3 questions)
Access Answers to NCERT Class 9 Maths Chapter 13 – Surface Areas and Volumes Exercise 13.4
1. Find the surface area of a sphere of radius:
(i) 10.5cm (ii) 5.6cm (iii) 14cm
Formula: Surface area of sphere (SA) = 4πr²
(i) Radius of sphere, r = 10.5 cm
SA = 4×(22/7)×10.5² = 1386
Surface area of sphere is 1386 cm²
(ii) Radius of sphere, r = 5.6 cm
Using formula, SA = 4×(22/7)×5.6² = 394.24
Surface area of sphere is 394.24 cm²
(iii) Radius of sphere, r = 14 cm
SA = 4πr² = 4×(22/7)×14² = 2464
Surface area of sphere is 2464 cm²
2. Find the surface area of a sphere of diameter:
(i) 14cm (ii) 21cm (iii) 3.5cm
(Assume π = 22/7)
(i) Radius of sphere, r = diameter/2 = 14/2 cm = 7 cm
Formula for surface area of sphere = 4πr²
= 4×(22/7)×7² = 616
Surface area of a sphere is 616 cm²
(ii) Radius (r) of sphere = 21/2 = 10.5 cm
Surface area of sphere = 4πr²
= 4×(22/7)×10.5² = 1386
Surface area of a sphere is 1386 cm²
Therefore, the surface area of a sphere having diameter 21 cm is 1386 cm²
(iii) Radius (r) of sphere = 3.5/2 = 1.75 cm
Surface area of sphere = 4πr²
= 4×(22/7)×1.75² = 38.5
Surface area of a sphere is 38.5 cm²
3. Find the total surface area of a hemisphere of radius 10 cm. [Use π=3.14]
Radius of hemisphere, r = 10cm
Formula: Total surface area of hemisphere = 3πr²
= 3×3.14×10² = 942
The total surface area of the given hemisphere is 942 cm².
4. The radius of a spherical balloon increases from 7cm to 14cm as air is being pumped into it. Find the ratio of surface areas of the balloon in the two cases.
Let r1 and r2 be the radii of the spherical balloon before and after air is pumped into it, respectively. So
r1 = 7 cm
r2 = 14 cm
Now, required ratio = (initial surface area)/(surface area after pumping air into the balloon) = 4πr1²/4πr2²
= (7/14)² = (1/2)² = ¼
Therefore, the ratio between the surface areas is 1:4.
5. A hemispherical bowl made of brass has inner diameter 10.5cm. Find the cost of tin-plating it on the inside at the rate of Rs 16 per 100 cm2. (Assume π = 22/7)
Inner radius of hemispherical bowl, say r = diameter/2 = (10.5)/2 cm = 5.25 cm
Formula for surface area of hemispherical bowl = 2πr²
= 2×(22/7)×(5.25)² = 173.25
Surface area of hemispherical bowl is 173.25 cm²
Cost of tin-plating 100 cm² area = Rs 16
Cost of tin-plating 1 cm² area = Rs 16/100
Cost of tin-plating 173.25 cm² area = Rs (16×173.25)/100 = Rs 27.72
Therefore, the cost of tin-plating the inner side of the hemispherical bowl at the rate of Rs 16 per 100 cm² is Rs 27.72.
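A quick check of this answer in code (a sketch; the helper names are ours, not NCERT's):

```python
# Sketch: sphere/hemisphere surface-area formulas and the tin-plating cost.
from math import pi

def sphere_surface_area(r):
    return 4 * pi * r ** 2             # SA = 4πr²

def hemisphere_total_surface_area(r):
    return 3 * pi * r ** 2             # TSA = 3πr² (curved part plus flat top)

def plating_cost(area_cm2, rate_per_100_cm2=16):
    return area_cm2 * rate_per_100_cm2 / 100

r = 10.5 / 2                           # inner radius of the bowl, 5.25 cm
curved = 2 * pi * r ** 2               # inner curved surface only
print(round(curved, 2))                # ≈ 173.18 with π; 173.25 with π = 22/7
print(round(plating_cost(173.25), 2))  # 27.72 rupees
```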
6. Find the radius of a sphere whose surface area is 154 cm2. (Assume π = 22/7)
Let the radius of the sphere be r.
Surface area of sphere = 154 cm² (given)
4πr² = 154
r² = (154×7)/(4×22) = 49/4
r = 7/2 = 3.5
The radius of the sphere is 3.5 cm.
7. The diameter of the moon is approximately one fourth of the diameter of the earth.
Find the ratio of their surface areas.
If the diameter of the earth is d, then the diameter of the moon will be d/4 (as per the given statement).
Radius of earth = d/2
Radius of moon = ½ × d/4 = d/8
Surface area of moon = 4π(d/8)²
Surface area of earth = 4π(d/2)²
Ratio of their surface areas = 4π(d/8)²/4π(d/2)² = (d²/64)/(d²/4) = 4/64 = 1/16
The ratio between their surface areas is 1:16.
8. A hemispherical bowl is made of steel, 0.25 cm thick. The inner radius of the bowl is 5cm. Find the outer curved surface of the bowl. (Assume π =22/7)
Inner radius of hemispherical bowl = 5cm
Thickness of the bowl = 0.25 cm
Outer radius of hemispherical bowl = (5+0.25) cm = 5.25 cm
Formula for outer CSA of hemispherical bowl = 2πr², where r is the radius of the hemisphere
= 2×(22/7)×(5.25)² = 173.25
Therefore, the outer curved surface area of the bowl is 173.25 cm².
9. A right circular cylinder just encloses a sphere of radius r (see fig. 13.22). Find
(i) surface area of the sphere,
(ii) curved surface area of the cylinder,
(iii) ratio of the areas obtained in(i) and (ii).
(i) Surface area of sphere = 4πr², where r is the radius of the sphere
(ii) Height of cylinder, h = r + r = 2r
Radius of cylinder = r
CSA of cylinder formula = 2πrh = 2πr(2r) = 4πr² (using the value of h)
(iii) Ratio between areas = (Surface area of sphere)/(CSA of cylinder)
= 4πr²/4πr² = 1/1
Ratio of the areas obtained in (i) and (ii) is 1:1.
Exercise 13.4 of Class 9 Maths involves application level real-time problems that help students to think and apply the relevant formula. It helps to apply the total surface area of a sphere and hemisphere.
Learn the NCERT solutions of class 9 maths chapter 13 along with other learning materials and notes provided by SSC CGL APEX COACHING. The problems are solved in a detailed way with relevant formulas and figures, to score well in first and second term exams.
Key Features of NCERT Solutions for Class 9 Maths Chapter 13 – Surface Areas and Volume Exercise 13.4
- These NCERT Solutions help you solve and revise all questions of Exercise 13.4.
- Helps to find the radius of a sphere and the total surface area of hemisphere.
- It follows NCERT guidelines which help in preparing the students accordingly.
- Stepwise solutions given by our subject expert teachers will help you to secure more marks. |
Volume of Pyramid
The volume of pyramid is space occupied by it (or) it is defined as the number of unit cubes that can be fit into it. A pyramid is a polyhedron as its faces are made up of polygons. There are different types of pyramids such as a triangular pyramid, square pyramid, rectangular pyramid, pentagonal pyramid, etc that are named after their base, i.e., if the base of a pyramid is a square, it is called a square pyramid. All the side faces of a pyramid are triangles where one side of each triangle merges with a side of the base. Let us explore more about the volume of pyramid along with its formula, proof, and a few solved examples.
|1.||What is Volume of Pyramid?|
|2.||Volume of Pyramid Formula|
|3.||Volume Formulas of Different Types of Pyramids|
|4.||FAQs on Volume of Pyramid|
What is Volume of Pyramid?
The volume of a pyramid refers to the space enclosed between its faces. It is measured in cubic units such as cm³, m³, in³, etc. A pyramid is a three-dimensional shape where its base (a polygon) is joined to the vertex (apex) with the help of triangular faces. The perpendicular distance from the apex to the center of the polygon base is referred to as the height of the pyramid. A pyramid's name is derived from its base. For example, a pyramid with a square base is referred to as a square pyramid. Thus, the base area plays a major role in finding the volume of a pyramid. The volume of the pyramid is nothing but one-third of the product of the base area and its height.
Volume of Pyramid Formula
Let us consider a pyramid and a prism, each of which has a base area 'B' and height 'h'. We know that the volume of a prism is obtained by multiplying its base area by its height, i.e., the volume of the prism is Bh. The volume of a pyramid is one-third of the volume of the corresponding prism (i.e., their bases and heights are congruent). Thus,
Volume of pyramid = (1/3) (Bh), where
- B = Area of the base of the pyramid
- h = Height of the pyramid (which is also called "altitude")
The derivation of this formula involves calculus and one can learn it in higher grades.
Note: The triangle formed by the slant height (s), the altitude (h), and half the side length of the base (x/2) is a right-angled triangle and hence we can apply the Pythagoras theorem for this. Thus, (x/2)² + h² = s². We can use this while solving the problems of finding the volume of the pyramid given its slant height.
Volume Formulas of Different Types of Pyramids
From the earlier section, we have learned that the volume of a pyramid is (1/3) × (area of the base) × (height of the pyramid). Thus, to calculate the volume of a pyramid, we can use the areas of polygons formulas (as we know that the base of a pyramid is a polygon) to calculate the area of the base, and then by simply applying the above formula, we can calculate the volume of pyramid. Here, you can see the volume formulas of different types of pyramids such as the triangular pyramid, square pyramid, rectangular pyramid, pentagonal pyramid, and hexagonal pyramid and how they are derived.
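A small sketch of this recipe in code: compute the base area from the named base shape, then apply V = (1/3)·B·h. The shape names and keyword arguments are our own convention; the two calls reproduce Examples 1 and 2 below.

```python
# Sketch: volume of a pyramid as one-third of base area times height.
from math import sqrt

def base_area(shape, **dims):
    if shape == "square":
        return dims["side"] ** 2
    if shape == "rectangle":
        return dims["length"] * dims["width"]
    if shape == "triangle":
        return 0.5 * dims["base"] * dims["altitude"]
    if shape == "hexagon":                           # regular hexagon of side a
        return (3 * sqrt(3) / 2) * dims["side"] ** 2
    raise ValueError(f"unknown base shape: {shape}")

def pyramid_volume(shape, height, **dims):
    return base_area(shape, **dims) * height / 3

print(pyramid_volume("square", height=480, side=755))          # ≈ 91,204,000 (Example 1)
print(round(pyramid_volume("hexagon", height=9, side=6), 2))   # ≈ 280.59 (Example 2)
```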
Solved Examples on Volume of Pyramid
Example 1: Cheops pyramid in Egypt has a base measuring about 755 ft. × 755 ft. and its height is around 480 ft. Calculate its volume.
Cheops Pyramid is a square pyramid. Its base area (area of square) is,
B = 755 × 755 = 570,025 square feet.
The height of the pyramid is, h = 480 ft.
Using the volume of pyramid formula,
Volume of pyramid, V = (1/3) (Bh)
V = (1/3) × 570025 × 480
V = 91,204,000 cubic feet.
Answer: The volume of the Cheops pyramid is 91,204,000 cubic feet.
Example 2: A pyramid has a regular hexagon of side length 6 cm and height 9 cm. Find its volume.
The side length of the base (regular hexagon) is, a = 6.
The base area (area of regular hexagon) is,
B = (3√3/2) × a²
B = (3√3/2) × 6² ≈ 93.53 cm².
The height of the pyramid is h = 9 cm.
The volume of the hexagonal pyramid is,
V = (1/3) (Bh)
V = (1/3) × 93.53 × 9
V = 280.59 cm³
Answer: The volume of the pyramid is 280.59 cm³.
Example 3: Tim built a rectangular tent (that is of the shape of a rectangular pyramid) for his night camp. The base of the tent is a rectangle of side 6 units × 10 units and the height is 3 units. What is the volume of the tent?
The base area (area of rectangle) of the tent is, B = 6 × 10 = 60 square units.
The height of the tent is h = 3 units.
The volume of the tent using the volume of pyramid formula is,
V = (1/3) (Bh)
V = (1/3) × 60 × 3
V = 60 cubic units.
Answer: The volume of the tent = 60 cubic units.
FAQs on Volume of Pyramid
What Is Meant By Volume of Pyramid?
The volume of a pyramid is the space that a pyramid occupies. The volume of a pyramid whose base area is 'B' and whose height is 'h' is (1/3) (Bh) cubic units.
What Is the Volume of Pyramid With a Square Base?
If 'B' is the base area and 'h' is the height of a pyramid, then its volume is V = (1/3) (Bh) cubic units. Consider a square pyramid whose base is a square of length 'x'. Then the base area is B = x² and hence the volume of the pyramid with a square base is (1/3)(x²h) cubic units.
What Is the Volume of Pyramid With a Triangular Base?
To find the volume of a pyramid with a triangular base, first, we need to find its base area 'B' which can be found by applying a suitable area of triangle formula. If 'h' is the height of the pyramid, its volume is found using the formula V =(1/3) (Bh).
What Is the Volume of Pyramid With a Rectangular Base?
A pyramid whose base is a rectangle is a rectangular pyramid. Its base area 'B' is found by applying the area of the rectangle formula. i.e., if 'l' and 'w' are the dimensions of the base (rectangle), then its area is B = lw. If 'h' is the height of the pyramid, then its volume is V =(1/3) (Bh) = (1/3) lwh cubic units.
What Is the Formula To Find the Volume of Pyramid?
The volume of a pyramid is found using the formula V = (1/3) Bh, where 'B' is the base area and 'h' is the height of the pyramid. As we know the base of a pyramid is any polygon, we can apply the area of polygons formulas to find 'B'.
How To Find Volume of Pyramid With Slant Height?
If 'x' is the base length, 's' is the slant height, and 'h' is the height of a regular pyramid, then they satisfy the equation (the Pythagoras theorem) (x/2)² + h² = s². If we are given 'x' and 's', then we can find 'h' first using this equation and then apply the formula V = (1/3) Bh to find the volume of the pyramid, where 'B' is the base area of the pyramid.
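A short sketch (helper names are ours) of that procedure for a square-based pyramid:

```python
# Sketch: recover the height from the slant height via (x/2)² + h² = s²,
# then apply V = (1/3)·B·h with B = x² for a square base.
from math import sqrt

def height_from_slant(x, s):
    return sqrt(s ** 2 - (x / 2) ** 2)

def square_pyramid_volume_from_slant(x, s):
    h = height_from_slant(x, s)
    return (x ** 2) * h / 3

print(round(square_pyramid_volume_from_slant(x=6, s=5), 2))  # h = 4, so V = 48.0
```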
Why is There a 1/3 in the Formula for the Volume of Pyramid?
A cube of unit length can be divided into three congruent pyramids. So, the volume of pyramid is 1/3 of the volume of a cube. Hence, we have a 1/3 in the volume of pyramid. |
Elementary algebra encompasses the basic concepts of algebra. It is often contrasted with arithmetic: arithmetic deals with specified numbers, whilst algebra introduces variables (quantities without fixed values).
This use of variables entails use of algebraic notation and an understanding of the general rules of the operations introduced in arithmetic. Unlike abstract algebra, elementary algebra is not concerned with algebraic structures outside the realm of real and complex numbers.
It is typically taught to secondary school students and builds on their understanding of arithmetic. The use of variables to denote quantities allows general relationships between quantities to be formally and concisely expressed, and thus enables solving a broader scope of problems. Many quantitative relationships in science and mathematics are expressed as algebraic equations.
Main article: Mathematical notation
Algebraic notation describes the rules and conventions for writing mathematical expressions, as well as the terminology used for talking about parts of expressions. For example, the expression has the following components:
A coefficient is a numerical value, or letter representing a numerical constant, that multiplies a variable (the operator is omitted). A term is an addend or a summand, a group of coefficients, variables, constants and exponents that may be separated from the other terms by the plus and minus operators. Letters represent variables and constants. By convention, letters at the beginning of the alphabet (e.g. a, b, c) are typically used to represent constants, and those toward the end of the alphabet (e.g. x, y and z) are used to represent variables. They are usually printed in italics.
Algebraic operations work in the same way as arithmetic operations, such as addition, subtraction, multiplication, division and exponentiation, and are applied to algebraic variables and terms. Multiplication symbols are usually omitted, and implied when there is no space between two variables or terms, or when a coefficient is used. For example, 3 × x may be written as 3x, and x × y may be written xy.
Usually terms with the highest power (exponent) are written on the left; for example, x² is written to the left of x. When a coefficient is one, it is usually omitted (e.g. 1x² is written x²). Likewise, when the exponent (power) is one, it is usually omitted (e.g. 3x¹ is written 3x). When the exponent is zero, the result is always 1 (e.g. x⁰ is always rewritten to 1). However 0⁰, being undefined, should not appear in an expression, and care should be taken in simplifying expressions in which variables may appear in exponents.
Other types of notation are used in algebraic expressions when the required formatting is not available, or can not be implied, such as where only letters and symbols are available. As an illustration of this, while exponents are usually formatted using superscripts, e.g., x², in plain text, and in the TeX mark-up language, the caret symbol represents exponentiation, so x² is written as "x^2". This also applies to some programming languages such as Lua. In programming languages such as Ada, Fortran, Perl, Python and Ruby, a double asterisk is used, so x² is written as "x**2". Many programming languages and calculators use a single asterisk to represent the multiplication symbol, and it must be explicitly used; for example, 3x is written "3*x".
Main article: Variable (mathematics)
Elementary algebra builds on and extends arithmetic by introducing letters called variables to represent general (non-specified) numbers. This is useful for several reasons.
Algebraic expressions may be evaluated and simplified, based on the basic properties of arithmetic operations (addition, subtraction, multiplication, division and exponentiation).
Main article: Equation
An equation states that two expressions are equal using the symbol for equality, = (the equals sign). One of the best-known equations describes Pythagoras' law relating the lengths of the sides of a right angle triangle:
a² + b² = c²
This equation states that c², representing the square of the length of the side that is the hypotenuse, the side opposite the right angle, is equal to the sum (addition) of the squares of the other two sides whose lengths are represented by a and b.
An equation is the claim that two expressions have the same value and are equal. Some equations are true for all values of the involved variables (such as a + b = b + a); such equations are called identities. Conditional equations are true for only some values of the involved variables, e.g. x² − 1 = 8 is true only for x = 3 and x = −3. The values of the variables which make the equation true are the solutions of the equation and can be found through equation solving.
Another type of equation is an inequality. Inequalities are used to show that one side of the equation is greater, or less, than the other. The symbols used for this are a > b, where > represents 'greater than', and a < b, where < represents 'less than'. Just like standard equality equations, numbers can be added, subtracted, multiplied or divided. The only exception is that when multiplying or dividing by a negative number, the inequality symbol must be flipped.
By definition, equality is an equivalence relation, meaning it is reflexive (i.e. b = b), symmetric (i.e. if a = b then b = a), and transitive (i.e. if a = b and b = c then a = c). It also satisfies the important property that if two symbols are used for equal things, then one symbol can be substituted for the other in any true statement about the first and the statement will remain true. This implies the following properties: if a = b and c = d then a + c = b + d and ac = bd; and if a = b then a + c = b + c and ac = bc for any c.
The relations less than (<) and greater than (>) have the property of transitivity: if a < b and b < c, then a < c.
By reversing the inequation, < and > can be swapped; for example, a < b is equivalent to b > a.
Main article: Substitution (algebra)
See also: Substitution (logic)
Substitution is replacing the terms in an expression to create a new expression. Substituting 3 for a in the expression a*5 makes a new expression 3*5 with meaning 15. Substituting the terms of a statement makes a new statement. When the original statement is true independently of the values of the terms, the statement created by substitutions is also true. Hence, definitions can be made in symbolic terms and interpreted through substitution: if a² := a × a is meant as the definition of a² as the product of a with itself, substituting 3 for a informs the reader of this statement that 3² means 3 × 3 = 9. Often it's not known whether the statement is true independently of the values of the terms. And, substitution allows one to derive restrictions on the possible values, or show what conditions the statement holds under. For example, taking the statement x + 1 = 0, if x is substituted with 1, this implies 1 + 1 = 2 = 0, which is false, which implies that if x + 1 = 0 then x cannot be 1.
If x and y are integers, rationals, or real numbers, then xy = 0 implies x = 0 or y = 0. Consider abc = 0. Then, substituting a for x and bc for y, we learn a = 0 or bc = 0. Then we can substitute again, letting x = b and y = c, to show that if bc = 0 then b = 0 or c = 0. Therefore, if abc = 0, then a = 0 or (b = 0 or c = 0), so abc = 0 implies a = 0 or b = 0 or c = 0.
If the original fact were stated as "ab = 0 implies a = 0 or b = 0", then when saying "consider abc = 0," we would have a conflict of terms when substituting. Yet the above logic is still valid to show that if abc = 0 then a = 0 or b = 0 or c = 0 if, instead of letting a = a and b = bc, one substitutes a for a and b for bc (and with bc = 0, substituting b for a and c for b). This shows that substituting for the terms in a statement isn't always the same as letting the terms from the statement equal the substituted terms. In this situation it's clear that if we substitute an expression a into the a term of the original equation, the a substituted does not refer to the a in the statement "ab = 0 implies a = 0 or b = 0."
See also: Equation solving
The following sections lay out examples of some of the types of algebraic equations that may be encountered.
Main article: Linear equation
Linear equations are so-called because, when they are plotted, they describe a straight line. The simplest equations to solve are linear equations that have only one variable. They contain only constant numbers and a single variable without an exponent. As an example, consider: 2x + 4 = 12.
To solve this kind of equation, the technique is to add, subtract, multiply, or divide both sides of the equation by the same number in order to isolate the variable on one side of the equation. Once the variable is isolated, the other side of the equation is the value of the variable. This problem and its solution are as follows:
|1. Equation to solve:||2x + 4 = 12|
|2. Subtract 4 from both sides:||2x + 4 − 4 = 12 − 4|
|3. This simplifies to:||2x = 8|
|4. Divide both sides by 2:||2x/2 = 8/2|
|5. This simplifies to the solution:||x = 4|
In words: the child is 4 years old.
The general form of a linear equation with one variable can be written as ax + b = c, where a ≠ 0.
Following the same procedure (i.e. subtract b from both sides, and then divide by a), the general solution is given by x = (c − b)/a.
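A minimal sketch of this general solution in code (the function name is ours), checked against the worked example above:

```python
# Sketch: solving the general linear equation ax + b = c by the same two steps
# used above (subtract b from both sides, then divide by a).

def solve_linear(a, b, c):
    if a == 0:
        raise ValueError("not a linear equation in x: a must be non-zero")
    return (c - b) / a

print(solve_linear(2, 4, 12))   # 4.0, matching the example 2x + 4 = 12
```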
A linear equation with two variables has many (i.e. an infinite number of) solutions. For example, consider the relation between a father's age y and his son's age x when the father is 22 years older than the son: y = x + 22.
That cannot be worked out by itself. If the son's age was made known, then there would no longer be two unknowns (variables). The problem then becomes a linear equation with just one variable, that can be solved as described above.
To solve a linear equation with two variables (unknowns) requires two related equations. For example, if it was also revealed that in 10 years the father will be twice as old as his son, then y + 10 = 2(x + 10), that is, y = 2x + 10.
Now there are two related linear equations, each with two unknowns, which enables the production of a linear equation with just one variable, by subtracting one from the other (called the elimination method): subtracting y = x + 22 from y = 2x + 10 gives 0 = x − 12, that is, x = 12.
In other words, the son is aged 12, and since the father is 22 years older, he must be 34. In 10 years, the son will be 22, and the father will be twice his age, 44. This problem is illustrated on the associated plot of the equations.
For other ways to solve this kind of equations, see below, System of linear equations.
Main article: Quadratic equation
A quadratic equation is one which includes a term with an exponent of 2, for example, x², and no term with a higher exponent. The name derives from the Latin quadrus, meaning square. In general, a quadratic equation can be expressed in the form ax² + bx + c = 0, where a is not zero (if it were zero, then the equation would not be quadratic but linear). Because of this a quadratic equation must contain the term ax², which is known as the quadratic term. Hence a ≠ 0, and so we may divide by a and rearrange the equation into the standard form
x² + px + q = 0, where p = b/a and q = c/a. Solving this, by a process known as completing the square, leads to the quadratic formula
x = (−b ± √(b² − 4ac))/(2a),
where the symbol "±" indicates that both
x = (−b + √(b² − 4ac))/(2a) and x = (−b − √(b² − 4ac))/(2a)
are solutions of the quadratic equation.
Quadratic equations can also be solved using factorization (the reverse process of which is expansion, but for two linear terms is sometimes denoted foiling). As an example of factoring, consider x² + 3x − 10 = 0,
which is the same thing as (x + 5)(x − 2) = 0.
It follows from the zero-product property that either x = 2 or x = −5 are the solutions, since precisely one of the factors must be equal to zero. All quadratic equations will have two solutions in the complex number system, but need not have any in the real number system. For example, x² + 1 = 0
has no real number solution since no real number squared equals −1. Sometimes a quadratic equation has a root of multiplicity 2, such as x² + 2x + 1 = 0.
For this equation, −1 is a root of multiplicity 2. This means −1 appears twice, since the equation can be rewritten in factored form as (x + 1)(x + 1) = 0.
All quadratic equations have exactly two solutions in complex numbers (but they may be equal to each other), a category that includes real numbers, imaginary numbers, and sums of real and imaginary numbers. Complex numbers first arise in the teaching of quadratic equations and the quadratic formula. For example, the quadratic equation x² + x + 1 = 0 has the solutions x = (−1 + √−3)/2 and x = (−1 − √−3)/2.
Since √−3 is not any real number, both of these solutions for x are complex numbers.
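A small sketch (function name is ours) of the quadratic formula in code, using complex square roots so that equations with a negative discriminant still return their two roots:

```python
# Sketch: the quadratic formula for ax² + bx + c = 0, allowing complex roots.
import cmath

def solve_quadratic(a, b, c):
    if a == 0:
        raise ValueError("not quadratic: a must be non-zero")
    root = cmath.sqrt(b ** 2 - 4 * a * c)
    return (-b + root) / (2 * a), (-b - root) / (2 * a)

print(solve_quadratic(1, 2, 1))    # ((-1+0j), (-1+0j)): a root of multiplicity 2
print(solve_quadratic(1, 0, 1))    # (1j, -1j): no real solutions
print(solve_quadratic(1, 1, 1))    # two complex conjugate solutions
```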
Main article: Logarithm
An exponential equation is one which has the form for , which has solution
when . Elementary algebraic techniques are used to rewrite a given equation in the above way before arriving at the solution. For example, if
then, by subtracting 1 from both sides of the equation, and then dividing both sides by 3 we obtain
A logarithmic equation is an equation of the form for , which has solution
For example, if
then, by adding 2 to both sides of the equation, followed by dividing both sides by 4, we get
from which we obtain
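A small Python sketch of the same recipes (the specific numbers are illustrative, not the ones elided from the worked examples above): isolate the exponential or logarithmic term with ordinary algebra, then apply a logarithm or exponentiation to finish.

```python
import math

def solve_exponential(a, b):
    """Solve a**x = b for x:  x = log_a(b) = ln(b) / ln(a)."""
    return math.log(b) / math.log(a)

def solve_logarithmic(a, b):
    """Solve log_a(x) = b for x:  x = a**b."""
    return a ** b

x = solve_exponential(3, 10)    # solves 3**x = 10
print(x, 3 ** x)                # ~2.0959, and 3**x recovers 10

y = solve_logarithmic(5, 2)     # solves log_5(y) = 2
print(y, math.log(y, 5))        # 25, and log base 5 of 25 recovers 2
```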
A radical equation is one that includes a radical sign: square roots, cube roots and, in general, nth roots. Recall that an nth root can be rewritten in exponential format, so that the nth root of x is equivalent to x^(1/n). Combined with regular exponents (powers), the square root of x cubed can be rewritten as x^(3/2). So a common form of a radical equation is the nth root of x^m equal to a (equivalent to x^(m/n) = a), where m and n are integers. It has real solution(s):
|n is odd|n is even and a ≥ 0|n and m are even and a < 0|n is even, m is odd, and a < 0|
|x = (a^n)^(1/m)|x = ±(a^n)^(1/m)|no real solution|no real solution|
For example, if:
Main article: System of linear equations
There are different methods to solve a system of linear equations with two variables.
An example of solving a system of linear equations is the elimination method:
Multiplying the terms in the second equation by 2:
Adding the two equations together to get:
which simplifies to x = 2.
Since x = 2 is now known, it is then possible to deduce the value of y from either of the original two equations (by using 2 instead of x). The full solution to this problem is then
This is not the only way to solve this specific system; y could have been resolved before x.
Another way of solving the same system of linear equations is by substitution.
An expression for y can be deduced by using one of the two equations. Using the second equation:
Subtracting from each side of the equation:
and multiplying by −1:
Using this y value in the first equation in the original system:
Adding 2 on each side of the equation:
which simplifies to x = 2.
Using this value in one of the equations, the same solution as in the previous method is obtained.
This is not the only way to solve this specific system; in this case as well, y could have been solved before x.
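The same elimination recipe works for any pair of equations a1·x + b1·y = c1 and a2·x + b2·y = c2. Below is a minimal Python sketch of that general recipe (the coefficient names and the sample system are mine, not the elided example above):

```python
def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by elimination.

    Returns (x, y), or None when the determinant is zero (the system is
    inconsistent or has infinitely many solutions).
    """
    det = a1 * b2 - a2 * b1          # eliminating y leads to det * x = c1*b2 - c2*b1
    if det == 0:
        return None                  # no unique solution
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det    # eliminating x in the same way
    return x, y

# Made-up example: x + y = 3 and 2x - y = 0  ->  x = 1, y = 2
print(solve_2x2(1, 1, 3, 2, -1, 0))
```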
In the above example, a solution exists. However, there are also systems of equations which do not have any solution. Such a system is called inconsistent. An obvious example is
As 0≠2, the second equation in the system has no solution. Therefore, the system has no solution. However, not all inconsistent systems are recognized at first sight. As an example, consider the system
Multiplying both sides of the second equation by 2, and adding it to the first one, results in
which clearly has no solution.
There are also systems which have infinitely many solutions, in contrast to a system with a unique solution (meaning, a unique pair of values for x and y). For example:
Isolating y in the second equation:
And using this value in the first equation in the system:
The equality is true, but it does not provide a value for x. Indeed, one can easily verify (by just filling in some values of x) that for any x there is a solution as long as y takes the corresponding value. There is an infinite number of solutions for this system.
Systems with more variables than the number of linear equations are called underdetermined. Such a system, if it has any solutions, does not have a unique one but rather an infinitude of them. An example of such a system is
When trying to solve it, one is led to express some variables as functions of the others, if any solutions exist; but all solutions cannot be expressed numerically, because there are infinitely many of them (when there are any).
A system with a higher number of equations than variables is called overdetermined. If an overdetermined system has any solutions, necessarily some equations are linear combinations of the others. |
Event horizon
In astrophysics, an event horizon is a boundary beyond which events cannot affect an observer on the opposite side of it. An event horizon is most commonly associated with black holes, where gravitational forces are so strong that light cannot escape.
Any object approaching the horizon from the observer's side appears to slow down and never quite pass through the horizon, with its image becoming more and more redshifted as time elapses. This means that the wavelength of the light emitted from the object is getting longer as the object moves away from the observer. The notion of an event horizon was originally restricted to black holes; light originating inside an event horizon could cross it temporarily but would return. Later, in 1958, a strict definition was introduced by David Finkelstein: a boundary beyond which events cannot affect any outside observer at all, a definition that encompasses scenarios other than black holes. This strict definition of the event horizon has led to the information and firewall paradoxes; Stephen Hawking has therefore proposed that an apparent horizon be used instead, saying "gravitational collapse produces apparent horizons but no event horizons" and "The absence of event horizons mean that there are no black holes - in the sense of regimes from which light can't escape to infinity."
The black hole event horizon is teleological in nature, meaning that we need to know the entire future space-time of the universe to determine the current location of the horizon, which is essentially impossible. Because of the purely theoretical nature of the event horizon boundary, the traveling object does not necessarily experience strange effects and does, in fact, pass through the calculatory boundary in a finite amount of proper time.
More specific types of horizon include the related but distinct absolute and apparent horizons found around a black hole. Still other distinct notions include the Cauchy and Killing horizons; the photon spheres and ergospheres of the Kerr solution; particle and cosmological horizons relevant to cosmology; and isolated and dynamical horizons important in current black hole research.
Event horizon of a black hole
Far away from the black hole a particle can move in any direction. It is only restricted by the speed of light.
Closer to the black hole spacetime starts to deform. In some convenient coordinate systems, there are more paths going towards the black hole than paths moving away.[Note 1]
Inside the event horizon all paths bring the particle closer to the center of the black hole. It is no longer possible for the particle to escape.
One of the best-known examples of an event horizon derives from general relativity's description of a black hole, a celestial object so massive that no nearby matter or radiation can escape its gravitational field. Often, this is described as the boundary within which the black hole's escape velocity is greater than the speed of light. However, a more accurate description is that within this horizon, all lightlike paths (paths that light could take) and hence all paths in the forward light cones of particles within the horizon, are warped so as to fall farther into the hole. Once a particle is inside the horizon, moving into the hole is as inevitable as moving forward in time, and can actually be thought of as equivalent to doing so, depending on the spacetime coordinate system used.
The surface at the Schwarzschild radius acts as an event horizon in a non-rotating body that fits inside this radius (although a rotating black hole operates slightly differently). The Schwarzschild radius of an object is proportional to its mass. Theoretically, any amount of matter will become a black hole if compressed into a space that fits within its corresponding Schwarzschild radius. For the mass of the Sun this radius is approximately 3 kilometers and for the Earth it is about 9 millimeters. In practice, however, neither the Earth nor the Sun has the necessary mass and therefore the necessary gravitational force, to overcome electron and neutron degeneracy pressure. The minimal mass required for a star to be able to collapse beyond these pressures is the Tolman–Oppenheimer–Volkoff limit, which is approximately three solar masses.
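To make the quoted figures concrete, here is a small Python sketch of the Schwarzschild radius r_s = 2GM/c^2 (the constants are rounded standard values; the helper name is mine):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Schwarzschild radius r_s = 2 G M / c^2, in metres."""
    return 2 * G * mass_kg / c**2

M_sun = 1.989e30     # kg
M_earth = 5.972e24   # kg

print(schwarzschild_radius(M_sun))    # ~2.95e3 m, i.e. about 3 kilometres
print(schwarzschild_radius(M_earth))  # ~8.9e-3 m, i.e. about 9 millimetres
```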
Black hole event horizons are widely misunderstood. Common, although erroneous, is the notion that black holes "vacuum up" material in their neighborhood, where in fact they are no more capable of seeking out material to consume than any other gravitational attractor. As with any mass in the universe, matter must come within its gravitational scope for the possibility to exist of capture or consolidation with any other mass. Equally common is the idea that matter can be observed falling into a black hole. This is not possible. Astronomers can detect only accretion disks around black holes, where material moves with such speed that friction creates high-energy radiation which can be detected (similarly, some matter from these accretion disks is forced out along the axis of spin of the black hole, creating visible jets when these streams interact with matter such as interstellar gas or when they happen to be aimed directly at Earth). Furthermore, a distant observer will never actually see something reach the horizon. Instead, while approaching the hole, the object will seem to go ever more slowly, while any light it emits will be further and further redshifted.
Cosmic event horizon
In cosmology, the event horizon of the observable universe is the largest comoving distance from which light emitted now can ever reach the observer in the future. This differs from the concept of particle horizon, which represents the largest comoving distance from which light emitted in the past could have reached the observer at a given time. For events beyond that distance, light has not had time to reach our location, even if it were emitted at the time the universe began. How the particle horizon changes with time depends on the nature of the expansion of the universe. If the expansion has certain characteristics, there are parts of the universe that will never be observable, no matter how long the observer waits for light from those regions to arrive. The boundary past which events cannot ever be observed is an event horizon, and it represents the maximum extent of the particle horizon.
The criterion for determining whether a particle horizon for the universe exists is as follows. Define a comoving distance dp as
dp = ∫ (from t = 0 to t0) c dt / a(t)
In this equation, a is the scale factor, c is the speed of light, and t0 is the age of the Universe. If dp → ∞ (i.e., points arbitrarily far away can be observed), then no event horizon exists. If dp ≠ ∞, a horizon is present.
Examples of cosmological models without an event horizon are universes dominated by matter or by radiation. An example of a cosmological model with an event horizon is a universe dominated by the cosmological constant (a de Sitter universe).
A calculation of the speeds of the cosmological event and particle horizons was given in a paper on the FLRW cosmological model, approximating the Universe as composed of non-interacting constituents, each one being a perfect fluid.
Apparent horizon of an accelerated particle
If a particle is moving at a constant velocity in a non-expanding universe free of gravitational fields, any event that occurs in that Universe will eventually be observable by the particle, because the forward light cones from these events intersect the particle's world line. On the other hand, if the particle is accelerating, in some situations light cones from some events never intersect the particle's world line. Under these conditions, an apparent horizon is present in the particle's (accelerating) reference frame, representing a boundary beyond which events are unobservable.
For example, this occurs with a uniformly accelerated particle. A spacetime diagram of this situation is shown in the figure to the right. As the particle accelerates, it approaches, but never reaches, the speed of light with respect to its original reference frame. On the spacetime diagram, its path is a hyperbola, which asymptotically approaches a 45-degree line (the path of a light ray). An event whose light cone's edge is this asymptote or is farther away than this asymptote can never be observed by the accelerating particle. In the particle's reference frame, there is a boundary behind it from which no signals can escape (an apparent horizon). The distance to this boundary is given by c^2/a, where a is the constant proper acceleration of the particle.
While approximations of this type of situation can occur in the real world (in particle accelerators, for example), a true event horizon is never present, as this requires the particle to be accelerated indefinitely (requiring arbitrarily large amounts of energy and an arbitrarily large apparatus).
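A hedged numeric sketch of that distance formula, d = c^2/a, for a uniformly accelerating observer (the one-g example is purely illustrative):

```python
c = 2.998e8      # speed of light, m/s

def rindler_horizon_distance(proper_acceleration):
    """Distance d = c^2 / a to the apparent horizon behind a uniformly accelerated observer."""
    return c**2 / proper_acceleration

g = 9.81                             # one Earth gravity, m/s^2
d = rindler_horizon_distance(g)
print(d, "m")                        # ~9.2e15 m
print(d / 9.461e15, "light-years")   # roughly one light-year
```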
Interacting with an event horizon
A misconception concerning event horizons, especially black hole event horizons, is that they represent an immutable surface that destroys objects that approach them. In practice, all event horizons appear to be some distance away from any observer, and objects sent towards an event horizon never appear to cross it from the sending observer's point of view (as the horizon-crossing event's light cone never intersects the observer's world line). Attempting to make an object near the horizon remain stationary with respect to an observer requires applying a force whose magnitude increases unboundedly (becoming infinite) the closer it gets.
In the case of a horizon perceived by a uniformly accelerating observer in empty space, the horizon seems to remain a fixed distance from the observer no matter how its surroundings move. Varying the observer's acceleration may cause the horizon to appear to move over time, or may prevent an event horizon from existing, depending on the acceleration function chosen. The observer never touches the horizon and never passes a location where it appeared to be.
In the case of a horizon perceived by an occupant of a de Sitter universe, the horizon always appears to be a fixed distance away for a non-accelerating observer. It is never contacted, even by an accelerating observer.
In the case of the horizon around a black hole, observers stationary with respect to a distant object will all agree on where the horizon is. While this seems to allow an observer lowered towards the hole on a rope (or rod) to contact the horizon, in practice this cannot be done. The proper distance to the horizon is finite, so the length of rope needed would be finite as well, but if the rope were lowered slowly (so that each point on the rope was approximately at rest in Schwarzschild coordinates), the proper acceleration (G-force) experienced by points on the rope closer and closer to the horizon would approach infinity, so the rope would be torn apart. If the rope is lowered quickly (perhaps even in freefall), then indeed the observer at the bottom of the rope can touch and even cross the event horizon. But once this happens it is impossible to pull the bottom of rope back out of the event horizon, since if the rope is pulled taut, the forces along the rope increase without bound as they approach the event horizon and at some point the rope must break. Furthermore, the break must occur not at the event horizon, but at a point where the second observer can observe it.
Observers crossing a black hole event horizon can calculate the moment they have crossed it, but will not actually see or feel anything special happen at that moment. In terms of visual appearance, observers who fall into the hole perceive the black region constituting the horizon as lying at some apparent distance below them, and never experience crossing this visual horizon. Other objects that had entered the horizon along the same radial path but at an earlier time would appear below the observer but still above the visual position of the horizon, and if they had fallen in recently enough the observer could exchange messages with them before either one was destroyed by the gravitational singularity. Increasing tidal forces (and eventual impact with the hole's singularity) are the only locally noticeable effects. Tidal forces are a function of the mass of the black hole. In realistic stellar black holes, spaghettification occurs early: tidal forces tear materials apart well before the event horizon. However, in supermassive black holes, which are found in centers of galaxies, spaghettification occurs inside the event horizon. A human astronaut would survive the fall through an event horizon only in a black hole with a mass of approximately 10,000 solar masses or greater.
Beyond general relativity
The description of event horizons given by general relativity is thought to be incomplete. When the conditions under which event horizons occur are modeled using a more comprehensive picture of the way the Universe works, that includes both relativity and quantum mechanics, event horizons are expected to have properties that are different from those predicted using general relativity alone.
At present, it is expected by the Hawking radiation mechanism that the primary impact of quantum effects is for event horizons to possess a temperature and so emit radiation. For black holes, this manifests as Hawking radiation, and the larger question of how the black hole possesses a temperature is part of the topic of black hole thermodynamics. For accelerating particles, this manifests as the Unruh effect, which causes space around the particle to appear to be filled with matter and radiation.
According to the controversial black hole firewall hypothesis, matter falling into a black hole would be burned to a crisp by a high energy "firewall" at the event horizon.
An alternative is provided by the complementarity principle, according to which, in the chart of the far observer, infalling matter is thermalized at the horizon and reemitted as Hawking radiation, while in the chart of an infalling observer matter continues undisturbed through the inner region and is destroyed at the singularity. This hypothesis does not violate the no-cloning theorem as there is a single copy of the information according to any given observer. Black hole complementarity is actually suggested by the scaling laws of strings approaching the event horizon, suggesting that in the Schwarzschild chart they stretch to cover the horizon and thermalize into a Planck length-thick membrane.
- Abraham–Lorentz force
- Acoustic metric
- Beyond black holes
- Black hole electron
- Black hole starship
- Cosmic censorship hypothesis
- Dynamical horizon
- Event Horizon Telescope
- Hawking radiation
- Kugelblitz (astrophysics)
- Micro black hole
- Rindler coordinates
- The set of possible paths, or more accurately the future light cone containing all possible world lines (in this diagram represented by the yellow/blue grid), is tilted in this way in Eddington–Finkelstein coordinates (the diagram is a "cartoon" version of an Eddington–Finkelstein coordinate diagram), but in other coordinates the light cones are not tilted in this way, for example in Schwarzschild coordinates they simply narrow without tilting as one approaches the event horizon, and in Kruskal–Szekeres coordinates the light cones don't change shape or orientation at all.
X-ray astronomy is an observational branch of astronomy that focuses on the study of celestial objects based on their X-ray emissions. These emissions are thought to come from sources that contain extremely hot matter, at temperatures ranging from a million to a hundred million kelvins (K). This matter is in a state known as plasma (ionized gas), which consists of ions and electrons at very high energies.
Astronomers have discovered various types of X-ray sources in the universe. They include stars, binary stars containing a white dwarf, neutron stars, supernova remnants, galaxy clusters, and black holes. Some Solar System bodies, such as the Moon, also emit X-rays, although most of the X-ray brightness of the Moon arises from reflected solar X-rays. The detection of X-rays gives scientists clues about possible processes and events that may be occurring at or near the radiation sources.
Nearly all of the X-ray radiation from cosmic sources is absorbed by the Earth's atmosphere. X-rays that have energies in the 0.5 to 5 keV (80 to 800 aJ) range, in which most celestial sources give off the bulk of their energy, can be stopped by a few sheets of paper. Ninety percent of the photons in a beam of three keV (480 aJ) X-rays are absorbed by traveling through just ten cm of air. Even highly energetic X-rays, consisting of photons at energies greater than 30 keV (4,800 aJ), can penetrate through only a few meters of the atmosphere.
For this reason, to observe X-rays from the sky, the detectors must be flown above most of the Earth's atmosphere. In the past, X-ray detectors were carried by balloons and sounding rockets. Nowadays, scientists prefer to put the detectors on satellites.
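The keV and attojoule figures quoted above are linked by 1 eV ≈ 1.602 × 10^-19 J; here is a tiny Python check (the helper name is mine, purely illustrative):

```python
EV_IN_JOULES = 1.602e-19   # one electronvolt in joules

def kev_to_attojoules(energy_kev):
    """Convert a photon energy from keV to attojoules (1 aJ = 1e-18 J)."""
    return energy_kev * 1e3 * EV_IN_JOULES / 1e-18

print(kev_to_attojoules(0.5))   # ~80 aJ
print(kev_to_attojoules(5))     # ~800 aJ
print(kev_to_attojoules(30))    # ~4800 aJ
```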
An X-ray detector may be placed in the nose cone section of a sounding rocket and launched above the atmosphere. This was first done at White Sands Missile Range in New Mexico with a V-2 rocket in 1949. X-rays from the Sun were detected by the Navy's experiment on board. In June 1962, an instrument aboard an Aerobee 150 rocket first detected X-rays from another celestial source (Scorpius X-1, mentioned below).
The greatest drawbacks to rocket flights are (a) their very short duration (just a few minutes above the atmosphere before the rocket falls back to Earth), and (b) their limited field of view. A rocket launched from the United States will not be able to see sources in the southern sky; a rocket launched from Australia will not be able to see sources in the northern sky.
Balloon flights can carry instruments to altitudes of up to 40 kilometers above sea level, where they are above as much as 99.997 percent of the Earth's atmosphere. Unlike a rocket, which can collect data during a brief few minutes, balloons are able to stay aloft much longer.
However, even at such altitudes, much of the X-ray spectrum is still absorbed by the atmosphere. X-rays with energies less than 35 keV (5,600 aJ) cannot reach balloons. One of the recent balloon-borne experiments was performed by using the High Resolution Gamma-ray and Hard X-ray Spectrometer (HIREGS). It was first launched from McMurdo Station, Antarctica, in December 1991, when steady winds carried the balloon on a circumpolar flight lasting about two weeks. The instrument has been on three Antarctic campaigns.
A detector is placed on a satellite that is then put into orbit well above the Earth's atmosphere. Unlike balloons, instruments on satellites are able to observe the full range of the X-ray spectrum. Unlike sounding rockets, they can collect data for as long as the instruments continue to operate. In one instance, the Vela 5B satellite, the X-ray detector remained functional for over ten years.
Satellites in use today include the XMM-Newton observatory (for low- to mid-energy X-rays, 0.1-15 keV) and the INTEGRAL satellite (high-energy X-rays, 15-60 keV). Both these were launched by the European Space Agency. NASA has launched the Rossi X-ray Timing Explorer (RXTE), and the Swift and Chandra observatories. One of the instruments on Swift is the Swift X-Ray Telescope (XRT). Also, SMART-1 contained an X-ray telescope for mapping lunar X-ray fluorescence. Past observatories included ROSAT, the Einstein Observatory, the ASCA observatory, and BeppoSAX.
Most existing X-ray telescopes use CCD (charge-coupled device) detectors, similar to those in visible-light cameras. In visible light, a single photon can produce a single electron of charge in a pixel, and an image is built up by accumulating many such charges from many photons during the exposure time. When an X-ray photon hits a CCD, it produces enough charge (hundreds to thousands of electrons, proportional to its energy) that the individual X-rays have their energies measured on read-out.
Microcalorimeters can detect X-rays only one photon at a time. This works well for astronomical uses, because there just aren't a lot of X-ray photons coming our way, even from the strongest sources like black holes.
TES (transition edge sensor) devices are the next step in microcalorimetry. In essence they are superconducting metals kept as close as possible to their transition temperature, that is, the temperature at which these metals become superconductors and their resistance drops to zero. These transition temperatures are usually just a few degrees above absolute zero (usually less than ten K).
Discovery of the first cosmic X-ray source (beyond the Solar System) came as a surprise in 1962. This source is called Scorpius X-1, the first X-ray source found in the constellation of Scorpius, located in the direction of the center of the Milky Way. Based on this discovery, Riccardo Giacconi received the Nobel Prize in Physics in 2002. It was later found that the X-ray emission from this source is 10,000 times greater than its optical emission. In addition, the energy output in X-rays is 100,000 times greater than the total emission of the Sun at all wavelengths.
By now, astronomers have discovered X-ray emissions from several different types of astrophysical objects. These sources include galaxy clusters, black holes in active galactic nuclei (AGN), galactic objects such as supernova remnants, stars, binary stars containing a white dwarf (cataclysmic variable stars), and neutron stars. Some Solar System bodies also emit X-rays, the most notable being the Moon, although most of the X-ray brightness of the Moon arises from reflected solar X-rays. A combination of many unresolved X-ray sources is thought to produce the observed X-ray background, which is occulted by the dark side of the Moon.
It is thought that black holes give off radiation because matter falling into them loses gravitational energy, which may result in the emission of radiation before the matter falls into the event horizon. The infalling matter has angular momentum, which means that the material cannot fall in directly, but spins around the black hole. This material often forms an accretion disk. Similar luminous accretion disks can also form around white dwarfs and neutron stars, but in these cases, the infalling matter releases additional energy as it slams against the high-density surface with high speed. In the case of a neutron star, the infalling speed can be a sizable fraction of the speed of light.
In some neutron star or white dwarf systems, the star's magnetic field is strong enough to prevent the formation of an accretion disc. Where a disc does form, the material in it gets very hot because of friction and emits X-rays. The material in the disc slowly loses its angular momentum and falls onto the compact star. In the case of neutron stars and white dwarfs, additional X-rays are generated when the material hits their surfaces. X-ray emission from black holes is variable, varying in luminosity over very short timescales. The variation in luminosity can provide information about the size of the black hole.
Clusters of galaxies are formed by the merger of smaller units of matter, such as galaxy groups or individual galaxies. The infalling material (which contains galaxies, gas, and dark matter) gains kinetic energy as it falls into the cluster's gravitational potential well. The infalling gas collides with gas already in the cluster and is shock heated to between 107 and 108 K, depending on the size of the cluster. This very hot material emits X-rays by thermal bremsstrahlung emission, and line emission from "metals." (In astronomy, "metals" often means all elements except hydrogen and helium.)
X-rays of Solar System bodies are generally produced by fluorescence. Scattered solar X-rays provide an additional component.
Multiplying with Scientific Notation
Multiplying numbers that are in scientific notation is fairly simple because multiplying powers of 10 is so easy. Here's how to multiply two numbers that are in scientific notation:
1. Multiply the two decimal parts of the numbers.
Suppose you want to multiply the following:
(4.3 x 10^5)(2 x 10^7)
Multiplication is commutative, so you can change the order of the numbers without changing the result. And because of the associative property, you can also change how you group the numbers. Therefore, you can rewrite this problem as
(4.3 x 2)(10^5 x 10^7)
Multiply what's in the first set of parentheses — 4.3 x 2 — to find the decimal part of the solution:
4.3 x 2 = 8.6
2. Multiply the two exponential parts by adding their exponents.
Now multiply 10^5 by 10^7:
10^5 x 10^7 = 10^(5 + 7) = 10^12
3. Write the answer as the product of the numbers you found in Steps 1 and 2.
8.6 x 10^12
4. If the decimal part of the solution is 10 or greater, move the decimal point one place to the left and add 1 to the exponent.
Because 8.6 is less than 10, you don't have to move the decimal point again, so the answer is 8.6 x 10^12.
Note: This number equals 8,600,000,000,000.
Because scientific notation uses positive decimals less than 10, when you multiply two of these decimals, the result is always a positive number less than 100. So in Step 4, you never have to move the decimal point more than one place to the left.
This method even works when one or both of the exponents are negative numbers. For example, suppose you want to multiply the following:
(6.02 x 10^23)(9 x 10^-28)
1. Multiply 6.02 by 9 to find the decimal part of the answer:
6.02 x 9 = 54.18
2. Multiply 10^23 by 10^-28 by adding the exponents:
10^23 x 10^-28 = 10^(23 + (-28)) = 10^-5
3. Write the answer as the product of the two numbers:
54.18 x 10^-5
4. Because 54.18 is greater than 10, move the decimal point one place to the left and add 1 to the exponent:
5.418 x 10^-4
Note: In decimal form, this number equals 0.0005418.
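The four steps can also be written directly as a short Python sketch (the function and variable names are mine, not from the text); it reproduces both worked examples above:

```python
def multiply_scientific(d1, e1, d2, e2):
    """Multiply (d1 x 10^e1) by (d2 x 10^e2) following the steps above."""
    decimal = d1 * d2          # Step 1: multiply the decimal parts
    exponent = e1 + e2         # Step 2: add the exponents
    if decimal >= 10:          # Step 4: renormalize if needed (never more than one shift)
        decimal /= 10
        exponent += 1
    return decimal, exponent

print(multiply_scientific(4.3, 5, 2, 7))       # (8.6, 12)    i.e. 8.6 x 10^12
print(multiply_scientific(6.02, 23, 9, -28))   # (~5.418, -4) i.e. about 5.418 x 10^-4
```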
Scientific notation really pays off when you're multiplying very large and very small numbers. If you'd tried to multiply the numbers in the preceding example the usual way, here's what you would've been up against:
602,000,000,000,000,000,000,000 x 0.0000000000000000000000000009
As you can see, scientific notation makes the job a lot easier. |
1. The unit of plane angle adopted under the Système International d'Unités; equal to the angle at the center of a circle subtended by an arc equal in length to the radius (approximately 57.295 degrees).
Radian, n. [From Radius.] (Math.) An arc of a circle which is equal to the radius, or the angle measured by such an arc.
The radian is the ratio between the length of an arc and its radius. It is the standard unit of angular measure, used in many areas of mathematics. The unit was formerly an SI supplementary unit, but this category was abolished in 1995 and the radian is now considered an SI derived unit. The SI unit of solid angle measurement is the steradian.
The radian is represented by the symbol "rad" or, more rarely, by the superscript c (for "circular measure"). For example, an angle of 1.2 radians would be written as "1.2 rad" or "1.2c" (the second symbol is often mistaken for a degree: "1.2°"). As the ratio of two lengths, the radian is a "pure number" that needs no unit symbol, and in mathematical writing the symbol "rad" is almost always omitted. In the absence of any symbol radians are assumed, and when degrees are meant the symbol ° is used.
Radian describes the plane angle subtended by a circular arc as the length of the arc divided by the radius of the arc. One radian is the angle subtended at the center of a circle by an arc that is equal in length to the radius of the circle. More generally, the magnitude in radians of such a subtended angle is equal to the ratio of the arc length to the radius of the circle; that is, θ = s /r, where θ is the subtended angle in radians, s is arc length, and r is radius. Conversely, the length of the enclosed arc is equal to the radius multiplied by the magnitude of the angle in radians; that is, s = rθ.
It follows that the magnitude in radians of one complete revolution (360 degrees) is the length of the entire circumference divided by the radius, or 2πr /r, or 2π. Thus 2π radians is equal to 360 degrees, meaning that one radian is equal to 180/π degrees.
The concept of radian measure, as opposed to the degree of an angle, is normally credited to Roger Cotes in 1714. He had the radian in everything but name, and he recognized its naturalness as a unit of angular measure. The idea of measuring angles by the length of the arc was used already by other mathematicians. For example al-Kashi (c. 1400) used so-called diameter parts as units where one diameter part was 1/60 radian and they also used sexagesimal subunits of the diameter part.
The term radian first appeared in print on 5 June 1873, in examination questions set by James Thomson (brother of Lord Kelvin) at Queen's College, Belfast. He used the term as early as 1871, while in 1869, Thomas Muir, then of the University of St Andrews, vacillated between rad, radial and radian. In 1874, Muir adopted radian after a consultation with James Thomson.
As stated, one radian is equal to 180/π degrees. Thus, to convert from radians to degrees, multiply by 180/π.
Conversely, to convert from degrees to radians, multiply by π/180.
Radians can be converted to turns by dividing the number of radians by 2π.
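These conversions are one-liners with Python's standard library; a minimal sketch:

```python
import math

degrees = 90
radians = math.radians(degrees)        # multiply by pi/180
turns = radians / (2 * math.pi)        # divide by 2*pi

print(radians)                 # ~1.5708, i.e. pi/2
print(math.degrees(radians))   # back to 90.0 (multiply by 180/pi)
print(turns)                   # 0.25 of a full turn
```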
We know that the length of the circumference of a circle is given by 2πr, where r is the radius of the circle.
So, we can very well say that the following equivalent relation is true:
360° corresponds to an arc of length 2πr [since a 360° sweep is needed to draw a full circle]
By definition of the radian, we can formulate that a full circle represents:
2πr / r = 2π rad
Combining both the above relations we can say:
360° = 2π rad
The table shows the conversion of some common angles.
In calculus and most other branches of mathematics beyond practical geometry, angles are universally measured in radians. This is because radians have a mathematical "naturalness" that leads to a more elegant formulation of a number of important results.
Most notably, results in analysis involving trigonometric functions are simple and elegant when the functions' arguments are expressed in radians. For example, the use of radians leads to the simple limit formula
lim (x→0) sin x / x = 1,
which is the basis of many other identities in mathematics, including
Because of these and other properties, the trigonometric functions appear in solutions to mathematical problems that are not obviously related to the functions' geometrical meanings (for example, the solutions to the differential equation d²y/dx² = −y, the evaluation of the integral ∫ dx / (1 + x²), and so on). In all such cases it is found that the arguments to the functions are most naturally written in the form that corresponds, in geometrical contexts, to the radian measurement of angles.
The trigonometric functions also have simple and elegant series expansions when radians are used; for example, the following Taylor series for sin x:
sin x = x − x³/3! + x⁵/5! − x⁷/7! + ⋯
If x were expressed in degrees then the series would contain messy factors involving powers of π/180: if x is the number of degrees, the number of radians is y = πx/180, so
sin(x°) = sin y = (π/180)x − (π/180)³ x³/3! + (π/180)⁵ x⁵/5! − ⋯
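A short Python sketch of this point (illustrative only): the plain series x − x³/3! + x⁵/5! − ⋯ approximates the sine only when x is in radians; for degrees, each term needs the π/180 factor.

```python
import math

def sin_series(x, terms=10):
    """Taylor series x - x^3/3! + x^5/5! - ... ; x must be in radians."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1) for k in range(terms))

x_deg = 30
print(sin_series(math.radians(x_deg)))   # ~0.5, matches sin(30 degrees)
print(sin_series(x_deg))                 # nonsense unless the pi/180 factor is included
```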
Mathematically important relationships between the sine and cosine functions and the exponential function (see, for example, Euler's formula) are, again, elegant when the functions' arguments are in radians and messy otherwise.
Although the radian is a unit of measure, it is a dimensionless quantity. This can be seen from the definition given earlier: the angle subtended at the centre of a circle, measured in radians, is equal to the ratio of the length of the enclosed arc to the length of the circle's radius. Since the units of measurement cancel, this ratio is dimensionless.
Another way to see the dimensionlessness of the radian is in the series representations of the trigonometric functions, such as the Taylor series for sin x mentioned earlier:
sin x = x − x³/3! + x⁵/5! − ⋯
If x had units, then the sum would be meaningless: the linear term x cannot be added to (or have subtracted from it) the cubic term x³/3! or the quintic term x⁵/5!, etc. Therefore, x must be dimensionless.
Although polar and spherical coordinates use radians to describe coordinates in two and three dimensions, the unit is derived from the radius coordinate, so the angle measure is still dimensionless.
The radian is widely used in physics when angular measurements are required. For example, angular velocity is typically measured in radians per second (rad/s). One revolution per second is equal to 2π radians per second.
Similarly, angular acceleration is often measured in radians per second per second (rad/s2).
For the purpose of dimensional analysis, the units are s−1 and s−2 respectively.
Likewise, the phase difference of two waves can also be measured in radians. For example, if the phase difference of two waves is (k·2π) radians, where k is an integer, they are considered in phase, whilst if the phase difference of two waves is (k·2π + π), where k is an integer, they are considered in antiphase.
Metric prefixes have limited use with radians, and none in mathematics.
There are 2π × 1000 milliradians (≈ 6283.185 mrad) in a circle. So a trigonometric milliradian is just under 1⁄6283 of a circle. This “real” trigonometric unit of angular measurement of a circle is in use by telescopic sight manufacturers using (stadiametric) rangefinding in reticles. The divergence of laser beams is also usually measured in milliradians.
An approximation of the trigonometric milliradian (0.001 rad), known as the (angular) mil, is used by NATO and other military organizations in gunnery and targeting. Each angular mil represents 1⁄6400 of a circle and is 1-⅞% smaller than the trigonometric milliradian. For the small angles typically found in targeting work, the convenience of using the number 6400 in calculation outweighs the small mathematical errors it introduces. In the past, other gunnery systems have used different approximations to 1⁄2000π; for example Sweden used the 1⁄6300 streck and the USSR used 1⁄6000. Being based on the milliradian, the NATO mil subtends roughly 1 m at a range of 1000 m (at such small angles, the curvature is negligible).
Smaller units like microradians (μrads) and nanoradians (nrads) are used in astronomy, and can also be used to measure the beam quality of lasers with ultra-low divergence. Similarly, the prefixes smaller than milli- are potentially useful in measuring extremely small angles.
Fourth Grade Multiplication Worksheets and Printables. Discover new strategies for multiplying large numbers. In this math exercise, students will use the area model to multiply two-digit numbers. The area (box) method is an alternative to the standard algorithm for long multiplication: students build rectangles of various sizes and relate multiplication to area, treating the factors as the length and width of a rectangle. The area model visual allows children to see the layers of computation within a multi-digit multiplication problem, and it is an important bridge from the concrete to the pictorial to the abstract.
A companion worksheet serves as a trainer for students not yet ready for multi-digit multiplication who need additional practice with single-digit multiplication facts, and a worksheet generator produces limitless 1-digit x 2-digit (or 2-digit x 2- or 3-digit) questions to be answered using the box / partial products method. The same model extends to division: find whole-number quotients of whole numbers with up to four-digit dividends and two-digit divisors, using strategies based on place value, the properties of operations, and/or the relationship between multiplication and division.
To model the product of two numbers (including decimals), find the area of a rectangle: estimate the area first, then break the rectangle into several pieces, find the area of each piece (partial product), and add these areas together to find the whole area (the product). For even more practice, consider downloading the recommended multiplication worksheets that accompany the lesson. A sketch of the partial-products idea follows.
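The box / partial-products idea can be written in a few lines of Python (the names are mine, purely illustrative): split each factor into place-value parts, multiply every pair of parts (each pair is one cell of the box), and add the partial products.

```python
def place_value_parts(n):
    """Split a whole number into place-value parts, e.g. 347 -> [7, 40, 300]."""
    return [int(d) * 10**i for i, d in enumerate(reversed(str(n))) if d != "0"]

def area_model_multiply(a, b):
    """Multiply a and b by summing the partial products of their place-value parts."""
    partials = [pa * pb for pa in place_value_parts(a) for pb in place_value_parts(b)]
    return partials, sum(partials)

partials, product = area_model_multiply(23, 14)
print(partials)   # [12, 30, 80, 200] -- the four cells of the 2 x 2 box
print(product)    # 322, the same as 23 * 14
```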
What Is Gross Domestic Income (GDI)?
Gross domestic income (GDI) is a measure of a nation's economic activity that is based on all of the money earned for all of the goods and services produced in the nation during a specific period.
In theory, GDI should be identical to gross domestic product (GDP), a more commonly used measure of a country's economic activity. However, the different sources of data used in each calculation lead to somewhat different results.
Generally, GDP tends to be the more reliable metric as it is based on fresher and more expansive data.
- GDI and GDP are two slightly different measures of a nation's economic activity.
- GDI counts what all participants in the economy make or "take in" (like wages, profits, and taxes). GDP counts the value of what the economy produces (like goods, services, and technology).
- One of the core concepts of macroeconomics is that income equals spending, which means that GDI will be the same as GDP in an economy at equilibrium.
Understanding Gross Domestic Income (GDI)
GDI is the total income that all sectors of an economy generate, including wages, profits, and taxes.
It is a lesser-known statistic than gross domestic product (GDP), which is used by the Federal Reserve Bank to measure total economic activity in the United States.
One of the core concepts in the field of macroeconomics is that income equals spending. This means that the money spent buying what was produced must equal the source of that money.
Formula and Calculation of Gross Domestic Income
Note the differences in formula for GDI compared to the formula for GDP:
- GDI = Wages + Profits + Interest Income + Rental Income + Taxes - Production/Import Subsidies + Statistical Adjustments
- GDP = Consumption + Investment + Government Purchases + Exports - Imports
Wages encompass the total compensation to employees for services rendered. Profits, also called "net operating surplus," are the surpluses of incorporated and unincorporated businesses. Statistical adjustments may include corporate income tax, dividends, and undistributed profits.
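A toy Python sketch of the GDI identity above (all figures are placeholders, not published BEA data):

```python
def gross_domestic_income(wages, profits, interest_income, rental_income,
                          taxes, subsidies, statistical_adjustments=0.0):
    """GDI = wages + profits + interest income + rental income
             + taxes - production/import subsidies + statistical adjustments."""
    return (wages + profits + interest_income + rental_income
            + taxes - subsidies + statistical_adjustments)

# Placeholder numbers in trillions of dollars, purely illustrative
gdi = gross_domestic_income(wages=12.8, profits=6.1, interest_income=0.6,
                            rental_income=0.8, taxes=1.6, subsidies=0.4)
print(round(gdi, 1))   # 21.5 (a toy total, not a published figure)
```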
The most significant component of GDI is wages and salaries. Historically, roughly 50% of all national income goes to workers. In Q3, 2021, U.S. GDI clocked in at roughly $23.8 trillion with $12.8 trillion coming in the form of compensation of employees.
Another large component of GDI is the net operating surplus from private enterprises. In Q3, 2021, about $6.1 trillion of the $23.8 trillion in GDI was attributed to that category.
GDI vs. GDP
According to the Bureau of Economic Analysis (BEA) of the U.S. Department of Commerce, GDI and GDP are conceptually equivalent in terms of national economic accounting, with minor differences attributed to statistical discrepancies. The market value of goods and services consumed often differs from the amount of income earned to produce them due to sampling errors, coverage differences, and timing differences.
But while the difference between GDI and GDP is usually minimal, they can sometimes vary up to a full percentage point for some quarters. The gap also varies over different periods of time.
GDI differs from GDP, which values production by the amount of output that is purchased, in that it measures total economic activity based on the income paid to generate that output. In other words, GDI aims to measure what the economy makes or "takes in" (like wages, profits, and taxes) while GDP seeks to measure what the economy produces (goods, services, technology).
GDI calculates the income that was paid to generate GDP. So, an economy at equilibrium will see GDI equal to GDP.
Some economists have argued that GDI might be a more accurate gauge of the economy. The reason is that more advanced estimates of GDI are closer to the final estimates of both calculations. Research from Federal Reserve economist Jeremy Nalewaik showed that early estimates of GDI captured the Great Recession of 2007-2009 better than GDP, suggesting that policymakers would have been better prepared if GDI was the main indicator used.
Over time, according to the BEA, "GDI and GDP provide a similar overall picture of economic activity." For annual data, the correlation between GDI and GDP is 0.97, according to BEA calculations.
Gross Domestic Income Analytics
GDI figures have various analytical uses:
- One important metric is the ratio of wages and salaries to GDI. The BEA compares this ratio with corporate profits as a share of GDI to see where the constituents, mainly workers and company owners, stand relative to each other with respect to their share of GDI. A rule of thumb states that workers' share of GDI should be higher when unemployment is low.
- Employee compensation share of GDI is also compared with the inflation trendline. Economists generally anticipate that higher employee compensation share will correlate with an upward trend in inflation. |
Impulse is the magnitude we use in dynamics to relate the force applied to a body to the time during which the force is applied. It lets us understand, for example, the mechanism of takeoff of the space shuttles, but also why football players put the ball behind their heads for the throw-in. In this chapter we are going to study it through the following points:
Gather momentum! Let's get started!
Have you ever asked yourself why football players put the ball behind their heads for a throw-in? That gesture does not considerably increase the value of the force used to throw the ball; however, it lets the players exert the same force for a longer time. Players do what we know as gathering momentum.
So, it seems clear that if we want to give a specific velocity to a body we have two options: apply a bigger force during a shorter interval of time, or a smaller force during a longer interval of time. The longer the force is applied, the higher the speed we can get.
Goalkeeper giving impulse to a ball
The goalkeeper moves the arm backwards as much as possible before starting the forward movement; this lets him apply the force on the ball for longer, which makes the ball travel further.
Impulse is the magnitude that lets us quantify these ideas. Let's define it formally.
Impulse is a vector magnitude that relates the force to the time during which it acts:
I = F · Δt
where:
I: the impulse of the force. Sometimes it is also abbreviated Imp. Its unit in the S.I. is the newton second (N·s).
F: the force we are considering, assumed constant. Its unit of measure in the S.I. is the newton (N).
Δt: the time interval during which the force acts. Its unit of measure in the S.I. is the second (s).
Observe that, from the previous definition, we can deduce that the impulse vector of a force has the same direction as the force with which it is associated.
Taking into account that the variation of linear momentum can be related to the resultant force acting on a body according to F = Δp / Δt, the product F · Δt is precisely the definition we have given for the impulse, so the impulse is related to the variation of the linear momentum of the body. This is the impulse-momentum theorem.
The impulse-momentum theorem establishes that the impulse of the resultant force acting on a body is equal to the variation of its linear momentum:
I = F · Δt = Δp = p_f − p_i
where:
I: the total impulse to which the body is subjected, i.e., the impulse of the resultant force. Its unit in the S.I. is the newton second (N·s).
F: the resultant force, or total force, to which the body is subjected, assumed constant. Its unit of measure is the newton (N).
Δt: the time interval during which the force is acting. Its unit of measure is the second (s).
Δp: the variation of the linear momentum produced in the considered time interval. It can be calculated as the difference between its final value and its initial value, Δp = p_f − p_i. Remember that its unit of measure in the S.I. is the kg·m/s.
Observe that the previous expression supports the statement made earlier: to give a body a determined velocity (that is, to increase its linear momentum) we can act in two ways, on the force or on the time during which it acts. Thus, in the space shuttles the ship gets the desired speed through the continuous effect of the force provided by the motor drive.
Although they are closely related, you should not confuse linear momentum with impulse. Impulse can be related to the variation of the former, but they are conceptually different magnitudes.
It is normal to get confused. Bear in mind that their equations and dimensions are the same...
... and the units of measure in the S.I. are equivalent...
In the definition we have given of the linear impulse we assumed that the force remains constant during the time interval Δt in which it acts. In general this is not the case: the force is usually variable. We can, however, consider the impulse delivered over an infinitely small (differential) time interval; over such an interval the force can indeed be taken as constant, and the corresponding impulse is a differential impulse:
dI = F · dt
The impulse transferred during a finite time interval is then obtained by adding up the differential impulses, that is, by integrating:
I = ∫ (from t_i to t_f) F dt
Regarding the impulse-momentum theorem, we arrive at the same statement already introduced for constant forces, this time considering variable forces. In order to check it we must use the differential version of Newton's second law, that is, F = dp / dt.
You probably know by now that the definite integral of a function between two values coincides numerically with the area under that function. So, if we represent time on the horizontal axis and the force, either constant or variable, on the vertical axis, the area under the curve between ti and tf coincides with the value of the impulse:
The chart represents how the force acting on a body varies in time. The area enclosed under the curve between the instants ti and tf, in red, coincides numerically with the value of the impulse of that force over the interval tf − ti, and thus with the variation of the linear momentum experienced by the body on which the force acts during that interval.
From this idea, note that it is always possible to find an average constant force whose impulse in that time interval coincides with that of the variable force.
Average constant force.
Because the area in blue and the area in green are equal, the area enclosed under the curve of the real, variable force coincides with the area under the average force, represented in black. Since the latter is the area of a rectangle, its calculation reduces to a simple multiplication, as opposed to the integral that the former would require.
Thus, at this level, when we talk about the impulsive force we are usually referring to this assumed average constant force.
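As a concrete illustration of these ideas, here is a minimal numerical sketch (the force profile, body mass, and time interval are made-up values, not from the text) that integrates a variable force to obtain the impulse, the equivalent average constant force, and the resulting change in momentum:

```python
import numpy as np

def force(t):
    """Hypothetical variable force in newtons: a short push that rises and falls."""
    return 50.0 * np.exp(-((t - 0.5) ** 2) / 0.02)

t_i, t_f = 0.0, 1.0                      # time interval in seconds (assumed)
t = np.linspace(t_i, t_f, 1001)

impulse = np.trapz(force(t), t)          # J = integral of F dt (area under the F-t curve)
f_avg = impulse / (t_f - t_i)            # average constant force with the same impulse

print(f"Impulse J = {impulse:.2f} N·s")
print(f"Equivalent average force = {f_avg:.2f} N")

# By the impulse-momentum theorem, J equals the change in linear momentum,
# so for a 2 kg body (assumed) initially at rest the final speed would be:
m = 2.0
print(f"Final speed of a {m} kg body starting from rest: {impulse / m:.2f} m/s")
```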
Menelaus's theorem, named for Menelaus of Alexandria, is a proposition about triangles in plane geometry. Suppose we have a triangle ABC, and a transversal line that crosses BC, AC, and AB at points D, E, and F respectively, with D, E, and F distinct from A, B, and C. A weak version of the theorem states that
where |AB| is taken to be the ordinary length of segment AB: a positive value.
The theorem can be strengthened to a statement about signed lengths of segments, which provides some additional information about the relative order of collinear points. Here, the length AB is taken to be positive or negative according to whether A is to the left or right of B in some fixed orientation of the line; for example, AF/FB is defined as having positive value when F is between A and B and negative otherwise. The signed version of Menelaus's theorem states
Some authors organize the factors differently and obtain the seemingly different relation
but as each of these factors is the negative of the corresponding factor above, the relation is seen to be the same.
The converse is also true: If points D, E, and F are chosen on BC, AC, and AB respectively so that
then D, E, and F are collinear. The converse is often included as part of the theorem. (Note that the converse of the weaker, unsigned statement is not necessarily true.)
A standard proof is as follows:
First, the sign of the left-hand side will be negative since either all three of the ratios are negative, the case where the line DEF misses the triangle (lower diagram), or one is negative and the other two are positive, the case where DEF crosses two sides of the triangle. (See Pasch's axiom.)
To check the magnitude, construct perpendiculars from A, B, and C to the line DEF and let their lengths be a, b, and c respectively. Then by similar triangles it follows that |AF/FB| = |a/b|, |BD/DC| = |b/c|, and |CE/EA| = |c/a|. So
For a simpler, if less symmetrical way to check the magnitude, draw CK parallel to AB where DEF meets CK at K. Then by similar triangles
and the result follows by eliminating CK from these equations.
The converse follows as a corollary. Let D, E, and F be given on the lines BC, AC, and AB so that the equation holds. Let F′ be the point where DE crosses AB. Then by the theorem, the equation also holds for D, E, and F′. Comparing the two,
But at most one point can divide a segment in a given ratio, so F = F′.
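To make the signed statement concrete, here is a minimal numerical sketch (the coordinates are made-up test data, not from the source) that checks (AF/FB)(BD/DC)(CE/EA) = −1 for an arbitrary triangle and transversal:

```python
import numpy as np

def signed_ratio(p, x, q):
    """Signed ratio PX/XQ for collinear points P, X, Q (positive when X lies between P and Q)."""
    d = (q - p) / np.linalg.norm(q - p)          # unit direction of the segment PQ
    return np.dot(x - p, d) / np.dot(q - x, d)

def line_intersection(p1, p2, p3, p4):
    """Intersection of line p1p2 with line p3p4 (assumed non-parallel)."""
    a = np.column_stack((p2 - p1, p3 - p4))
    t, _ = np.linalg.solve(a, p3 - p1)
    return p1 + t * (p2 - p1)

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
# A transversal through two arbitrary points, chosen so it misses the vertices.
P, Q = np.array([-1.0, 1.0]), np.array([5.0, 2.0])

D = line_intersection(B, C, P, Q)   # on line BC
E = line_intersection(A, C, P, Q)   # on line AC
F = line_intersection(A, B, P, Q)   # on line AB

product = signed_ratio(A, F, B) * signed_ratio(B, D, C) * signed_ratio(C, E, A)
print(f"(AF/FB)(BD/DC)(CE/EA) = {product:.6f}")   # expect -1 to within rounding
```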
A proof using homothecies
The following proof uses only notions of affine geometry, notably homothecies. Whether or not D, E, and F are collinear, there are three homothecies with centers D, E, F that respectively send B to C, C to A, and A to B. The composition of the three then is an element of the group of homothecy-translations that fixes B, so it is a homothecy with center B, possibly with ratio 1 (in which case it is the identity). This composition fixes the line DE if and only if F is collinear with D and E (since the first two homothecies certainly fix DE, and the third does so only if F lies on DE). Therefore D, E, and F are collinear if and only if this composition is the identity, which means that the magnitude of product of the three ratios is 1:
which is equivalent to the given equation.
It is uncertain who actually discovered the theorem; however, the oldest extant exposition appears in Spherics by Menelaus. In this book, the plane version of the theorem is used as a lemma to prove a spherical version of the theorem.
In Almagest, Ptolemy applies the theorem to a number of problems in spherical astronomy. During the Islamic Golden Age, Muslim scholars devoted a number of works to the study of Menelaus's theorem, which they referred to as "the proposition on the secants" (shakl al-qatta'). The complete quadrilateral was called the "figure of secants" in their terminology. Al-Biruni's work, The Keys of Astronomy, lists a number of those works, which can be classified into studies forming part of commentaries on Ptolemy's Almagest, as in the works of al-Nayrizi and al-Khazin, where each demonstrated particular cases of Menelaus's theorem that led to the sine rule, or works composed as independent treatises such as:
- The "Treatise on the Figure of Secants" (Risala fi shakl al-qatta') by Thabit ibn Qurra.
- Husam al-Din al-Salar's Removing the Veil from the Mysteries of the Figure of Secants (Kashf al-qina' 'an asrar al-shakl al-qatta'), also known as "The Book on the Figure of Secants" (Kitab al-shakl al-qatta') or in Europe as The Treatise on the Complete Quadrilateral. The lost treatise was referred to by Sharaf al-Din al-Tusi and Nasir al-Din al-Tusi.
- Work by al-Sijzi.
- Tahdhib by Abu Nasr ibn Iraq.
- Roshdi Rashed and Athanase Papadopoulos, Menelaus' Spherics: Early Translation and al-Mahani'/al-Harawi's version (Critical edition of Menelaus' Spherics from the Arabic manuscripts, with historical and mathematical commentaries), De Gruyter, Series: Scientia Graeco-Arabica, 21, 2017, 890 pages. ISBN 978-3-11-057142-4
- Russell, p. 6.
- Johnson, Roger A. (2007) , Advanced Euclidean Geometry, Dover, p. 147, ISBN 978-0-486-46237-0
- Benitez, Julio (2007). "A Unified Proof of Ceva and Menelaus' Theorems Using Projective Geometry" (PDF). Journal for Geometry and Graphics. 11 (1): 39–44.
- Follows Russell.
- Follows Hopkins, George Irving (1902). "Art. 983". Inductive Plane Geometry. D.C. Heath & Co.
- Follows Russell with some simplification.
- See Michèle Audin, Géométrie, éditions BELIN, Paris 1998: indication for exercise 1.37, p. 273
- Smith, D.E. (1958). History of Mathematics. Vol. II. Courier Dover Publications. p. 607. ISBN 0-486-20430-8.
- Rashed, Roshdi (1996). Encyclopedia of the history of Arabic science. Vol. 2. London: Routledge. p. 483. ISBN 0-415-02063-8.
- Moussa, Ali (2011). "Mathematical Methods in Abū al-Wafāʾ's Almagest and the Qibla Determinations". Arabic Sciences and Philosophy. Cambridge University Press. 21 (1): 1–56. doi:10.1017/S095742391000007X. S2CID 171015175.
- Russell, John Wellesley (1905). "Ch. 1 §6 "Menelaus' Theorem"". Pure Geometry. Clarendon Press. |
What is Acceleration?
Acceleration is an expression of the rate of change in the velocity of an object. This can occur as a change in speed, a change in direction, or both. Acceleration can be defined in one dimension (along a straight line), in two dimensions (within a flat plane), or in three dimensions (in space), just as can velocity. Acceleration sometimes takes place in the same direction as an object’s velocity vector, but this is not necessarily the case.
Acceleration is a Vector
Acceleration, like velocity, is a vector quantity. Sometimes the magnitude of the acceleration vector is called “acceleration,” and is usually symbolized by the lowercase italic letter a. But technically, the vector expression should be used; it is normally symbolized by the lowercase bold letter a.
In our previous example of a car driving along a highway, suppose the speed is constant at 25 m/s. The velocity changes when the car goes around curves, and also if the car crests a hilltop or bottoms-out in a ravine or valley (although these can’t be shown in this two-dimensional drawing). If the car is going along a straight path, and its speed is increasing, then the acceleration vector points in the same direction that the car is traveling. If the car puts on the brakes, still moving along a straight path, then the acceleration vector points exactly opposite the direction of the car’s motion.
Fig. 15-7. Acceleration vectors x, y, and z for a car at three points (X, Y, and Z) along a road. The magnitude of y is 0 because there is no acceleration at point Y.
Acceleration vectors can be graphically illustrated as arrows. Figure 15-7 illustrates acceleration vectors for a car traveling along a level, but curving, road at a constant speed of 25 m/s. Three points are shown, called X, Y, and Z. The corresponding acceleration vectors are x, y, and z. Because the speed is constant and the road is level, acceleration only takes place where the car encounters a bend in the road. At point Y, the road is essentially straight, so the acceleration is zero (y = 0 ). The zero vector is shown as a point at the origin of a vector graph.
How Acceleration is Determined
Acceleration magnitude is expressed in meters per second per second, also called meters per second squared (m/s²). This seems esoteric at first. What does s² mean? Is it a “square second”? What in the world is that? Forget about trying to imagine it in all its abstract perfection. Instead, think of it in terms of a concrete example. Suppose you have a car that can go from a standstill to a speed of 26.8 m/s in 5 seconds. Suppose that the acceleration rate is constant from the moment you first hit the gas pedal until you have attained a speed of 26.8 m/s on a level straightaway. Then you can calculate the acceleration magnitude:
a = (26.8 m/s) / (5 s) = 5.36 m/s²
The expression s² translates, in this context, to “second, every second.” The speed in the above example increases by 5.36 meters per second, every second.
Fig. 15-8. An accelerometer. This measures the magnitude only, and must be properly oriented to provide an accurate reading.
Acceleration magnitude can be measured in terms of force against mass. This force, in turn, can be determined according to the amount of distortion in a spring. The force meter shown in Fig. 15-4 can be adapted to make an acceleration meter, more technically known as an accelerometer, for measuring acceleration magnitude.
Here’s how a spring type accelerometer works. A functional diagram is shown in Fig. 15-8. Before the accelerometer can be used, it is calibrated in a lab. For the accelerometer to work, the direction of the acceleration vector must be in line with the spring axis, and the acceleration vector must point outward from the fixed anchor toward the mass. This produces a force on the mass. The force is a vector that points directly against the spring, exactly opposite the acceleration vector.
A common weight scale can be used to indirectly measure acceleration. When you stand on the scale, you compress a spring or balance a set of masses on a lever. This measures the downward force that the mass of your body exerts as a result of a phenomenon called the acceleration of gravity. The effect of gravitation on a mass is the same as that of an upward acceleration of approximately 9.8 m/s². Force, mass, and acceleration are interrelated as follows:
F = m a
That is, force is the product of mass and acceleration. This formula is so important that it’s worth remembering, even if you aren’t a scientist. It quantifies and explains a lot of things in the real world, such as why it takes a fully loaded semi truck so much longer to get up to highway speed than the same truck when it’s empty, or why, if you drive around a slippery curve too fast, you risk sliding off the road.
Suppose an object starts from a dead stop and accelerates at an average magnitude of a_avg in a straight line for a period of time t. Suppose after this length of time, the distance from the starting point is d. Then this formula applies:
d = a_avg·t²/2
In the above example, suppose the acceleration magnitude is constant; call it a. Let the instantaneous speed be called v_inst at time t. Then the instantaneous speed is related to the acceleration magnitude as follows:
v_inst = a·t
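A minimal sketch tying the formulas above together (the speed and time reuse the car example from the text; the car mass is an assumed value, not from the text):

```python
delta_v = 26.8      # change in speed, m/s (from the example above)
t = 5.0             # time to reach that speed, s

a = delta_v / t                     # acceleration magnitude, m/s^2
print(f"a = {a:.2f} m/s^2")         # about 5.36 m/s^2

m = 1500.0                          # assumed car mass in kg (illustrative only)
print(f"Force required: F = m*a = {m * a:.0f} N")

d = a * t**2 / 2                    # distance covered from a standing start
print(f"Distance covered in {t} s: {d:.1f} m")

v_inst = a * t                      # instantaneous speed at time t
print(f"Instantaneous speed at t = {t} s: {v_inst:.1f} m/s")
```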
Downhill creep, also known as soil creep or commonly just creep, is the slow downward progression of rock and soil down a low-grade slope; it can also refer to slow deformation of such materials as a result of prolonged pressure and stress. Creep may appear to an observer to be continuous, but it is really the sum of numerous minute, discrete movements of slope material caused by the force of gravity. Friction, the primary force resisting gravity, is produced when one body of material slides past another, offering a mechanical resistance between the two which acts to hold objects (or slopes) in place. As the slope of a hill increases, the component of gravitational force perpendicular to the slope decreases, resulting in less friction between the materials and making it more likely that the slope will slide.
The rate of soil creep down a slope depends on the steepness (gradient) of the slope, water absorption and content, the type of sediment and material, and vegetation. Together these factors determine whether, and how quickly, the hillside material moves downward. Creep is responsible for the rounded shape of hillsides.
Water is a very important factor when discussing soil deformation and movement. For instance, a sandcastle will only stand up when it is made with damp sand. The water offers cohesion to the sand which binds the sand particles together. However, pouring water over the sandcastle destroys it. This is because the presence of too much water fills the pores between the grains with water creating a slip plane between the particles and offering no cohesion causing them to slip and slide away. This holds true for hillsides and creep as well. The presence of water may help the hillside stay put and give it cohesion, but in a very wet environment or during or after a large amount of precipitation the pores between the grains can become saturated with water and cause the ground to slide along the slip plane it creates.
Creep can also be caused by the expansion of materials such as clay when they are exposed to water. Clay expands when wet, then contracts after drying. The expansion portion pushes downhill, then the contraction results in consolidation at the new offset.
Vegetation plays a role with slope stability and creep. When a hillside contains many trees, ferns, and shrubs their roots create an interlocking network that can strengthen unconsolidated material. They also aid in absorbing the excess water in the soil to help keep the slope stable. However, they do add to the weight of the slope giving gravity that much more of a driving force to act on in pushing the slope downward. In general, though, slopes without vegetation have a greater chance of movement.
Design engineers sometimes need to guard against downhill creep during their planning to prevent building foundations from being undermined. Pilings are planted sufficiently deep into the surface material to guard against this behavior.
Modeling regolith diffusion
For shallow to moderate slopes, diffusional sediment flux is modeled linearly as (Culling, 1960; McKean et al., 1993)
q_s = K S,
where K is the diffusion constant and S is the slope (gradient). For steep slopes, diffusional sediment flux is more appropriately modeled as a non-linear function of slope (Roering et al., 1999)
q_s = K S / (1 − (S/S_c)²),
where S_c is the critical gradient for sliding of dry soil.
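As a rough illustration, the following sketch compares the two flux models over a range of slopes; the diffusion constant and critical gradient are assumed values chosen only for the example, not taken from the cited studies:

```python
import numpy as np

K = 0.005        # diffusion constant, m^2/yr (assumed)
S_c = 1.25       # critical gradient for sliding of dry soil (assumed)

slopes = np.linspace(0.0, 1.2, 7)                         # gradient = rise/run

q_linear = K * slopes                                     # Culling (1960): q = K*S
q_nonlinear = K * slopes / (1.0 - (slopes / S_c) ** 2)    # Roering et al. (1999)

for s, ql, qn in zip(slopes, q_linear, q_nonlinear):
    print(f"slope {s:4.2f}:  linear {ql:.5f}  non-linear {qn:.5f}  (m^2/yr)")
# The two models agree on gentle slopes and diverge as the slope approaches S_c,
# where the non-linear flux grows without bound.
```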
- Roering, Joshua J., James W. Kirchner, and William E. Dietrich. "Evidence for nonlinear, diffusive sediment transport on hillslopes and implications for landscape morphology." Water Resources Research 35.3 (1999): 853-870.
- Rosenbloom, N. A., and Robert S. Anderson. "Hillslope and channel evolution in a marine terraced landscape, Santa Cruz." California: Journal of Geophysical Research 99.B7 (1994): 14-013.
- Culling, 1960.
- McKean et al., 1993.
- Monkhouse, F. J. (University of Southampton). A Dictionary of Geography. London: Edward Arnold (Publishers) Ltd. 1978.
- Strahler, Arthur N. Physical Geography. New York: John Wiley & Sons, 1960, 2nd edition, 7th printing, p. 318-319
- Easterbrook, Don J., 1999, Surface Processes and Landforms, Prentice-Hall, Inc.
Graphing Ideas in Physics And Use of Vectors
1.4 Simple Types of Motion Below is a table of values of time and distance. The table shows the values of distance (d) at certain times (t). – These values all satisfy the equation d = 7t. – You could make the table longer or shorter by using different time increments.
1.4 Simple Types of Motion A way to show the relationship between distance and time is to graph the values in the table. The usual practice is to graph distance versus time, which puts distance on the vertical axis. – Note that the data points lie on a straight line.
1.4 Simple Types of Motion Remember this simple rule: when the speed is constant, the graph of distance versus time is a straight line. – In general, when one quantity is proportional to another, the graph of the two quantities is a straight line.
1.4 Simple Types of Motion An important feature of this graph is its slope. – The slope of a graph is a measure of its steepness. – In particular, the slope is equal to the rise between two points on the line divided by the run between the points.
1.4 Simple Types of Motion This is illustrated in the figure shown. – The rise is a distance, d, and the run is a time interval, t. – So the slope equals d divided by t, which is also the object’s velocity. The slope of a distance- versus-time graph equals the velocity.
1.4 Simple Types of Motion The graph for a faster-moving body, a racehorse for instance, would be steeper—it would have a larger slope. The graph of d versus t for a slower object (a person walking) would have a smaller slope. When an object is standing still (when it has no motion), the graph of d versus t is a flat line parallel with the horizontal axis. – The slope is zero because the velocity is zero.
1.4 Simple Types of Motion Even when the velocity is not constant, the slope of a d versus t graph is still equal to the velocity. – In this case, the graph is not a straight line because, as the slope changes (a result of the changing velocity), the graph curves or bends.
1.4 Simple Types of Motion The graph shown represents the motion of a car that starts from a stop sign, drives down a street, and then stops and backs into a parking place.
1.4 Simple Types of Motion When the car is stopped, the graph is flat. – The distance is not changing and the velocity is zero. When the car is backing up, the graph is slanted downward. – The distance is decreasing, and the velocity is negative.
1.4 Simple Types of Motion Constant Acceleration Keep in mind that the graph of distance versus time is a straight line for uniform motion. – For constant acceleration it is the graph of velocity versus time that exhibits a linear behavior.
1.4 Simple Types of Motion Constant Acceleration As shown in the table of distance values for a falling body, distance increases rapidly.
1.4 Simple Types of Motion Constant Acceleration The graph of distance versus time curves upward. – This is because the velocity of the body is increasing with time, and the slope of this graph equals the velocity.
1.4 Simple Types of Motion Rarely does the acceleration of an object stay constant for long. – As a falling body picks up speed, air resistance causes its acceleration to decrease. – When a car is accelerated from a stop, its acceleration usually decreases, particularly when the transmission is shifted into a higher gear.
1.4 Simple Types of Motion The figure shows the velocity of a car as it accelerates from 0 to 80 mph. – Note that acceleration steadily decreases (the slope gets smaller). – During the short time the transmission is shifted, the acceleration is zero.
1.4 Simple Types of Motion During a karate demonstration, a concrete block is broken by a person’s fist.
1.4 Simple Types of Motion The fist travels downward until it contacts the block, at about 6 milliseconds. – This causes a large acceleration as the fist is brought to a sudden stop.
1.4 Simple Types of Motion The graph shows the velocity of the fist, just the slope of the distance versus time graph at each time. – Contact with the concrete is indicated by the steep part of the graph as the velocity goes to zero.
1.4 Simple Types of Motion If we take the slope of this segment of the velocity graph, we find that the magnitude of the acceleration of the fist at that moment is about 3,500 m/s², or about 360 g (ouch!). – What happened at about 25 milliseconds?
2-3 Instantaneous Velocity On a graph of a particle’s position vs. time, the instantaneous velocity is the tangent to the curve at any point.
Concept Map 1.2
3-1 Vectors and Scalars A vector has magnitude as well as direction. Some vector quantities: displacement, velocity, force, momentum A scalar has only a magnitude. Some scalar quantities: mass, time, temperature
Vectors Quantities that have both a magnitude and a direction. –Magnitude consists of an amount and the units. –Direction can be expressed as some defined "x-hat" or "y-hat" direction or as East, North, etc. –Example: –A displacement vector (distance with direction): 40 meters East
Drawing Vectors We draw a vector as: An arrow from tail to head with the head of the arrow pointing in the direction of the vector. The length of the arrow represents the vector's magnitude We specify its direction by the angle it makes with the (+) horizontal axis. tail head
3-2 Addition of Vectors—Graphical Methods For vectors in one dimension, simple addition and subtraction are all that is needed. You do need to be careful about the signs, as the figure indicates.
Adding Vectors To Add a Set of Vectors Draw the first vector from a start point on a grid with horizontal and vertical axes. Draw the next vector starting from the head of the previous vector. Repeat this last step until the last vector is drawn. Now draw the resultant (or “net”)vector starting from the tail of the first vector to the head of the last vector (or in other words: from start point to end point).
1.2 Speed and Velocity Vector Addition Sometimes a moving body has two velocities at the same time. – The runner on the deck of the ship in the figure has a velocity relative to the ship and a velocity because the ship itself is moving.
1.2 Speed and Velocity Vector Addition – A bird flying on a windy day has a velocity relative to the air and a velocity because the air carrying the bird is moving relative to the ground. The velocity of the runner relative to the water or that of the bird relative to the ground is found by adding the two velocities together to give the net, or resultant, velocity. Let’s consider how two velocities (or two vectors of any kind) are combined in vector addition.
1.2 Speed and Velocity Vector Addition When adding two velocities, you represent each as an arrow with its length proportional to the magnitude of the velocity—the speed. – For the runner on the ship, the arrow representing the ship’s velocity is a little more than 2 times as long as the arrow representing the runner’s velocity because the two speeds are 20 mph and 8 mph, respectively.
1.2 Speed and Velocity Vector Addition Each arrow can be moved around for convenience, provided its length and its direction are not altered. – Any such change would make it a different vector.
1.2 Speed and Velocity Vector Addition The procedure for adding two vectors is as follows. – Two vectors are added by representing them as arrows and then positioning one arrow so its tip is at the tail of the other. – A new arrow drawn from the tail of the first arrow to the tip of the second is the arrow representing the resultant vector—the sum of the two vectors.
1.2 Speed and Velocity Vector Addition The figure shows this for the runner on the deck of the ship. The runner is running forward in the direction of the ship’s motion, so the two arrows are parallel. – When the arrows are positioned “tip to tail,” the resultant velocity vector is parallel to the others, and its magnitude—the speed—is 28 mph (8 mph +20 mph).
1.2 Speed and Velocity Vector Addition In this figure, the runner is running toward the rear of the ship, so the arrows are in opposite directions. – The resultant velocity is again parallel to the ship’s velocity, but its magnitude is 12 mph (20 mph – 8 mph).
1.2 Speed and Velocity Vector Addition Vector addition is done the same way when the two vectors are not along the same line. The figure shows a bird with velocity 8 m/s north in the air while the air itself has velocity 6 m/s east.
1.2 Speed and Velocity Vector Addition The bird’s velocity observed by someone on the ground, (b), is the sum of these two velocities.
1.2 Speed and Velocity Vector Addition We determine this by placing the two arrows representing the velocities tip to tail as before and drawing an arrow from the tail of the first to the tip of the second. – The direction of the resultant velocity is toward the northeast.
3-2 Addition of Vectors—Graphical Methods If the motion is in two dimensions, the situation is somewhat more complicated. Here, the actual travel paths are at right angles to one another; we can find the displacement by using the Pythagorean Theorem.
3-2 Addition of Vectors—Graphical Methods Adding the vectors in the opposite order gives the same result:
1.2 Speed and Velocity Vector Addition Watch for this when you see a bird flying on a windy day: – Often the direction the bird is moving is not the same as the direction its body is pointed.
1.2 Speed and Velocity Vector Addition What about the magnitude of the resultant velocity? It is not simply 8 + 6 or 8 – 6, because the two velocities are not parallel. – With the numbers chosen for this example, the magnitude of the resultant velocity—the bird’s speed—is 10 m/s.
1.2 Speed and Velocity Vector Addition If you draw the two original arrows with correct relative lengths and then measure the length of the resultant arrow, it will be 5/4 times the length of the arrow representing the 8 m/s vector. – Then 8 m/s times 5/4 equals 10 m/s. – This can be calculated using the Pythagorean theorem because the arrows form a right triangle.
1.2 Speed and Velocity Vector Addition Vector addition is performed in the same manner, no matter what the directions of the vectors. The figure shows two other examples of a bird flying with different wind directions. – The magnitudes of the resultants in these cases are best determined by measuring the lengths of the arrows.
1.2 Speed and Velocity Vector Addition There are many other situations in which a body’s net velocity is the sum of two (or more) velocities (for example, a swimmer or boat crossing a river). Displacement vectors are added in the same fashion. – If you walk 10 meters south, then 10 meters west, your net displacement is 14.1 meters southwest.
1.2 Speed and Velocity Vector Addition The process of vector addition can be “turned around.” Any vector can be thought of as the sum of two other vectors, called components of the vector. – When we observe the bird’s single velocity, we would likely realize that the bird has two velocities that have been added.
1.2 Speed and Velocity Vector Addition A soccer player running southeast across a field can be thought of as going south with one velocity and east with another velocity at the same time.
3-2 Addition of Vectors—Graphical Methods Even if the vectors are not at right angles, they can be added graphically by using the tail-to-tip method.
3-2 Addition of Vectors—Graphical Methods The parallelogram method may also be used; here again the vectors must be tail-to-tip.
3-3 Subtraction of Vectors, and Multiplication of a Vector by a Scalar In order to subtract vectors, we define the negative of a vector, which has the same magnitude but points in the opposite direction. Then we add the negative vector.
3-3 Subtraction of Vectors, and Multiplication of a Vector by a Scalar A vector can be multiplied by a scalar c; the result is a vector cV that has the same direction as V but a magnitude cV. If c is negative, the resultant vector points in the opposite direction.
3-4 Adding Vectors by Components Any vector can be expressed as the sum of two other vectors, which are called its components. Usually the other vectors are chosen so that they are perpendicular to each other.
3-4 Adding Vectors by Components If the components are perpendicular, they can be found using trigonometric functions.
3-4 Adding Vectors by Components The components are effectively one-dimensional, so they can be added arithmetically.
3-4 Adding Vectors by Components Adding vectors: 1. Draw a diagram; add the vectors graphically. 2. Choose x and y axes. 3. Resolve each vector into x and y components. 4. Calculate each component using sines and cosines. 5. Add the components in each direction. 6. To find the length and direction of the vector, use: V = √(Vx² + Vy²) and θ = tan⁻¹(Vy / Vx).
3-4 Adding Vectors by Components Example 3-2: Mail carrier’s displacement. A rural mail carrier leaves the post office and drives 22.0 km in a northerly direction. She then drives in a direction 60.0° south of east for 47.0 km. What is her displacement from the post office?
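As an illustration of the component method, here is a minimal sketch (in Python, not part of the original slides) that works through Example 3-2 numerically, taking +x as east and +y as north:

```python
import math

def add_vectors(vectors):
    """Sum (magnitude, angle_in_degrees) pairs by components; angles measured from the +x (east) axis."""
    vx = sum(m * math.cos(math.radians(a)) for m, a in vectors)
    vy = sum(m * math.sin(math.radians(a)) for m, a in vectors)
    magnitude = math.hypot(vx, vy)
    direction = math.degrees(math.atan2(vy, vx))
    return vx, vy, magnitude, direction

legs = [
    (22.0, 90.0),    # 22.0 km due north
    (47.0, -60.0),   # 47.0 km at 60.0 degrees south of east
]

vx, vy, d, theta = add_vectors(legs)
print(f"Components: Dx = {vx:.1f} km, Dy = {vy:.1f} km")
print(f"Displacement: {d:.1f} km at {theta:.1f} degrees from east")   # about 30.0 km, 38.5 degrees south of east
```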
3-4 Adding Vectors by Components Example 3-3: Three short trips. An airplane trip involves three legs, with two stopovers. The first leg is due east for 620 km; the second leg is southeast (45°) for 440 km; and the third leg is at 53° south of west, for 550 km, as shown. What is the plane’s total displacement?
3-5 Unit Vectors Unit vectors have magnitude 1. Using unit vectors, any vector can be written in terms of its components: V = Vx î + Vy ĵ + Vz k̂.
3-6 Vector Kinematics In two or three dimensions, the displacement is a vector: Δr = r2 − r1.
3-6 Vector Kinematics As Δt and Δr become smaller and smaller, the average velocity approaches the instantaneous velocity.
3-6 Vector Kinematics The instantaneous acceleration is in the direction of Δv = v2 − v1, and is given by: a = lim(Δt→0) Δv/Δt.
2-9 Graphical Analysis and Numerical Integration The total displacement of an object can be described as the area under the v-t curve: |
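A minimal sketch of this idea (the velocity profile below is invented for illustration): the displacement is the numerically integrated area under the v-t curve.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 501)          # seconds
v = 3.0 * (1.0 - np.exp(-t / 2.0))       # hypothetical velocity approaching 3 m/s

displacement = np.trapz(v, t)            # area under the v-t curve
print(f"Displacement over {t[-1]:.0f} s is about {displacement:.2f} m")
```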
"Math Salamanders Free Math Sheets"
Welcome to the Math Salamanders 2nd Grade Fraction Math Worksheets.
Here you will find some printable worksheets to help your child understand what a half is. This will help your child sequence numbers involving halves like '2 and a half', as well as support their understanding of shading 'half a shape' or working out 'half of 12'.
At Second Grade, children love to explore Math with fun Math activities and games.
Children will enjoy completing these printable Second Grade Math games and worksheets whilst learning at the same time.
During Second Grade, the Math work extends to place value up to 1000. Children practice counting in ones, tens and hundreds from different starting points. They build on their understanding of addition and subtraction and develop quick recall of basic addition and subtraction facts. Their written methods in addition and subtraction extend to using 3 digits.
At 2nd Grade, children also learn to solve simple addition and subtraction problems and work out the answers mentally or on paper. Children are also introduced to multiplication and division at this stage, as well as the fraction 'half'. They learn their multiplication table up to 5x5.
The free printable Second Grade Math Worksheets, Games and other free Grade 2 Math sheets will help your child to achieve their Elementary Math benchmark set out by Achieve, Inc.
In the UK, 2nd Grade is equivalent to Year 3.
Fractions is an area that a lot of kids find hard to understand. One of the problems with fractions is that they have different meanings depending on the context.
For example, you can eat half an apple or cut an orange into fourths. But you can also put the number one-half on a number line, measure a line 3 1/2 cm long, or find a fourth of 24.
Once the concept of fractions as numbers and fractions as parts of a whole has been understood, your child will be well on their way to developing a sound understanding of fractions.
Fraction Math Worksheets - Understanding a half
Here you will find a selection of Fraction worksheets designed to help your child understand what a half is, both as a number and as an operator. The sheets are graded so that the easier ones are at the top.
Using these sheets will help your child to:
All the free 2nd Grade Fraction sheets in this section support the Elementary Math Benchmarks for Second Grade.
Finding a half on a number line
Sequencing a Half
Halving numbers to 20
Riddles are a great way to get children to apply their knowledge of fractions.
These riddles are a good way to start off a maths lesson, or also to use as a way of checking your child's understanding about fractions.
All the riddles consist of 3 or 4 clues and a selection of 6 or 8 possible answers. Children have to read the clues and work out which is the correct answer.
The riddles can also be used as a template for the children to write their own clues for a partner to guess.
Are you looking for free fraction help or fraction support?
Here you will find a range of fraction help on a variety of fraction topics, from simplest form to converting fractions.
There are fraction videos, worked examples and practice fraction worksheets.
Halving and Doubling Online Practice
If you need to practice your halving and doubling, then why not try out our NEW Online Halving and Doubling Online Practice Zone.
You can select the numbers you want to practice with, and print out your results when you have finished.
You can also use the practice zone for benchmarking your performance, or using it with a group of children to gauge progress.
The Math Salamanders hope you enjoy using these free printable Math worksheets and all our other Math games and resources.
During the age of sailing ships, vessels were at the complete mercy of the sea, and a captain’s job was mitigating the ocean’s rage and harnessing its winds as best as possible. Travel times were a matter of chance as much as the skill of the crew, and the best that could be expected was a safe voyage.
During the age of steam ships, which had its heyday in the second half of the 19th century, vessels sought to conquer the elements, following more direct routes, even if that meant heading against the winds. Captains were expected to follow strict schedules, whether they oversaw a navy vessel seeking to rendezvous with a fleet, or a commercial ship carrying passengers or freight. Shipping companies that could hew close to schedule were rewarded with custom, while those who were habitually tardy suffered. During this era, it became a priority of navies and shipping firms to have a precise understanding of ocean winds and currents, with their great seasonal variations, so that they could properly plan their schedules and anticipate their fuel needs, etc.
Moreover, the mid- and late 19th century saw the rise of the science of meteorology, whereby people used empirical data to predict the weather. A critical factor in this pursuit was a stellar understanding of ocean winds and currents. Weather prediction promised all manner of economic, military and social advantages, and the matter was of great import to authorities and commercial concerns worldwide.
While crude, though not necessarily unhelpful, attempts to map ocean winds and currents were made during the 1700s, it was not until the middle of the next century that these efforts assumed a scientific form, predicated upon the analysis and consolidation of data gathered at sea from shipping logs, leading to their sophisticated graphic representation upon sea charts.
A key advancement in maritime anemology (the study of winds) was the invention of the wind rose, in 1840, by the Birmingham glass manufacturer and amateur scientist Abraham Follett Osler. This circular diagram, placed at set locations on sea charts, featured arrows or lines emanating out of it in various directions, with their lengths corresponding to the strength of the prevailing winds from said directions.
Another great leap forward in mapping ocean winds came with the publication of the charts of Matthew Fontaine Maury, the chief of the Depot of Charts and Instruments of the U.S. Navy. From 1847 to 1860, he produced charts of the waters and routes frequented by American vessels that featured wind roses at regular locations, with details predicated upon data gathered from hundreds of naval and merchant mariner shipping logs.
Maury’s stellar work was continued by several other great scientist-mariners who took the study of maritime anemology to whole new levels.
Enter Admiral de Chabannes: Charting the Atlantic Winds off South America
The creator of the present atlas, Viscount Octave Pierre Antoine Henri de Chabannes-Curton (1803 – 1889), was a prominent French naval officer, colonial administrator, politician and scientist. Born in Paris of noble stock, he graduated from the elite École Polytechnique before joining the French Navy. For a time from 1831, he served as an officer aboard the royal yacht Reine Amélie, whereupon he gained the favourable attention of King Louis-Philippe and several leading political figures, which did much to advance his career.
Then Frigate Captain Chabannes commenced his critical experience with South America upon serving as the interim Governor of French Guiana (1851-2). From 1853-4, as the commander of the Charlemagne, he served as one of the leading Allied naval figures during the Crimean War, before serving as the Commander-in-Chief of the Algeria Squadron of the French Navy.
From 1858 to 1863, Chabannes fulfilled an important assignment as the Commander-in-Chief of the Brazil Squadron of the French Navy, headquartered in Rio de Janeiro (France and Brazil were then close allies). In this capacity he was one of the architects of Napoleon III’s ambitious designs to meddle in the political, commercial and military affairs of several South American countries. It was also during this time that Chabannes had the opportunity to gather the immense amount of raw data that led the creation of the present atlas.
Chabannes was promoted to the rank of Vice-Admiral in 1861, and in 1863 he was made the commander of the important naval base of Cherbourg, and the following year given the same role in Toulon. He was made a senator in 1867 and retired from active naval service in 1868. He left public life upon the downfall of the Second Empire in 1870.
The Present Atlas in Focus
During the mid-19th century, South America was of the utmost importance to France. First, it owned a piece of the continent in the form of French Guiana. Second, France was one of the largest trading partners, foreign investors and creditors of many key nations, namely Brazil, Argentina and Uruguay. Third, France had long extensively interfered in the military and political affairs of these countries. This included direct involvement in such momentous events as the Uruguayan Civil War, or Guerra Grande (1839–1851), and the Anglo-French Blockade of the Río de la Plata (1845-50). During the rule of Napoleon III (reigned 1852-70), France, as a close ally of Brazil, ramped up its involvement in South America, and as such, aiding shipping across the Atlantic from Europe to ports such as Rio de Janeiro, Montevideo and Buenos Aires, etc., was critical.
In the late 1850s, the French Navy held as a priority the creation of a scientific wind atlas of the South Atlantic covering the waters of Brazil, Uruguay and Argentina, from the mouth of the Amazon, in the north, down to Buenos Aires, in the south (from 1° South down to 36° South). Notably, Matthew Fontaine Maury had already published masterly large format wind charts of the same seas, being a Pilot Chart of the South Atlantic (Washington, D.C., 1850), composed of a series of very sophisticated wind roses, and his Wind and Current Chart of the South Atlantic (Washington, D.C. 1853), a colossal 4-sheet masterpiece, being a map that shows the tracks of voyages that recorded data, along with wind roses. However, these works were incredibly complex, both visually and intellectually, and did not lend themselves to easy practical use at sea. What was desired was a scientifically precise, but easily accessible work that charted South Atlantic wind patterns.
Admiral de Chabannes, while serving as the Commander-in-Chief of France’s Brazil Squadron filled the void. In 1858, upon his arrival in Brazil, he submitted a proposal to the French Navy Minister, requesting official support to create the present atlas. His plan was approved and for the next three years Chabannes worked feverishly to obtain the best possible data on the wind currents in the Atlantic off the east coast of South America as they were at all times of the year. The French government sent him information from the hundreds of logbooks in their possession, while the Brazilians furnished him with access to their excellent collection of logs. He also acquired information from innumerable merchant and naval vessels of all flags.
Chabannes stated that he owed a great debt to Maury especially, as well as to other anemological mapmakers. His objective was that his charts “will result in providing navigators, as regards the winds, important information which the longest personal experience could not make known to them and which it is possible to obtain only by means of observations collected in thousands of logbooks”. Amazingly, in composing the present atlas, Chabannes used 84,000 data points gleaned from recent and historical ships’ logs, while adding another 27,000 data points from his own contemporary investigations, bringing the total to 111,000 data points!
Chabannes’s resulting atlas was published in 1861, in Paris, by the Dépôt général de la marine, in a grand folio format. It covered the Atlantic waters extending well off the coast of South America from 1° South (near Pará (Belém), by the mouth of the Amazon) down all along the length of Brazil and Uruguay to just past Buenos Aires, at 36° South, thus embracing all the great Atlantic trading ports of the continent.
After the introductory text, the map features 50 full-page plates (49 maps, 1 with a trio of diagrams). The first map, a key chart, embraces the entire area covered, and has it divided into 4 sections, or feuilles (sheets), which are as follows, Première feuille (1° to 11° South); Deuxième feuille (11° to 21° South); Troisième feuille (21° to 30° South); and Quatrième feuille (30° to 36° South).
Following the key chart are 48 maps, individually providing a view of each feuille for each month of the year. The maritime areas of each chart are divided into quadrants, within each of which is a wind rose that shows the prevailing direction and windspeeds at each location at said time. While quite accurate, anchored in an unprecedented amount of scientific data, the wind roses are intentionally quite simple in design (certainly compared to those of Maury) in order to ensure that they would be easy to understand by skippers at sea. The maps label key ports and other seminal features but are otherwise sparing of detail to allow users to clearly plot courses in manuscript as they might see fit.
The final plate [no. 50], features a trio of charts with wind roses for each month of the year for the roadsteads of the great ports of Rio de Janeiro, Montevideo and Buenos Aires.
Chabannes expressed that he had considerable “confidence” in the atlas, as “I must say that there are none of the same extent based on such a large number of observations”. Indeed, the work was a great leap forward in maritime anemology, with its charts being more accurate and easier to comprehend than its predecessors, such that it would have been of tremendous value to mariners navigating to and from Brazil, Uruguay and Argentina.
Chabannes’s atlas would have been a fine companion to the Portuguese Admiral João Carlos de Brito Capello’s Ventos e Correntes do Golpho de Guiné (Lisbon, 1861), a series of four seasonal maps of the notoriously treacherous waters off West Africa (there was then much maritime commerce between Brazil and West Africa).
The present atlas had a profound influence upon the work of Chabannes’s younger colleague, the French naval officer, Commander Louis-Désiré-Léon Brault (1839 – 1885). He published a series of sets of 12 charts each showing the monthly wind profiles in every sector of the oceans, being Atlantique nord: cartes de la direction et de l’intensité probables des vents (1874); and relating closely to the present work, Atlantique sud: cartes de la direction et de l’intensité probables des vents (1876); Mer des Indes: cartes de la direction et de l’intensité probables des vents (1880); and Océan Pacifique: cartes de la direction et de l’intensité probables des vents (1880).
Provenance: Prosper de Chasseloup-Laubat – Arch-Proponent of French Foreign Adventurism
The present example of Chabannes’s atlas comes from the library of Prosper de Chasseloup-Laubat, Viscount (later the 4th Marquis) of Chasseloup-Laubat (1805-73), a political heavyweight who was one of the leading proponents of French colonialism and robust engagement in South America. One of Louis-Napoléon Bonaparte’s (later Emperor Napoleon III) most trusted associates, he served as the French Navy Minister (1851) and the Minister for Algeria & the Colonies (1859-60), before assuming the ultra-powerful combined portfolio of Minister of the Navy & Colonies (1860-7), whereupon he oversaw French colonial expansion in Africa and the French takeover of Southern Vietnam. Chasseloup-Laubat was a bibliophile and highly carto-literate, so there is little doubt that he would have shown considerable interest in the present atlas, especially as he was the dedicatee of the work (as noted in the title) and the final sponsor of its publication.
Befitting its unique status, the present example of the atlas was finely bound in quarter dark green calf with pebbled cloth with elaborate blind-stamped deigns and gilt title, whereas the other examples of which we are aware have far more modest bindings.
A Note on Rarity
Chabannes’s atlas is very rare; it would have been published in only a small print run for select specialist use. We can trace only 5 institutional examples, held by the Bibliothèque nationale de France; Defensiebibliotheken (The Hague); Bibliotheek Universiteit van Amsterdam; Virginia Military Institute (Preston Library); and the National Library of Sweden. Moreover, we can trace only a single example as having appeared on the market (being a very flawed copy).
References: Bibliothèque nationale de France: GE SH 19 PF 1 TER DIV 8 P 4 (2) D; OCLC: 69383766, 923502324; Annales hydrographiques: Recueil d’avis, instructions, documents et mémoires relatifs à l’hydrographie et à la navigation, tom. 19 (Paris, 1861), pp. 474-7; Dépot des cartes et plans de la marine n°408. Catalogue chronologique des cartes, plans, vues de côtes, mémoires, instructions nautiques, etc. (Paris, 1865), p. 185; Mittheilungen aus Justus Perthes’ Geographischer Anstalt, Band 7 (Gotha, 1861), p. 406. |
In digital systems, timing is one of the most important factors. The reliability and accuracy of digital communications are based on the quality of its timing. However, in the real world, nothing is ever ideal. Below are some common terms and ways to better understand the timing of your particular digital signal.
Jitter is the deviation from the ideal timing of an event to the actual timing of an event. To understand what this means, imagine you are sending a digital sine wave and plotting it on graph paper. Each square corresponds to a clock pulse; because the vertical lines are equidistant, you end up with a perfectly periodic clock signal. At each clock pulse, you receive three bits and plot that point on your graph paper. Because of the periodic nature, it ends up as a nice sine wave.
Figure 4: A sample clock that is periodic allows a digital system to communicate correctly and accurately.
Now, imagine that those lines aren’t equidistant apart. This would make your clock signal less periodic. When you plot your data, it isn’t at the same intervals and, thus, doesn’t look correct.
Figure 5: If a clock signal has jitter, it results in distortion of the digital waveform.
In Figure 5, you can see that the distance between the transitions in the clock signal is uneven; this is jitter in the clock. Although the above figure has an exaggerated amount of jitter, it does show how a jittery clock can cause samples to be triggered at uneven intervals. This unevenness introduces distortion into the waveform you are trying to record and reproduce.
Now look at jitter in terms of a digital signal with only 1s and 0s. Remember, jitter is the deviation from the ideal timing of an event to the actual timing of an event. Taking a look at a single pulse, jitter is the deviation in edge timing from the actual signal to the ideal positions in time.
Figure 6: Jitter of a single pulse is the deviation in edge timing.
Jitter is typically measured from the zero-crossing of a reference signal. It commonly comes from cross-talk, simultaneous switching outputs, and other regularly occurring interference signals. Jitter varies over time, so its measurement and quantification can range from a visual estimate of the jitter (in seconds) on an oscilloscope to a statistical measure such as the standard deviation over time.
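As a rough illustration, here is a minimal sketch (synthetic edge times, not measured data) that quantifies jitter both as a peak-to-peak value and as the standard deviation of the edge-timing deviations from their ideal, evenly spaced positions:

```python
import numpy as np

period = 10e-9                            # ideal clock period: 10 ns (assumed)
n_edges = 1000
ideal_edges = np.arange(n_edges) * period

rng = np.random.default_rng(0)
actual_edges = ideal_edges + rng.normal(0.0, 50e-12, n_edges)   # simulated 50 ps random jitter

deviation = actual_edges - ideal_edges
print(f"Peak-to-peak jitter: {np.ptp(deviation) * 1e12:.1f} ps")
print(f"RMS (std dev) jitter: {deviation.std() * 1e12:.1f} ps")
```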
Another common timing issue is drift. Clock drift occurs when the transmitter’s clock period is slightly different from that of the receiver. At first, it may not make much of a difference. However, over time, the difference between the two clock signals may become noticeable and cause loss of synchronization and other errors.
Rise Time, Fall Time, and Aberrations
Even with drift, in theory, when a digital signal goes from a 0 to a 1, it would happen instantaneously. However, in reality, it takes time for a signal to change between high and low levels. Rise time (trise) is the time it takes a signal to rise from 20 percent to 80 percent of the voltage between the low level and high level. Fall time (tfall) is the time it takes a signal to fall from 80 percent to 20 percent of the voltage between the low level and high level.
Figure 7: Rise time and fall time indicate the length of time a signal takes to change voltage between the low level and high level.
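The following minimal sketch (a synthetic edge, not from the text) measures rise time with the 20 percent to 80 percent convention described above:

```python
import numpy as np

t = np.linspace(0, 20e-9, 2001)                    # 20 ns window
v_low, v_high = 0.0, 3.3                           # assumed logic levels in volts
v = v_low + (v_high - v_low) / (1 + np.exp(-(t - 10e-9) / 1e-9))   # smooth rising edge

v20 = v_low + 0.2 * (v_high - v_low)
v80 = v_low + 0.8 * (v_high - v_low)

t20 = t[np.argmax(v >= v20)]                       # first crossing of the 20% level
t80 = t[np.argmax(v >= v80)]                       # first crossing of the 80% level
print(f"Rise time (20%-80%): {(t80 - t20) * 1e9:.2f} ns")
```

Fall time can be measured the same way on a falling edge, looking for the first crossings of the 80 percent and 20 percent levels in that order.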
In addition, in the real world, a signal rarely hits a voltage level and stays there in a clean fashion. When a signal actually exceeds the voltage level following an edge, the peak distortion is called overshoot. If the signal exceeds the voltage level preceding an edge, the peak distortion is called preshoot. In between edges, if the signal drifts short of the voltage level it is called undershoot.
Figure 8: Overshoot, preshoot, and undershoot are collectively called aberrations.
Together, overshoot, preshoot, and undershoot are called aberrations. Aberrations can result from board layout problems, improper termination, or quality problems in the semiconductor devices themselves.
After a digital signal has reached a voltage level, it bounces a little and then settles to a more constant voltage. The settling time (ts) is the time required for an amplifier, relay, or other circuit to reach a stable mode of operation. In the context of digital signal acquisition, the settling time for full-scale step is the amount of time required for a signal to reach a certain accuracy and stay within that range.
Figure 9: Settling time is the amount of time for a signal to reach a certain accuracy and stay within that range.
Hysteresis refers to the difference in voltage levels between the detection of a transition from logic low to logic high, and the transition from logic high to logic low. It can be calculated by subtracting the input high voltage from the input low voltage.
Figure 10: Hysteresis is the difference in voltage levels between the detection of a transition from one logic value to another.
Hysteresis is a useful property for digital devices, because it naturally provides some amount of immunity to high-frequency noise in your digital system. This noise, often caused by reflections from the high-edge rates of logic level transitions, could cause the digital device to make false transition detections if only a single voltage threshold determined a change in logic state. You can see this in Figure 11. The first sample is acquired as a logic low level. The second sample is also a logic low level because the signal has not yet crossed the high-level threshold. The third and fourth samples are logic high levels, and the fifth is a logic low level.
Figure 11: Hysteresis provides an amount of immunity to high-frequency noise in your digital system.
For devices with fixed voltage thresholds, the noise immunity margin (NIM) and hysteresis of your system are determined by your choice of system components. Both system NIM and hysteresis give your system levels of noise immunity, but for a specific logic family, there is always a trade-off between these two—the larger the hysteresis, the smaller the NIM, and vice versa. To determine how to set your voltage thresholds, you should carefully examine the signal quality in your system to determine whether you need more noise immunity from your high and low logic levels (greater NIM) or need more noise immunity on your logic level transitions (greater hysteresis).
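The following minimal sketch (threshold values and samples are made up, not from the text) shows how two thresholds give this immunity: the detected logic state only toggles when the input fully crosses the opposite threshold, so small noise around a single level cannot cause false transitions.

```python
V_IH = 2.0   # input high threshold, volts (assumed)
V_IL = 0.8   # input low threshold, volts (assumed)
# Hysteresis = V_IH - V_IL = 1.2 V

def digitize(samples, initial_state=0):
    """Convert analog samples to logic states using two thresholds (hysteresis)."""
    state = initial_state
    out = []
    for v in samples:
        if state == 0 and v >= V_IH:      # only go high once the high threshold is crossed
            state = 1
        elif state == 1 and v <= V_IL:    # only go low once the low threshold is crossed
            state = 0
        out.append(state)
    return out

noisy = [0.1, 0.9, 1.5, 2.2, 2.4, 1.9, 1.0, 0.7, 0.3]
print(digitize(noisy))   # -> [0, 0, 0, 1, 1, 1, 1, 0, 0]
```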
Skew is when the clock signal arrives at different components at different times. Unlike drift, the clock signals have the same period; they just arrive at different times. This can be caused by a variety of factors including wire length, temperature variation, or differences in input capacitance. Channel-to-channel skew generally refers to the skew across all data channels on a device. When each sample is acquired, the point in time at which each data channel is sampled with respect to every other data channel is not identical, but the difference is within some small window of time called the channel-to-channel skew.
Figure 12: Channel-to-channel skew generally refers to the skew across all data channels on a device.
An eye diagram is a timing analysis tool that provides you with a good visual of timing and level errors. In real life, errors, like jitter, are difficult to quantify because they change so often and are so small. Therefore, an eye diagram is an excellent tool for finding the maximum jitter as well as measuring aberrations, rise times, fall times, and other errors. As these errors increase, the white space in the center of the eye diagram decreases.
An eye diagram is created by overlaying sweeps of different segments of a digital signal. It should contain every possible bit sequence, from simple high-to-low transitions to isolated transitions after long runs of identical bits. When overlapped, the traces resemble an eye. Eye diagrams are a visual way to understand the signal integrity of a design. Keep in mind that an eye diagram shows parametric information about a signal; it does not detect logic or protocol problems, such as a device sending a low when it is supposed to transmit a high.
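A minimal sketch of the overlay idea: fold every sample of a captured waveform onto a single bit period, so that all unit intervals lie on top of one another; scatter-plotting the result produces the eye. The function name, and the assumption that the capture starts aligned to a bit boundary, are illustrative.

```python
import numpy as np

def fold_into_eye(samples, sample_rate, data_rate):
    """Fold a sampled waveform onto one unit interval (UI).
    Returns (phase, volts): the position of each sample within the bit
    period (0..1) and its voltage, ready to scatter-plot as an eye."""
    t = np.arange(len(samples)) / sample_rate   # sample timestamps
    ui = 1.0 / data_rate                        # bit period (unit interval)
    phase = (t % ui) / ui                       # fractional position within the UI
    return phase, np.asarray(samples, dtype=float)
```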
Figure 13 shows common terminology of an eye diagram.
A. High level, also called the one level, is the mean value of a logic high. The calculated value of a high level comes from the mean value of all the data samples captured in the middle 20 percent of the eye period.
B. Low level, also called the zero level, is the mean value of a logic low. This level is calculated over the same region as the high level.
C. Amplitude of the eye diagram is the difference between the high and low levels.
D. Bit period, also referred to as the unit interval (UI), is a measure of the horizontal opening of an eye diagram at the crossing points of the eye. It is the inverse of the data rate. When creating eye diagrams, using the bit period on the horizontal axis instead of time lets you easily compare diagrams with different data rates.
E. Eye height is the vertical opening of an eye diagram. Ideally, this would equal the amplitude, but that rarely occurs in practice because of noise; the more noise in the system, the smaller the eye height. The eye height indicates the signal-to-noise ratio of the signal.
F. Eye width is the horizontal opening. It is calculated as the difference between the statistical means of the two crossing points of the eye.
G. Eye crossing percentage reveals duty cycle distortion or pulse symmetry problems. An ideal signal crosses at 50 percent; as the percentage deviates from 50, the eye closes, indicating degradation of the signal.
Figure 13: The image shows the high level (A), low level (B), amplitude (C), bit period (D), eye height (E), eye width (F), and eye crossing percentage (G) on an eye diagram.
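Under the conventions described above (high and low levels taken from the middle 20 percent of the bit period), the sketch below computes several of the listed quantities from samples that have been folded onto one unit interval, for example with the `fold_into_eye` helper sketched earlier. The 3-sigma eye-height convention and the window used near the crossing points are common choices, assumed here rather than taken from the text.

```python
import numpy as np

def eye_metrics(phase, volts):
    """Rough eye-diagram metrics from samples folded onto one bit period.
    phase: position of each sample within the unit interval (0..1)
    volts: sample voltages."""
    phase, volts = np.asarray(phase), np.asarray(volts)
    mid = (phase > 0.4) & (phase < 0.6)               # middle 20% of the eye period
    split = volts[mid].mean()                         # separate high and low samples
    highs = volts[mid & (volts > split)]
    lows = volts[mid & (volts <= split)]
    high_level, low_level = highs.mean(), lows.mean()                 # A and B
    amplitude = high_level - low_level                                # C
    eye_height = (high_level - 3 * highs.std()) - (low_level + 3 * lows.std())  # E
    crossing_level = volts[(phase < 0.05) | (phase > 0.95)].mean()    # near crossings
    crossing_pct = 100 * (crossing_level - low_level) / amplitude     # G
    return high_level, low_level, amplitude, eye_height, crossing_pct
```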
Figure 14 shows additional measurements on an actual eye diagram.
A. Rise time in the diagram is the mean of the individual rise times. The slope indicates sensitivity to timing error; the smaller the better.
B. Fall time in the diagram is the mean of the individual fall times. The slope indicates sensitivity to timing error; the smaller the better.
C. The thickness of the logic-high band indicates the amount of distortion (noise) in the signal.
D. The signal-to-noise ratio at the sampling point is the ratio of the eye opening (eye height) to the width of the logic-level voltage bands.
E. Jitter of the signal, visible as the horizontal spread at the crossing points.
F. The most open part of the eye is where the signal-to-noise ratio is best, and is therefore the best time to sample.
Figure 14: The image shows the rise time (A), fall time (B), distortion (C), signal-to-noise ratio (D), jitter (E), and best time to sample (F) on an eye diagram. |
Ice drilling allows scientists studying glaciers and ice sheets to gain access to what is beneath the ice, to take measurements along the interior of the ice, and to retrieve samples. Instruments can be placed in the drilled holes to record temperature, pressure, speed, direction of movement, and for other scientific research, such as neutrino detection.
Many different methods have been used since 1840, when the first scientific ice drilling expedition attempted to drill through the Unteraargletscher in the Alps. Two early methods were percussion, in which the ice is fractured and pulverized, and rotary drilling, a method often used in mineral exploration for rock drilling. In the 1940s, thermal drills began to be used; these drills melt the ice by heating the drill. Drills that use jets of hot water or steam to bore through ice soon followed. A growing interest in ice cores, used for palaeoclimatological research, led to ice coring drills being developed in the 1950s and 1960s, and there are now many different coring drills in use. For obtaining ice cores from deep holes, most investigators use cable-suspended electromechanical drills, which use an armoured cable to carry electrical power to a mechanical drill at the bottom of the borehole.
In 1966, a US team successfully drilled through the Greenland ice sheet at Camp Century, at a depth of 1,387 metres (4,551 ft). Since then many other groups have succeeded in reaching bedrock through the two largest ice sheets, in Greenland and Antarctica. Recent projects have focused on finding drilling locations that will give scientists access to very old undisturbed ice at the bottom of the borehole, since an undisturbed stratigraphic sequence is required to accurately date the information obtained from the ice.
The first scientific ice drilling expeditions, led by Louis Agassiz from 1840 to 1842, had three goals: to prove that glaciers flowed, to measure the internal temperature of a glacier at different depths, and to measure the thickness of a glacier. Proof of glacier motion was achieved by placing stakes in holes drilled in a glacier and tracking their motion from the surrounding mountain. Drilling through glaciers to determine their thickness, and to test theories of glacier motion and structure, continued to be of interest for some time, but glacier thickness has been measured by seismographic techniques since the 1920s. Although it is no longer necessary to drill through a glacier to determine its thickness, scientists still drill shot holes in ice for these seismic studies. Temperature measurements continue to this day: modelling the behaviour of glaciers requires an understanding of their internal temperature, and in ice sheets, the borehole temperature at different depths can provide information about past climates. Other instruments may be lowered into the borehole, such as piezometers, to measure pressure within the ice, or cameras, to allow a visual review of the stratigraphy. IceCube, a large astrophysical project, required numerous optical sensors to be placed in holes 2.5 km deep, drilled at the South Pole.
Borehole inclination, and the change in inclination over time, can be measured in a cased hole, a hole in which a hollow pipe has been placed as a "liner" to keep the hole open. This allows the three-dimensional position of the borehole to be mapped periodically, revealing the movement of the glacier, not only at the surface, but throughout its thickness. To understand whether a glacier is shrinking or growing, its mass balance must be measured: this is the net effect of gains from fresh snow, minus losses from melting and sublimation. A straightforward way to determine these effects across the surface of a glacier is to plant stakes (known as ablation stakes) in holes drilled in the glacier's surface, and monitor them over time to see if more snow is accumulating, burying the stake, or if more and more of the stake is visible as the snow around it disappears. The discovery of layers of aqueous water and of several hundred mapped subglacial lakes, beneath the Antarctic ice sheet, led to speculation about the existence of unique microbial environments that had been isolated from the rest of the biosphere, potentially for millions of years. These environments can be investigated by drilling.
Ice cores are one of the most important motivations for drilling in ice. Since ice cores retain environmental information about the time the ice in them fell as snow, they are useful in reconstructing past climates, and ice core analysis includes studies of isotopic composition, mechanical properties, dissolved impurities and dust, trapped atmospheric samples, and trace radionuclides. Data from ice cores can be used to determine past variations in solar activity, and is important in the construction of marine isotope stages, one of the key palaeoclimatic dating tools. Ice cores can also provide information about glacier flow and accumulation rates. IPICS (International Partnership in Ice Core Sciences) maintains a list of key goals for ice core research. Currently these are to obtain a 1.5 million year old core; obtain a complete record of the last interglacial period; use ice cores to assist with the understanding of climate change over long time scales; obtain a detailed spatial array of ice core climate data for the last 2,000 years; and continue the development of advanced ice core drilling technology.
The constraints on ice drill designs can be divided into the following broad categories.
The ice must be cut through, broken up, or melted. Tools can be directly pushed into snow and firn (snow that is compressed, but not yet turned to ice, which typically happens at a depth of 60 metres (200 ft) to 120 metres (390 ft)); this method is not effective in ice, but it is perfectly adequate for obtaining samples from the uppermost layers. For ice, two options are percussion drilling and rotary drilling. Percussion drilling uses a sharp tool such as a chisel, which strikes the ice to fracture and fragment it. More common are rotary cutting tools, which have a rotating blade or set of blades at the bottom of the borehole to cut away the ice. For small tools the rotation can be provided by hand, using a T-handle or a carpenter's brace. Some tools can also be set up to make use of ordinary household power drills, or they may include a motor to drive the rotation. If the torque is supplied from the surface, then the entire drill string must be rigid so that it can be rotated; but it is also possible to place a motor just above the bottom of the drill string, and have it supply power directly to the drill bit.
If the ice is to be melted instead of cut, then heat must be generated. An electrical heater built into the drill string can heat the ice directly, or it can heat the material it is embedded in, which in turn heats the ice. Heat can also be sent down the drill string; hot water or steam pumped down from the surface can be used to heat a metal drillhead, or the water or steam can be allowed to emerge from the drillhead and melt the ice directly. In at least one case a drilling project experimented with heating the drillhead on the surface, and then lowering it into the hole.
Many ice drilling locations are very difficult to access, and drills must be designed so that they can be transported to the drill site. The equipment should be as light and portable as possible. It is helpful if the equipment can be broken down so that the individual components can be carried separately, thus reducing the burden for hand-carrying, if required. Fuel, for steam or hot water drills, or for a generator to provide power, must also be transported, and this weight has to be taken into account as well.
Mechanical drilling produces pieces of ice, either as cuttings, or as granular fragments, which must be removed from the bottom of the hole to prevent them from interfering with the cutting or percussing action of the drill. An auger used as the cutting tool will naturally move ice cuttings up its helical flights. If the drill's action leaves the ice chips on top of the drill, they can be removed by simply raising the drill to the surface periodically. If not, they can be brought to the surface by lowering a tool to scoop them up, or the hole can be kept full of water, in which case the cuttings will naturally float to the top of the hole. If the chips are not removed, they must be compacted into the walls of the borehole, and into the core if a core is being retrieved.
Cuttings can also be moved to the surface by circulating compressed air through the hole, either by pumping the air through the drillpipe and out at the drillhead, forcing the chips up in the space between the drill string and the borehole wall, or by reverse air circulation, in which the air flows up through the drill string. Compressed air will be heated by the compression, and it must be cooled before being pumped downhole, or it will cause melting of the borehole walls and the core. If the air is circulated by creating a vacuum, rather than pumping air in, ambient air carries the cuttings, so no cooling is needed.
A fluid can be used to circulate the cuttings away from the bit, or the fluid may be able to dissolve the cuttings. Rotary mineral drilling (through rock) typically circulates fluid through the entire hole, and separates solids from the fluid at the surface before pumping the fluid back down. In deep ice drilling it is usual to circulate the fluid only at the bottom of the hole, collecting cuttings in a chamber that is part of the downhole assembly. For a coring drill, the cuttings chamber can be emptied each time the drill is brought to the surface to retrieve a core.
Thermal drills will produce water, so there are no cuttings to dispose of, but the drill must be capable of working while submerged in water, or else the drill must have a method of removing and storing the meltwater while drilling.
The drilling mechanism must be connected to the surface, and there must be a method of raising and lowering the drill. If the drill string consists of pipes or rods that have to be screwed together, or otherwise assembled, as the hole gets deeper and the drill string lengthens, then there must be a way to hold the drill string in place as each length of rod or pipe is added or removed. If the hole is only a few metres deep, no mechanical assistance may be necessary, but drill strings can get very heavy for deep holes, and a winch or other hoisting system must be in place that is capable of lifting and lowering it.
A "trip" in drilling refers to the task of pulling a drill string completely out of the hole (tripping out) and then reinserting it back into the hole (tripping in). Tripping time is the time taken to trip in and out of the hole; it is important for a drill design to minimize tripping time, particularly for coring drills, since they must complete a trip for each core.
The overburden pressure in a deep hole from the weight of the ice above will cause a borehole to slowly close up, unless something is done to counteract it, so deep holes are filled with a drilling fluid that is about the same density as the surrounding ice, such as jet fuel or kerosene. The fluid must have low viscosity to reduce tripping time. Since retrieval of each segment of core requires a trip, a slower speed of travel through the drilling fluid could add significant time to a project—a year or more for a deep hole. The fluid must contaminate the ice as little as possible; it must have low toxicity, for safety and to minimize the effect on the environment; it must be available at a reasonable cost; and it must be relatively easy to transport. The depth at which borehole closure prevents dry drilling is strongly dependent on the temperature of the ice; in a temperate glacier, the maximum depth might be 100 metres (330 ft), but in a very cold environment such as parts of East Antarctica, dry drilling to 1,000 metres (3,300 ft) might be possible.
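For a sense of the numbers involved, the sketch below estimates the overburden pressure that the drilling fluid must roughly balance (p = rho * g * h) and the cumulative tripping time for a deep coring project. The ice density, core length, and winch speed are illustrative assumptions, not figures from the text.

```python
G = 9.81                 # m/s^2
RHO_ICE = 917            # kg/m^3, typical density of glacier ice (assumed)

def overburden_pressure(depth_m, rho=RHO_ICE):
    """Pressure from the weight of the ice column: p = rho * g * h (pascals)."""
    return rho * G * depth_m

def total_trip_time(hole_depth_m, core_length_m=3.0, winch_speed_m_per_s=0.5):
    """Very rough cumulative tripping time for coring a hole:
    one round trip (down and back up) per core, at the current hole depth."""
    time_s, depth = 0.0, 0.0
    while depth < hole_depth_m:
        depth = min(depth + core_length_m, hole_depth_m)
        time_s += 2 * depth / winch_speed_m_per_s   # trip in + trip out
    return time_s

print(f"p at 3000 m: {overburden_pressure(3000) / 1e6:.1f} MPa")
print(f"trip time for a 3000 m hole: {total_trip_time(3000) / 86400:.1f} days")
```

Even under these assumptions, tripping alone adds up to weeks or months on a deep hole, which is why the speed of travel through the drilling fluid matters.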
Snow and firn are permeable to air, water, and drilling fluids, so any drilling method that requires liquid or compressed air in the hole needs to prevent them from escaping into the surface layers of snow and firn. If the fluid is only used in the lower part of the hole, permeability is not an issue. Alternatively the hole can be cased down past the point where the firn turns to ice. If water is used as a drilling fluid, in cold enough temperatures, it will turn to ice in the surrounding snow and firn and seal the hole.
Tools can be designed to be rotated by hand, via a brace or T-handle or hand-cranked gearing, or attached to a hand drill. Drills with powered rotation require an electric motor at the rig site, which generally needs a fuel-powered generator, though in at least one case a drilling project was set up near enough to a permanent research station to run a cable to the research building for power. The rotation can be applied at the surface, by a rotary table, using a kelly, or by a motor in the drillhead, for cable-suspended drills; in the latter case the cable must carry power to the drillhead as well as support its weight. For rotary drills, gearing is required to reduce the engine's rotation to a suitable speed for drilling.
If torque is supplied at the bottom of the hole, the motor supplying it to the drillbit beneath it will have a tendency to rotate around its own axis, rather than imparting the rotation to the drillbit. This is because the drillbit will have a strong resistance to rotation since it is cutting ice. To prevent this, an anti-torque mechanism of some kind must be provided, typically by giving the motor some grip against the walls of the borehole.
A thermal drill that uses electricity to heat the drill head so that it melts the ice must bring power down the hole, just as with rotary drills. If the drillhead is heated by pumping water or steam down to the bottom of the hole, then no downhole power is needed, but a pump at the surface is required for hot water. The water or steam can be heated at the surface by a fuel-powered boiler. Solar power can also be used.
Some drills which are designed to rest on their tip as they drill will lean to one side in the borehole, and the hole they drill will gradually drift towards the horizontal unless some method of counteracting this tendency is provided. For other drills, directional control can be useful in starting additional holes at depth, for example to retrieve additional ice cores.
Many glaciers are temperate, meaning that they contain "warm ice": ice that is at melting temperature (0 °C) throughout. Meltwater in boreholes in warm ice will not refreeze, but for colder ice, meltwater is likely to cause a problem, and may freeze the drill in place, so thermal drills that operate submerged in the meltwater they produce, and any drilling method that results in water in the borehole, are difficult to use in such conditions. Drilling fluids, or antifreeze additives to meltwater, must be chosen to keep the fluid liquid at the temperatures found in the borehole. In warm ice, ice tends to form on cutters and the drillhead, and to pack into spaces at the bottom of the hole, slowing down drilling.
To retrieve a core, an annulus of ice must be removed from around the cylindrical core. The core should be unbroken, which means that vibrations and mechanical shocks must be kept to a minimum, and changes in temperature which could cause thermal shock to the core must also be avoided. The core must be kept from melting caused by heat generated either mechanically from the drilling process, from the heat of compressed air if air is used as the drilling fluid, or from a thermal drill, and must not be contaminated by the drilling fluid. When the core is about to be retrieved, it is still connected to the ice beneath it, so some method of breaking it at the lower end must be provided, and of gripping it so it does not fall from the core barrel as it is brought to the surface, which must be done as quickly and safely as possible.
Most coring drills are designed to retrieve cores that are no longer than 6 metres (20 ft), so drilling must stop each time the hole depth is extended by that amount, so that the core can be retrieved. A drill string that must be assembled and disassembled in segments, such as pipe sections that must be screwed together, takes a long time to trip in and out; a cable which can be continuously winched up, or a drill string that is flexible enough to be coiled, significantly reduces tripping time. Wireline drills have a mechanism that allows the core barrel to be detached from the drill head and winched directly to the surface without having to trip out the drill string. Once the core is removed, the core barrel is lowered to the bottom of the hole and reattached to the drill.
Over a depth range known as the brittle ice zone, bubbles of air are trapped in the ice under great pressure. When a core is brought to the surface, the bubbles can exert a stress that exceeds the tensile strength of the ice, resulting in cracks and spall. At greater depths, the ice crystal structure changes from hexagonal to cubic, and the air molecules move inside the crystals, in a structure called a clathrate. The bubbles disappear, and the ice becomes stable again.
The brittle ice zone typically returns poorer quality samples than the rest of the core. Some steps can be taken to alleviate the problem. Liners can be placed inside the drill barrel to enclose the core before it is brought to the surface, but this makes it difficult to clean off the drilling fluid. In mineral drilling, special machinery can bring core samples to the surface at bottom-hole pressure, but this is too expensive for the inaccessible locations of most ice drilling sites. Keeping the processing facilities at very low temperatures limits thermal shocks. Cores are most brittle at the surface, so another approach is to break them into 1 m lengths while still in the hole. Extruding the core from the drill barrel into a net helps keep it together if it shatters. Brittle cores are also often allowed to rest in storage at the drill site for some time, up to a full year between drilling seasons, to let the ice gradually relax. Core quality in the brittle ice zone is much improved when a drilling fluid is used, as opposed to dry-hole drilling.
A percussion drill penetrates ice by repeatedly striking it to fracture and fragment it. The cutting tool is mounted at the bottom of the drill string (typically connected metal rods[note 1]), and some means of giving it kinetic energy must be provided. The usual arrangement, known as a cable-tool rig, is a tripod or other supporting scaffold erected over the hole, with a pulley that allows the tool to be repeatedly raised, by rope or cable, and dropped; the drill string can be raised by manual labour or, for mechanical drilling, by a motor that lifts it and lets it fall. An alternative is to leave the drill string resting at the bottom of the borehole and repeatedly drop a hammer weight onto it to provide the necessary impetus. The pulverized ice collects at the bottom of the borehole and must be removed. It can be collected with a tool capable of scooping it from the bottom of the hole, or the hole can be kept full of water so that the ice floats to the top, though this retards the momentum of the drill striking the ice, reducing its effectiveness.
The earliest scientific ice drilling expedition used percussion drilling; Louis Agassiz used iron rods to drill holes in the Unteraargletscher, in the Alps, in the summer of 1840. Cable-tool rigs have been used for ice drilling in more recent times; Soviet expeditions in the 1960s drilled with cable-tool rigs in the Caucasus and the Tien Shan range, and US projects have drilled on the Blue Glacier in Washington between 1969 and 1976, and on the Black Rapids Glacier in Alaska in 2002.
Two other percussion methods have been tried. Pneumatic drills have been used to drill shallow holes in ice in order to set blast charges, and rotary percussion drills, a type of drilling tool once in common use in the mining industry, have also been used for drilling blasting holes, but neither approach has been used for scientific investigations of ice. Percussion drilling is now rarely used for scientific ice drilling, having been overtaken by more effective techniques for both ice and mineral drilling.
A soil sampling auger contains a pair of blades at the bottom of an enclosed cylinder; it can be driven and rotated by hand to pick up soft soil. A similar design, called a spoon-borer, has been used for ice drilling, though it is not effective in hard ice. A version used by Erich von Drygalski in 1902 had two half-moon cutting blades set into the base of the cylinder in such a way as to allow the ice cuttings to accumulate in the cylinder, above the blades.[note 2]
Augers have long been used for drilling through ice for ice fishing. Augers can be rotated by hand, using a mechanism such as a T handle or a brace bit, or by attaching them to powered hand drills. Scientific uses for non-coring augers include sensor installation and determining ice thickness. Augers have a helical screw blade around the main drilling axis; this blade, called the "flighting", carries the ice cuttings up from the bottom of the hole. For drilling deeper holes, extensions can be added to the auger, but as the auger gets longer it becomes more difficult to rotate. With a platform such as a stepladder, a longer auger can be rotated from higher off the ground.
Commercially available ice augers for winter fishing, powered by petrol, propane, or battery power, are available for hole diameters from 4.5 in to 10 in. For holes deeper than 2 m a tripod can be used to winch the auger from the hole. A folding brace handle with an offset design is common; this allows both hands to contribute to the torque.
Augers that are capable of retrieving ice cores are similar to noncoring augers, except that the flights are set around a hollow core barrel. Augers have been devised that consist of the helical cutting blades and a space for a core, without the central supporting cylinder, but they are difficult to make sufficiently rigid. Coring augers typically produce cores with diameters in the range 75–100 mm, and with lengths up to 1 m. Coring augers were originally designed to be manually rotated, but over time they have been adapted for use with handheld drills or small engines.
As with noncoring augers, extensions can be added to drill deeper. Drilling deeper than 6 m requires more than one person because of the weight of the drill string. A clamp placed at the surface is useful for supporting the string, and a tripod and block and tackle can also be used for support and to increase the weight of string that can be handled. As the drill string gets longer, it takes more time to complete a trip to extract a core, since each extension rod must be separated from the drill string when tripping out, and re-attached when tripping in.
Drilling with a tripod or other method of handling a long drill string considerably extends the depth limit for the use of a coring auger. The deepest hole drilled by hand with an auger was 55 m, in the Ward Hunt Ice Shelf on Ellesmere Island, in 1960. Usually a hole deeper than 30 m will be drilled with other methods, because of the weight of the drill string and the long trip time required.
Modern coring augers have changed little in decades: an ice coring auger patented in the US in 1932 closely resembles coring augers in use eighty years later. The US military's Frost Effects Laboratory (FEL) developed an ice mechanics testing kit that included a coring auger in the late 1940s; the Snow, Ice and Permafrost Research Establishment (SIPRE), a successor organization, refined the design in the early 1950s, and the resulting auger, known as the SIPRE auger, is still in wide use. It was modified slightly by the Cold Regions Research and Engineering Laboratory (CRREL), another successor organization, in the 1960s, and is sometimes known as the CRREL auger for that reason. An auger developed in the 1970s by the Polar Ice Core Office (PICO), then based in Lincoln, Nebraska, is also still widely used. A coring auger designed at the University of Copenhagen in the 1980s was used for the first time at Camp Century, and since then has been frequently used in Greenland. In 2009, the US Ice Drilling Design and Operations group (IDDO) began work on an improved hand auger design and a version was successfully tested in the field during the 2012–2013 field season at WAIS Divide. As of 2017 IDDO maintains both 3-inch and 4-inch diameter versions of the new auger for the use of US ice drilling research programs, and these are now the most-requested hand augers provided by IDDO.
The Prairie Dog auger, designed in 2007, adds an outer barrel to the basic coring auger design. Cuttings are captured between the auger flights and the outer barrel, which has an anti-torque section to prevent it from rotating in the hole. The goal of the outer barrel is to increase the efficiency of chip collection, since it is common to see chips from a hand auger run fall back into the hole from the auger flights, which means the next run has to redrill through these cuttings. The outer barrel also makes the auger effective in warm ice, which could easily cause an auger with no outer barrel to jam. The outside barrel of the Prairie Dog is the same as the diameter of the PICO auger, and since the Prairie Dog's anti-torque blades do not perform well in soft snow and firn, it is common to start a hole with the PICO auger and then continue it with the Prairie Dog once dense firn is reached. The Prairie Dog is relatively heavy, and can require two drillers to handle it as it is being removed from the hole. The IDDO maintains a Prairie Dog drill for the use of US ice drilling research programs.
IDDO also provides a lifting system for use with hand augers, known as the Sidewinder. It is driven by an electric hand drill, which can be powered by a generator or by solar cells. The Sidewinder winds a rope around the hand auger as it is lowered into the hole, and assists in raising the auger back out of the hole. This extends the maximum practical depth for hand augering to about 40 m. Sidewinders have proved popular with researchers.
A piston drill consists of a flat disc at the bottom of a long rod, with three or four radial slots in the disc, each of which has a cutting edge. The rod is rotated by hand, using a brace handle; the ice comes through the slots and piles up on top of the disc. Pulling the drill out of the borehole brings the cuttings up on the disc. In the 1940s some patents for piston drill designs were filed in Sweden and the U.S., but these drills are now rarely used. They are less efficient than auger drills, since the drill must be periodically removed from the hole to get rid of the cuttings.
Some hand drills have been designed to retrieve cores without using auger flights to transport the cuttings up the hole. These drills typically have a core barrel with teeth at the lower end, and are rotated by a brace or T-handle, or by a small engine. The barrel itself can be omitted, so that the drill consists only of a ring with a cutting slot to cut the annulus around the core, and a vertical rod to attach the ring to the surface. A couple of small hand-held drills, or mini drills, have been designed to quickly collect core samples up to 50 cm long. A difficulty with all these designs is that as soon as cuttings are generated, if they are not removed they will interfere with the cutting action of the drill, making these tools slow and inefficient in use. A very small drill, known as the Chipmunk Drill, was designed by IDDO for use by a project in West Greenland in 2003 and 2004, and was subsequently used at the South Pole in 2013.
Rotary rigs used in mineral drilling use a string of drillpipe connected to a drillbit at the bottom of the hole, and to a rotary mechanism at the top of the hole, such as a top drive or rotary table and kelly. As the borehole deepens, drilling is paused periodically to add a new length of drill pipe at the top of the drill string. Ice drilling projects using this approach have usually been undertaken with commercially available rotary rigs originally designed for mineral drilling, with adaptations to suit the special needs of ice drilling.
When drilling in ice, the hole may be drilled dry, with no mechanism to dispose of the cuttings. In snow and firn this means that the cuttings simply compact into the walls of the borehole; and in coring drills they also compact into the core. In ice, the cuttings accumulate in the space between the drillpipe and the borehole wall, and eventually start to clog the drill bit, usually after no more than 1 m of progress. This increases the torque needed to drill, slows down progress, and can cause the loss of the drill. Dry core drilling generally produces a poor quality core that is broken into pieces.
In 1950, the French Expéditions Polaires Françaises (EPF) drilled two dry holes in Greenland using a rotary rig, at Camp VI, on the west coast, and Station Centrale, inland, reaching 126 m and 151 m. Some shallow holes were also drilled that summer on Baffin Island, using a coring drill, and in the Antarctic the Norwegian–British–Swedish Antarctic Expedition (NBSAE) drilled several holes between April 1950 and the following year, eventually reaching 100 m in one hole. The last expedition to try dry drilling in ice was the 2nd Soviet Antarctic Expedition (SAE), which drilled three holes between July 1957 and January 1958. Since then, dry drilling has been abandoned because other drilling methods have proved more effective.
Several holes have been drilled in ice using direct air circulation, in which compressed air is pumped down the drillpipe, to escape through holes in the drillbit, and return up the annular space between the drillbit and the borehole, carrying the cuttings with it. The technique was first tried by the 1st Soviet Antarctic Expedition, in October 1956. There were problems with poor cuttings removal, and ice forming in the borehole, but the drill succeeded in reaching a depth of 86.5 m. Further attempts were made to use air circulation with rotary rigs by US, Soviet and Belgian expeditions, with a maximum hole depth of 411 m reached by a US team at Site 2 in Greenland in 1957. The last time a project used a conventional rotary rig with air circulation was 1961.
In mineral exploration, the most common drilling method is a rotary rig with fluid circulated down the drillpipe and back up between the drillpipe and the borehole wall. The fluid carries the cuttings to the surface, where the cuttings are removed, and the recycled fluid, known as mud, is returned to the hole. The first ice drilling project to try this approach was an American Geographical Society expedition to the Taku Glacier in 1950. Fresh water, drawn from the glacier, was used as the drilling fluid, and three holes were drilled, to a maximum depth of 89 m. Cores were retrieved, but in poor condition. Seawater has also been tried as a drilling fluid. The first time a fluid other than water was used with a conventional rotary rig was in late 1958, at Little America V, where diesel fuel was used for the last few metres of a 254 m hole.
A wireline drill uses air or fluid circulation, but also has a tool that can be lowered into the drillpipe to retrieve a core without removing the drill string. The tool, called an overshot, latches onto the core barrel and pulls it up to the surface. When the core is removed, the core barrel is lowered back into the borehole and reattached to the drill. A wireline core drilling project was planned in the 1970s for the International Antarctic Glaciological Project, but was never completed, and the first wireline ice drilling project took place in 1976,[note 3] as part of the Ross Ice Shelf Project (RISP). A hole was started in November of that year with a wireline drill, probably using air circulation, but problems with the overshot forced the project to switch to thermal drilling when the hole was 103 m deep. The RISP project reached over 170 m with another wireline drill the following season, and several Soviet expeditions in the 1980s also used wireline drills, after starting the holes with an auger drill and casing the holes. The Agile Sub-Ice Geological (ASIG) drill, designed by IDDO to collect sub-glacial cores, is a recent wireline system; it was first used in the field in the 2016–2017 season, in West Antarctica.
There are many disadvantages to using conventional rotary rigs for ice drilling. When a conventional rotary rig is used for coring, the entire drill string must be hoisted out of the borehole each time the core is retrieved; each length of pipe in turn must be unscrewed and racked. As the hole gets deeper, this becomes very time-consuming. Conventional rigs are very heavy, and since many ice drilling sites are not easily accessible these rigs place a large logistical burden on an ice drilling project. For deep holes, a drilling fluid is required to maintain pressure in the borehole and prevent the hole from closing up because of the pressure the ice is under; a drilling fluid requires additional heavy equipment to circulate and store the fluid, and to separate the circulated material. Any circulation system also requires the upper part of the hole, through the snow and firn, to be cased, since circulated air or fluid would escape through anything more permeable than ice. Commercial rotary rigs are not designed for extremely cold temperatures, and in addition to problems with components such as the hydraulics and fluid management systems, they are designed to operate outdoors, which is impractical in extreme environments such as Antarctic drilling.
Commercial rotary rigs can be effective for large-diameter holes, and can also be used for subglacial drilling into rock. They have also been used with some success for rock glaciers, which are challenging to drill because they contain a heterogeneous mixture of ice and rock.
Flexible drillstem rigs use a drill string that is continuous, so that it does not have to be assembled or disassembled, rod by rod or pipe by pipe, when tripping in or out. The drill string is also flexible, so that when out of the borehole it can be stored on a reel. The drill string may be a reinforced hose, or it may be steel or composite pipe, in which case it is known as a coiled-tubing drill. Rigs designed along these lines began to appear in the 1960s and 1970s in mineral drilling, and became commercially viable in the 1990s.
Only one such rig, the rapid air movement (RAM) system developed at the University of Wisconsin-Madison by Ice Coring and Drilling Services (ICDS), has been used for ice drilling. The RAM drill was developed in the early 2000s, and was originally designed for drilling shot holes for seismic exploration. The drill stem is a hose through which air is pumped; the air drives a turbine that powers a downhole rotary drill bit. Ice cuttings are removed by the exhaust air and fountain out of the hole. The compressor increases the temperature of the air by about 50 °C, and it is cooled again before being pumped downhole, with a final temperature about 10 °C warmer than the ambient air. This means it cannot be used in ambient temperatures warmer than −10 °C. To avoid ice forming in the hose, ethanol is added to the compressed air. The system, which includes a winch to hold 100 m of hose, as well as two air compressors, is mounted on a sled. It has successfully drilled hundreds of holes in West Antarctica, and was easily able to drill to 90 m in only 25 minutes, making it the fastest ice drill. It was also used by the Askaryan Radio Array project in 2010–2011 at the South Pole, but was unable to drill below 63 m there because of variations in the local characteristics of the ice and firn. It cannot be used in a fluid-filled hole, which limits the maximum hole depth for this design. The main problem with the RAM drill is a loss of air circulation in firn and snow, which might be addressed by using reverse air circulation, via a vacuum pump drawing air up through the hose. As of 2017 IDDO is planning a revised design for the RAM drill to reduce the weight of the drill, which is currently 10.3 tonnes.
Other flexible drill stem designs have been considered, and in some cases tested, but as of 2016 none had been successfully used in the field. One design suggested using hot water to drill via a hose, and replacing the drillhead with a mechanical drill for coring once the depth of interest is reached, using the hot water both to hydraulically power the down hole motor, and to melt the resulting ice cuttings. Another design, the RADIX drill, produces a very narrow hole (20 mm) and is intended for rapid drilling access holes; it uses a small hydraulic motor on a narrow hose. It was tested in 2015 but found to have difficulty with cuttings transport, probably because of the very narrow space available between the hose and the borehole wall.
Coiled-tubing designs have never been successfully used for ice drilling. Coring operations would be particularly difficult, since a coring drill must trip out and in for each core, which would lead to fatigue; the tubing is typically rated for a lifetime of only 100 to 200 trips.
A cable-suspended drill has a downhole system, known as a sonde, to drill the hole. The sonde is connected to the surface by an armoured cable, which provides power and enables the drill to be winched in and out of the hole. Electromechanical (EM) cable-suspended drills have a cutting head, with blades that shave the ice as they rotate, like a carpenter's plane. The depth of penetration of the cut is adjusted by a device called a shoe, which is part of the cutting head. The ice cuttings are stored in a chamber in the sonde, either in the core barrel, above the core, or in a separate chamber, further up the drill.
The cuttings can be transported by auger flights or by fluid circulation. Drills that rely on auger flights and which are not designed to work in a fluid-filled hole are limited to depths at which borehole closure is not a problem, so these are known as shallow drills. Deeper holes have to be drilled with drilling fluid, but whereas circulation in a rotary drill takes the fluid all the way down and then up the borehole, cable-suspended drills only need to circulate the fluid from the drill head up to the cuttings chamber. This is known as bottom-hole circulation.
The upper part of the sonde has an antitorque system, which most commonly consists of three or four leaf-springs that press out against the borehole walls. Sharp edges on the leaf springs catch in the walls and provide the necessary resistance to prevent this part of the drill from rotating. At the point where the cable connects to the sonde, most drills include a slip ring, to allow the drill to rotate independently of the cable. This is to prevent torque damage to the cable if the anti-torque system fails. Coring drills may also have a weight that can be used as a hammer to assist in breaking the core, and a chamber for any instrumentation or sensors needed.
At the bottom of the sonde is the cutting head, and above this is the core barrel, with auger flights around it on shallow drills, and typically an outer barrel around that, usually with internal vertical ribs or some other way of providing additional impetus to the upward-bound cuttings on the flights. If there is a separate chip chamber it will be above the core barrel. The motor, with suitable gearing, is also above the core barrel.
Shallow drills can retrieve cores up to 300–350 m deep, but core quality is much improved if drilling fluid is present, so some shallow drills have been designed to work in wet holes. Tests reported in 2014 showed that wet drilling, with the top of the drilling fluid no deeper than 250 m, would maintain good core quality.
Drilling fluids are necessary for drilling deep holes, so the cable-suspended drills that are used for these projects use a pump to provide fluid circulation, in order to remove the cuttings from the bit. A few drills designed for use with drilling fluid also have auger flights on the inner barrel. As with shallow drills, the cuttings are stored in a chamber above the core. The circulation can be in either direction: down the inside of the drill string, and up between the core barrel and the borehole wall, or in the reverse direction, which has become the favoured approach in drill design as it gives better cuttings removal for a lower flow rate. Drills capable of reaching depths over 1500 m are known as deep drilling systems; they have generally similar designs to the intermediate systems that can drill from 400 m to 1500 m, but must have heavier and more robust systems such as winches, and have longer drills and larger drilling shelters. Core diameters for these drills have varied from 50 mm to 132 mm, and the core length from as short as 0.35 m up to 6 m. A common design feature of these deep drills is that they can be tipped to the horizontal to make it easier to remove the core and the cuttings. This reduces the required height of the mast, but requires a deep slot to be cut into the ice, to make room for the sonde to swing up.
The first cable-suspended electromechanical drill was invented by Armais Arutunoff for use in mineral drilling; it was tested in 1947 in Oklahoma, but did not perform well. CRREL acquired a reconditioned Arutunoff drill in 1963, modified it for drilling in ice, and in 1966 used it to extend a hole at Camp Century in Greenland to the base of the ice cap, at 1387 m, and 4 m further into the bedrock.
Many other drills have since been based on this basic design. A recent variation on the basic EM drill design is the Rapid Access Isotope Drill, designed by the British Antarctic Survey to drill dry holes to 600 m. This drill does not collect a complete ice core; instead it will collect ice cuttings, using a cutting head similar to a spoon-borer. The resulting access hole will be used for temperature profiling, and along with the isotope results, which will indicate the age of the ice, the data will be used for modelling the ice profile down to bedrock in order to determine the best place to drill to obtain the oldest possible undisturbed basal ice. The drill is expected to reach 600 m in 7 days of drilling, rather than the 2 months which would be needed to drill a core; the speed is possible because the cutters can be more aggressive, as core quality is not an issue, and because the borehole is narrow, which reduces power requirements for the winch.
Thermal drills work by applying heat to the ice at the bottom of the borehole to melt it. Thermal drills in general are able to drill successfully in temperate ice, where an electromechanical drill is at risk of jamming because of ice forming in the borehole. When used in colder ice, some form of antifreeze is likely to be introduced into the borehole to prevent the meltwater from freezing in the drill.
Hot water can be used to drill in ice by pumping it down a hose with a nozzle at the end; the jet of hot water will quickly produce a hole. Letting the hose dangle freely will produce a straight hole; as the hole gets deeper the weight of the hose makes this hard to manage manually, and at a depth of about 100 m it becomes necessary to run the hose over a pulley and enlist some method to help lower and raise the hose, usually consisting of a hose reel, capstan, or some type of hose assist. Since the pressure in the hose is proportional to the square of the flow, hose diameter is one of the limiting factors for a hot-water drill. To increase flow rate beyond a certain point, the hose diameter must be increased, but this will require significant capacity increases elsewhere in the drill design. Hoses that are wrapped around a drum before being pressurized will exert constricting force on the drum, so the drums must be of robust design. Hoses must wrap neatly when spooling up, to avoid damage; this can be done manually for smaller systems, but for very large drills a level-wind system has to be implemented. The hose ideally should have the tensile strength to support its weight when spooling into the hole, but for very deep holes a supporting cable may need to be used to support the hose.
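To illustrate why hose diameter is a limiting factor, the sketch below applies the Darcy-Weisbach relation for frictional pressure drop along a hose: for a fixed friction factor, the drop grows with the square of the flow rate and falls with roughly the fifth power of the diameter. The friction factor, hose length, and flow rate are illustrative assumptions, not figures from the text.

```python
import math

def hose_pressure_drop(flow_m3_per_s, length_m, diameter_m, friction=0.02, rho=1000.0):
    """Darcy-Weisbach pressure drop: dP = f * (L/D) * rho * v^2 / 2,
    with v = Q / (pi * D^2 / 4), so dP scales as Q^2 / D^5."""
    area = math.pi * diameter_m ** 2 / 4
    velocity = flow_m3_per_s / area
    return friction * (length_m / diameter_m) * rho * velocity ** 2 / 2   # pascals

q = 100 / 1000 / 60        # 100 L/min in m^3/s (illustrative flow rate)
for d_mm in (19, 25, 32):
    dp = hose_pressure_drop(q, length_m=1000, diameter_m=d_mm / 1000)
    print(f"{d_mm} mm hose: {dp / 1e6:.1f} MPa drop over 1000 m")
```

Even this crude estimate shows that modestly widening the hose cuts the required pump pressure many times over, at the cost of a heavier hose, reel, and heating plant.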
Steam can also be used in place of hot water, and does not need to be pumped. A handheld steam drill is able to rapidly drill short holes, for example for ablation stakes, and both steam and hot-water drills can be made light enough to be hand carried. A guide tube can be used to help keep the borehole straight.
In cold ice, a borehole drilled with hot water will close up as the water freezes. To avoid this, the drill can be run back down the hole, warming the water and hence the surrounding ice. This is a form of reaming. Repeated reamings will raise the temperature of the surrounding ice to the point where the borehole will stay open for longer periods. However, if the goal is to measure temperature in the borehole, then it is better to apply as little additional heat as possible to the surrounding ice, which means that a higher energy drill with a high water flow rate is desirable, since this will be more efficient. If there is a risk of the drill freezing in, a "back drill" can be included in the design. This is a mechanism which redirects the hot water jet upwards if the drill meets with resistance on tripping out. A separate hot water reamer can also be used, jetting hot water sideways onto the borehole walls as it passes.
Boreholes drilled with hot water are rather irregular, which makes them unsuitable for certain kinds of investigations, such as speed of borehole closure, or inclinometry measurements. The warm water from the nozzle will continue to melt the borehole walls as it rises, and this will tend to make the hole cone-shaped—if the hole is being drilled at a location with no surface snow or firn, such as an ablation zone in a glacier, then this effect will persist to the top of the borehole.
The water supply for a hot water drill can come from water at the surface, if available, or melted snow. The meltwater in the borehole can be reused, but this can only be done once the hole penetrates below the firn to the impermeable ice layer, because above this level the meltwater escapes. The pump to bring the meltwater back to the surface must be placed below this level, and in addition, if there is a chance that the borehole will penetrate to the base of the ice, the drilling project must plan for the likelihood that this will change the water level in the hole, and ensure that the pump is below the lowest likely level. Heating systems are usually adapted from the heaters used in the pressure washer industry.
When any thermal drilling method is used in dirty ice, the debris will accumulate at the bottom of the borehole, and start to impede the drill; enough debris, in the form of sand, pebbles, or a large rock, could completely stop progress. One way to avoid this is to have a nozzle angled at 45°; using this nozzle will create a side channel into which the obstructions will go. Vertical drilling can then start again, bypassing the debris. Another approach is to recirculate the water at the bottom of the hole, with an electrical heater embedded in the drill head and filters in the circulation. This can remove most of the small debris that impedes the drillhead.
A different problem with impure ice comes from contaminants brought in by the project, such as clothing and wood fibres, dust, and grit. Using snow from around the campsite to supply the drill with water is often necessary at the start of drilling, as the hole will not yet have reached the impermeable ice, so water cannot be pumped back up from the bottom of the hole; shoveling this snow into the drill's water supply will pass these contaminants through the drill mechanism, and can damage the pumps and valves. A fine filter is required to avoid these problems.
An early expedition using hot water drills was in 1955, to the Mer de Glace; Électricité de France used hot water to reach the base of the glacier, and also used equipment that sprayed multiple jets simultaneously to create a tunnel under the ice. More development work was done in the 1970s. Hot water drills are now capable of drilling very deep holes and of providing clean access to subglacial lakes: for example, between 2012 and 2019 the WISSARD/SALSA project used the WISSARD drill, a mid-sized hot water drill, to provide clean access through about 1 km of ice to Lake Mercer in Antarctica; and between 2004 and 2011, a large hot water drill at the South Pole was used to drill 86 holes to a depth of 2.5 km to set strings of sensors in the boreholes for the IceCube project. Hot water coring drills have also been developed, but they are susceptible to debris stopping forward motion in dirty ice.
An early steam drill was developed by F. Howorka in the early 1960s for work in the Alps. Steam drills are not used for holes deeper than 30 m, as they are quite inefficient due to thermal losses along the hose, and pressure losses with increasing depth under water. They are primarily used for quickly drilling shallow holes.
Instead of using a jet of hot water or steam, thermal drills can also be constructed to provide heat to a durable drillhead, for example by pumping hot water down and back up again inside the drill string, and use that to melt the ice. Modern thermal drills use electrical power to heat the drillhead instead.
It is possible to drill with a hotpoint that consists of an electrical heating element, directly exposed to the ice; this means that the element must be able to work underwater. Some drills instead embed the heating element in a material such as silver or copper that will conduct the heat quickly to the hotpoint surface; these can be constructed so that the electrical connections are not exposed to water. Electrothermal drills require a cable to bring the power down the hole; the circuit can be completed via the drillpipe if one is present. A transformer is needed in the drill assembly since the cable must carry high voltage to avoid power dissipation. It is more difficult to arrange electrical power at a remote location than to generate heat via a gas boiler, so hotpoint drills are only used for boreholes up to a few hundred metres deep.
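The reason for sending power down the cable at high voltage can be seen from the resistive loss, P_loss = I^2 * R with I = P / V: for the same delivered power, a higher voltage means a smaller current and a loss that falls with the square of the voltage. The cable resistance and power figures below are illustrative assumptions.

```python
def cable_loss_watts(power_w, supply_volts, cable_resistance_ohms):
    """Resistive loss in the downhole cable: I = P / V, loss = I^2 * R."""
    current = power_w / supply_volts
    return current ** 2 * cable_resistance_ohms

# Delivering 3 kW of heating power through a cable with 10 ohm total resistance
for volts in (120, 1000):
    loss = cable_loss_watts(3000, volts, 10)
    print(f"{volts} V supply: {loss:.0f} W lost in the cable")
# 120 V: 6250 W lost (more than the delivered power); 1000 V: 90 W lost,
# which is why the high-voltage supply is stepped down by a transformer downhole.
```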
The earliest attempt to use heat to drill in ice was in 1904, when C. Bernard, drilling at the Tête Rousse Glacier, tried using heated iron bars to drill with. The ends of the bars were heated until incandescent, and lowered into the borehole. The first true hotpoint was used by Mario Calciati in 1942 on the Hosand Glacier. Calciati pumped hot water from the surface down the drillstem, and back up after it had passed through the drillhead. Other hotpoint designs have used electrical heating to heat the drillhead; this was done in 1948 by a British expedition to the Jungfraujoch, and by many other drill designs since then. Hotpoints do not produce cores, so they are used primarily for creating access holes.
The development in the 1960s of thermal coring drills for intermediate depth holes was prompted by the problems associated with rotary coring drills, which were too costly to use for polar ice cores because of the logistical problems caused by their weight. The components of a thermal drill are generally the same as for a cable-suspended EM drill: both have a mast and winch, and an armoured cable to provide power downhole to a sonde, which includes a core barrel. No antitorque system is needed for a thermal drill, and instead of a motor that provides torque, the power is used to generate heat in the cutting head, which is ring shaped to melt an annulus of ice around the core. Some drills may also have a centralizer, to keep the sonde in the middle of the borehole.
The sonde of an electrothermal drill designed to run submerged in meltwater may consist almost entirely of the core barrel plus the heated cutting head (diagram (a) in the figure to the right). Alternative designs for use in colder ice (see diagram (b) at right) may have a compartment above the core barrel, and tubes that run down to just above the cutting head; a vacuum pump sucks up the meltwater. In these drills the meltwater must be emptied at the surface at the end of each coring run.
Another approach (see (c) at right) is to use a drilling fluid that is a mixture of ethanol and water, with the exact proportions determined by the ice temperature. In these drills there is a piston above the core barrel; at the start of a run the piston is at the bottom of the sonde, and the space above it is filled with drilling fluid. As the drill cuts downwards, the core pushes the piston up, pumping the fluid down and out around the cutting head, where it mixes with the meltwater and prevents it from freezing. The piston is the only moving part, which simplifies the design; and the core barrel can take up much of the length of the sonde, whereas drills which suck out the meltwater in order to drill in a dry hole have to sacrifice a large section of the sonde for meltwater storage.
Thermal drills designed for temperate ice are light and straightforward to operate, which makes them suitable for use on high-altitude glaciers. Use at the most inaccessible sites also requires that the drill can be disassembled into components for human-powered transport, since helicopters may not be able to reach the highest glaciers.
Electrothermal drill designs date back to the 1940s. An electrothermal drill was patented in Switzerland in May 1946 by René Koechlin, and was used in Switzerland, and in 1948 a British expedition to the Jungfraujoch drilled to the bed of the glacier using an electrothermal design. Twenty electrothermal coring drills were designed between 1964 and 2005, though many designs were abandoned because of the higher performance of EM coring drills.
If the goal is to obtain instrument readings from within the ice, and there is no need to retrieve either the ice or the drill system, then a probe containing a long spool of cable and a hotpoint can be used. The hotpoint allows the probe to melt its way through the ice, unreeling the cable behind it. The meltwater will refreeze, so the probe cannot be recovered, but it can continue to penetrate the ice until it reaches the limit of the cable it carries, and send instrument readings back up through the cable to the surface. Known as Philberth probes, these devices were designed by Karl and Bernhard Philberth in the 1960s as a way to store nuclear waste in the Antarctic, but were never used for that purpose. Instead, they were adapted for use in glaciological research, reaching a depth of 1005 metres and sending temperature information back to the surface when tested in 1968 as part of the Expédition Glaciologique Internationale au Groenland (EGIG).
Because thermal probes support their weight on the ice at the bottom of the borehole, they lean slightly out of the vertical, and this means they have a natural tendency to stray away from a vertical borehole towards the horizontal. Various methods have been proposed to address this. A cone-shaped tip, with a layer of mercury above the tip, will cause additional heat transfer to the lower side of a slanting borehole, increasing the speed of melting on that side, and returning the borehole to the vertical. Alternatively the probe can be constructed to be supported by ice above its centre of gravity, by providing two heating rings, one of which is towards the top of the probe, and has a greater diameter than the rest of the probe. Giving this upper ring a slightly lower heating power will cause the probe to have more bearing pressure on the upper ring, which will give it a natural tendency to swing back to vertical if the borehole starts to deviate. The effect is called pendulum steering, by analogy with the tendency of a pendulum always to swing back towards a vertical position.
In the 1990s NASA combined the Philberth probe design with ideas drawn from hot-water drills, to design a cryobot probe that had hot water jets in addition to a hotpoint nose. Once the probe was submerged in a thin layer of meltwater, the water was drawn in and reheated, emerging at the nose as a jet. This design was intended to help move particulate matter away from the nose, as a hot-water drill tends to. A version with no analytical tools on board was built and field tested in Svalbard, Norway, in 2001. It penetrated to 23 m, successfully passing through layers of particulates.
Cryobots remain in good thermal contact with the surrounding ice throughout their descent, and in very cold ice this can drain a substantial fraction of their power budget, which is finite since they must carry their power source with them. This makes them unsuitable for investigating the Martian polar ice cap. Instead, NASA added a pump to the cryobot design to raise meltwater to the surface, so that the probe, known as the SIPR (for Subsurface Ice Probe), descends in a dry hole. The lower gravity on Mars means that the overburden pressure on the ice cap is much less, and an open borehole is expected to be stable to a depth of 3 km, the expected depth of the ice cap. The meltwater can then be analyzed at the surface. Pumping through a vertical tube would cause mixing, so to preserve discrete samples for analysis a large-bore and a small-bore tube are used: the small-bore tube carries the samples, and its contents are then allowed to return to the probe and are pumped back up the large-bore tube for use in experiments that do not depend on stratigraphy, such as searches for living organisms. Leaving the analytical instruments on the surface reduces the necessary size of the probe, which helps make this design more efficient.
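The overburden argument is simply hydrostatic pressure under the lower Martian gravity. The short comparison below uses standard values for ice density and surface gravity and illustrates only the pressure difference, not the stability threshold itself.

```python
ICE_DENSITY = 917.0     # kg/m^3
G_MARS = 3.71           # m/s^2
G_EARTH = 9.81          # m/s^2

def overburden_pa(depth_m: float, gravity: float, density: float = ICE_DENSITY) -> float:
    """Hydrostatic overburden pressure P = rho * g * h at a given depth in ice."""
    return density * gravity * depth_m

# Pressure at the 3 km base of the cap on Mars, versus an equivalent ice column on Earth:
print(overburden_pa(3000, G_MARS) / 1e6)    # ~10.2 MPa
print(overburden_pa(3000, G_EARTH) / 1e6)   # ~27.0 MPa
```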
Along with the water transport tubes, a heated wire ensures that the water stays liquid all the way to the surface, and power and telemetry are also carried from the surface. To keep the hole vertical, the probe can sense when it is deviating, and the jets of hot water are adjusted to compensate. The drill is expected to make use of solar power in operation, meaning it must be able to function on less than 100 W when in sunlight. A fully built version of the probe was successfully tested in Greenland in 2006, drilling to a depth of 50 m. NASA has proposed a similar design for drilling in the ice on Europa, a moon of Jupiter. Any such probe would have to survive temperatures of 500 °C while being sterilized to avoid biological contamination of the target environment.
Snow samples are taken to measure the depth and density of the snow pack in a given area. Measurements of depth and density can be converted into a snow water equivalent (SWE), the depth of water that the snow pack would yield if melted. Snow samplers are typically hollow cylinders, with toothed ends to help them penetrate the snow pack; they are used by pushing them into the snow, and then pulling them out along with the snow in the cylinder. Weighing the cylinder full of snow and subtracting the weight of the empty cylinder gives the snow weight; samplers usually have lengthwise slots to allow the depth of the snow to be recorded as well, though a sampler made of transparent material makes this unnecessary.
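A minimal sketch of the two calculations involved, density from the sampler's weights and dimensions and SWE from depth and density (the function and parameter names here are illustrative, not part of any standard):

```python
import math

WATER_DENSITY = 1000.0  # kg/m^3

def sample_density(full_kg: float, empty_kg: float,
                   inner_diameter_m: float, sample_length_m: float) -> float:
    """Snow density (kg/m^3) from a cylindrical sampler's weights and dimensions."""
    area_m2 = math.pi * (inner_diameter_m / 2) ** 2
    return (full_kg - empty_kg) / (area_m2 * sample_length_m)

def snow_water_equivalent(depth_m: float, snow_density_kg_m3: float) -> float:
    """Depth of water (m) that the snow column would yield if melted."""
    return depth_m * snow_density_kg_m3 / WATER_DENSITY

density = sample_density(1.2, 0.8, 0.04, 1.0)   # ~318 kg/m^3 for these assumed readings
print(snow_water_equivalent(1.0, density))      # ~0.32 m of water for a 1 m snow pack
```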
The sampler must grip the snow well enough to keep the snow inside the cylinder as it is removed from the snow pack, which is easier to accomplish with a smaller diameter cylinder; however, larger diameters give more accurate readings. Samplers must avoid compacting the snow, so they have smooth inner surfaces (usually of anodized aluminium alloy, and sometimes waxed in addition) to prevent the snow from gripping the sides of the cylinder as it is pushed in. A sampler may penetrate light snow under its own weight; denser snow pack, firn, or ice may require the user to rotate the sampler gently so that the cutting teeth are engaged. Pushing too hard without successfully cutting a dense layer may cause the sampler to push the layer down; this situation can be identified because the snow level inside the sampler will be lower than the surrounding snow. Multiple readings are usually taken at each location of interest, and the results are averaged. Snow samplers are usually accurate to within about 5–10%.
The first snow sampler was developed by J.E. Church in the winter of 1908/1909, and the most common modern snow sampler, known as the Federal snow sampler, is based on Church's design, with some modifications by George D. Clyde and the U.S. Soil Conservation Service in the 1930s. It can be used for sampling snow up to 9 m in depth.
Penetration testing involves inserting a probe into snow to determine the snow's mechanical properties. Experienced snow surveyors can use an ordinary ski pole to test snow hardness by pushing it into the snow; the results are recorded based on the change in resistance felt as the pole is inserted. A more scientific tool, invented in the 1930s but still in widespread use, is a ram penetrometer. This takes the form of a rod with a cone at the lower end. The upper end of the rod passes through a weight that is used as a hammer; the weight is lifted and released, and hits an anvil—a ledge around the rod which it cannot pass—which drives the rod into the snow. To take a measurement, the rod is placed on the snow and the hammer is dropped one or more times; the resulting depth of penetration is recorded. In soft snow a lighter hammer can be used to obtain more precise results; hammer weights range from 2 kg down to 0.1 kg. Even with lighter hammers, ram penetrometers have difficulty distinguishing thin layers of snow, which limits their usefulness with regard to avalanche studies, since thin and soft layers are often involved in avalanche formation.
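The readings are usually reduced to a single hardness number per measurement interval. The sketch below uses the commonly quoted rammsonde formula, hammer energy per unit penetration plus the static weights; treat the exact form and the example figures as assumptions rather than a definitive specification.

```python
def ram_hardness_kgf(hammer_kg: float, drop_height_cm: float, blows: int,
                     penetration_cm: float, rod_kg: float) -> float:
    """Ram hardness for one interval: dynamic term n*W*h/x plus the static weights.

    hammer_kg: mass of the drop hammer
    drop_height_cm: height the hammer is raised before release
    blows: number of hammer blows in the interval
    penetration_cm: penetration achieved by those blows
    rod_kg: mass of the rod assembly resting on the snow
    """
    return hammer_kg * drop_height_cm * blows / penetration_cm + hammer_kg + rod_kg

# Assumed example: a 1 kg hammer dropped 50 cm, three blows advancing the rod 10 cm,
# with a 2 kg rod assembly:
print(ram_hardness_kgf(1.0, 50.0, 3, 10.0, 2.0))   # 18 kgf
```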
Two lightweight tools are in wide use that are more sensitive than ram penetrometers. A snow micro-penetrometer uses a motor to drive a rod into snow, measuring the force required; it is sensitive to 0.01–0.05 newtons, depending on the snow strength. A SABRE probe consists of a rod that is inserted manually into snow; accelerometer readings are then used to determine the penetration force at each depth, and the results are stored electronically.
For testing dense polar snow, a cone penetrometer test (CPT) is used, based on the equivalent devices used for soil testing. CPT measurements can be used in hard snow and firn to depths of 5–10 m.
Commercially available rotary rigs have been used with large augers to drill in ice, generally for construction or for holes to gain access below the ice. Although they are unable to produce cores, they have been intermittently used by US and Soviet scientific expeditions in the Antarctic. In 2012, a British Antarctic Survey expedition to drill down to Lake Ellsworth, two miles below the surface of the Antarctic ice, used an Australian earth auger driven by a truck-mounted top drive to help drill two 300 m holes as part of the project, though in the event the project was delayed.
Powered augers designed to drill large holes through ice for winter fishing may be mounted on a snow vehicle, or a tractor or sled; hole diameters can be as large as 350 mm. These rigs have been produced commercially in both the US and the USSR, but are no longer in common use.
A flame-jet drill, more usually used to drill through crystalline rocks, was used to drill through ice on the Ross Ice Shelf, in the 1970s. The drill burns fuel oil, and can be run under water as long as enough compressed air is available. It drills rapidly, but produces an irregular hole contaminated by soot and fuel oil.
A Soviet-designed drill used a motor to provide vertical vibration to the barrel of the drill at 50 Hz; the drill had an outer diameter of 0.4 m, and in tests at Vostok Station in the Antarctic drilled a 6.5 m hole, with a 1.2 m drilling run taking between 1 and 5 minutes to complete. The drill's steel edges compacted snow into the core, which helped it stick to the inside of the barrel when the drill was winched out of the hole.
Mechanical drills typically have three cutters, spaced evenly around the drill head. Two cutters lead to vibration and poorer ice core quality, and tests of drillheads with four cutters have produced unsatisfactory performance. Geometric design varies, but the relief angle, α, ranges from 5° to 15°, with 8–10° the most common range in cold ice, and the cutting angle, δ, ranges from 45° (the most common in cold ice) up to 90°. The safety angle, between the underside of the cutting blade and the ice, can be as low as 0.8° in successful drill designs. Different shapes for the end of the blade have been tried: flat (the most common design), pointed, rounded, and scoop shaped.
Cutters have to be made of extremely strong materials, and usually have to be sharpened after every 10–20 m of drilling. Tool steels containing carbon are not ideal because the carbon makes the steel brittle at temperatures below −20 °C. Sintered tungsten carbide has been suggested for use in cutters, since it is extremely hard, but the best tool steels are more cost-effective, since carbide cutters must be fixed to the body of the cutting tool by cold pressing or brass soldering and cannot easily be removed and sharpened in the field.
The cutting depth is controlled by mounting shoes on the bottom of the drill head; these ride on the ice surface and so limit how deep the cutter can penetrate in each revolution of the drill. They are most commonly mounted just behind the cutters, but this position can lead to ice accumulating in the gap between the cutter and the shoe. So far it has not proved possible to correct this by modifying the shoe design.
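Because the shoes fix the depth of cut per revolution, the penetration rate follows directly from the rotational speed; a minimal sketch with assumed figures, not taken from any particular drill:

```python
def penetration_rate_m_per_hr(cut_depth_mm_per_rev: float, rpm: float) -> float:
    """Penetration rate implied by a fixed cut depth per revolution of the drill head."""
    return cut_depth_mm_per_rev * rpm * 60 / 1000   # mm/min converted to m/hr

# Assumed example: shoes set for a 3 mm cut per revolution with the head turning at 80 rpm:
print(penetration_rate_m_per_hr(3.0, 80))   # 14.4 m/hr
```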
Drilling fluids are necessary for borehole stability in deep cores, and can also be used to circulate cuttings away from the bit. Fluids used include water, ethanol/water and water/ethylene glycol mixtures, petroleum fuels, non-aromatic hydrocarbons, and n-butyl acetate.
Densifiers are used in drilling fluids to adjust the density of the fluid to match the surrounding ice. Perchloroethylene and trichloroethylene were often used in early drilling programs, in combination with petroleum fuels. These have been phased out for health reasons. Freon was a temporary replacement, but has been banned by the Montreal Protocol, as has HCFC-141b, a hydrochlorofluorocarbon densifier used once Freon was abandoned. Future options for drilling fluids include low molecular weight esters, such as ethyl butyrate, n-propyl propionate, n-butyl butyrate, n-amyl butyrate and hexyl acetate; mixtures of various kinds of ESTISOL; and dimethyl siloxane oils.
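Matching the fluid column to the density of the surrounding ice is, to a first approximation, a simple mixing calculation. The sketch below assumes ideal (volume-additive) mixing and uses rough illustrative densities; real mixtures deviate from this, so working values come from measurement rather than a formula like this.

```python
def densifier_volume_fraction(target_kg_m3: float, base_fluid_kg_m3: float,
                              densifier_kg_m3: float) -> float:
    """Volume fraction of densifier needed to reach a target mixture density,
    assuming the components mix with no volume change (an idealization)."""
    return (target_kg_m3 - base_fluid_kg_m3) / (densifier_kg_m3 - base_fluid_kg_m3)

# Rough illustrative densities: ice ~920, a petroleum base fluid ~800,
# a dense halogenated densifier ~1600 kg/m^3:
print(densifier_volume_fraction(920, 800, 1600))   # 0.15, i.e. about 15% densifier by volume
```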
The two main requirements of an anti-torque system are that it should prevent rotation of the sonde, and it should allow easy movement of the drill up and down the borehole. Attempts have been made to design drills with counter-rotating components so that overall torque is minimized, but these have had limited success. Five kinds of anti-torque systems have been devised for use with cable-suspended EM drills, though not all are in current use, and some drills have used a combination of more than one design. The first drill to require an anti-torque system was used at Camp Century by CRREL in 1966; the drill incorporated a set of hinged friction blades that swung out from the sonde when the drill motor was started. These were found to have very weak friction against the borehole wall, and were ineffective; the drill had to be controlled carefully to prevent twisting the cable. No other drills have attempted to use this approach.
For the next deployment of the drill, leaf springs were installed, and this has proved to be a more durable design. These are mounted vertically, with a curve outwards so that they are easily compressed by the borehole wall, and can slide up and down with the movement of the drill. They pass easily through any areas of irregularity in the borehole, but the edges of the springs cut into the borehole wall and prevent rotation. Leaf springs are very simple mechanically, with the additional benefit of being easy to adjust by changing the spacing between the end points. They can be placed anywhere on the drill that does not rotate, so they do not add length to the sonde. The shape is usually a fourth-order parabola, since this has been determined to provide the most even loading against the borehole wall. Leaf springs have been found to be so effective that they can prevent rotation even in heavy drills running at full power.
Skate anti-torque systems have blades attached to vertical bars which are pushed against the borehole wall; the blades dig into the wall and provide the anti-torque. Skates can be built with springs which allow them to keep the blades pressed against the wall in an irregular borehole, and to prevent problems in narrower parts of the borehole. Although skates are a popular design for anti-torque and have been used with success, they have difficulty preventing rotation in firn and at boundaries between layers of different densities, and can cause problems when drilling with high torque. When they fail, they act as reamers, removing chips from the wall which can fall to the drillbit and interfere with drilling.
In the 1970s, the Japanese Antarctic Research Expedition (JARE) group designed several drills using side-mill cutters. These are toothed gears that are driven from the rotation of the main drill motor via 45° spiral gears; their axis of rotation is horizontal, and they are placed so that the teeth cut four vertical slots in the borehole wall. Guide fins higher on the sonde travel in these slots and provide the anti-torque. The design was effective at preventing rotation of the sonde, but it proved to be almost impossible to realign the guide fins with the existing slots when tripping in. Misalignment increased the chance of the drill getting stuck in the borehole, and there was also a risk of ice cuttings from the mill cutters jamming between the drill and the borehole wall, again causing the drill to get stuck. The system was used again in a drill developed in China in the 1980s and 1990s, but the problems inherent in the design are now considered insuperable and it is no longer in use.
The most recent anti-torque system design is the use of U-shaped blades, made of steel and fixed vertically to the sides of the sonde. Initial implementations ran into problems with thin blades bending too easily, and thick blades providing too much resistance to vertical movement of the sonde, but the final design can generate strong resistance to torque in both firn and ice.
Drills may be designed with more than one anti-torque system in order to take advantage of the different performance of the different designs in different kinds of snow and ice. For example, a drill may have skates to be used in hard firn or ice, but also have a leaf-spring system, which will be more effective in soft firn.
In ice core drilling, when an annulus has been drilled around the core to be retrieved, the core is still attached to the ice sheet at its lower end, and this connection has to be broken before the core can be retrieved. One option is to use a collet, which is a tapered ring inside the cutting head. When the drill is pulled up, the collet compresses the core and holds it, with loose ice chips wedged in it increasing the compression. This breaks the core and holds it in the barrel once it has broken. Collets are effective in firn but less so in ice, so core dogs, also known as core catchers, are often used for ice cores.
A typical ice drill core dog has a dog-leg shape, and will be built into the drill head with the ability to rotate, and with a spring supplying some pressure against the core. When the drill is lifted, the sharp point of the core dog engages and rotates around, causing the core to break. Some core dogs have a shoulder to stop them from over-rotating. Most drill heads have three core dogs, though having only two core dogs is possible; the asymmetric shearing force helps break the core. The angle, δ, between the core dog point and the core, has been the subject of some investigation; a study in 1984 concluded that the optimum angle was 55°, and a later study concluded that the angle should be closer to 80°. Core catchers are made from hardened steel, and need to be as sharp as possible. The force required to break the core varies with temperature and depth, and in warm ice the core dogs may gouge grooves up the core before they catch and it breaks. Some drills may also include a weight that can be used as a hammer, to provide an impact to help in breaking the core.
For snow and firn, where the core material may be at risk of falling out of the bottom of the core barrel, a basket catcher is a better choice. These catchers consist of spring wires or thin pieces of sheet metal, placed radially around the bottom of the core barrel and pressed against the side of the barrel by the core as the drill descends around it. When the drill is lifted, the ends of the catcher engage with the core and break it from the base, and act as a basket to hold it in place while it is brought to the surface.
Casing, or lining a hole with a tube, is necessary whenever drilling operations require that the borehole be isolated from the surrounding permeable snow and firn. Uncased holes can be drilled with fluid by using a hose lowered into the hole, but this is likely to lead to increased drilling fluid consumption and environmental contamination from leaks. Steel casing was used in the 1970s, but rust from the casing caused damage to the drills, and the casing was not sealed, leading to fluid leaks. There were also problems with the casing tubes not being centered, which caused damage to the drill bit as it was lowered through the casing. Fibreglass and HDPE casing has become more common, with junctions sealed with PTFE tape, but leaks are frequent. Heat fusion welding for HDPE casing is a possible solution. To seal the bottom of the casing, water can be pumped to the bottom of the hole once the casing is set, or a thermal head can be used to melt ice around the casing shoe, creating a seal when the water freezes again. Another approach is to use a hotpoint drill, saturating the snow and firn with melted water, which will then freeze and seal the borehole.
Low-temperature PVC tubing is not suitable for permanent casing, since it cannot be sealed at the bottom, but it can be used to pass drilling fluid through the permeable zone. Its advantage is that it requires no connections since it can be coiled on a reel for deployment.