About This Chapter
In this chapter, video instructors show you current theories about the origins of the universe, including the Big Bang Theory and theories about expanding and contracting universes. You'll be introduced to Tycho Brahe and Nicolaus Copernicus and learn about their contributions to knowledge of the universe in their times. This chapter will discuss evidence for the Big Bang Theory - including background radiation, red shift, and expansion.
You'll learn about the formation of spiral, elliptical, and irregular galaxies and of dwarf and giant stars, as well as the typical star formation process. As you progress through the chapter, you'll learn about the life cycles of black holes, neutron stars, supernovas, and red giants. When you finish this chapter, you should be able to:
- Evaluate the characteristics and supporting evidence of the various theories of cosmology and cosmogony
- Distinguish different types of stars by size, color, and life cycle
- Understand galaxy and star formation
- Recognize the historical progression of astronomy and the physical laws that govern it
These video lessons are written to be short enough to keep you engaged, yet detailed enough that you can learn the most important facts about each topic. To make these videos easy to use, take advantage of the tags that allow you to toggle between the most important points. If you get stuck on an unfamiliar keyword, just click on it; key terms link to text lessons with further information. After watching each video lesson, you'll have the opportunity to take a self-assessment quiz so you can gauge how well you understood the material.
1. Origins of the Universe: The Big Bang and Expanding & Contracting Universes
Students will learn the origins of the universe, the Big Bang theory, the timeline of the universe, how the universe is still expanding to this day, and what astronomers expect the universe to look like in the future.
2. Evidence for the Big Bang Theory: Background Radiation, Red-Shift and Expansion
Discover what evidence exists to support the Big Bang theory of the birth of the universe. Learn how cosmic background radiation, the red shift of light and the ongoing expansion of the universe led scientists to believe that the universe began with the Big Bang.
3. Galaxy Formation: Spiral, Elliptical & Irregular Galaxies
This lesson explains how galaxies form, starting with the Big Bang. You'll also learn about the solar nebula hypothesis and three galaxy types, including spiral, elliptical, and irregular galaxies.
4. Star Formation: Main Sequence, Dwarf & Giant Stars
Learn how stars are born, beginning with a protostar. Then learn about stars in later stages of life, including main sequence stars, brown dwarfs, red giants, and black holes.
5. Types of Stars by Size, Color and Life Cycle
Learn to identify the different sizes and colors of stars and how they relate to the star life cycle. In this lesson, we'll talk about spectral classification, how many stars there are of each type and the approximate color of the different classes of stars.
6. Supernova and Supergiant Star Life Cycle
Learn about one of the biggest explosions known to humankind - a supernova. Follow a star's life cycle and learn how a star changes from a red giant to a supernova to a black hole or neutron star.
7. Life Cycle of Black Holes
Learn about black holes, their myths and their reality. Learn how black holes form after stars undergo supernovae and create singularities. Discover how big black holes grow, how scientists find black holes and where black holes are located in the universe.
8. Life Cycle of Neutron Stars
Discover the life of a neutron star, including how it's born after a supernova explosion and how its extreme pressure causes protons and electrons to combine into neutrons. Discover also how pulsars are rotating neutron stars that will eventually slow down to become regular neutron stars.
9. Tycho Brahe and Copernicus Take On the Known Universe
Astronomy according to Ptolemy was the popular theory until Copernicus turned it on its head. This lesson explores the theories of Copernicus and Brahe and how the two changed astronomical study.
Earning College Credit
Did you know… We have over 200 college courses that prepare you to earn credit by exam that is accepted by over 1,500 colleges and universities. You can test out of the first two years of college and save thousands off your degree. Anyone can earn credit-by-exam regardless of age or education level.
To learn more, visit our Earning Credit Page
Transferring credit to the school of your choice
Not sure what college you want to attend yet? Study.com has thousands of articles about every imaginable degree, area of study and career path that can help you find the school that's right for you.
Other chapters within the ILTS Science - Earth and Space Science (108): Test Practice and Study Guide course
- Scientific Inquiry and Practices
- Cell Structure
- Heredity and Evolution
- Characteristics and Life Functions of Organisms
- Organisms and the Environment
- Fundamentals of Thermodynamics
- The Atom & Matter
- Trends of the Periodic Table
- Chemical Bonds and Reactions
- Nuclear Processes
- Waves Overview
- Magnetism & Electricity
- Land, Water, and Atmospheric Systems
- Weather and Storms
- Earth's Geology and Structure
- Historical Evolution of Earth
- Energy Sources & Human Impact on the Earth
- Water Systems
- Renewable Energy Resources
- Environmental Impact & Sustainability
- Planets and the Solar System
- Astronomy & Space Exploration
- ILTS Science - Earth and Space Science Flashcards
In this topic we will cover the main types of DNA structure. According to Watson-Crick pairing, A pairs with T and G pairs with C, so the two strands are said to be complementary. The base pairs lie almost flat, stacked on top of one another perpendicular to the long axis of the double helix. Stacking adds to the stability of the DNA molecule by excluding water molecules from the spaces between the base pairs. When discussing a DNA molecule, biologists frequently refer to the individual strands as single-stranded DNA and to the double helix as double-stranded DNA or duplex DNA.
Types of DNA Structure / Forms of DNA
DNA can exist in several forms, such as A, B and Z, although only B-DNA and Z-DNA have been directly observed in functioning organisms. The conformation that DNA adopts depends on the following factors:
- The hydration level
- DNA sequence
- The amount and direction of supercoiling
- Chemical modification of bases
- Type and concentration of metal ions as well as the presence of polyamines in the solution
- Rotation around the glycosidic bond, which changes the orientation of the base in relation to the sugar
- Rotation around the bond between the 3’ and 4’ carbons of the sugar
Both of the rotations above change the relative positioning of the two strands, so alternative DNA structures can be formed.
- A-DNA is a very minor species of DNA that may or may not be present under normal physiological conditions.
- A-DNA appears when the DNA fibre (B-DNA) is dehydrated, i.e., relative humidity is reduced from 92 to 75% and Na+, K+ and Cs+ ions are present in the medium.
- In other words, in solution, DNA assumes the B form and under conditions of dehydration, the A form. This is because the phosphate groups in the A-DNA bind fewer water molecules than do phosphates in B-DNA.
- The `A’ DNA is more compact than the `B’ DNA.
- It has a diameter of 25.5 Å, the distance between two adjacent bases is 2.9 Å and the pitch is 32 Å; thus there are 11 bases per turn.
- This form of DNA closely resembles double-stranded RNA.
- It has a much deeper major groove, and the minor groove is very shallow.
- The `A’ DNA is right handed in its helical turnings.
‘B’ DNA or Watson and Crick DNA double helix model
i. It consists of two antiparallel polynucleotide strands that wind about a common axis with a right handed twist to form a double helix.
ii. The ideal B-DNA helix has a diameter of 20 Å. The bases are 3.4 Å apart along the helix axis and the helix rotates 36° per base pair. Therefore, the helical structure repeats after 10 residues on each chain, i.e., at intervals of 34 Å. In other words, each turn of the helix contains 10 nucleotide residues.
iii. The phosphate and deoxyribose units are found on the periphery of the helix, whereas the purine and pyrimidine bases occur in the centre. The planes of the bases are perpendicular to the helix axis.
iv. Each base is hydrogen bonded to a base on the opposite strand (A with T and G with C) to form a planar base pair. The planes of the sugars are almost at right angles to those of the bases.
v. The hydrogen bonding between the bases takes place either between the –NH2 group of one base and =O of the other base or between =NH of one base and the –N of the other base. For stable bond formation, the distance between N-N is 0.30 nm and that between O-N is 0.28-0.29 nm.
vi. The double helix has major and minor grooves.
vii. When counterions such as Na+ are present and the relative humidity is greater than 92%, fibers of DNA assume the so-called B conformation. It is the most stable structure for a random DNA sequence and is therefore the standard point of reference.
- C-DNA is formed at 66% relative humidity in the presence of Li+ ions.
- C-DNA is also right-handed, with an axial rise of 3.32 Å per base pair.
- There are 9.33 base pairs per turn of the helix; the helix pitch is therefore 3.32 × 9.33 Å, or 30.97 Å.
- The rotation per base pair in C-DNA is approximately 360/9.33 or 38.58°.
- The diameter of C-helix is 19 Å, which is smaller than that of both B- and A-helix.
- The tilt of the base pairs is 7.8°.
- D-DNA has only 8 base pairs per helical turn and is therefore an extremely rare variant.
- D-DNA is found in some DNA molecules that are devoid of guanine.
- D-DNA has an axial rise of 3.03 Å per base pair, with a tilting of 16.7° from the axis of the helix.
By contrast, the A, B and C forms of DNA can occur in DNA molecules of any base sequence.
- `Z’ DNA is a left-handed form of DNA.
- In Z-DNA the helix twists in the opposite direction to that of the other forms of DNA.
- `Z’ DNA is slimmer and has a diameter of only 18.4 Å.
- `Z’ DNA has about 12 bases per turn.
- `Z’ DNA has no distinct major and minor grooves; there is only one groove, which is narrow and deep.
- `Z’ DNA is so named because its backbone follows a zig-zag arrangement.
- Under experimental conditions, the presence of `Z’ DNA has been shown at high salt concentrations or in the presence of certain specific cations, such as spermine and spermidine.
- `Z’ DNA has a high degree of negative supercoiling and has certain specific proteins attached to it.
- In addition, relatively high methylation at the 5-position of cytosine residues has been found in `Z’ DNA.
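The helix parameters quoted above are related by two simple formulas: pitch = rise per base pair × base pairs per turn, and twist per base pair = 360° ÷ base pairs per turn. The short Python sketch below is illustrative only; the function name is ours, and the input values are the rise and bases-per-turn figures quoted in this article for the B and C forms.

```python
# Illustrative check of the helix parameters quoted above.
# pitch = rise per base pair * base pairs per turn
# twist per base pair = 360 degrees / base pairs per turn

def helix_parameters(rise_per_bp, bp_per_turn):
    """Return (pitch in angstroms, twist per base pair in degrees)."""
    pitch = rise_per_bp * bp_per_turn
    twist = 360.0 / bp_per_turn
    return pitch, twist

# Values quoted in this article (rise in angstroms, base pairs per turn).
forms = {
    "B-DNA": (3.4, 10.0),    # expected: pitch 34 A, twist 36 degrees
    "C-DNA": (3.32, 9.33),   # expected: pitch ~31 A, twist ~38.6 degrees
}

for name, (rise, bp_per_turn) in forms.items():
    pitch, twist = helix_parameters(rise, bp_per_turn)
    print(f"{name}: pitch = {pitch:.2f} A, twist per base pair = {twist:.2f} degrees")
```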
Role of different forms
Although the precise role of the alternative forms of DNA is not well understood, they may play some regulatory function, and the possibility that some of these forms are artifacts of experimental conditions cannot be completely ruled out. As discussed, the conformation of DNA has an important biological function. Most regulatory controls require the binding of certain factors to DNA; any change in the structure will affect the binding of these factors and will therefore regulate the biological activity.
Some Unusual Structures
- Bends occur in DNA helix wherever four or more adenosine residues appear sequentially in one strand.
- Bent DNA may be important in binding of some proteins to DNA.
- H-DNA is usually found in polypyrimidine or polypurine segments that contain within themselves a mirror repeat.
- One simple example is a long stretch of alternating T and C residues.
- A striking feature of H-DNA is the pairing and interwinding of 3 strands of DNA to form a triple helix.
- Triple-helical DNA is produced spontaneously only within long sequences containing only pyrimidines (or purines) in one strand.
- Two of the three strands in the H-DNA triple helix contain pyrimidines and the third contains purines.
The N-7, O6 and N6 atoms of purines, which participate in the hydrogen bonding of triplex DNA, are often referred to as Hoogsteen positions, and the non-Watson-Crick pairing is called Hoogsteen pairing.
The triplexes are most stable at low pH, and are readily formed within long sequences containing only purines in a given strand.
Tetraplex structures may also form in DNA sequences with a very high proportion of guanosine residues.
- Structure of a DNA quadruplex formed by telomere repeats.
- The conformation of the DNA backbone diverges significantly from the typical helical structure.
- Here, four guanine bases form a flat plate and these flat four-base units then stack on top of each other, to form a stable G-quadruplex structure.
- The guanine-rich sequences may stabilize chromosome ends by forming very unusual structures of stacked sets of four-base units, rather than the usual base pairs found in other DNA molecules.
- Telomeres and telomerase have recently received great attention because of their potential links to cancer, HIV and other diseases.
- A unique G-rich DNA sequence in the telomeres was found to protect the chromosomes from recombination, end to end fusion, and degradation through forming G-quadruplexes with highly polymorphic structures in the presence of alkali metal cations.
- This unusual structure and its extensive cellular functions make the G-quadruplex a very attractive target for drug design, which has made the structural determination of G-quadruplexes important.
Adding fractions inquiry
Mathematical inquiry processes: Identify and create patterns; conjecture and generalise. Conceptual field of inquiry: Addition and subtraction of fractions.
The prompt was devised by Mark Greenaway (an Advanced Skills Teacher in Suffolk, UK) to encourage students to analyse the sum of two unit fractions in which the denominators are in the form n and n + 1.
Initial questions and observations
The teacher can assess students' level of understanding of fractions through their initial questions and observations about the prompt. The board below, which was constructed by a year 7 mixed attainment class, shows that at least some students have sound prior knowledge. The early phases of the inquiry might, therefore, involve spreading that knowledge and ensuring students are secure in finding the sum of two fractions, rather than involve a teacher explanation.
Some students are already beginning to speculate about the next case and the rules for finding the sum of a quarter and a fifth. They suggest two different rules:
Add two to the numerator and six to the denominator, giving 9/18; and
Add two to the numerator and double the denominator, giving 9/24.
That neither is correct intrigues students. In fact, the denominator increases in a quadratic sequence: 6, 12, 20, 30, ...
Teachers might have misgivings about the prompt's potential to sow misconceptions or to focus students' thinking on the operations rather than underlying concepts.
For example, students often notice that the sum and product of the denominators on the left-hand side of the equation give, respectively, the numerator and denominator on the right-hand side. They might try to generalise the two 'rules' to other cases without realising that they only work in the case of unit fractions.
However, the value of the prompt lies precisely in the way it exposes students' misconceptions and procedural thinking that already exist.
Noticeably, the board above does not feature the comment that regularly occurs at the initial stage of the inquiry: "The first answer is wrong because 1 + 1 = 2 and 2 + 3 = 5, so it should be 2/5." In bringing misconceptions to the surface, the inquiry gives teachers the chance to tackle them (see 'Tackling misconceptions' below).
Ultimately, the inquiry could lead into deductive algebraic proof in all years of secondary or high school. For unit fractions, the teacher could introduce:
1/n + 1/(n + 1) = (n + 1)/n(n + 1) + n/n(n + 1) = (2n + 1)/n(n + 1).
Students might then be expected to construct other proofs for when the numerators are any integer and equal (x/n + x/(n + 1)) or any integer and different (x/n + (x + k)/(n + 1)).
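A quick computational check of this algebra is possible with Python's fractions module. The sketch below is illustrative and not part of the original inquiry; it compares 1/n + 1/(n + 1) with the general form (2n + 1)/n(n + 1), and with the students' 'add the denominators, multiply the denominators' rule, for the first few values of n.

```python
# Illustrative check that 1/n + 1/(n + 1) = (2n + 1)/n(n + 1), using exact arithmetic.
from fractions import Fraction

def sum_of_consecutive_unit_fractions(n):
    """Return 1/n + 1/(n + 1) as an exact fraction."""
    return Fraction(1, n) + Fraction(1, n + 1)

for n in range(2, 7):
    direct = sum_of_consecutive_unit_fractions(n)
    general_form = Fraction(2 * n + 1, n * (n + 1))
    # The classroom 'rule' (add the denominators for the numerator, multiply them
    # for the denominator) agrees here because 1/a + 1/b = (a + b)/(a * b),
    # but it fails as soon as the numerators are greater than one.
    rule = Fraction(n + (n + 1), n * (n + 1))
    print(n, direct, general_form, rule, direct == general_form == rule)
```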
Matthew Bernstein, a teacher of a grade 5/6 class at the Fred Varley Public School (Markham, Ontario), posted these pictures on twitter. He describes how the student-driven inquiry developed:
Students used Google Jamboard (we are still hybrid) to make observations and ask some great questions regarding what they saw.
Within the lots of ideas, there were two that they were interested in exploring: (1) If you add the denominators together it makes the numerator; (2) Can you always multiply the denominators of fractions together in an equation to get a common one?
Students were very curious about this and the class was excited to explore these questions together in small groups to either prove or disprove them. They really enjoyed finding patterns to see if they could be generalized. Students used Mathigon Polypad to help represent ideas visually.
There is no doubt that it helped that most were familiar with the notion of needing to find a common denominator, but I think that's what made the task that much richer for them. I do think this could be done if students were unfamiliar but it would go in a different direction.
The inquiry in a grade 4 classroom
The first picture shows the initial thoughts and questions of Amanda Klahn's grade 4 PYP class at the Western Academy of Beijing, China.
Students are starting to look for patterns linking the two fractions with their sum. They have found the rules for adding and multiplying the denominators to get the numerator and denominator respectively in the sum.
A student has extended the rules to a new case (one fifth and one seventh) in which the denominators are in the form n and (n + 2).
While the fractions remain unit fractions, the rules continue to give the correct answer - in the new case, 1/5 + 1/7 = (5 + 7)/(5 x 7) = 12/35 .
Other students' presentations show the use of manipulatives (fraction bricks) to explain the calculations in the prompt. The final picture shows a student has changed the numerator to two, going on to find the sum of 2/14 and 2/15 .
There are more rich mathematical results from the inquiry on the class blog.
The inquiry in a mixed attainment class
The picture shows the questions and observations of a year 7 mixed attainment class in an inner-city comprehensive school in the UK. They reveal a wide variety of prior knowledge and approaches to the prompt. At least one student evidently knows how to add fractions, another has a partial recollection that a common denominator is required, and another perpetuates the misconception that you add numerators and denominators separately. Other students prefer to speculate about the sum of a quarter and a fifth by extending the pattern from the two examples in the prompt.
When given the choice of six regulatory cards, the class required an explanation of how to add fractions, which the teacher orchestrated by drawing on the knowledge that already existed in the classroom. The students then opted either to practise a procedure (adding fractions) or find more examples by summing unit fractions.
The first lesson ended with a pair of students presenting the general form of the fraction on the right-hand side of each equation, with n being the denominator of the first fraction: (2n + 1)/n(n + 1).
At the start of the second lesson, other students explained on a number line why the equations in the prompt are correct. The class then created their own lines of inquiry by changing features of the prompt.
Lines of inquiry
(1) Changing the numerator
How do the results change when the numerator is greater than one? For example, 2/3 + 2/4 = 14/12 and 2/4 + 2/5 = 18/20
What if the numerators have a difference of one? For example, 2/3 + 3/4 = 17/12 and 2/4 + 3/5 = 22/20
How does the general form change for the new cases?
(2) Changing the difference between the denominators
How do the results change when the difference between the denominators is greater than one? For example, 1/2 + 1/4 = 6/8 and 1/3 + 1/5 = 8/15 or 1/2 + 1/5 = 7/10 and 1/3 + 1/6 = 9/18
How does the general form change in each case?
(3) Summing three 'consecutive' unit fractions
Can you find the sum of three 'consecutive' unit fractions? What about 1/2 + 1/3 + 1/4? The teacher, using a number line, contributed an explanation of how three unit fractions were not consecutive in the same way as three positive integers. Indeed, the class resolved not to use the term 'consecutive' in the context of fractions unless they (or their equivalents) had the same denominators and numerators with a difference of one. In this definition, a third, a quarter and a fifth are not consecutive.
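For anyone who wants to verify results quickly while following these lines of inquiry, the short sketch below (again illustrative, using Python's exact rational arithmetic) evaluates one example from each line. Note that Fraction simplifies automatically, so 14/12 is reported as 7/6.

```python
# Quick exact checks of one example from each line of inquiry.
from fractions import Fraction

# (1) Changing the numerator: 2/3 + 2/4
print(Fraction(2, 3) + Fraction(2, 4))                     # 7/6, i.e. 14/12 simplified

# (1) Numerators differing by one: 2/4 + 3/5
print(Fraction(2, 4) + Fraction(3, 5))                     # 11/10, i.e. 22/20 simplified

# (2) Denominators differing by two: 1/3 + 1/5
print(Fraction(1, 3) + Fraction(1, 5))                     # 8/15

# (3) Three 'consecutive' unit fractions: 1/2 + 1/3 + 1/4
print(Fraction(1, 2) + Fraction(1, 3) + Fraction(1, 4))    # 13/12
```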
The inquiry ended with students giving presentations about their findings and the patterns they had noticed.
Engagement and creativity through inquiry
Emmy Bennett, a teacher of mathematics at Priory School, Edgbaston (UK), used the adding fractions prompt to initiate inquiries with her two year 7 classes. The pupils responded in highly creative ways and developed multiple lines of inquiry. Emmy reports:
After the success of my initial inquiry lesson with a year 9 class (see Challenge through inquiry), I decided to try the adding fractions prompt with my two year seven classes. In both classes, the pupils started by discussing the prompt in pairs. Then we shared ideas and decided where to go with the inquiry.
With one class they spent quite a bit of time deciding if the prompt was true and some pupils chose to practice adding fractions after some examples. The picture shows the different strands of the inquiry. The pupils explored some equivalent fractions and were enthusiastic to notice all the properties of the initial prompt as they could.
In the other year seven lesson pupils were interested in finding more examples or changing the prompt to find other patterns.
For this lesson I asked pupils who found more examples to write them on the whiteboard as we went along. (I’m lucky enough to have three whiteboards at the front of my classroom). The pupils loved this and, at one point, there were eight pupils writing on the boards.
One pupil was really interested in looking at examples when the difference and product of two fractions are the same. He called it a 'maths hack' and initially said, 'It doesn’t work when the denominators are two apart.' However, he kept going and noticed that the numerator of the difference became the difference of the initial denominators. The picture shows the record of the pupils’ inquiry.
All the pupils in the two classes were fully engaged throughout the lessons. Unfortunately, I did this inquiry on the last day of term so we couldn't spend more time on it, but some pupils said they were going to explore more at home. It was an absolute joy to teach in this way and I can’t wait to try more inquiries in the future.
Helen Hindle, Hugh Salter and Andrew Blair, three teachers at Longhill High School (Brighton, UK), used the prompt to challenge students' misconceptions about adding fractions. The inquiries that developed from the prompt featured hugely valuable discussions in which entrenched notions were challenged and an understanding of the concept of a fraction was reconstructed by students and the teacher.
Andrew Blair reports on the lesson study:
In the lesson study cycle, which involved year 7 classes, I went first. I decided to use a number line as a tool with which to approach the concepts of a fraction and then of adding fractions. Before showing the class the prompt, we started by locating fractions on a number line.
This led immediately to our first misconception about representing 1/6, which one student argued should be placed half way along the number line as six is half of twelve.
Speculation about patterns
The students' questions and comments about the prompt provided a strong foundation for inquiry. In particular, the speculation around the solution to 1/4 + 1/5 motivated the students to request instruction in how to add fractions. Should we continue the sequence 5/6, 7/12 by adding two to the numerator and six to the denominator, giving 9/18? Or should we apply the 'rule' derived from the denominators of the unit fractions? In the latter case, we would find their sum for the numerator and their product for the denominator, giving 9/20 .
Inquiry for all attainment levels
One class with lower prior attainment that was part of the lesson study posed meaningful questions and made insightful observations (see picture). The students' responses show the potential of the prompt to promote questioning and noticing in all classes.
As the inquiries developed, students were taught to link the number of intervals on the number line with the product of the denominators. The students then showed the sum of any two fractions on a number line by using equivalent fractions. So, typically, a student went on to show 1/4 + 1/5 on a number line of length 20, explain why it is equivalent to 5/20 + 4/20, and give the solution 9/20.
Misconceptions identified during the lesson study
1/6 should be placed half way along a line of 12 units because the 'number' six stands for the length along the line. The student who said this had no problem marking a quarter. Thus, while students might have a sense of 1/4, they might not have developed a conceptual understanding.
Having started with number lines of length 12 units, students refused to use a line of six units to show the first calculation in the prompt. This revealed an inability to conceive of a fraction as part of any whole. Once a third was represented by an arrow four units along a line of length 12, students would not accept it could also be shown as two units along a shorter line of length six.
To add two fractions, students claimed, you add the numerators and then the denominators separately. Thus, 1/2 + 1/3 = 2/5. This shows a misconception of fractions as two unrelated 'numbers'.
To add two fractions, you add the denominators to get the numerator in the answer and multiply them to get the denominator. (As students realise during their inquiry, this works for unit fractions, but not when the numerator is greater than one.)
When showing the solution to 1/2 + 1/3 on a number line, students start both fractions at zero, rather than place one fraction after the other (see illustration). This idea that the fraction of a line can only be shown from zero was surprisingly common.
Questions to extend the inquiry
The following questions come from year 7 students at Haverstock School (Camden, London, UK) midway through the inquiry:
(1) "Would it ever be true if you switched the numerators and denominators?" The students could not find any values to make this true.
(2) "If you switched the numerators and denominators in the question, could they be equal?" The students found values for a, b, c and d that satisfy the equation. They realised that ac = bd.
The questions expressed formally would be:
Terry Patterson, a maths teacher in London, contacted Inquiry Maths about a prompt she had devised. Terry's first experience of an inquiry lesson came when she used the prompt with her year 8 class. She commented on the emotional impact an inquiry can have: "The students' questions are moving and revealing. They loved running the lesson. I was quite choked up after my first lesson yesterday - an eye-opener."
The class had low prior attainment in maths and the prompt gave Terry an insight into the students' level of understanding: "Every question they posed revealed the group's bafflement." The questions included:
Why does 1 - 1 = 1?
Why does 2 - 3 = 6?
Is it to do with times tables?
The last question could follow from identifying supposed links between the numerators and denominators - that is, 1 x 1 = 1 and 2 x 3 = 6 respectively. The questions reveal the kinds of misconceptions that are common when students are faced with fractions prompts.
Lesson 19 Enzymes-II
In this lesson, we will study the mechanism of enzyme action, enzyme kinetics, the Michaelis-Menten equation and the regulation of enzyme activity.
19.2 Mechanism of Enzyme activity
All enzymes contain at least one active site (a specific region of the enzyme) which combines with the substrate (the reacting molecules). The binding of substrate and enzyme causes a change in the shape of the substrate molecules, which leads to the formation of new bonds or the breakage of old bonds and hence to the formation of product molecules. The products are released from the enzyme surface, and the enzyme molecules can enter another reaction cycle. In this way a small number of enzyme molecules convert a large number of reactant molecules into products.
Enzymes are specific towards their substrates. This can be explained by the lock and key model. Consider the enzyme molecule to be a ‘lock’ and the substrate molecule its ‘key’. Every lock and key combination is specific, i.e. a lock can be opened only by a particular key; similarly, the enzyme-substrate combination is specific. This happens because the shape of the active site of the enzyme and the shape of the substrate are complementary to each other, like puzzle pieces fitting together to form a complete picture, as illustrated in Figure 19.1. This means that an enzyme molecule reacts with only one compound, or with a very few similar compounds.
If a substrate molecule does not fit the enzyme correctly, no reaction takes place, as illustrated in Figure 19.2.
Figure 19.1: Combination of an enzyme with the correct substrate
Figure 19.2: Combination of an enzyme with an incorrect substrate
19.3 Kinetics of enzyme action
Enzymes are catalysts. They enhance the rate of specific chemical reactions that would otherwise occur very slowly. They do not change the equilibrium point of a reaction, nor are they used up or permanently changed by the reactions.
It is found that the substrate concentration has a great effect on the initial rate of enzyme-catalyzed reactions. All enzymes exhibit a saturation effect, as explained in section 18.5.2.
This led Victor Henri in 1903 to the conclusion that during enzyme catalyzed reactions, enzyme molecules combine with substrate molecules to form a complex. This idea was expanded into a general theory of enzyme action by Leonor Michaelis and Maud Menten in 1913.
19.4 Michaelis-Menten equation
There are two basic reactions involved in the formation and breakdown of the enzyme-substrate complex. The enzyme E first combines reversibly with the substrate S to form an enzyme-substrate complex ES; this reaction is fast and reversible:
E + S ⇌ ES ---------------- Equation 1
The complex ES then breaks down in a second reaction to release the product P and regenerate the enzyme E; this reaction is also reversible, but slower than the first:
ES ⇌ E + P ---------------- Equation 2
If [Et] represents the total enzyme concentration (the sum of free and combined enzyme) and [ES] is the concentration of the enzyme-substrate complex, then [Et] - [ES] represents the concentration of free or uncombined enzyme. The substrate concentration [S] is far greater than [Et], so the amount of S bound by E at any given time is negligible compared with the total concentration of S.
The rate of formation of [ES] in Equation 1 is
Rate of formation = k1([Et] - [ES])[S] --------- Equation 3
where k1 is the rate constant of the forward reaction in Equation 1. The rate of formation of ES from E + P by the reverse of reaction (2) is very small and can be neglected.
The rate of breakdown of ES is
Rate of breakdown = k-1[ES] + k2[ES] ------------- Equation 4
where k-1 and k2 are the rate constants for the reverse of reaction (1) and for reaction (2), respectively.
When the rate of formation of ES is equal to its rate of breakdown, the concentration of ES is constant and the reaction is in a steady state.
Thus, equating (3) and (4),
k1([Et] - [ES])[S] = k-1[ES] + k2[ES] ---------------- Equation 5
Solving Equation 5 for [ES] gives
[ES] = [Et][S] / ([S] + (k-1 + k2)/k1)
The initial velocity v0 is determined by the rate of breakdown of ES in reaction (2), whose rate constant is k2. Thus we have
v0 = k2[ES] = k2[Et][S] / ([S] + (k-1 + k2)/k1)
Let KM = (k-1 + k2)/k1, the Michaelis-Menten constant, and Vmax = k2[Et], the rate when all the available enzyme is present as ES.
Substituting these into the equation above, we get
v0 = Vmax[S] / (KM + [S])
This is the Michaelis-Menten equation, the rate equation for a one-substrate enzyme-catalyzed reaction. It expresses the quantitative relationship between the initial velocity v0, the maximum velocity Vmax and the initial substrate concentration [S], all related through the Michaelis-Menten constant KM.
In the special case when the initial reaction rate is exactly one-half the maximum velocity, v0 = Vmax/2, so
Vmax/2 = Vmax[S] / (KM + [S])
Dividing both sides by Vmax and solving for KM gives KM = [S]. Thus KM is equal to the substrate concentration at which the initial reaction rate is half its maximum velocity.
The Michaelis-Menten equation is basic to all aspects of kinetics of enzyme action. If we know KM and Vmax, we can calculate the reaction rate of an enzyme at any concentration of its substrate.
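As an illustration of that point, the sketch below evaluates the Michaelis-Menten equation for a hypothetical enzyme; the KM and Vmax values are invented for the example and are not taken from this lesson.

```python
# Illustrative evaluation of the Michaelis-Menten equation v0 = Vmax[S] / (KM + [S]).
# The KM and Vmax values below are hypothetical, chosen only to show the behaviour.

def michaelis_menten_rate(s, vmax, km):
    """Initial velocity v0 at substrate concentration s."""
    return vmax * s / (km + s)

VMAX = 100.0   # hypothetical maximum velocity
KM = 2.0       # hypothetical Michaelis-Menten constant

for s in (0.5, 1.0, 2.0, 5.0, 20.0, 200.0):
    print(f"[S] = {s:6.1f}  ->  v0 = {michaelis_menten_rate(s, VMAX, KM):6.2f}")

# When [S] = KM (2.0 here), v0 is exactly half of Vmax;
# as [S] grows much larger than KM, v0 approaches Vmax.
```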
19.5 Regulation of enzyme activity
Enzymes are catalysts whose activity is regulated by the cell. There are two main reasons for this type of control:
(1) All enzyme action requires energy. A cell has a limited store of energy; if an unnecessary enzyme-catalyzed reaction keeps running, the cell will run short of energy and die.
(2) Some of the products of enzyme-catalyzed reactions may be harmful to the cell. If the concentration of such products increases, the cell may die.
Some of the mechanisms which exist for regulation of enzyme activity are as follows:
19.5.1 The simplest method is to produce the enzyme only when it is required. This mechanism is used by bacteria, which produce the enzyme only when a substrate that they need to degrade is available, or when a particular product is desired.
19.5.2 Allosterism: Enzymes with more than one binding site are called allosteric enzymes. In such enzymes, one site, called the active site, is for the substrate molecule; the other is called the effector binding site. When a molecule (called an effector molecule) binds to this site, it changes the shape of the active site so that its activity is either increased or decreased. If the activity is increased it is called positive allosterism, and if the activity is decreased it is called negative allosterism.
19.5.3 Feedback inhibition: This regulates a chain of reactions involved in the synthesis of biological molecules. Consider a sequence of reactions of the form A → B → C → ... → F.
The starting material A is converted to B by enzyme E1, B is converted to C by enzyme E2, and so on until the final product F is generated. If F is no longer needed, F exerts a negative allosteric effect on enzyme E1 and its activity is decreased. Thus A is not converted to B, and the whole chain of reactions stops.
19.5.4 Zymogens: An enzyme in an inactive form is called a zymogen or proenzyme. It is produced away from the site where it is required; when it reaches its destination, it is converted to the active form. For example, the digestive enzymes pepsin, trypsin and chymotrypsin are very destructive in nature: they degrade proteins to amino acids. If these enzymes were synthesized in the active form, the cell producing them would be killed. The cells therefore synthesize inactive forms called pepsinogen, trypsinogen and chymotrypsinogen, which are converted to the active forms in the digestive tract, where they are required for the digestion of proteins.
19.5.5 Protein modification: In this mechanism a covalent group is added to or removed from the protein molecule of the enzyme. This change either activates the enzyme or turns off its action. Generally, phosphoryl groups are added to or removed from the amino acids serine, tyrosine or threonine in the protein chain of the enzyme.
19.5.6 Inhibition of enzyme activity by pH: Most enzymes are active only within a certain pH range. Making the solution more basic or more acidic causes denaturation of the enzyme molecule, in which its shape changes; as a result, activity is lost because the enzyme-substrate complex can no longer be formed.
References & Further Reading
1. Lehninger A. (1987), “Principles of Biochemistry”. B S Publishers & Distributers, pp. 207-221.
2. Satyanarayana U. & Chakrapani U. (2011), “Biochemistry”. Books and Allied (P) Ltd. pp. 85-103.
3. Denniston (2003), “General, Organic & Biochemistry”. The McGraw-Hill Companies. pp. 612-615.
Special education or special needs education is the practice of educating students with special needs in a way that addresses their individual differences and needs. Ideally, this process involves the individually planned and systematically monitored arrangement of teaching procedures, adapted equipment and materials, and accessible settings. These interventions are designed to help learners with special needs achieve a higher level of personal self-sufficiency and success in school and their community, than may be available if the student were only given access to a typical classroom education.
Common special needs include learning disabilities, communication disabilities, emotional and behavioral disorders, physical disabilities, and developmental disabilities. Students with these kinds of special needs are likely to benefit from additional educational services such as different approaches to teaching, the use of technology, a specifically adapted teaching area, or a resource room.
Intellectual giftedness is a difference in learning and can also benefit from specialized teaching techniques or different educational programs, but the term "special education" is generally used to specifically indicate instruction of students with disabilities. Gifted education is handled separately.
Whereas special education is designed specifically for students with special needs, remedial education can be designed for any students, with or without special needs; the defining trait is simply that they have reached a point of underpreparedness, regardless of why. For example, even people of high intelligence can be underprepared if their education was disrupted, for example, by internal displacement during civil disorder or a war.
In most developed countries, educators modify teaching methods and environments so that the maximum number of students are served in general education environments. Therefore, special education in developed countries is often regarded as a service rather than a place. Integration can reduce social stigmas and improve academic achievement for many students.
The opposite of special education is general education. General education is the standard curriculum presented without special teaching methods or supports.
Identifying students or learners with special needs
Some children are easily identified as candidates for special needs due to their medical history. They may have been diagnosed with a genetic condition that is associated with intellectual disability, may have various forms of brain damage, may have a developmental disorder, may have visual or hearing disabilities, or other disabilities.
For students with less obvious disabilities, such as those who have learning difficulties, two primary methods have been used for identifying them: the discrepancy model and the response to intervention model. The discrepancy model depends on the teacher noticing that the students' achievements are noticeably below what is expected. The response to intervention model advocates earlier intervention.
In the discrepancy model, a student receives special education services for a specific learning difficulty (SLD) if the student has at least normal intelligence and the student's academic achievement is below what is expected of a student with his or her IQ. Although the discrepancy model has dominated the school system for many years, there has been substantial criticism of this approach (e.g., Aaron, 1995, Flanagan and Mascolo, 2005) among researchers. One reason for criticism is that diagnosing SLDs on the basis of the discrepancy between achievement and IQ does not predict the effectiveness of treatment. Low academic achievers who also have low IQ appear to benefit from treatment just as much as low academic achievers who have normal or high intelligence.
The alternative approach, response to intervention, identifies children who are having difficulties in school in their first or second year after starting school. They then receive additional assistance such as participating in a reading remediation program. The response of the children to this intervention then determines whether they are designated as having a learning disability. Those few who still have trouble may then receive designation and further assistance. Sternberg (1999) has argued that early remediation can greatly reduce the number of children meeting diagnostic criteria for learning disabilities. He has also suggested that the focus on learning disabilities and the provision of accommodations in school fails to acknowledge that people have a range of strengths and weaknesses and places undue emphasis on academics by insisting that students should be supported in this arena and not in music or sports.
A special education program should be customized to address each individual student's unique needs. Special educators provide a continuum of services, in which students with special needs receive varying degrees of support based on their individual needs. Special education programs need to be individualized so that they address the unique combination of needs in a given student.
In the United States, Canada, and the UK, educational professionals use the initialism IEP when referring to a student’s individualized education plan. For children who are not yet 3, an Individual Family Service Plan (IFSP) is used instead. It contains 1) information on the child’s present level of development in all areas; 2) outcomes for the child and family; and 3) services the child and family will receive to help them achieve the outcomes.
Students with special needs are assessed to determine their specific strengths and weaknesses. Placement, resources, and goals are determined on the basis of the student's needs. Accommodations and Modifications to the regular program may include changes in the curriculum, supplementary aides or equipment, and the provision of specialized physical adaptations that allow students to participate in the educational environment as much as possible. Students may need this help to access subject matter, physically gain access to the school, or meet their emotional needs. For example, if the assessment determines that the student cannot write by hand because of a physical disability, then the school might provide a computer for typing assignments, or allow the student to answer questions verbally instead. If the school determines that the student is severely distracted by the normal activities in a large, busy classroom, then the student might be placed in a smaller classroom such as a resource room.
Methods of provision
Schools use different approaches to providing special education services to students. These approaches can be broadly grouped into four categories, according to how much contact the student with special needs has with non-disabled students (using North American terminology):
- Inclusion: In this approach, students with special needs spend all, or most of the school day with students who do not have special needs. Because inclusion can require substantial modification of the general curriculum, most schools use it only for selected students with mild to moderate special needs, which is accepted as a best practice. Specialized services may be provided inside or outside the regular classroom, depending on the type of service. Students may occasionally leave the regular classroom to attend smaller, more intensive instructional sessions in a resource room, or to receive other related services that might require specialized equipment or might be disruptive to the rest of the class, such as speech and language therapy, occupational therapy, physical therapy, rehabilitation counseling. They might also leave the regular classroom for services that require privacy, such as counseling sessions with a social worker.
- Mainstreaming refers to the practice of educating students with special needs in classes with non-disabled students during specific time periods based on their skills. Students with special needs are segregated in separate classes exclusively for students with special needs for the rest of the school day.
- Segregation in a separate classroom or special school for students with special needs: In this model, students with special needs do not attend classes with non-disabled students. Segregated students may attend the same school where regular classes are provided, but spend all instructional time exclusively in a separate classroom for students with special needs. If their special class is located in an ordinary school, they may be provided opportunities for social integration outside the classroom, such as by eating meals with non-disabled students. Alternatively, these students may attend a special school.
- Exclusion: A student who does not receive instruction in any school is excluded from school. In the past, most students with special needs have been excluded from school. Such exclusion still affects about 23 million disabled children worldwide, particularly in poor, rural areas of developing countries. It may also occur when a student is in hospital, housebound, or detained by the criminal justice system. These students may receive one-on-one instruction or group instruction. Students who have been suspended or expelled are not considered excluded in this sense.
Effective Instruction for students with disabilities
- Goal Directed: Each child must have an Individualized Education Program (IEP) that distinguishes his/her particular needs. The child must get the services that are designed for him/her. These services will allow him/her to reach his/her annual goals which will be assessed at the end of each term along with short term goals that will be assessed every few months.
- Research-Based Methods: A great deal of research has been done about students with disabilities and the best ways to teach them. Testing, IQ scores, interviews, the discrepancy model, etc. should all be used to determine where to place the child. Once that is determined, the next step is deciding the best way for the child to learn. There are many different programs, such as the Wilson Reading Program and Direct Instruction.
- Guided by student performance: While the IEP goals may be assessed every few months to a year, constant informal assessments must take place. These assessments guide the teacher's instruction, allowing the teacher to determine whether the material is too difficult or too easy.
A special school is a school catering for students who have special educational needs due to severe learning difficulties, physical disabilities or behavioural problems. Special schools may be specifically designed, staffed and resourced to provide appropriate special education for children with additional needs. Students attending special schools generally do not attend any classes in mainstream schools.
Special schools provide individualised education, addressing specific needs. Student to teacher ratios are kept low, often 6:1 or lower depending upon the needs of the children. Special schools will also have other facilities for children with special needs, such as soft play areas, sensory rooms, or swimming pools, which are necessary for treating students with certain conditions.
In recent times, the number of places available in special schools has been declining as more children with special needs are educated in mainstream schools. However, there will always be some children whose learning needs cannot be appropriately met in a regular classroom setting and who will require specialised education and resources to provide the level of support they need. An example of a disability that may require a student to attend a special school is intellectual disability, although this practice is often frowned upon by school districts in the USA in light of the Least Restrictive Environment requirement mandated in the Individuals with Disabilities Education Act.
An alternative is a special unit or special classroom, also called a self-contained classroom, which is a separate room or rooms dedicated solely to the education of students with special needs within a larger school that also provides general education. These classrooms are typically staffed by specially trained teachers, who provide specific, individualized instruction to individuals and small groups of students with special needs. Self-contained classrooms, because they are located in a general education school, may have students who remain in the self-contained classroom full-time, or students who are included in certain general education classes. In the United States a part-time alternative that is appropriate for some students is sometimes called a resource room.
History of special schools
One of the first special schools in the world was the Institut National des Jeunes Aveugles in Paris, which was founded in 1784. It was the first school in the world to teach blind students. The first school in the UK for the deaf was established in 1760 in Edinburgh by Thomas Braidwood, with education for visually impaired people beginning in Edinburgh and Bristol in 1765.
In the 19th Century, people with disabilities and the inhumane conditions where they were supposedly housed and educated were addressed in the literature of Charles Dickens. Dickens characterized people with severe disabilities as having the same, if not more, compassion and insight in Bleak House and Little Dorrit.
Such attention to the downtrodden conditions of people with disabilities resulted in reforms in Europe, including the re-evaluation of special schools. In the United States, reform came more slowly. Through the middle of the 20th century, special schools, termed institutions, were not only accepted but encouraged. Students with disabilities were housed with people with mental illnesses, and they were not educated much, if at all.
With the Amendments to the Individuals with Disabilities Act of 1997, school districts in the United States began to slowly integrate students with moderate and severe special needs into regular school systems. This changed the form and function of special education services in many school districts and special schools subsequently saw a steady decrease in enrollment as districts weighed the cost per student. It also posed general funding dilemmas to certain local schools and districts, changed how schools view assessments, and formally introduced the concept of inclusion to many educators, students and parents.
Different instructional techniques are used for some students with special educational needs. Instructional strategies are classified as being either accommodations or modifications.
An accommodation is a reasonable adjustment to teaching practices so that the student learns the same material, but in a format that is more accessible to the student. Accommodations may be classified by whether they change the presentation, response, setting, or scheduling of lessons. For example, the school may accommodate a student with visual impairments by providing a large-print textbook. This is a presentation accommodation.
A modification changes or adapts the material to make it simpler. Modifications may change what is learned, how difficult the material is, what level of mastery the student is expected to achieve, whether and how the student is assessed, or any other aspect of the curriculum. For example, the school may modify a reading assignment for a student with reading difficulties by substituting a shorter, easier book. A student may receive both accommodations and modifications.
- Examples of modifications
- Skipping subjects: Students may be taught less information than typical students, skipping over material that the school deems inappropriate for the student's abilities or less important than other subjects. For example, students with poor fine motor skills may be taught to print block letters, but not cursive handwriting.
- Simplified assignments: Students may read the same literature as their peers but have a simpler version, such as Shakespeare with both the original text and a modern paraphrase available.
- Shorter assignments: Students may do shorter homework assignments or take shorter, more concentrated tests.
- Extra aids: If students have deficiencies in working memory, a list of vocabulary words, called a word bank, can be provided during tests, to reduce lack of recall and increase chances of comprehension. Students might use a calculator when other students do not.
- Extended time: Students with a slower processing speed may benefit from extended time for assignments and/or tests in order to have more time to comprehend questions, recall information, and synthesize knowledge.
- Examples of accommodations
- Response accommodations: Typing homework assignments rather than hand-writing them (considered a modification if the subject is learning to write by hand). Having someone else write down answers given verbally.
- Presentation accommodations: Examples include listening to audio books rather than reading printed books. These may be used as substitutes for the text, or as supplements intended to improve the students' reading fluency and phonetic skills. Similar options include designating a person to read to the student, or providing text to speech software. This is considered a modification if the purpose of the assignment is reading skills acquisition. Other presentation accommodations may include designating a person to take notes during lectures or using a talking calculator rather than one with only a visual display.
- Setting accommodations: Taking a test in a quieter room. Moving the class to a room that is physically accessible, e.g., on the first floor of a building or near an elevator. Arranging seating assignments to benefit the student, e.g., by sitting at the front of the classroom.
- Scheduling accommodations: Students may be given rest breaks or extended time on tests (may be considered a modification, if speed is a factor in the test).
All developed countries permit or require some degree of accommodation for students with special needs, and special provisions are usually made in examinations which take place at the end of formal schooling.
In addition to how the student is taught the academic curriculum, schools may provide non-academic services to the student. These are intended ultimately to increase the student's personal and academic abilities. Related services include developmental, corrective, and other supportive services as are required to assist a student with special needs and includes speech and language pathology, audiology, psychological services, physical therapy, occupational therapy, counseling services, including rehabilitation counseling, orientation and mobility services, medical services as defined by regulations, parent counseling and training, school health services, school social work, assistive technology services, other appropriate developmental or corrective support services, appropriate access to recreation and other appropriate support services. In some countries, most related services are provided by the schools; in others, they are provided by the normal healthcare and social services systems.
As an example, students who have autistic spectrum disorders, poor impulse control, or other behavioral challenges may learn self-management techniques, be kept closely on a comfortingly predictable schedule, or given extra cues to signal activities.
At-risk students (those with educational needs that are not associated with a disability) are often placed in classes with students who have disabilities. Critics assert that placing at-risk students in the same classes as students with disabilities may impede the educational progress of people with disabilities. Some special education classes have been criticized for a watered-down curriculum.
The practice of inclusion (in mainstream classrooms) has been criticized by advocates and some parents of children with special needs because some of these students require instructional methods that differ dramatically from typical classroom methods. Critics assert that it is not possible to deliver effectively two or more very different instructional methods in the same classroom. As a result, students who depend on different instructional methods to learn often fall even further behind their peers.
Parents of typically developing children sometimes fear that the special needs of a single "fully included" student will take critical levels of attention and energy away from the rest of the class and thereby impair the academic achievements of all students.
Linked to this, there is debate about the extent to which students with special needs, whether in mainstream or special settings, should have a specific pedagogy, based on the scientific study of particular diagnostic categories, or whether general instructional techniques are relevant to all students including those with special needs.
Some parents, advocates, and students have concerns about the eligibility criteria and their application. In some cases, parents and students protest the students' placement into special education programs. For example, a student may be placed into the special education programs due to a mental health condition such as obsessive compulsive disorder, depression, anxiety, panic attacks or ADHD, while the student and his parents believe that the condition is adequately managed through medication and outside therapy. In other cases, students whose parents believe they require the additional support of special education services are denied participation in the program based on the eligibility criteria.
Whether it is useful and appropriate to attempt to educate the most severely disabled children, such as children who are in a persistent vegetative state, is debated. While many severely disabled children can learn simple tasks, such as pushing a buzzer when they want attention, some children may be incapable of learning. Some parents and advocates say that these children would be better served by substituting improved physical care for any academic program. In other cases, they question whether teaching such non-academic subjects, such as pushing a buzzer, is properly the job of the school system, rather than the health care system.
Another large issue is the lack of resources enabling individuals with special needs to receive an education in the developing world. As a consequence, 98 percent of children with special needs in developing countries do not have access to education.
Issues in Math
1.) Cognitive Development
- Declarative Knowledge- Remembering the math facts that children build on in each new lesson taught by the teacher.
- Procedural Knowledge- The difficulty students have remembering the procedures or steps for various operations. An example is the Order of Operations: most students remember the order as parentheses, exponents, multiplication, division, addition, and then subtraction, often through chants and rhymes, but children with disabilities have difficulty grasping this kind of procedural knowledge (a short worked example follows this outline).
- Conceptual Knowledge- This is the overall picture. Some students with disabilities have difficulties understanding how various math concepts relate and what mathematics means to our society.
2.) Problems in performance
- Writing numbers and different math symbols correctly- Some children with various disorders such as dyslexia and dysgraphia will greatly struggle with this.
- Recalling the meanings of symbols and answers to basic facts – This goes along closely with cognitive issues. Some of these children may recognize the math symbol or the basic problem, however they cannot recall the meaning of the symbol or answer to the math fact.
- Counting – Some children may forget which numbers come first, last, etc.
- Following the steps of a strategy – Word problems often require a step-by-step strategy, and children with disabilities may forget the order of the steps, how to use context clues, etc.
3.) Performance on basic arithmetic
- Errors in computation -The child may be able to actually understand the problem and how to solve it. However, there may be various mistakes throughout the multi-step problem.
- Difficulty with the fact retrieval – Every child must know their basic facts and be able to retrieve them. If the child cannot, he will struggle in math.
4.) Difficulty with word problems
- Excluding irrelevant information – Students with disabilities have a difficult time picking out information that is irrelevant.
- Complex sentence structures – These children may have difficulty reading the actual problem itself due to the complex wording.
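To make the procedural-knowledge point above concrete (the Order of Operations), here is a minimal, illustrative sketch; the expression and the helper function are invented for illustration and are not taken from any particular curriculum.

```python
# Illustrative only: evaluating 3 + 4 * 2 ** 2 with and without the order of operations.

def left_to_right(tokens):
    """Evaluate a flat expression strictly left to right, ignoring precedence."""
    result = tokens[0]
    for op, value in zip(tokens[1::2], tokens[2::2]):
        if op == "+":
            result += value
        elif op == "*":
            result *= value
        elif op == "**":
            result **= value
    return result

expression = [3, "+", 4, "*", 2, "**", 2]

# Correct order (exponents, then multiplication, then addition): 3 + 4 * 4 = 19
print(3 + 4 * 2 ** 2)             # 19

# Ignoring the order of operations gives a different answer: ((3 + 4) * 2) ** 2 = 196
print(left_to_right(expression))  # 196
```

The point of the contrast is that the same numbers produce very different results depending on whether the procedure is recalled correctly, which is exactly the kind of step students with procedural-knowledge difficulties tend to lose.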
- South Africa
White Papers in 1995 and 2001 discuss special education in the country. Local schools are given some independent authority.
Both modifications and accommodations are recommended, depending on the student's individual needs.
- Japan
Japanese students with special needs are placed in one of four different school arrangements: special schools, special classes within another school, resource rooms (which are called tsukyu), or regular classrooms.
Special schools are reserved for students whose severe disabilities cannot be accommodated in the local school. They do not use the same grading or marking systems as mainstream schools, but instead assess students according to their individualized plans.
Special classes are similar, and may vary the national curriculum as the teachers see fit. Tsukyu are resource rooms that students with milder problems use part-time for specialized instruction individually in small groups. These students spend the rest of the day in the mainstream classroom. Some students with special needs are fully included in the mainstream classroom, with accommodations or modifications as needed.
Training of disabled students, particularly at the upper-secondary level, emphasizes vocational education to enable students to be as independent as possible within society. Vocational training varies considerably depending on the student's disability, but the options are limited for some. It is clear that the government is aware of the necessity of broadening the range of possibilities for these students. Advancement to higher education is also a goal of the government, and it struggles to have institutions of higher learning accept more disabled students.
- Singapore
Special education is regulated centrally by the Singapore Ministry of Education. Both special schools and integration into mainstream schools are options for students with special educational needs, but most students with disabilities are placed in special schools.
Students with special educational needs who want accommodations on national exams must provide appropriate documentation of their disability. Accommodations, but not modifications (e.g., simpler questions), are normally approved if they are similar to the accommodations already being used in everyday schoolwork, with the goal of maintaining the exam's integrity while not unfairly disadvantaging students through factors that are unrelated to what is being tested. The accommodations are listed on the Primary School Leaving Exam.
- Australia
The Australian Association of Special Education Inc (AASE)'s position is informed by the Disability Standards for Education 2005, which require that students with disabilities be treated on the same basis as other students with regard to enrolment and participation in education.
With respect to standardized tests, special consideration procedures are in place in all states for students who are disabled. Students must provide documentation. Not all desired forms of accommodation are available. For example, students who cannot read, even if the inability to read is due to a disability, cannot have the exam read to them, because the exam results should accurately show that the student is unable to read. Reports on matriculation exams do not mention whether the student received any accommodations in taking the test.
Each country in Europe has its own special education support structures.
For more details on 28 European countries see European Agency for Special Needs and Inclusive Education.
- Czech Republic
Schools must take students' special education needs into account when assessing their achievements.
- Denmark
In Denmark, 99% of students with specific learning difficulties like dyslexia are educated alongside students without any learning challenges.
- Finland
Schools adapt the national guidelines to the needs of individual students. Students with special educational needs are given an individualized plan.
They may be exempted from some parts of school examinations, such as students with hearing impairments not taking listening comprehension tests. If the student receives modifications to the school-leaving exams, this is noted on the certificate of achievement. If they are not following the national core curriculum, then they are tested according to the goals of their individual educational program.
- France
French students with disabilities are normally included in their neighborhood school, although children may be placed in special schools if their personalized plan calls for it. Each student's personalized school plan describes teaching methods, psychological, medical and paramedical services that the school will provide to the student.
- Germany
Most students with special needs in Germany attend a special school that serves only children with special needs. These include:
- Förderschule für Lernbehinderte (special school for learning disabilities): for children who have challenges that impair learning
- Förderschule mit dem Förderschwerpunkt Geistige Entwicklung (school for cognitive development): for children with very severe learning challenges
- Förderschule Schwerpunkt emotionale und soziale Entwicklung (school for emotional and social development): for children who have special emotional needs
- Förderschule für Blinde (school for the blind): for blind children
- Förderschule für Sehbehinderte (school for the visually impaired): for children who are visually challenged
- Förderschule für Gehörlose (school for the deaf): for deaf children
- Förderschule für Schwerhörige (school for the hearing impaired): for children who are hearing impaired
- Förderschule für Körperbehinderte (school for children with physical disabilities): for children with physical disabilities
- Förderschule für Sprachbehinderte (school for children with language disorders): for children with language disorders
- Förderschule für Taubblinde (school for the deafblind): for children who are deafblind
- Schule für Kranke (school for ill children): for children who are too ill to attend school or are hospitalized for a longer period of time
- Förderschule für schwer mehrfach Behinderte (school for children with severe and multiple disabilities): for children with severe and multiple disabilities who need very special care and attention. Sometimes these children are only receptive to very basic emotional and sensory stimulation, so teachers at these schools (as well as at schools for the deafblind) are highly specialized professionals.
One in 21 German students attends a special school. Teachers at those schools are specially trained professionals who have specialized in special needs education while in university. Special schools often have a very favorable student-teacher ratio and facilities other schools do not have.
Students with special educational needs may be exempted from standardized tests or given modified tests.
- Greece
Greek students with special needs may attend either mainstream schools or special schools.
Students whose disabilities have been certified may be exempted from some standardized tests or given alternative tests. Accommodations are responsive to students' needs; for example, students with visual impairments may take oral tests, and students with hearing impairments take written tests. Accommodations and modifications are noted on the certificate of achievement.
- Hungary
Special education is regulated centrally.
According to the 1993 Act on Public Education, students with special educational needs may be exempted from standardized tests or given modified tests. They have a right to extra time, a choice of formats for the tests (e.g., oral rather than written), and any equipment that they normally use during the school day.
As of 2006, students with disabilities received a significant bonus (eight points) on the university entrance examination, which has been criticized as unfair.
- The Netherlands
As a general rule, students with special educational needs are integrated into their regular, mainstream schools with appropriate support, under the "Going to School Together" policy (Weer Samen Naar School). Four types of disability-specific special schools exist. The national policy is moving towards "suitable education" (passend onderwijs), based on the individual's strengths and weaknesses.
- Norway
The National Support System for Special Needs Education (Statped) is managed by the Norwegian Directorate for Education and Training. The general objective for Statped is to give guidance and support to those in charge of education in municipalities and county administrations, to ensure that children, young people and adults with major and special educational needs are provided with well-advised educational and developmental services. The institutions affiliated with Statped offer a broad spectrum of services. Statped consists of 13 resource centres owned by the State and 4 units for special education from which Statped buys services. These centres offer special educational guidance and support for local authorities and county administrations.
Students with disabilities have a "guaranteed right" to appropriate accommodations on assessments. Schools are generally considered autonomous.
On national tests, the National Examination Center normally grants most requests for accommodations that are supported by the local school's examination committee. Legislation opposes the use of modifications that would be unfair to non-disabled students.
Schools are required to provide services and resources to students with special educational needs so that they make progress and participate in school. If the local school is unable to provide appropriately for an individual student, then the student may be transferred to a special school.
Local schools have significant autonomy, based on national guidelines. Schools are expected to help students meet the goals that are set for them.
- Sweden
There are special schools (Swedish: särskola) for students who are unable to follow mainstream education. In 2012-2013 there was media criticism of the fact that students with mild problems such as dyslexia had been placed in special schools, seriously hampering their chances on the labour market.
- Switzerland
Education is controlled by the 26 cantons, and so special education programs vary from place to place. However, integration is typical. Students are assessed according to their individual learning goals.
In England and Wales the acronym SEN for Special Educational Needs denotes the condition of having special educational needs, the services which provide the support and the programmes and staff which implement the education. In England SEN PPS refers to the Special Educational Needs Parent Partnership Service. SENAS is the special educational needs assessment service, which is part of the Local Authority. SENCO refers to a special educational needs coordinator, who usually works with schools and the children within schools who have special educational needs. The Special Educational Needs Parent Partnership Services help parents with the planning and delivery of their child's educational provision. The Department for Education oversees special education in England.
Most students have an individual educational plan, but students may have a group plan in addition to, or instead of, an individual plan. Group plans are used when a group of students all have similar goals.
In Scotland the Additional Support Needs Act places an obligation on education authorities to meet the needs of all students in consultation with other agencies and parents. In Scotland the term Special Educational Needs (SEN) and its variants are not official terminology, although the recent implementation of the Additional Support for Learning Act means that both SEN and ASN (Additional Support Needs) are used interchangeably in current common practice.
- Turkey
All special-needs students receive an Individualized Education Program (BEP) that outlines how the school will meet the student's individual needs. The Özel Eğitim Kurumları Yönetmeliği (ÖEKY) requires that students with special needs be provided with a Free Appropriate Public Education in the Least Restrictive Environment that is appropriate to the student's needs. Government-run schools provide special education in varying degrees from the least restrictive settings, such as full inclusion, to the most restrictive settings, such as segregation in a special school.
In North America, special education is commonly abbreviated as special ed, SpecEd, SPED, or SpEd in a professional context.
- Canada
Education in Canada is the responsibility of the individual provinces and territories. As such, rules vary somewhat from place to place. However, inclusion is the dominant model.
For major exams, Canadian schools commonly use accommodations, such as specially printed examinations for students with visual impairments, when assessing the achievements of students with special needs. In other instances, alternative assessments or modifications that simplify tests are permitted, or students with disabilities may be exempted from the tests entirely.
- United States
All special-needs students receive an Individualized Education Program (IEP) that outlines how the school will meet the student’s individual needs. The Individuals with Disabilities Education Act (IDEA) requires that students with special needs be provided with a Free Appropriate Public Education in the Least Restrictive Environment that is appropriate to the student's needs. Government-run schools provide special education in varying degrees from the least restrictive settings, such as full inclusion, to the most restrictive settings, such as segregation in a special school. The education offered by the school must be appropriate to the student's individual needs. Schools are not required to maximize the student's potential or to provide the best possible services. Unlike most of the developed world, American schools are also required to provide many medical services, such as speech therapy, if the student needs these services.
According to the Department of Education, approximately 6 million children (roughly 10 percent of all school-aged children) currently receive some type of special education services. As with most countries in the world, students who are poor, ethnic minorities, or do not speak the dominant language fluently are disproportionately identified as needing special education services. Poor, black and Latino urban schools are more likely to have limited resources and to employ inexperienced teachers that do not cope well with student behavior problems, "thereby increasing the number of students they referred to special education."
During the 1960s, in some part due to the civil rights movement, some researchers began to study the disparity of education amongst people with disabilities. The landmark Brown v. Board of Education decision, which declared unconstitutional the "separate but equal" arrangements in public schools for students of different races, paved the way for PARC v. Commonwealth of Pennsylvania and Mills vs. Board of Education of District of Columbia, which challenged the segregation of students with special needs. Courts ruled that unnecessary and inappropriate segregation of students with disabilities was unconstitutional. Congress responded to these court rulings with the federal Education for All Handicapped Children Act in 1975 (since renamed the Individuals with Disabilities Education Act (IDEA)). This law required schools to provide services to students previously denied access to an appropriate education.
In US government-run schools, the dominant model is inclusion. In the United States, three out of five students with academic learning challenges spend the overwhelming majority of their time in the regular classroom.
- Adapted Physical Education
- Disability studies
- Disability and Poverty
- Early childhood intervention
- Inclusive education
- Mainstreaming in education
- Matching Person & Technology Model
- Post Secondary Transition For High School Students with Disabilities
- Reasonable accommodation
- Response to intervention
- Special Assistance Program (Australian education)
- Special needs
- Tracking (education)
- Washington County Closed-Circuit Educational Television Project
- What is special education? from New Zealand's Ministry of Education
- National Council on Disability. (1994). Inclusionary education for students with special needs: Keeping the promise. Washington, DC: Author.
- Swan, William W.; Morgan, Janet L (1993). "The Local Interagency Coordinating Council". Collaborating for Comprehensive Services for Young Children and Their Families. Baltimore: Paul H. Brookes Pub. Co. ISBN 1-55766-103-0. OCLC 25628688. OL 4285012W.
- Beverly Rainforth; York-Barr, Jennifer (1997). Collaborative Teams for Students With Severe Disabilities: Integrating Therapy and Educational Services. Brookes Publishing Company. ISBN 1-55766-291-6. OCLC 25025287.
- Stainback, Susan Bray; Stainback, William C. (1996). Support Networks for Inclusive Schooling: Interdependent Integrated Education. Paul H Brookes Pub Co. ISBN 1-55766-041-7. OCLC 300624925. OL 2219710M.
- Gaylord-Ross, Robert (1989). Integration strategies for students with handicaps. Baltimore: P.H. Brookes. ISBN 1-55766-010-7. OCLC 19130181.
- Gartner, Alan; Dorothy Kerzner Lipsky (1997). Inclusion and School Reform: Transforming America's Classrooms. Brookes Publishing Company. ISBN 1-55766-273-8. OCLC 35848926.
- Goodman, Libby (1990). Time and learning in the special education classroom. Albany, N.Y.: State University of New York Press. p. 122. ISBN 0-7914-0371-8. OCLC 20635959.
- Special Education Inclusion
- Smith P (October 2007). O'Brien, John, ed. "Have we made any progress? Including students with intellectual disabilities in regular education classrooms". Intellect Dev Disabil 45 (5): 297–309. doi:10.1352/0047-6765(2007)45[297:HWMAPI]2.0.CO;2. PMID 17887907.
- James Q. Affleck; Sally Madge; Abby Adams; Sheila Lowenbraun (January 1988). "Integrated classroom versus resource model: academic viability and effectiveness". Exceptional Children: 2. Retrieved 2010-05-29.
- Bowe, Frank (2004). Making Inclusion Work. Upper Saddle River, N.J: Prentice Hall. ISBN 0-13-017603-6. OCLC 54374653.
- Karen Zittleman; Sadker, David Miller (2006). Teachers, Schools and Society: A Brief Introduction to Education with Bind-in Online Learning Center Card with free Student Reader CD-ROM. McGraw-Hill Humanities/Social Sciences/Languages. pp. 48, 49, 108, G–12. ISBN 0-07-323007-3.
- Warnock Report (1978). "Report of the Committee of Enquiry into the Education of Handicapped Children and Young People", London.
- Wolffe, Jerry. (20 December 2010) What the law requires for disabled students The Oakland Press.
- Hicks, Bill (18 November 2011). "Disabled children excluded from education". BBC Online. Retrieved 13 June 2012.
- Bos, C. S. & Vaughn, S. (2005). Strategies for teaching students with learning and behavior problems. (6th ed.). Upper Saddle River, NJ: Pearson.
- Turnbull, Ron (2002). Exceptional Lives: Special Education in Today's Schools (3rd ed.). Merrill Prentice Hall: New Jersey.
- History of the INJA (French)
- The history of special education: From isolation to integration. MA Winzer
- Inventing the feeble mind: A history of mental retardation in the United States. S McCuen – Journal of Health Politics, Policy and Law, 1997 – Duke Univ Press
- Jorgensen, C.M. (1998). Restructuring high school for all students: Taking inclusion to the next level. Baltimore: Paul H. Brooks Publishing co.
- Pepper, David (25 September 2007). Assessment for disabled students: an international comparison (Report). UK: Ofqual's Qualifications and Curriculum Authority, Regulation & Standards Division.
- Busuttil-Reynaud, Gavin and John Winkley. e-Assessment Glossary (Extended) (Report). UK: Joint Information Systems Committee and Ofqual's Qualifications and Curriculum Authority.
- Special Educational Needs Code of Practice. UK: Department for Education and Skills. November 2001. ISBN 1-84185-529-4. DfES/581/2001.
- Thorson, Sue. "Macbeth in the Resource Room: Students with Learning Disabilities Study Shakespeare." Journal of Learning Disabilities, v28 n9 p575-81 Nov 1995.
- "Related Services". National Dissemination Center for Children with Disabilities.
- Simpson, Richard L.; Sonja R. de Boer (2009). Successful inclusion for students with autism: creating a complete, effective ASD inclusion program. San Francisco: Jossey-Bass. pp. 38–42. ISBN 0-470-23080-0.
- Greenwood CR (May 1991). "Longitudinal analysis of time, engagement, and achievement in at-risk versus non-risk students". Except Child 57 (6): 521–35. PMID 2070811.
- Ellis, Edwin (2002). "Watering Up the Curriculum for Adolescents with Learning Disabilities, Part I: Goals of the Knowledge Dimension". WETA. Retrieved 2010-04-21.
- Carol A. Breckenridge; Candace Vogler (2001). "The Critical Limits of Embodiment: Disability's Criticism". Public Culture (Duke Univ Press) 13 (3): 349–357.
- Lewis, Ann; Norwich, Brahm (2005). Special Teaching for Special Children?. Milton Keynes, UK: Open University Press. ISBN 0335214053.
- Mintz, Joseph (2014). Professional Uncertainty, Knowledge and Relationship in the Classroom: A Psycho-social Perspective. London: Routledge. ISBN 9780415822961.
- Amanda M. Vanderheyden; Joseph C Witt; Gale Naquin (2003). "Development And Validation Of A Process For Screening Referrals To Special Education". School Psychology Review (Research and Read Books, Journals, Articles at Questia Online Library) 32.
- Otterman, Sharon (19 June 2010). "A Struggle to Educate the Severely Disabled". The New York Times.
- UNESCO. (1995). Review of the present situation in special education. Webaccessed: http://www.unesco.org/pv_obj_cache/pv_obj_id_C133AD0AF05E62AC54C2DE8EE1C026DABFAF3000/filename/281_79.pdf
- "Disability standards for education".
- On the system of special education in the former Soviet Union, see Barbara A. Anderson, Brian D. Silver, and Victoria A. Velkoff, "Education of the Handicapped in the USSR: Exploration of the Statistical Picture," Europe-Asia Studies, Vol. 39, No. 3 (1987): 468-488.
- Country information: http://www.european-agency.org/country-information
- Robert Holland (2002-06-01). "Vouchers Help the Learning Disabled". School Reform News (The Heartland Institute).
- "Special education needs, Special needs education".
- Management of Inclusion. The SENCO Resource Centre, part 3.
- Karen Zittleman; Sadker, David Miller (2006). Teachers, Schools and Society: A Brief Introduction to Education with Bind-in Online Learning Center Card with free Student Reader CD-ROM. McGraw-Hill Humanities/Social Sciences/Languages. pp. 48, 49, 108, G–12. ISBN 0-07-323007-3.
- Priscilla Pardini (2002). "The History of Special Education". Rethinking Schools 16 (3).
- Blanchett, W. J. (2009). A retrospective examination of urban education: From "brown" to the resegregation of African Americans in special education—it is time to "go for broke". Urban Education, 44(4), 370–388.
- Tejeda-Delgado, M. (2009). Teacher efficacy, tolerance, gender, and years of experience and special education referrals. International Journal of Special Education, 24(1), 112–119.
- Ladson-Billings, Gloria (1994). The dreamkeepers: successful teachers of African American children. San Francisco: Jossey-Bass Publishers. ISBN 1-55542-668-9. OCLC 30072651.
- Cortiella, C. (2009). The State of Learning Disabilities. New York, NY: National Center for Learning Disabilities.
- Birsh, Judith R., & Wolf, B., eds. (2011). Multisensory Teaching of Basic Language Skills, Third Edition. Baltimore: Brookes.
- Wilmshurst, L., & Brue, A. W. (2010). The complete guide to special education (2nd ed.). San Francisco: Jossey-Bass.
- Nola Purdie & Louise Ellis (2005). "A Review of the Empirical Evidence Identifying Effective Interventions and Teaching Practices for Students with Learning Difficulties in Year 4, 5 and 6". ACEReSearch.
- The European Agency for Special Needs and Inclusive Education
- National Dissemination Center for Children with Disabilities (NICHCY) (US)
- Council for Exceptional Children (US)
- Office of Special Education and Rehabilitative Services U.S. Department of Education
- When It's Your Own Child: A Report on Special Education from the Families Who Use It. Public Agenda, 2002 (US)
CORE CURRICULUM PRODUCTS FET PHASE
GRADE 10 (Content of additional subjects available on request)
MATHEMATICS (PACEs )
Learns about functional notation, graphs and determines the equation of a linear function, graphs and determines the equation of a hyperbola function, graphs and determines the equation of an exponential function, graphs and determines the equation of a parabola function. Identifies different types of angles, constructs congruent line segments, angles, and their bisectors, knows the properties of equality, recognizes special pairs of angles, identifies different types of triangles, understands SSS, SAS and ASA postulates, constructs triangles. Understands deductive reasoning, learns how to write a formal proof, learns to prove triangles congruent using SSS, SAS, ASA and AAS postulates, realizes that corresponding parts of congruent triangles are congruent, uses the LL, LA, HA and HL theorems in proofs, recognizes auxiliary lines, recognizes and learns to prove overlapping triangles congruent. Learns to use indirect proofs to prove theorems, memorizes the theorems dealing with parallel lines, perpendicular lines and triangles, identifies angles formed when a transversal intersects parallel lines, learns relationships between angles when a transversal intersects parallel lines, knows which angles must be congruent for two lines to be parallel when intersected by a transversal, identifies the converse of a theorem, learns the sum of the angles of a triangle and resulting corollaries, reviews construction of angles and perpendicular lines, learns how to construct parallel lines. Learns about different classifications of polygons, recognizes different types of quadrilaterals, knows the properties of parallelograms, proves that quadrilaterals are parallelograms, learns the characteristics of rectangles and rhombuses, learns theorems related to parallel lines, transversals and midpoints, learns how to divide a segment into any number of congruent segments, discovers characteristics of trapezoids and isosceles trapezoids. Draws and describes a locus of points and intersection of loci, identifies parts of circles, learns postulates of circles, calculates the circumference, area, length of an arc and area of a sector in circles, learns about central angles and types of arcs, proves arcs and chords of circles congruent, learns about types of tangents, applies theorems of tangency to proofs. Learns about inscribed and circumscribed circles, finds the measure of an inscribed angle, finds the measures of angles formed by tangents and secants, learns about circle constructions. Reviews ratios and proportions from Algebra, learns about the properties of proportions, uses the AA Similarity Theorem, uses the Right Triangle Similarity Corollary, proves triangles similar, learns about segment proportionalities, discovers relationships between triangles and parallel lines, understands the SAS and SSS Similarity Theorems.
Finds the geometric mean of a proportion, learns to simplify radicals, discovers the significance of altitudes in right triangles, constructs a geometric mean, applies the Pythagorean Theorem to right triangles, uses the Right Triangle Theorem and Isosceles Right Triangle Theorem, applies the trigonometric ratios to find the lengths of sides and the measures of angles of right triangles. Learns how to find the perimeter of a polygon, finds the area of a rectangle, parallelogram, triangle, rhombus and trapezoid, learns about polyhedra, finds the lateral area and total area of a prism, pyramid, cylinder, cone and sphere, finds the volume of a prism, pyramid, cylinder, cone and sphere. Learns about the coordinate plane, uses the distance and midpoint formulas, uses the slope of a line in graphing equations, applies the slope-intercept form to graph equations, determines from the slope if lines are parallel or perpendicular, learns how to graph circles, applies the coordinate system in geometric proofs. Learns about transformations, learns to reflect figures over lines and across points, memorizes the properties of isometries, uses the transformations of translation, rotation and glide reflection, identifies line, rotational and point symmetry, learns about dilations.
ENGLISH (PACEs )
Prerequisite: English I
Writes using four kinds of paragraphs and correct sentence structure. Reviews the characteristics of writing a biography and an autobiography and learns to make note and source cards while using reference books at the library. Studies the elements of a book and examines the author's style while reading, studying, and answering questions about God's Tribesman by James and Marti Hefley and The Hiding Place by Corrie ten Boom and John and Elizabeth Sherrill. Identifies and reviews basic grammar. Expands vocabulary through learning and writing new words. Classifies and diagrams the seven basic sentence patterns of simple and complex sentences. Discovers the purpose and type of newspaper articles and writes a newspaper article. Determines the purpose and appropriate forms of business and social letters and letters of application. Gains practical application of library skills. Learns to identify and appreciate poetic forms. Is encouraged in character development through examples given in each PACE.
GRADE 11
MATHEMATICS (PACEs )
Calculates simple interest and finds the value of an investment, uses simple interest formulae to solve real-life hire purchase problems, calculates compound interest and finds the value of an investment, uses compound interest formulae to solve real-life inflation problems, finds the interest rate in simple and compound interest formulae, works with nominal and effective interest rates, calculates compound decrease, applies compound decrease formulae to solve real-life depreciation problems, constructs timelines to solve real-life financial problems, learns to be good stewards of the money God gives us (a brief worked example of simple and compound interest follows this mathematics outline). Solves problems relating to: straight line graphs, finding the equation of a straight line, parallel and perpendicular lines, simultaneous equations. Reviews: laws of exponents, product or quotient of polynomials, squares of binomials, factoring of polynomials, fractional exponents, equalities & inequalities, division of polynomials. Learns about algebraic fractions applied with all four operations in expressions and equations, rational numbers as decimals. Learns about relations and functions, the inverse of a relation, linear equations, relations and slope, linear inequalities, direct variation, quadratic functions and graphs, the axis of symmetry and the vertex, minimum and maximum points, completing the square, the axis of symmetry and the vertex from y = a(x - h)^2 + k. Learns about square roots, roots of radicals, rational and irrational numbers, operating with radicals, rationalizing denominators, radicals & exponents, radicals & equations, radicals within radicals, complex numbers, imaginary and real numbers. Solves quadratics by factoring, fractional equations & quadratics, completing the square to solve quadratics, the quadratic formula, the discriminant and nature of roots, quadratic equations and their roots, evaluating polynomial functions f(x), synthetic substitution, the remainder theorem, the factor theorem. Studies coordinate (analytical) geometry: the distance formula and the circle with centre at (j;k), the parabola with its vertex, directrix, focus, position and graph; quadratic-linear systems of equations. Determines if a relation is a function, reviews function notation, reviews linear equations and functions, graphs hyperbola equations, determines the equation of a hyperbola function, graphs parabola equations, determines the equation of a parabola function, solves problems relating to functions. Learns about permutations and factorial notation, probability. Recognizes angles in standard position, finds coterminal angles for a specific case, determines the distance a point is away from the origin, determines the distance between two points, determines the co-ordinates of the midpoint between two points, determines the trigonometric ratios of angles in standard position, determines the reference angle for positive or negative values, develops the remaining trigonometric ratios given one ratio, applies trigonometric ratios to real-world problems. Sketches the trigonometric sin, cos and tan functions, solves problems relating to trig functions, sketches trig functions with amplitude, period and vertical changes.
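As referenced above, here is a minimal sketch of the simple- and compound-interest calculations mentioned in the Grade 11 outline; the principal, rate and term are invented values, not taken from any PACE.

```python
# Illustrative values only (not from the curriculum).
principal = 10_000.0   # initial deposit
rate = 0.08            # 8% annual interest
years = 5

# Simple interest: A = P(1 + r*t)
simple_value = principal * (1 + rate * years)

# Compound interest (compounded annually): A = P(1 + r)^t
compound_value = principal * (1 + rate) ** years

print(f"Simple interest value:   {simple_value:,.2f}")    # 14,000.00
print(f"Compound interest value: {compound_value:,.2f}")  # 14,693.28
```

The gap between the two results is what the curriculum's inflation, depreciation and hire-purchase problems trade on: simple interest grows linearly, compound interest grows exponentially.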
ENGLISH (PACEs )
Prerequisites: English I and II
Identifies sentence fragments, run-ons, and complete sentences. Studies different periods of American literature. Recognizes and reviews grammar. Continues to build knowledge of capitalization and punctuation rules. Increases writing skills in the descriptive, narrative, expository, and persuasive elements of a paragraph; plans and writes an essay. Develops setting, character, and plot for a short story. Researches, plans, and writes a term paper in a step-by-step process. Verifies and clarifies facts presented in other types of expository texts by using a variety of consumer, workplace, and public documents. Reads In His Steps by Charles M. Sheldon and answers questions. Studies excerpts from The Oregon Trail. Analyses characteristics of satire, parody, allegory; pastoral themes used in poetry, prose, plays, novels, short stories, essays; and other basic genres. Is encouraged in character development through examples given in each PACE.
GRADE 12
MATHEMATICS (PACEs )
Revises the distance, midpoint and gradient formulas, solves problems relating to the distance, midpoint and gradient of a line, learns how to solve problems relating to parallel, perpendicular and collinear lines, calculates the inclination of a straight line, calculates the equation of a straight line, calculates the equation of an altitude, perpendicular bisector and median of a triangle, solves analytical geometry problems. Learns the equation of the circle with the centre at the origin, learns the equation of the circle at any point ( ), solves analytical problems relating to the circle, learns how to find the equation of the circle, finds the equation of the tangent to the circle, learns to find the coordinates of the centre of the circle ( ) and the length of the radius, solves analytical geometry problems involving circles and straight lines, finds the length of the tangent of a circle from a given point, uses all analytical geometry formulae to solve problems relating to circles, lines, parallelograms and kites. Learns about number patterns and patterns in nature, learns to identify the and values in a linear number pattern, solves problems relating to linear number patterns, learns to solve the and values in quadratic number patterns, solves the problems relating to quadratic number patterns, solves everyday problems involving number patterns, learns the rules of symmetry, learns the rules of inverses, solves problems relating to function notation, symmetries and inverses. Understands what a logarithm is, evaluates logarithms, changes a logarithmic expression into an exponential expression and vice versa, solves logarithmic equations, understands that a logarithmic function is the inverse of an exponential function, recognizes different types of logarithmic functions, graphs exponential functions and their inverses and solves problems relating to functions and their inverses, knows the properties of logarithms and uses these properties to simplify logarithmic expressions, solves exponential equations using logarithms, applies knowledge of exponents and logarithms to solve practical problems. Understands and solves problems relating to arithmetic sequences, finds the value of any term in an arithmetic sequence, understands and solves problems relating to geometric sequences, finds the value of any term in a geometric sequence, solves problems relating to arithmetic and geometric means, understands and solves problems relating to arithmetic series, understands sigma notation and uses it to abbreviate an arithmetic series, finds the sum of an arithmetic series, understands and solves problems relating to geometric series, uses sigma notation to abbreviate a geometric series, finds the sum of a geometric series, understands what an infinite geometric series is, finds the sum of an infinite geometric series, converts repeating decimals to fractions in lowest terms, determines if a sequence is convergent or divergent, finds the limit of a convergent sequence. Understands the terminology used for compound interest and annuities, calculates the future amount of a single deposit, calculates the single present amount of a future amount, calculates the interest rate required to produce a certain present or future amount, calculates the future amount of an annuity, calculates the equal payment amount to accumulate a certain future amount, calculates the present amount of an annuity, calculates the monthly payment required to pay off a loan.
Analyses a numerical data set by using measures of central tendency, makes and interprets data from box-and-whisker diagrams, calculates the standard deviation of a numerical data set, calculates the cumulative frequency and sketches the corresponding ogive, draws scatter plots and the line of best fit, calculates the correlation coefficient and the equation of the least squares regression line, revises concepts of probability of single events, revises Venn diagrams, determines the probability of independent events, determines the probability that event A or event B occurs (a brief worked sketch of these descriptive statistics follows this Grade 12 outline). Solves a right triangle, understands angle of elevation and angle of depression, understands the different trigonometric methods used to give direction, solves problems by applying the different trigonometric methods used to give direction, uses the law of cosines to solve SAS and SSS triangles, uses the law of sines to solve AAS, ASA, and ASS triangles, finds the area of an SAS, ASA, or an SSS triangle. Learns the six trigonometric ratios and applies all the ratios to the problems throughout this module, uses trig identities to simplify trig expressions, solves problems relating to reduction formulae with positive and negative angles, solves problems involving double angle identities, solves problems involving compound angle identities. Understands the definitions of terms used in calculus, uses and applies limits, calculates average gradient, understands how derivatives are introduced, understands the first principles of differentiation, works with standard forms of differentiation, works with differentiation and functional notation, applies differentiation to various problems. Understands and solves cubic equations, understands and sketches differential graphs, understands and interprets cubic graphs, determines equations of cubic graphs, performs practical applications of differentiation. Understands basic concepts related to angles in a circle and circle geometry, gains mastery in the recall of the geometric theorems and basic concepts through the practice of performing calculations with reasons, knows how to perform calculations involving the theorem of Pythagoras where applicable, develops logical thought patterns essential to careful analysis and synthesis of geometric problems, understands the importance of watching what we say and understanding that the words we speak carry power.
ENGLISH (PACEs )
Prerequisites: English I, II, and III
Is introduced to the different periods of British literature. Builds a vocabulary notebook. Improves writing skills in exposition, description, narration, and persuasion. Learns about parallelism. Writes character trait stories and answers essay questions accurately. Reviews and practices grammar, capitalization and punctuation. Uses the dictionary as a reference tool. Learns about denotation and connotation. Paraphrases and writes summaries while reading The Rime of the Ancient Mariner by Samuel Taylor Coleridge and Silas Marner by George Eliot (special edition). Analyzes Shakespeare's life and Macbeth. Continues the study of speech topic selection, preparation, speaking methods, and speech delivery. Is encouraged in character development through examples given in each PACE.
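As referenced in the Grade 12 outline above, here is a minimal sketch of two of the descriptive-statistics items (standard deviation and the least squares regression line); the data values are invented purely for illustration.

```python
import statistics

# Invented sample data (not from any PACE): hours studied vs. test score.
x = [1, 2, 3, 4, 5]
y = [52, 55, 61, 64, 70]

# Measures of central tendency and spread.
mean_y = statistics.mean(y)      # 60.4
stdev_y = statistics.pstdev(y)   # population standard deviation

# Least squares regression line y = a + b*x.
mean_x = statistics.mean(x)
b = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
    sum((xi - mean_x) ** 2 for xi in x)
a = mean_y - b * mean_x

print(f"mean = {mean_y}, standard deviation = {stdev_y:.2f}")
print(f"least squares line: y = {a:.1f} + {b:.1f}x")   # y = 46.9 + 4.5x
```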
Affect is the experience of feeling or emotion. Affect is a key part of the process of an organism's interaction with stimuli. The word also refers sometimes to affect display, which is "a facial, vocal, or gestural behavior that serves as an indicator of affect" (APA 2006).
The affective domain represents one of the three divisions described in modern psychology: the cognitive, the conative, and the affective. Classically, these divisions have also been referred to as the "ABC of psychology", in that case using the terms "affect", "behavior", and "cognition". In certain views, the cognitive may be considered as a part of the affective, or the affective as a part of the cognitive.
Affective states are psycho-physiological constructs. According to most current views, they vary along three principal dimensions: valence, arousal, and motivational intensity. Valence is the subjective positive-to-negative evaluation of an experienced state. Emotional valence refers to the emotion’s consequences, emotion-eliciting circumstances, or subjective feelings or attitudes. Arousal is objectively measurable as activation of the sympathetic nervous system, but can also be assessed subjectively via self-report. Arousal is a construct that is closely related to motivational intensity but they differ in that motivation necessarily implies action while arousal does not. Motivational intensity refers to the impulsion to act. It is the strength of an urge to move toward or away from a stimulus. Simply moving is not considered approach motivation without a motivational urge present. All three of these categories can be related to cognition when considering the construct of cognitive scope. Initially, it was thought that positive affects broadened cognitive scope whereas negative affects narrowed cognitive scope. However, evidence now suggests that affects high in motivational intensity narrow cognitive scope whereas affects low in motivational intensity broaden cognitive scope. The cognitive scope has indeed proven to be a valuable construct in cognitive psychology.
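Purely to make the dimensional claims above concrete, here is a minimal sketch; the type name, field ranges, and threshold are invented for illustration and do not represent an established model. It encodes the revised hypothesis that cognitive scope follows motivational intensity rather than valence.

```python
from dataclasses import dataclass

@dataclass
class AffectiveState:
    valence: float                  # -1.0 (negative) .. +1.0 (positive)
    arousal: float                  # 0.0 (calm) .. 1.0 (activated)
    motivational_intensity: float   # 0.0 (no urge to act) .. 1.0 (strong urge)

def predicted_scope(state: AffectiveState, threshold: float = 0.5) -> str:
    """Revised hypothesis: high motivational intensity narrows cognitive scope,
    low motivational intensity broadens it, regardless of valence."""
    return "narrowed" if state.motivational_intensity > threshold else "broadened"

# Desire (positive, high intensity) and sadness (negative, low intensity):
desire = AffectiveState(valence=0.8, arousal=0.7, motivational_intensity=0.9)
sadness = AffectiveState(valence=-0.6, arousal=0.3, motivational_intensity=0.2)
print(predicted_scope(desire))   # narrowed
print(predicted_scope(sadness))  # broadened
```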
"Affect" can mean an instinctual reaction to stimulation occurring before the typical cognitive processes considered necessary for the formation of a more complex emotion. Robert B. Zajonc asserts this reaction to stimuli is primary for human beings and that it is the dominant reaction for lower organisms. Zajonc suggests that affective reactions can occur without extensive perceptual and cognitive encoding and can be made sooner and with greater confidence than cognitive judgments (Zajonc, 1980).
Many theorists (e.g., Lazarus, 1982) consider affect to be post-cognitive: elicited only after a certain amount of cognitive processing of information has been accomplished. In this view, such affective reactions as liking, disliking, evaluation, or the experience of pleasure or displeasure each result from a different prior cognitive process that makes a variety of content discriminations and identifies features, examines them to find value, and weighs them according to their contributions (Brewin, 1989). Some scholars (e.g., Lerner and Keltner 2000) argue that affect can be both pre- and post-cognitive: initial emotional responses produce thoughts, which produce affect. In a further iteration, some scholars argue that affect is necessary for enabling more rational modes of cognition (e.g., Damasio 1994).
A divergence from a narrow reinforcement model of emotion allows other perspectives about how affect influences emotional development. Thus, temperament, cognitive development, socialization patterns, and the idiosyncrasies of one's family or subculture might interact in non-linear ways. For example, the temperament of a highly reactive/low self-soothing infant may "disproportionately" affect the process of emotion regulation in the early months of life (Griffiths, 1997).
Some other social sciences, such as geography or anthropology, have adopted the concept of affect during the last decade. In French psychoanalysis a major contribution to the field of affect comes from Andre Green. The focus on affect has largely derived from the work of Deleuze and brought emotional and visceral concerns into such conventional discourses as those on geopolitics, urban life and material culture. Affect has also challenged methodologies of the social sciences by emphasizing somatic power over the idea of a removed objectivity and therefore has strong ties with the contemporary non-representational theory.
A number of experiments have been conducted in the study of social and psychological affective preferences (i.e., what people like or dislike). Specific research has been done on preferences, attitudes, impression formation, and decision making. This research contrasts findings with recognition memory (old-new judgments), allowing researchers to demonstrate reliable distinctions between the two. Affect-based judgments and cognitive processes have been examined with noted differences indicated, and some argue affect and cognition are under the control of separate and partially independent systems that can influence each other in a variety of ways (Zajonc, 1980). Both affect and cognition may constitute independent sources of effects within systems of information processing. Others suggest emotion is a result of an anticipated, experienced, or imagined outcome of an adaptational transaction between organism and environment, therefore cognitive appraisal processes are keys to the development and expression of an emotion (Lazarus, 1982).
Affect has been found across cultures to comprise both positive and negative dimensions. The most commonly used measure of positive and negative affect in scholarly research is the Positive and Negative Affect Schedule (PANAS). The PANAS is a lexical measure developed in a North American setting and consisting of 20 single-word items, for instance excited, alert, determined for positive affect, and upset, guilty, and jittery for negative affect. However, some of the PANAS items have been found either to be redundant or to have ambiguous meanings to English speakers from non-North American cultures. As a result, an internationally reliable short-form, the I-PANAS-SF, has been developed and validated comprising two 5-item scales with internal reliability, cross-sample and cross-cultural factorial invariance, temporal stability, convergent and criterion-related validities.
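As an illustration of how a lexical measure such as the PANAS is typically scored, here is a minimal sketch; the item lists below are abbreviated examples drawn from the text above, and the 1-5 rating scale and function name are assumptions for illustration rather than the official instrument.

```python
# Abbreviated, illustrative item lists (the full PANAS has 10 items per scale).
POSITIVE_ITEMS = ["excited", "alert", "determined"]
NEGATIVE_ITEMS = ["upset", "guilty", "jittery"]

def score_panas(ratings: dict[str, int]) -> tuple[int, int]:
    """Sum the 1-5 ratings separately for the positive and negative items."""
    positive = sum(ratings[item] for item in POSITIVE_ITEMS)
    negative = sum(ratings[item] for item in NEGATIVE_ITEMS)
    return positive, negative

ratings = {"excited": 4, "alert": 3, "determined": 5,
           "upset": 1, "guilty": 2, "jittery": 1}
pa, na = score_panas(ratings)
print(f"positive affect = {pa}, negative affect = {na}")  # positive affect = 12, negative affect = 4
```

The key design point is that positive and negative affect are scored as separate dimensions rather than as opposite ends of a single scale, which is why the measure yields two totals.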
Non-conscious affect and perception
In relation to perception, a type of non-conscious affect may be separate from the cognitive processing of environmental stimuli. A monohierarchy of perception, affect and cognition considers the roles of arousal, attention tendencies, affective primacy (Zajonc, 1980), evolutionary constraints (Shepard, 1984; 1994), and covert perception (Weiskrantz, 1997) within the sensing and processing of preferences and discriminations. Emotions are complex chains of events triggered by certain stimuli. There is no way to completely describe an emotion by knowing only some of its components. Verbal reports of feelings are often inaccurate because people may not know exactly what they feel, or they may feel several different emotions at the same time. There are also situations that arise in which individuals attempt to hide their feelings, and there are some who believe that public and private events seldom coincide exactly, and that words for feelings are generally more ambiguous than are words for objects or events. Therefore, non-conscious emotions need to be measured by measures circumventing self-report such as the Implicit Positive and Negative Affect Test (IPANAT; Quirin, Kazén, & Kuhl, 2009).
Affective responses, on the other hand, are more basic and may be less problematical in terms of assessment. Brewin has proposed two experiential processes that frame non-cognitive relations between various affective experiences: those that are prewired dispositions (i.e., non-conscious processes), able to "select from the total stimulus array those stimuli that are causally relevant, using such criteria as perceptual salience, spatiotemporal cues, and predictive value in relation to data stored in memory" (Brewin, 1989, p. 381), and those that are automatic (i.e., subconscious processes), characterized as "rapid, relatively inflexible and difficult to modify... (requiring) minimal attention to occur and... (capable of being) activated without intention or awareness" (1989 p. 381). But a note should be considered on the differences between affect and emotion.
Arousal is a basic physiological response to the presentation of stimuli. When this occurs, a non-conscious affective process takes the form of two control mechanisms; one mobilizing and the other immobilizing. Within the human brain, the amygdala regulates an instinctual reaction initiating this arousal process, either freezing the individual or accelerating mobilization.
The arousal response is illustrated in studies focused on reward systems that control food-seeking behavior (Balleine, 2005). Researchers have focused on learning processes and modulatory processes that are present while encoding and retrieving goal values. When an organism seeks food, the anticipation of reward based on environmental events becomes another influence on food seeking that is separate from the reward of food itself. Therefore, earning the reward and anticipating the reward are separate processes and both create an excitatory influence of reward-related cues. Both processes are dissociated at the level of the amygdala, and are functionally integrated within larger neural systems.
Motivational intensity and cognitive scope
Measuring Cognitive Scope
Cognitive scope can be measured by tasks involving attention, perception, categorization, and memory. Some studies use a flanker attention task to determine whether cognitive scope is broadened or narrowed. For example, using the letters "H" and "N", participants must identify as quickly as possible the middle letter of five when all the letters are the same (e.g., "HHHHH") and when the middle letter differs from the flanking letters (e.g., "HHNHH"). Broadened cognitive scope would be indicated if reaction times differed greatly between the condition in which all the letters were the same and the condition in which the middle letter differed from the flanking letters. Other studies use a Navon attention task to measure differences in cognitive scope. A large letter is composed of smaller letters, in most cases smaller "L"s or "F"s that make up the shape of the letter "T" or "H", or vice versa. Broadened cognitive scope would be suggested by a faster reaction to name the larger letter, whereas narrowed cognitive scope would be suggested by a faster reaction to name the smaller letters within the larger letter. A source monitoring paradigm can also be used to measure how much contextual information is perceived: participants watch a screen that serially displays words to be memorized for 3 seconds each, and they also have to remember whether each word appeared on the left half or the right half of the screen. The words are also encased in a colored box, and participants do not know that they will eventually be asked what color box each word appeared in.
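To make the scoring logic of these tasks concrete, here is a minimal sketch; the reaction times are invented, and the interference and global-advantage scores are generic congruent-versus-incongruent and global-versus-local differences, not the analysis of any particular published study.

```python
import statistics

# Invented reaction times in milliseconds for one participant.
flanker_congruent = [420, 435, 410, 428]     # e.g., "HHHHH"
flanker_incongruent = [455, 470, 462, 448]   # e.g., "HHNHH"

navon_global = [510, 525, 500]   # naming the large letter
navon_local = [560, 548, 572]    # naming the small component letters

# Flanker interference: a larger difference suggests a broader, less selective scope.
interference = statistics.mean(flanker_incongruent) - statistics.mean(flanker_congruent)

# Navon global advantage: positive values suggest a broadened (global) scope.
global_advantage = statistics.mean(navon_local) - statistics.mean(navon_global)

print(f"flanker interference: {interference:.1f} ms")
print(f"Navon global advantage: {global_advantage:.1f} ms")
```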
Main research findings
Motivational intensity refers to the strength of the urge to move toward or away from a particular stimulus.
Anger and fear affective states, induced via film clips, conferred more selective attention on a flanker task than control states, as indicated by reaction times that differed little even when the flanking letters were different from the middle target letter. Both anger and fear have high motivational intensity, because the propulsion to act is high in the face of an angry or fearful stimulus, such as a screaming person or a coiled snake. Affects high in motivational intensity thus narrow cognitive scope, enabling people to focus more closely on target information. After seeing a sad picture, participants were faster to identify the larger letter in a Navon attention task, suggesting a more global, or broadened, cognitive scope; sadness is thought to be comparatively low in motivational intensity. After seeing a disgusting picture, however, participants were faster to identify the component letters, indicative of a more localized, narrow cognitive scope; disgust has high motivational intensity. In short, affects high in motivational intensity narrow cognitive scope, making people better able to focus on central information, whereas affects low in motivational intensity broaden cognitive scope, allowing faster global interpretation. The changes in cognitive scope associated with different affective states are evolutionarily adaptive: stimuli that elicit high-motivational-intensity affects demand movement and action, and attention should remain focused on them, a pattern consistent with goal-directed behavior. For example, in early times, seeing a lion (a fearful stimulus) probably elicited a negative but high-motivational-intensity affective state (fear) that propelled the human being to run away; in that case the goal was to avoid being killed.
Moving beyond negative affective states alone, researchers wanted to test whether negative and positive affective states varied between high and low motivational intensity. To evaluate this, Harmon-Jones, Gable, and Price designed an experiment using appetitive picture priming and the Navon task, which allowed them to measure attentional scope through the detection of the Navon letters. The Navon task included a neutral affect comparison condition; neutral states typically produce a broadened attentional scope. They predicted that a broad attentional scope would lead to faster detection of the global (large) letters, whereas a narrow attentional scope would lead to faster detection of the local (small) letters. The results showed that the appetitive stimuli produced a narrowed attentional scope, and this narrowing was further increased when the experimenters told participants they would be allowed to consume the desserts shown in the pictures. Consistent with the hypothesis, broad attentional scope led to quicker detection of global letters and narrowed attentional scope led to quicker detection of local letters.
Researchers Bradley, Codispoti, Cuthbert, and Lang wanted to further examine emotional reactions in picture priming. Instead of an appetitive stimulus, they used stimulus sets from the International Affective Picture System (IAPS). The image set includes various unpleasant pictures, such as snakes, insects, attack scenes, accidents, illness, and loss. They predicted that the unpleasant pictures would stimulate a defensive motivational response that would produce strong emotional arousal, such as skin conductance responses and cardiac deceleration. Participants rated the pictures on valence, arousal, and dominance using the self-assessment manikin rating scale. The findings were consistent with the hypothesis and supported the view that emotion is organized motivationally by the intensity of activation in appetitive or defensive systems.
In earlier work, Harmon-Jones and Gable performed an experiment to examine whether neural activation associated with approach-motivation intensity (left frontal-central activity) would underlie the effect of appetitive stimuli on narrowed attention. They also tested whether individual differences in approach motivation are associated with attentional narrowing. To test these hypotheses, the researchers used the same Navon task with appetitive and neutral pictures, and additionally had participants indicate how many minutes had passed since they had last eaten. To examine neural activation, they used electroencephalography and recorded eye movements in order to detect which regions of the brain were active during approach motivation. The results supported the hypothesis that left frontal-central activity is associated with approach-motivational processes and narrowed attentional scope. Some psychologists were concerned that the hungry individuals showed increased left frontal-central activity simply out of frustration, but this concern was not borne out: the research showed that the dessert pictures increased positive affect even in hungry individuals. The findings suggest that narrowed cognitive scope can assist in goal accomplishment.
Later, researchers connected motivational intensity to clinical applications and found that alcohol-related pictures narrowed attention in people with a strong motivation to consume alcohol. The researchers exposed participants to alcohol-related and neutral pictures; after each picture was displayed, participants completed a test assessing attentional focus. The findings showed that exposure to alcohol-related pictures narrowed attentional focus in individuals who were motivated to use alcohol, whereas exposure to neutral pictures did not interact with alcohol-related motivation to alter attentional focus. The Alcohol Myopia Theory (AMT) states that alcohol consumption reduces the amount of information available in memory and narrows attention, so that only the most proximal items or most striking sources fall within the attentional scope. This narrowed attention leads intoxicated persons to make more extreme decisions than they would when sober. Researchers provided evidence that substance-related stimuli capture the attention of individuals who have a strong motivation to consume the substance, and that motivational intensity and cue-induced narrowing of attention play a unique role in shaping people's initial decision to consume alcohol. In 2013, psychologists from the University of Missouri investigated the connection between sport achievement orientation and alcohol outcomes. They asked varsity athletes to complete a Sport Orientation Questionnaire, which measured sport-related achievement orientation on three scales: competitiveness, win orientation, and goal orientation. The participants also completed assessments of alcohol use and alcohol-related problems. The results revealed that the athletes' goal orientation was significantly associated with alcohol use but not with alcohol-related problems.
In terms of psychopathological implications and applications, college students showing depressive symptoms were better at retrieving seemingly "nonrelevant" contextual information in a source monitoring paradigm task: the students with depressive symptoms were better than nondepressed students at identifying the color of the box each word had appeared in. Sadness (low motivational intensity) is usually associated with depression, so this broader focus on contextual information among sadder students supports the view that affects high in motivational intensity narrow cognitive scope whereas affects low in motivational intensity broaden it.
Motivational intensity theory states that the difficulty of a task combined with the importance of success determines the energy an individual invests. The theory has three main layers. The innermost layer says human behavior is guided by the desire to conserve as much energy as possible: individuals aim to avoid wasting energy, so they invest only the energy required to complete the task. The middle layer concerns how task difficulty combined with the importance of success affects energy conservation, covering energy investment in situations of both clear and unclear task difficulty. The outer layer makes predictions about the energy a person invests when free to choose among several options of differing task difficulty. Motivational intensity theory offers a logical and consistent framework for research: by treating effort as energy investment, researchers can predict a person's actions, and the theory is used to show how changes in goal attractiveness cause changes in energy investment.
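As a rough illustration of the theory's core prediction, the toy function below treats task difficulty and importance of success as values between 0 and 1 and assumes, as one common reading of the theory, that a person disengages when the task demands more effort than success would justify. This is a sketch for intuition only, not the formal model from the motivational intensity literature.

```python
# Toy sketch of motivational intensity theory's core prediction.
# The 0-1 scales for difficulty and importance are an illustrative
# convention, not part of the theory itself.

def predicted_effort(difficulty: float, importance: float) -> float:
    """Effort tracks task difficulty, because people invest only the energy
    the task requires, but the importance of success caps how much effort is
    justified; beyond that cap the person disengages and invests nothing."""
    justified_maximum = importance
    if difficulty > justified_maximum:
        return 0.0            # success is not worth the required energy
    return difficulty         # invest only what the task demands

# A moderately hard task that matters a lot versus one that matters little:
print(predicted_effort(0.6, 0.9))  # 0.6 -> effort rises to meet the demand
print(predicted_effort(0.6, 0.4))  # 0.0 -> demand exceeds what success is worth
```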
Mood, like emotion, is an affective state. However, an emotion tends to have a clear focus (i.e., its cause is self-evident), while mood tends to be more unfocused and diffuse. Mood, according to Batson, Shaw, and Oleson (1992), involves tone and intensity and a structured set of beliefs about general expectations of a future experience of pleasure or pain, or of positive or negative affect in the future. Unlike instant reactions that produce affect or emotion and that change with expectations of future pleasure or pain, moods, being diffuse and unfocused and thus harder to cope with, can last for days, weeks, months, or even years (Schucman, 1975). Moods are hypothetical constructs depicting an individual's emotional state. Researchers typically infer the existence of moods from a variety of behavioral referents (Blechman, 1990).
Positive affect and negative affect (as measured by the PANAS) represent independent domains of emotion in the general population. Positive and negative daily events show independent relationships to subjective well-being, and positive affect is strongly linked to social activity. Recent research suggests that high functional support is related to higher levels of positive affect. In his work on negative affect arousal and white noise, Seidner found support for the existence of a negative affect arousal mechanism in the devaluation of speakers from other ethnic origins. The exact process through which social support is linked to positive affect remains unclear; it could derive from predictable, regularized social interaction, from leisure activities where the focus is on relaxation and positive mood, or from the enjoyment of shared activities. The techniques used to shift a negative mood to a positive one are called mood repair strategies.
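For concreteness, the sketch below scores the two PANAS subscales, assuming the standard 20-item form in which each adjective is rated from 1 (very slightly or not at all) to 5 (extremely) and each subscale is simply summed; the example ratings are invented.

```python
# Minimal PANAS scoring sketch (standard 20-item form; ratings are invented).

POSITIVE_ITEMS = ["interested", "excited", "strong", "enthusiastic", "proud",
                  "alert", "inspired", "determined", "attentive", "active"]
NEGATIVE_ITEMS = ["distressed", "upset", "guilty", "scared", "hostile",
                  "irritable", "ashamed", "nervous", "jittery", "afraid"]

def panas_scores(ratings: dict) -> tuple:
    """Return (positive affect, negative affect) as separate sums, each
    ranging from 10 to 50; the two scores are treated as independent."""
    positive = sum(ratings[item] for item in POSITIVE_ITEMS)
    negative = sum(ratings[item] for item in NEGATIVE_ITEMS)
    return positive, negative

example = {item: 4 for item in POSITIVE_ITEMS}
example.update({item: 2 for item in NEGATIVE_ITEMS})
print(panas_scores(example))  # (40, 20): high positive affect, low negative affect
```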
Affect display is a critical facet of interpersonal communication. Evolutionary psychologists have advanced the hypothesis that hominids have evolved a sophisticated capability for reading affect displays.
Emotions are portrayed as dynamic processes that mediate the individual's relation to a continually changing social environment. In other words, emotions are considered to be processes of establishing, maintaining, or disrupting the relation between the organism and the environment on matters of significance to the person.
Most social and psychological phenomena occur as the result of repeated interactions between multiple individuals over time. These interactions should be seen as a multiagent system—a system that contains multiple agents interacting with each other and/or with their environments over time. The outcomes of individual agents' behaviors are interdependent: Each agent’s ability to achieve its goals depends on not only what it does but also what other agents do.
Emotions are one of the main sources for the interaction. Emotions of an individual influence the emotions, thoughts and behaviors of others; others' reactions can then influence their future interactions with the individual expressing the original emotion, as well as that individual's future emotions and behaviors. Emotion operates in cycles that can involve multiple people in a process of reciprocal influence.
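As a purely illustrative sketch of such a reciprocal cycle, the toy simulation below lets each agent's affect drift toward the affect displayed by the others on every round of interaction. The number of agents, the contagion rate, and the affect coding are arbitrary choices made for this example, not values drawn from the emotion-contagion literature.

```python
# Toy agent-based sketch of an emotion cycle: each round, every agent moves a
# fraction of the way toward the group's displayed affect, so expressions feed
# back over repeated interactions. All parameters are illustrative.
import random

def interact(affects, contagion_rate=0.2):
    """One round of mutual influence among all agents."""
    displayed_mean = sum(affects) / len(affects)
    return [a + contagion_rate * (displayed_mean - a) for a in affects]

# Affect coded from -1 (very negative) to +1 (very positive); one strongly
# negative agent starts among four mildly positive ones.
random.seed(0)
affects = [-0.9] + [random.uniform(0.0, 0.5) for _ in range(4)]
for _ in range(10):
    affects = interact(affects)
print([round(a, 2) for a in affects])  # the group drifts toward a shared affect
```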
Affect, emotion, or feeling is displayed to others through facial expressions, hand gestures, posture, voice characteristics, and other physical manifestations. These affect displays vary between and within cultures and are displayed in various forms ranging from the most discreet of facial expressions to the most dramatic and prolific gestures.
Observers are sensitive to agents' emotions, and are capable of recognizing the messages these emotions convey. They react to and draw inferences from an agent's emotions. It should be noted that the emotion an agent displays may not be an authentic reflection of his or her actual state (See also Emotional labor).
Agents' emotions can have effects on four broad sets of factors:
- Emotions of other persons
- Inferences of other persons
- Behaviors of other persons
- Interactions and relationships between the agent and other persons.
Emotions may affect not only the person at whom the emotion was directed, but also third parties who observe an agent's emotion. Moreover, emotions can affect larger social entities such as a group or a team. Emotions are a kind of message and therefore can influence the emotions, attributions and ensuing behaviors of others, potentially evoking a feedback process to the original agent.
Agents' feelings evoke feelings in others by two suggested distinct mechanisms:
- Emotion Contagion – people tend to automatically and unconsciously mimic non-verbal expressions. Mimicking also occurs in interactions involving verbal exchanges alone.
- Emotion Interpretation – an individual may perceive an agent as feeling a particular emotion and react with complementary or situationally appropriate emotions of their own. The feelings of the others diverge from, and in some way complement, the feelings of the original agent.
People may not only react emotionally to an emotive agent, but may also draw inferences about that agent, such as his or her social status or power, competence, and credibility. For example, an agent presumed to be angry may also be presumed to have high power.
- Hogg, M.A., Abrams, D., & Martin, G.N. (2010). Social cognition and attitudes. In Martin, G.N., Carlson, N.R., Buskist, W., (Ed.), Psychology (pp 646-677). Harlow: Pearson Education Limited.
- Duncan, S., & Barret, L.F. (2007). Affect is a form of cognition: A neurobiological analysis. Cognition & Emotion, 21(6), 1184-1211. doi:10.1080/02699930701437931
- Harmon-Jones, E.; Gable, P. A.; Price, T. F. (5 August 2013). "Does Negative Affect Always Narrow and Positive Affect Always Broaden the Mind? Considering the Influence of Motivational Intensity on Cognitive Scope". Current Directions in Psychological Science 22 (4): 301–307. doi:10.1177/0963721413481353.
- Harmon-Jones, E.; Harmon-Jones, C.; Amodio, D.M.; Gable, P.A. (2011). "Attitude toward emotions". Journal of Personality and Social Psychology 101 (6): 1332–1350. doi:10.1037/a0024951.
- Gable, P.A.; Harmon-Jones, E. (2013). "Does arousal per se account for the influence of appetitive stimuli on attentional scope and the late positive potential?". Psychophysiology 50 (4): 344–350. doi:10.1111/psyp.12023.
- "Emotion". The Penguin Dictionary of Psychology. Credo Reference: Penguin. 2009.
- Harmon-Jones, E.; Harmon-Jones, C.; Price, T.F. (2013). "What is approach motivation?". Emotion Review.
- Green, Andre (1973), The Fabric of Affect in the Psychoanalytic Discourse, The New Library of Psychoanalysis, London and NY, 1999
- Watson, D.; Clark, L. A.; Tellegen, A. (1988). "Development and validation of brief measures of positive and negative affect: the PANAS scales". Journal of Personality and Social Psychology 54 (6): 1063–1070. doi:10.1037/0022-3514.54.6.1063.
- Thompson, E. R. (2007). "Development and validation of an internationally reliable short-form of the positive and negative affect schedule (PANAS)". Journal of Cross-Cultural Psychology 38 (2): 227–242. doi:10.1177/0022022106297301.
- Finucane, Anne M. (2011). "The effect of fear and anger on selective attention". Emotion 11 (4): 970–974. doi:10.1037/a0022574. PMID 21517166.
- Gable, P.; Harmon-Jones, E. (14 January 2010). "The Blues Broaden, but the Nasty Narrows: Attentional Consequences of Negative Affects Low and High in Motivational Intensity". Psychological Science 21 (2): 211–215. doi:10.1177/0956797609359622.
- von Hecker, Ulrich; Meiser, Thorsten (2005). "Defocused Attention in Depressed Mood: Evidence From Source Monitoring". Emotion 5 (4): 456–463. doi:10.1037/1528-3542.5.4.456.
- Harmon-Jones, Eddie; Price, Tom F.; Gable, Philip A. (April 2012). "The Influence of Affective States on Cognitive Broadening/Narrowing: Considering the Importance of Motivational Intensity". Social and Personality Psychology Compass 6 (4): 314–327. doi:10.1111/j.1751-9004.2012.00432.x.
- Harmon-Jones, Eddie; Gable, Phillip (April 2009). "Neural Activity Underlying the Effect of Approach-Motivated Positive Affect on Narrowed Attention". Psychological Science 20 (4): 406–409. doi:10.1111/j.1467-9280.2009.02302.x.
- Bradley, M.; Codispoti, M.; Cuthbert, B.; Lang, P. (September 2001). "Emotion and Motivation I: Defensive and Appetitive Reactions in Picture Processing". Emotion 1 (3): 276–298. doi:10.1037/1528-3542.1.3.276. PMID 12934687.
- Hicks, J.A.; Friedman, R.S.; Gable, P.A.; Davis, W.E. (June 2012). "Interactive effects of approach motivational intensity and alcohol cues on the scope of perceptual attention". Addiction 107 (6): 1074–1080. doi:10.1111/j.1360-0443.2012.03781.x. PMID 22229816.
- Weaver, C.C.; Martens, M.P.; Cadigan, J.M.; Takamatsu, S.K.; Treloar, H.R.; Pederson, E.R. (29 August 2013). "Sport-related Achievement Motivation and Alcohol Outcomes: An Athlete-Specific Risk Factor Among Intercollegiate Athletes". Addictive Behaviors 38 (12): 2930–2936. doi:10.1016/j.addbeh.2013.08.021. PMID 24064192.
- Richter, M. (2013). "A closer look into the multi-layer structure of Motivational Intensity theory". Social and Personality Psychology Compass 7 (1): 1–12. doi:10.1111/spc3.12007.
- Martin, Brett A. S. (2003). "The Influence of Gender on Mood Effects in Advertising" (PDF). Psychology and Marketing 20 (3): 249–273. doi:10.1002/mar.10070.
- Seidner, Stanley S. (1991). Negative Affect Arousal Reactions from Mexican and Puerto Rican Respondents. Washington, D.C.: ERIC.
- Nesse, R.M. (1990). "Evolutionary explanations of emotions". Human Nature 1 (3): 261–289. doi:10.1007/bf02733986.
- Keltner, D.; Haidt, J. (1999). "Social Functions of Emotions at Four Levels of Analysis". COGNITION AND EMOTION 13 (5): 505–521. doi:10.1080/026999399379168.
- Campos, J.; Campos, R. G.; Barrett, K. (1989). "Emergent themes in the study of emotional development and emotion regulation". Developmental Psychology 25 (3): 394–402. doi:10.1037/0012-1649.25.3.394.
- Smith, R.; Conrey, F.R. (2007). "Agent-Based Modeling: A New Approach for Theory Building in Social Psychology". Personality and Social Psychology Review 11 (1): 87–104. doi:10.1177/1088868306294789.
- Rafaeli, A.; Hareli, S. (2007). "Emotion cycles: On the social influence of emotion in organizations". Research in Organizational Behavior.
- Ekman, P. (1992). "An argument for basic emotions". Cognition and Emotion 6 (3/4): 169–200. doi:10.1080/02699939208411068.
- Hatfield, E., Cacioppo, J. T., & Rapson, R. L. 1994. Emotional contagion. Cambridge: Cambridge University Press.
- Rafaeli, A., Cheshin, A., & Israeli, R. (2007). Anger contagion and team performance. Paper presented at the Annual Meeting of the Academy of Management, Philadelphia, Pennsylvania.
- Frijda, N.H. (1986). The emotions. Cambridge: Cambridge University Press.
- Tiedens, L. (2001). "Anger and advancement versus sadness and subjugation: The effect of negative emotion expression on social status conferral". Journal of Personality and Social Psychology 80 (1): 86–94. doi:10.1037/0022-3514.80.1.86. PMID 11195894.
- APA (2006). VandenBos, Gary R., ed. APA Dictionary of Psychology. Washington, DC: American Psychological Association, p. 26.
- Balleine, B. W. (2005). "Dietary Influences on Obesity: Environment, Behavior and Biology". Physiology & Behavior 86 (5): 717–730.
- Batson, C.D., Shaw, L. L., Oleson, K. C. (1992). Differentiating Affect, Mood and Emotion: Toward Functionally based Conceptual Distinctions. Emotion. Newbury Park, CA: Sage
- Blechman, E. A. (1990). Moods, Affect, and Emotions. Lawrence Erlbaum Associates: Hillsdale, NJ
- Brewin, C. R. (1989). "Cognitive Change Processes in Psychotherapy". Psychological Review 96 (3): 379–394. doi:10.1037/0033-295x.96.3.379.
- Damasio, A. (1994). Descartes' Error: Emotion, Reason, and the Human Brain. Putnam Publishing.
- Griffiths, P. E. (1997). What Emotions Really Are: The Problem of Psychological Categories. The University of Chicago Press: Chicago
- Lazarus, R. S. (1982). "Thoughts on the Relations between Emotions and Cognition". American Psychologist 37 (10): 1019–1024.
- Lerner, J.S.; Keltner, D. (2000). "Beyond valence: Toward a model of emotion-specific influences on judgement and choice". Cognition and Emotion 14 (4): 473–493. doi:10.1080/026999300402763.
- Nathanson, Donald L. Shame and Pride: Affect, Sex, and the Birth of the Self. London: W.W. Norton, 1992
- Quirin, M.; Kazén, M.; Kuhl, J. (2009). "When nonsense sounds happy or helpless: The Implicit Positive and Negative Affect Test (IPANAT)". Journal of Personality and Social Psychology 97 (3): 500–516. doi:10.1037/a0016063.
- Schucman, H., Thetford, C. (1975). A Course in Miracles. New York: Viking Penguin.
- Shepard, R. N. (1984). "Ecological Constraints on Internal Representation". Psychological Review 91 (4): 417–447. doi:10.1037/0033-295x.91.4.417.
- Shepard, R. N. (1994). "Perceptual-cognitive Universals as Reflections of the World". Psychonomic Bulletin & Review 1 (1): 2–28. doi:10.3758/bf03200759.
- Weiskrantz, L. (1997). Consciousness Lost and Found. Oxford: Oxford Univ. Press.
- Zajonc, R. B. (1980). "Feelings and Thinking: Preferences Need No Inferences". American Psychologist 35 (2): 151–175. doi:10.1037/0003-066x.35.2.151.
- Personality and the Structure of Affective Responses
- Lynch, Brian. "Affect and Script Theory - Silvan S. Tomkins". Archived from the original on 15 September 2008. Retrieved 2008-09-26.
- Circumplex Model of Affect
- Affect and Memory
History of Knoxville, Tennessee
The History of Knoxville, Tennessee, began with the establishment of James White's Fort on the Trans-Appalachian frontier in 1786. The fort was chosen as the capital of the Southwest Territory in 1790, and the city, named for Secretary of War Henry Knox, was platted the following year. Knoxville became the first capital of the State of Tennessee in 1796, and grew steadily during the early 19th century as a way station for westward-bound migrants and as a commercial center for nearby mountain communities. The arrival of the railroad in the 1850s led to a boom in the city's population and commercial activity.
While a Southern city, Knoxville was home to a strong pro-Union element during the secession crisis of the early 1860s, and remained bitterly divided throughout the Civil War. The city was occupied by Confederate forces until September 1863, when Union forces entered the city unopposed. Confederate forces laid siege to the city later that year, but retreated after failing to breach the city's fortifications during the Battle of Fort Sanders.
Following the war, business leaders, many from the North, established major iron and textile industries in Knoxville. As a nexus between rural towns in Southern Appalachia and the nation's great manufacturing centers, Knoxville grew to become the third-largest wholesaling center in the South. Tennessee marble, extracted from quarries on the city's periphery, was used in the construction of numerous monumental buildings across the country, earning Knoxville the nickname, "The Marble City."
Knoxville's economy slowed in the early 1900s. Political factionalism hampered revitalization efforts throughout much of the 20th century, though the creation of federal entities such as the Tennessee Valley Authority in the 1930s and the ten-fold expansion of the University of Tennessee helped keep the economy stable. Beginning in the late 1960s, a city council more open to change, along with economic diversification, urban renewal, and the hosting of the 1982 World's Fair, helped the city revitalize to some extent.
Prehistory and early recorded history
The first humans to form substantial settlements in what is now Knoxville arrived during the Woodland period (c. 1000 B.C. – 1000 A.D.). Knoxville's two most prominent prehistoric structures are Late Woodland period burial mounds, one located along Cherokee Boulevard in Sequoyah Hills, and the other located along Joe Johnson Drive on the U.T. campus. Substantial Mississippian period (c. 1100–1600 A.D.) village sites have been found at Post Oak Island (along the river near the Knox-Blount line), and at Bussell Island (near Lenoir City).
The Spanish expedition of Hernando de Soto is believed to have traveled down the French Broad Valley and visited the Bussell Island village in 1540 en route to the Mississippi River. A follow-up expedition led by Juan Pardo may have visited village sites in the Little Tennessee Valley in 1567. The records of these two expeditions suggest the area was part of a Muskogean chiefdom known as Chiaha, which was subject to the Coosa chiefdom further to the south.
By the 18th century, the Cherokee had become the dominant tribe in the East Tennessee region, although they were consistently at war with the Creeks and Shawnee. The Cherokee people called the Knoxville area kuwanda'talun'yi, which means "Mulberry Place." Most Cherokee habitation in the area was concentrated in the Overhill settlements along the Little Tennessee River, southwest of Knoxville.
Early exploration and late-18th century politics
By the early 1700s, traders from South Carolina were visiting the Overhill towns regularly, and following the discovery of Cumberland Gap in 1748, long hunters from Virginia began pouring into the Tennessee Valley. At the outbreak of the French and Indian War in 1754, the Cherokee supported the British, and the British in return constructed Fort Loudoun to protect the Overhill towns from the French and their allies.:1 After a falling out, however, the Cherokee attacked the fort and killed its occupants in 1760. A peace expedition to the Overhill towns led by Henry Timberlake passed along the river through what is now Knoxville in December 1761.
The Cherokee supported the British during the American Revolution, and after the war, North Carolina, which considered the Tennessee Valley part of its territory, deemed Cherokee claims to the region void.:2 North Carolina made plans to cede its Trans-Appalachian territory to the federal government, but decided to open up the lands to settlement first. In 1783, land speculator William Blount and his brother, John Gray Blount, convinced North Carolina to pass a law offering lands in the Tennessee Valley for sale. Later that year, an expedition consisting of James White (1747–1820), James Connor, Robert Love, and Francis Alexander Ramsey, explored the Upper Tennessee Valley, and discovered the future site of Knoxville. Taking advantage of Blount's land-grab act, White took out a claim for the site shortly afterward.
In 1786, White moved to the future site of Knoxville, where he and fellow explorer James Connor built what became known as White's Fort. The site straddled a hill that was bounded by the river on the south, creeks (First Creek and Second Creek) on the east and west, and a swampy declivity on the north. The fort, which originally stood along modern State Street, consisted of four heavily timbered cabins connected by an 8-foot (2.4 m) palisade, enclosing one-quarter acre of ground.:374 White also erected a mill for grinding grain on nearby First Creek.:375
White's Fort represented the western extreme of the so-called State of Franklin, which Tennessee settlers organized in 1784 after North Carolina reneged on its plans to cede its western territory to the federal government.:3 James White supported the State of Franklin, and served as its Speaker of the Senate in 1786. The federal government never recognized the State of Franklin, however, and by 1789, its supporters once again pledged allegiance to North Carolina.:4
In 1789, White, William Blount, and former State of Franklin leader John Sevier, now members of the North Carolina state legislature, helped convince the state to ratify the United States Constitution.:4 Following ratification, North Carolina ceded control of its Tennessee territory to the federal government.:4–5 In May 1790, the United States created the Southwest Territory, which included Tennessee, and President George Washington appointed Blount the territory's governor.:4–5
Establishment of Knoxville
Blount immediately moved to White's Fort (chosen for its central location) to begin resolving land disputes between the Cherokee and white settlers in the region.:5–6 In the Summer of 1791, he met with forty-one Cherokee chiefs at the mouth of First Creek to negotiate the Treaty of Holston, which was signed on July 2 of that year.:5–6 The treaty moved the boundary of Cherokee lands westward to the Clinch River and southwestward to the Little Tennessee River.:5–6
While Blount initially sought to place the territorial capital at the confluence of the Clinch and Tennessee rivers (near modern Kingston), where he had land claims, he was unable to convince the Cherokee to completely relinquish this area, and thus settled on White's Fort as the capital.:6–7 James White set aside land for a new town, which initially consisted of the area now bounded by Church Avenue, Walnut Street, First Creek, and the river, in what is now Downtown Knoxville. White's son-in-law, Charles McClung, surveyed the land and divided it into 64 half-acre lots. Lots were set aside for a church and cemetery, a courthouse, a jail, and a college.
On October 3, 1791, a lottery was held for those wishing to purchase lots in the new city, which was named "Knoxville" in honor of Blount's superior, Secretary of War Henry Knox. Along with Blount and McClung, those who purchased lots in the city included merchants Hugh Dunlap, Thomas Humes, and Nathaniel and Samuel Cowan, newspaper publisher George Roulstone, the Reverend Samuel Carrick, frontiersman John Adair (who had built a fort just to the north in what is now Fountain City), and tavern keeper John Chisholm.
Knoxville in the 1790s
Following the sale of lots, Knoxville's leaders set about constructing a courthouse and jail. A garrison of federal soldiers, under the command of David Henley, erected a blockhouse in Knoxville in 1792. The Cowan brothers, Nathaniel and Samuel, opened the city's first general store in August 1792, and John Chisholm's tavern was in operation by December 1792. The city's first newspaper, the Knoxville Gazette, was established by George Roulstone in November 1791. In 1794, Blount College, the forerunner of the University of Tennessee, was chartered, with Samuel Carrick as its first president. Carrick also established the city's first church, the First Presbyterian Church, though a building wasn't constructed until the 1800s.
In many ways, early Knoxville was a typical rowdy late-18th century frontier village. A detached group of Cherokee, known as the Chickamaugas, refused to recognize the Holston treaty, and remained a constant threat. In September 1793, a large force of Chickamaugas and Creeks marched on Knoxville, and massacred the inhabitants of Cavet's Station (near modern Bearden) before dispersing.:11 Outlaws roamed the city's periphery, among them the Harpe Brothers, who murdered at least one settler in 1797 before fleeing to Kentucky.:12 Abishai Thomas, an associate of Blount who visited Knoxville in 1794, noted that the city was full of taverns and tippling houses, and that the blockhouse's jail was overcrowded with criminals.:11–2
In 1795, James White set aside more land for the growing city, allowing it to expand northward to modern Clinch Avenue and westward to modern Henley Street. A census that year showed that Tennessee had a large enough population to apply for statehood. In January 1796, delegates from across Tennessee, including Blount, Sevier, and Andrew Jackson, convened in Knoxville to draw up a constitution for the new state, which was admitted to the Union on June 1, 1796. Knoxville was chosen as the initial capital of the state.:13
Knoxville in the antebellum period
While Knoxville's population grew steadily in the early 1800s, most new arrivals were westward-bound migrants staying in the town for a brief period. By 1807, some 200 migrants were passing through the town every day.:75 Cattle drovers, who specialized in driving herds of cattle across the mountains to markets in South Carolina, were also frequent visitors to the city.:75 The city's merchants acquired goods from Baltimore and Philadelphia via wagon trains.
French botanist André Michaux visited Knoxville in 1802, and reported the presence of approximately 200 houses and 15 to 20 "well-stocked" stores. While there was "brisk commerce" at the city's stores, Michaux noted, the only industries in the city were tanneries. In February 1804, itinerant Methodist preacher Lorenzo Dow passed through Knoxville, and reported the widespread presence of a religious phenomenon in which worshippers would go into seizure-like convulsions, or "jerks," at rallies. Illinois governor John Reynolds, who studied law in Knoxville, recalled a raucous, anti-British celebration held in the city on July 4, 1812, at the onset of the War of 1812.
On October 27, 1815, Knoxville officially incorporated as a city. The city's new charter set up an alderman-mayor form of government, in which a Board of Aldermen was popularly elected, and in turn selected a mayor from one of their own.:75 This remained Knoxville's style of government until the early 20th century, though the city's charter was amended in 1838 to allow for popular election of mayor as well.:76 In January 1816, Knoxville's newly elected Board of Aldermen chose Judge Thomas Emmerson (1773–1837) as the city's first mayor.
Sectionalism and struggles with isolation
Historian William MacArthur once described Knoxville as a "product and prisoner of its environment.":1 Throughout the first half of the 19th century, Knoxville's economic growth was stunted by its isolation. The rugged terrain of the Appalachian Mountains made travel in and out of the city by road difficult, with wagon trips to Philadelphia or Baltimore requiring a round trip of several months. Flatboats were in use as early as 1795 to carry goods from Knoxville to New Orleans via the Tennessee, Ohio, and Mississippi rivers,:94 but river hazards near Muscle Shoals and Chattanooga made such a trek risky.
During the 1820s and 1830s, state legislators from East Tennessee continuously bickered with legislators from Middle and West Tennessee over funding for road and navigational improvements. East Tennesseans felt the state had squandered the proceeds from the sale of land in the Hiwassee District (1819) on a failed state bank, rather than on badly needed internal improvements. It wasn't until 1828 that a steamboat, the Atlas, managed to navigate Muscle Shoals and make it upriver to Knoxville. River improvements in the 1830s allowed Knoxville semi-annual access to the Mississippi, though by this time the city's merchants had shifted their focus to railroad construction.
Life in Knoxville, 1816–1854
In 1816, as the Gazette was in decline, businessmen Frederick Heiskell and Hugh Brown established a newspaper, the Knoxville Register. Along with the Register, Heiskell and Brown published a pro-emancipation newsletter, the Western Monitor and Religious Observer, as well as books such as John Haywood's Civil and Political History of the State of Tennessee (1823), one of the state's first comprehensive histories.:15 The Register celebrated the move of East Tennessee College (the new name of Blount College following its rechartering in 1807) to Barbara Hill in 1826, and encouraged the trustees of the Knoxville Female Academy, which had been chartered in 1811, to finally hire a faculty and hold its first classes in 1827.
In the April 1839 issue of the Southern Literary Messenger, a traveler who had recently visited Knoxville described the people of the city as "moral, sociable and hospitable," but "with less refinement of mind and manners" than people in older towns. In 1842, English travel writer James Gray Smith reported that the city was home to a university, an academy, a "ladies' school," three churches, two banks, two hotels, 15-20 stores, and several "handsome country residences" occupied by people "as aristocratic as even an Englishman... could possibly desire."
In 1816, merchant Thomas Humes began building a lavish hotel on Gay Street, later known as the Lamar House Hotel, which for decades would provide a gathering place for the city's elite. In 1848, the Tennessee School for the Deaf opened in Knoxville, giving an important boost to the city's economy. In 1854, land speculators Joseph Mabry and William Swan donated land for the creation of Market Square, creating a venue for farmers from the surrounding region to sell their produce.:4–11
The arrival of the railroads
As early as the 1820s, Knoxville's business leaders viewed railroads—then a relatively new form of transportation—as a solution to the city's economic isolation. Led by banker J. G. M. Ramsey (1797–1884), Knoxville business leaders joined calls to build a rail line connecting the city to Cincinnati to the north and Charleston to the southeast, which led to the chartering of the Louisville, Cincinnati and Charleston Railroad (LC&C) in 1836. The Hiwassee Railroad, chartered two years later, was to connect this line with a rail line in Dalton, Georgia.
In spite of Knoxvillians' enthusiasm (the city celebrated the passage of a state appropriations bill for the LC&C with a 56-gun salute in 1837), the LC&C was doomed by a financial recession in the late 1830s, and construction of the Hiwassee Railroad was stalled by lack of funding amidst continued sectional bickering. The Hiwassee was rechartered as the East Tennessee and Georgia Railroad in 1847, and construction finally began the following year. The first train rolled into Knoxville on June 22, 1855, to great fanfare.:106
With the arrival of the railroad, Knoxville expanded rapidly. The city's northern boundary extended northward to absorb the tracks, and its population grew from about 2,000 in 1850 to over 5,000 in 1860.:20 Local crop prices spiked, the number of wholesaling firms in Knoxville grew from 4 to 14,:21–23 and two new factories—the Knoxville Manufacturing Company, which made steam engines, and Shepard, Leeds and Hoyt, which built railroad cars—were established.:21–23 In 1859, the city had 6 hotels, several tanners, tinners, and furniture makers, and 26 liquor stores.:21–23
The Secession crisis in Knoxville
Antebellum politics in Knoxville
Early-19th century Knoxville was often caught in the middle of the sectionalist fighting between East Tennessee and the state as a whole. Following the presidential election of 1836, in which Knoxvillian Hugh Lawson White (James White's son) ran against Andrew Jackson's hand-picked successor, Martin Van Buren, political divisions in the city manifested along Whig (anti-Jackson) and Democratic party lines.:17–8 In 1839, W.B.A. Ramsey won the city's first popular mayoral election by a single vote, illustrating how strong these divisions had become.:76
In 1849, William G. "Parson" Brownlow moved his radical Whig newspaper, the Whig, to Knoxville. Brownlow's editorial style, which often involved vicious personal attacks, intensified the already-sharp political divisions within the city. In 1857, he quarreled with the pro-Secession Southern Citizen and its publishers, Knoxville businessman William G. Swan and the exiled Irish patriot John Mitchel, to the point of threatening Swan with a pistol.:49 Brownlow's attacks drove Whig-turned-Democrat John Hervey Crozier from public life,:289–290 and forced two directors of the failed Bank of East Tennessee, A.R. Crozier and William Churchwell, to flee town. He brought charges of swindling against a third director, J.G.M. Ramsey, the former railroad promoter and a staunch Democrat.:290
Following the nationwide collapse of the Whig Party in 1854, many of Knoxville's Whigs, including Brownlow, were unwilling to support the new Republican Party formed by northern Whigs, and instead aligned themselves with the anti-immigrant American Party (commonly called the "Know Nothings").:25 When this movement disintegrated, Knoxville's ex-Whigs turned to the Opposition Party. In 1858, Opposition Party candidate Horace Maynard, with Brownlow's endorsement, soundly defeated Democratic candidate J.C. Ramsey (J.G.M. Ramsey's son) for the 2nd district's congressional seat.:49
Knoxville and slavery
By 1860, slaves comprised 22% of Knoxville's population, which was higher than the percentage across East Tennessee (approximately 10%) but lower than the rest of the South (about one-third).:78–9 Most of Knox County's farms were small (only one was larger than 1,000 acres (4.0 km2)) and typically focused on livestock or other products that weren't labor-intensive.:78–9 The city's largest slaveholder was Joseph Mabry, who owned 42 slaves in 1860. The city was home to a chapter of the American Colonization Society,:34 led by St. John's Episcopal Church rector Thomas William Humes.:35
While Knoxville was far less dependent on slavery than the rest of the South, most of the city's leaders, even those who opposed secession, were pro-slavery at the onset of the Civil War.:34–39 Some, such as J.G.M. Ramsey, had always been pro-slavery.:35 However, numerous prominent Knoxvillians, including Brownlow, Oliver Perry Temple, and Horace Maynard, had been pro-emancipation in the 1830s, but, for reasons not fully understood, were pro-slavery by the 1850s.:36–39
Temple later wrote that he and others abandoned their anti-slavery stance due to the social ostracism abolitionists faced in the South.:37 Historian Robert McKenzie, however, argues that the aggression of northern abolitionists toward Southerners pushed many Southern abolitionists toward pro-slavery views, though he points out that no one explanation neatly explains this shift.:39 In any case, by the late-1850s, most of Knoxville's leaders were pro-slavery. The views of Brownlow and Ramsey, bitter enemies on many fronts, were virtually identical on the issue of slavery.
The secession debate in Knoxville
The election of Abraham Lincoln in 1860 drastically intensified the secession debate in Knoxville, and the city's leaders met on November 26 to discuss the issue.:56 Those who favored secession, such as J.G.M. Ramsey, believed it was the only way to ensure the rights of Southerners. Those who rejected secession, such as Maynard and Temple, believed that East Tennesseans, most of whom were yeoman farmers, would be rendered subservient to a government dominated by Southern planters.:57 In February 1861, Tennessee held a vote on whether or not to hold a statewide convention to consider seceding and joining the Confederacy.:60 In Knoxville, 77% voted against this measure, affirming the city's allegiance to the Union.:60
Throughout the first half of 1861, Brownlow and J. Austin Sperry (the radical secessionist editor of the Knoxville Register) assailed one another mercilessly in their respective papers,:128, 214 and Union and Secessionist leaders blasted one another in speeches across the region. Simultaneous Union and Confederate recruiting rallies were held on Gay Street.:72 Following the attack on Fort Sumter in April, Governor Isham Harris made moves to align the state with the Confederacy, prompting the region's Unionists to form the East Tennessee Convention, which met at Knoxville on May 30, 1861. The convention submitted a petition to Harris, calling his actions undemocratic and unconstitutional.
In a second statewide vote on June 8, 1861, a majority of East Tennesseans still rejected secession,:80–82 but the measure succeeded in Middle and West Tennessee, and the state thus joined the Confederacy. In Knoxville, the vote was 777 to 377 in favor of secession.:81 McKenzie points out, however, that 436 Confederate soldiers from outside Knox County were stationed in Knoxville at the time and were allowed to vote.:81 If these votes are removed, the tally in Knoxville was 377 to 341 against secession.:81 Following the vote, the East Tennessee Union Convention petitioned the state legislature, asking that East Tennessee be allowed to form a separate, Union-aligned state. The petition was rejected, however, and Governor Harris ordered Confederate troops into the region.:359–365
The Civil War
The Confederate commander in East Tennessee, Felix Zollicoffer, initially took a lenient stance toward the region's Unionists. In November 1861, however, Union guerrillas destroyed several railroad bridges across East Tennessee, prompting Confederate authorities to institute martial law.:370–406 Suspected bridge-burning conspirators were tried and executed, and hundreds of other Unionists were jailed, forcing authorities to erect a makeshift prison at the corner of Main and Prince (Market) streets in Downtown Knoxville.:34 Brownlow was among those arrested, but was released after a few weeks. He spent 1862 touring the north in an attempt to rally support for a Union invasion of East Tennessee.:111–2
Zollicoffer was replaced by John Crittenden in November 1861,:40 and Crittenden was in turn replaced by Edmund Kirby Smith in March 1862,:50 as Confederate authorities consistently struggled to find an acceptable commander for its East Tennessee forces. In June 1862, George Wilson, one of Andrews' Raiders, was tried and convicted in Knoxville.:65–6 In July 1862, 40 Union soldiers captured by Nathan Bedford Forrest near Murfreesboro were marched down Gay Street, with Confederate soldiers jokingly reading aloud their personal correspondence afterward.:66
The divided 2nd District sent representatives to both the U.S. Congress (Horace Maynard) and the Confederate Congress (William G. Swan) in 1861.:90:24 Maynard, along with fellow East Tennessee Unionist Andrew Johnson, consistently pleaded with President Lincoln to send troops into the region.:437–441 For nearly two years, however, Union generals in Kentucky consistently ignored orders to march on Knoxville, and instead focused on Middle Tennessee.:44 On June 20, 1863, William P. Sanders's Union cavalry briefly laid siege to Knoxville, but a Confederate citizens' guard within the city managed to fend them off.:77–8
In August 1863, Simon Buckner, the last of a string of Confederate commanders based in Knoxville, evacuated the city. On September 1, the vanguard of Union general Ambrose Burnside's army entered the city to great fanfare (the unit briefly chased future mayor Peter Staub through the streets).:84 Oliver Perry Temple joyously ran behind the soldiers the length of Gay Street, and pro-Union Mayor James C. Luttrell raised a large American flag he had saved for the occasion.:479 Burnside set up his headquarters at John Hervey Crozier's house at the corner of Gay and Union. Thomas William Humes was reinstalled as rector of St. John's Episcopal,:85 and Brownlow returned to the city and once again began publication of the Whig.:153
Anticipating the Confederates would soon attempt to retake the city, Burnside and his chief engineer, Orlando Poe, set about fortifying the city with a string of earthworks, bastions, and trenches.:108–114 In November 1863, Confederate general James Longstreet moved north from Chattanooga in hopes of forcing Burnside out of Knoxville. Burnside's forces managed to delay Longstreet at the Battle of Campbell's Station on November 16, but were forced to retreat back to Knoxville with Longstreet in pursuit.:126–9 General Sanders was mortally wounded on November 18 executing a critical delaying action along Kingston Pike. Fort Loudon, one of the city's earthen bastions, was renamed "Fort Sanders" in his honor.:141, 147
Longstreet's forces laid siege to Knoxville for two weeks, though the Union Army managed to resupply Burnside via the river.:164 On the morning of November 29, 1863, Longstreet ordered his forces to attack Fort Sanders. The Confederate attackers struggled to overcome Union trenches and the barrage of Union gunfire, and were forced to withdraw after just 20 minutes.:191–9 On December 2, Longstreet lifted the siege and withdrew to Virginia, leaving the city in Union hands until the end of the war.:213
In April 1864, the East Tennessee Union Convention reconvened in Knoxville, and while its delegates were badly divided, several, including Brownlow and Maynard, supported a resolution recognizing the Emancipation Proclamation.:191–3 Confederate businessman Joseph Mabry and future business leaders such as Charles McClung McGhee and Peter Kern began working with Union leaders to rebuild the city.:31 Brownlow remained vengeful, however, seizing the property of Confederate leaders J.G.M. Ramsey, William Sneed (including the Lamar House Hotel), and William Swan, and expelling known Confederate sympathizers from the city.:198–201
Acts of Civil War-related violence occurred in Knoxville for years after the war. On September 4, 1865, Confederate soldier Abner Baker was lynched in Knoxville after killing a Union soldier who had killed his father.:217–9 On July 10, 1868, Union major E.C. Camp shot and killed Confederate colonel Henry Ashby on Main Street over a Civil War grievance. On June 13, 1870, Joseph Mabry shot pro-Union attorney John Baxter in front of the Lamar House, capping a feud that had been building since the war. The following year, David Nelson, the son of pro-Union congressman T.A.R. Nelson, shot and killed Confederate general James Holt Clanton on Gay Street.
Knoxville and the rise of the New South (1866–1920)
According to historian William MacArthur, Knoxville "grew from a town to a city between 1870 and 1900.":29 A number of newcomers from the North, with the help of prewar local business elites, quickly established the city's first heavy industries. Hiram Chamberlain and the Welsh-born Richards brothers established the Knoxville Iron Company in 1868, and erected a large mill in the Second Creek Valley.:208–210 The following year, Charles McClung McGhee and several investors purchased the city's two major railroads and merged them into the East Tennessee, Virginia and Georgia Railway, which would eventually control over 2,500 miles (4,000 km) of tracks in five states.:196 The city's textile industry took shape with the establishment of the Knoxville Woolen Mills and Brookside Mills in 1884 and 1885, respectively.:46–7
As one of the largest cities in the Southern Appalachian region, Knoxville had long been a nexus between the surrounding rural mountain hinterland and the major industrial centers of the North, and thus had long been home to a thriving wholesaling (or "jobbing") market. Rural merchants from across East Tennessee purchased goods for their general stores from Knoxville wholesalers.:17 With the arrival of the railroad, the city's wholesaling sector expanded rapidly, with over a dozen firms in operation by 1860, and 50 by 1896.:46 In 1866, Knoxville-based wholesaler Cowan, McClung and Company was the most profitable company in the state. By the late-1890s, Knoxville had the third-largest wholesaling market in the South.:18
The railroad also led to a boom in the quarrying and production of Tennessee marble, a type of crystalline limestone found in abundance in the ridges surrounding Knoxville. By the early 1890s, twenty-two quarries and three finishing mills were in operation in Knox County alone, and the industry as a whole was generating over a million dollars in annual profits.:204–6 Tennessee marble was used in monumental construction projects across the nation, earning Knoxville the nickname, "The Marble City," during the late 19th century. The Flag of Knoxville, Tennessee incorporates the color white to symbolize marble and displays a derrick used in marble mining.
Knoxville's pre-1850s population consisted primarily of European-American Protestants (of mostly English, Scots-Irish, or German descent) and a small community of free blacks and slaves.:24:30–1 Railroad construction in the 1850s brought to the city large numbers of Irish Catholic immigrants, who helped establish the city's first Catholic congregation in 1851.:24 The Swiss were another important group in 19th-century Knoxville, with businessmen James G. Sterchi and Peter Staub, Supreme Court justice Edward Terry Sanford, philosopher Albert Chavannes, and builder David Getaz all claiming descent from the city's Swiss immigrants.:26–31 Welsh immigrants brought mining and metallurgical expertise to the city in the late 1860s and 1870s.:33
After the Civil War, African Americans, both freed slaves and blacks who had been free prior to the war, played an increasing role in the city's political and economic affairs. Racetrack and saloon owner Cal Johnson, born a slave, was one of the wealthiest African Americans in the state by the time of his death. Attorney William F. Yardley, a member of the city's free black community, was Tennessee's first black gubernatorial candidate in 1876. Knoxville College was founded in 1875 to provide educational opportunities for the city's black community.
Greek immigrants began arriving in Knoxville in significant numbers in the early 20th century. Knoxville's Greek community is perhaps best known for its restaurateurs, namely the Regas family, who operated a restaurant on North Gay Street from 1919 to 2010, and the Paskalis family, who founded the Gold Sun Cafe on Market Square around 1909.:108–9 Notable members of Knoxville's Jewish community included jeweler Max Friedman and department store owner Max Arnstein.:35 One of Knoxville's largest migrant groups consisted of rural people who moved to the city from the surrounding rural counties, often seeking wage-paying jobs in mills.:25–7 Many of Knoxville's political and business leaders throughout the 20th century hailed from rural areas of Southern Appalachia.
Knoxville in the Gilded Age
Swiss immigrant Peter Staub built Knoxville's first opera house, Staub's Theatre, on Gay Street in 1872. This was also one of the first major structures designed by architect Joseph Baumann, who would design many of the city's more prominent late-19th-century buildings. During this same period, the Lamar House Hotel, located across the street from the theater, was a popular gathering place for the city's elite. The hotel hosted lavish masquerade balls, and served oysters, cigars, and imported wines.:70–1
Initially a place for farmers to sell produce, Market Square had evolved into one of the city's commercial and cultural centers by the 1870s. The square's most notable business was Peter Kern's ice cream saloon and confections factory, which hosted numerous festivals for various groups in the late 19th century.:28–32 The square also attracted street preachers, early country musicians,:52–60 and political activists. Women's suffragist Lizzie Crozier French was delivering speeches on Market Square as early as the 1880s.:97–9
After the Civil War, Thomas William Humes was named president of East Tennessee University (renamed the University of Tennessee in 1879), and managed to acquire for the institution the state's Morrill Act land-grant funds, allowing the school to expand. In 1885, Charles McClung McGhee established the Lawson McGhee Library, named for his late daughter, which became the basis of Knox County's public library system. In the early 1870s, Humes managed to obtain a Peabody Fund grant that allowed Knoxville to establish a public school system.
Knoxville's first major annexation following the Civil War came in 1868, when it annexed the city of East Knoxville, an area east of First Creek that had incorporated in 1855.:137–8 In 1882, Knoxville annexed Mechanicsville, which had developed just northwest of the city as a village for Knoxville Iron Company and other factory workers. In the 1870s and 1880s, the development of Knoxville's streetcar system (electrified by William Gibbs McAdoo in 1890) led to the rapid development of suburbs on the city's periphery.:100–101 Neighborhoods such as Fort Sanders, Fourth and Gill, Old North Knoxville, and Parkridge, are all rooted in "streetcar suburbs" developed during this period.
In 1889, the area now consisting of Fort Sanders and the U.T. campus was incorporated as the City of West Knoxville, and the area now consisting of Old North Knoxville and Fourth and Gill was incorporated as the City of North Knoxville. Knoxville annexed both in 1897.:139–147 In 1907, Parkridge, Chilhowee Park, and adjacent neighborhoods incorporated as Park City.:104 Lonsdale, a factory village northwest of the city, and Mountain View, located south of Park City, incorporated that same year.:104 Oakwood, which developed alongside the Southern Railway's Coster rail yard, incorporated in 1913.:104 In 1917, Knoxville annexed these four cities, along with the burgeoning suburb of Sequoyah Hills and parts of South Knoxville, effectively doubling the city's population and increasing its land area from 4 to 26 square miles (10 to 67 km2).:104
As Knoxville grew, the city's boosters continuously touted the city as an industrial boom town in an attempt to lure major companies. In 1910 and 1911, two major national fairs, the Appalachian Expositions, were held at Chilhowee Park. A third, the National Conservation Exposition, was held in 1913. The fairs demonstrated the economic trend known as the "New South," the transition of the South from an agricultural-based economy to an industrial one. The fairs also advocated the responsible usage of the region's natural resources.
Knoxville's rapid growth in the late 19th century led to increased pollution, mainly from the increasing use of coal,:29 and a rise in the crime rate, exacerbated by the influx of large numbers of people with very low-paying jobs.:27 The city, which had suffered serious cholera outbreaks in 1854 and 1873, and smallpox epidemics throughout the 1860s, created a health department in 1879, and established a city hospital in 1884.:91–3 Activists such as Lizzie Crozier French and businessmen such as E.C. Camp established organizations that helped the poor.
By the 1880s, Knoxville had a murder rate higher than that of Los Angeles in the 1990s.:102 Journalist Jack Neely points out that "saloons, whorehouses, cocaine parlors, gambling dens, and poolrooms" lined Central Street from the railroad tracks to the river. High-profile shootouts were not uncommon, the best known being the 1882 Mabry-O'Connor shootout on Gay Street, which left banker Thomas O'Connor, businessman Joseph Mabry, and Mabry's son dead. In 1901, Kid Curry, a member of Butch Cassidy's Wild Bunch, shot and killed two police officers at Ike Jones's Bar on Central. The Kid Curry shooting helped fuel calls for citywide prohibition, which was enacted in 1907.
After World War I, the United States suffered a major economic recession, and Knoxville, like many other cities, experienced an influx of migrants moving to the city in search of work. Racial tensions heightened as poor whites and blacks competed for the few available jobs, and both the Ku Klux Klan and the National Association for the Advancement of Colored People (NAACP) opened chapters in the city. On August 30, 1919, these tensions erupted in the so-called Riot of 1919, the city's worst race riot, which shattered the city's vision of itself as a racially tolerant Southern town.
Transition to a modern city (1920–1960)
In 1912, Knoxvillians replaced their mayor-alderman form of government with a commissioner form of government that consisted of five commissioners elected at-large, and a mayor chosen from among the five.:38–9 Following the 1917 annexations, the city began to struggle as it extended services to the newly annexed areas, and it became clear the new government was ineffective at dealing with the city's financial issues.:38–9 In 1923, the city voted to replace the commissioners with a city manager-council form of government, which involved the election of a city council, who would then hire a city manager to oversee the city's business affairs.:53
The first city manager hired by Knoxville was Louis Brownlow, the successful city manager of Petersburg, Virginia, and a cousin of Parson Brownlow.:39 When Brownlow arrived in Knoxville, he was horrified by the city's condition, later writing that he found "something new and more disturbing" every day.:166 There were no paved roads connecting Knoxville with other major cities. The lone operable tank of the city's waterworks was full of cracks that Knoxvillians had been lazily plugging with gunny sacks.:168 The city hospital was unable to buy drugs, as it was deeply in debt, and its credit had been cut off.:169 City Hall, then located on Market Square, was filthy, noisy and disorganized.:167
Brownlow immediately got to work, negotiating a more favorable bond rate and ordering greater scrutiny of all purchases.:173–9 He also convinced the city to purchase the vacated Tennessee School for the Deaf building for use as a city hall.:180 While Brownlow had some initial success, his initiatives met staunch opposition from South Knoxville councilman Lee Monday, who according to Brownlow, was "representative of that top-of-the-voice screamology of East Tennessee mountain politics.":190 Opposition to Brownlow gradually intensified, especially after he called for a tax increase, and following the election of a less-friendly city council in 1926, Brownlow resigned.:195–8
While Knoxville experienced tremendous growth in the late 19th century, by the early 1900s, the city's economy was beginning to show signs of stagnation.:48–9 The natural resources of the surrounding region were either exhausted or their demand fell sharply, and the decline of railroads in favor of other forms of shipping led to the collapse of the city's wholesaling sector.:59–60 Population growth also declined, though this trend was masked by the 1917 annexations.:36
Historian Bruce Wheeler suggests that the city's overly provincial economic "elite," which had long demonstrated a disdain for change, and the masses of new rural ("Appalachian") and African-American migrants, both of whom were suspicious of government, formed an odd alliance that consistently rejected major attempts at reform.:38–44 As Knoxvillians were adamantly opposed to tax increases, the city consistently had to rely on bond issues to pay for city services.:44 An increasingly greater portion of existing revenues was required to pay interest on these bonds, leaving little money for civic improvements. Urban neighborhoods fell into ruin and the downtown area deteriorated.:44–6 Those who could afford it fled to new suburbs on the city's periphery, such as Sequoyah Hills, Lindbergh Forest, or North Hills.:45–6
During the Great Depression, Knoxville's six largest banks either failed or were forced into mergers.:56 Construction fell 70%, and unemployment tripled.:57–8 African Americans were hit hardest, as business owners began hiring whites for jobs traditionally held by black workers, such as bakers, telephone workers, and road pavers.:58 The city was forced to pay its employees in scrip, and begged creditors to allow it to refinance its debt.:57–8
Federal programs and infrastructure growth
In the 1930s and early 1940s, several major federal programs provided some relief to Knoxvillians suffering amidst the Depression. The Great Smoky Mountains National Park, which wealthy Knoxvillians had led the drive to create, opened in 1932.:55–6 In 1933, the Tennessee Valley Authority (TVA) was established with its headquarters in Knoxville, its initial purpose being to control flooding and improve navigation in the Tennessee River watershed, and provide electricity to the area.:61–3 During World War II, the construction of Manhattan Project facilities in nearby Oak Ridge brought thousands of federal workers to the area, and helped boost Knoxville's economy.:61–3
Kingston Pike saw a boom in tourism in the 1930s and 1940s as it lay along a merged stretch of two cross-country tourism routes, the Dixie Highway and the Lee Highway. During the same period, traffic to the Smokies led to development along Chapman Highway (named for the park's chief promoter, David Chapman) in South Knoxville. In the late 1920s, General Lawrence Tyson donated land off Kingston Pike for McGhee Tyson Airport, named for his son, World War I aviator Charles McGhee Tyson (the airport has since moved to Blount County).:211–5 In the late 1940s, Knoxville replaced its streetcar system with buses.:230 TVA's completion of Fort Loudoun Dam in 1943 brought modifications to Knoxville's riverfront.:118
In 1946, travel writer John Gunther visited Knoxville, and dubbed the city, the "ugliest city" in America.:61–2 He also mocked its puritanical laws regarding liquor sales and the showing of movies on Sunday, and noted the city's relatively high crime rate.:61–2 While Knoxvillians vigorously defended their city, Gunther's comments nevertheless sparked discussions regarding the city's unsightliness and its blue laws. The ordinance forbidding the showing of movies on Sunday was done away with in 1946, with the help of the state legislature.:87 Knoxville legalized packaged liquor in 1961, though the issue remained a contentious one for years.:65
Political factionalism and metropolitan government
The decades following the tumultuous term of Louis Brownlow saw continuous fighting in Knoxville's city council over virtually every major issue. In 1941, Cas Walker, the owner of a grocery store chain and host of a popular local radio (and later television) program, was elected to the city council.:75–7 A successor of sorts to Monday, Walker vehemently opposed every progressive measure introduced in the city council during his 30-year tenure, including fluoridation of the city's water supply, adoption of daylight saving time, library construction, parking meters, and metropolitan government.:77 He also adamantly opposed any attempt to increase taxes.:77 Walker's brash and uncompromising style made him a folk hero to many, especially the city's working class and poor.:75
Knoxville's economy continued to struggle following World War II. The city's textile industry collapsed in the mid-1950s with the closure of Appalachian Mills, Cherokee Mills, Venus Hosiery, and Brookside Mills, leaving thousands unemployed.:98 Major companies refused to build new factories in Knoxville due to a lack of suitable industrial sites. Between 1956 and 1961, 35 companies inquired into establishing major operations in Knoxville, but all 35 chose cities with better-developed industrial parks.:100–2 In 1961, Mayor John Duncan called for a bond issue to develop a new industrial site, but voters rejected the measure.:101
As early as the 1930s, leaders in Knoxville and Knox County had pondered forming a metropolitan government. In the late 1950s, the issue gained momentum, with the support of many city and county officials, and the city's two major newspapers, the News-Sentinel and the Journal.:127 Cas Walker, however, blasted the idea of a metropolitan government as a communist plot, and his old political rival, George Dempster, also rejected the idea.:122 When the measure was presented to voters in 1959, it was soundly defeated, with just 21% of Knoxvillians and 13.8% of Knox Countians supporting it.:123
Knoxville in the 1960s
In 1960, several Knoxville College students, led by Robert Booker and Avon Rollins, engaged in a series of sit-ins to protest segregation at lunch counters in Downtown Knoxville.:125–6 This action prompted downtown department stores to desegregate, and by the end of the decade, most other downtown businesses had followed suit.:126 City schools also gradually desegregated during this period, largely in response to a lawsuit brought by Josephine Goss in 1959.:124–5
Between 1945 and 1975, the University of Tennessee's student body grew from just under 3,000 to nearly 30,000.:63 The school's campus expanded to cover the entire area between Cumberland Avenue and the river west of Second Creek, and the Fort Sanders neighborhood was largely converted into student housing. By the mid-1970s, U.T. employed over 4,000 faculty and staff, providing a boost to the city's economy.:63 The growing popularity of the school's sports teams led to the expansion of Neyland Stadium, one of the largest non-racing stadiums in the nation, and the eventual construction of Thompson–Boling Arena, one of the largest basketball venues in the nation at the time of its completion.
While unemployment declined to just 2.8% in the 1960s, many of the jobs paid low wages, stunting the growth of the city's service sector.:131–6 Large parts of the downtown area continued to deteriorate, and nearly half of all houses in the city's older neighborhoods were considered substandard and in a critical state of decline.:71 A nationwide survey ranked the Mountain View area of East Knoxville 20,875 out of 20,915 urban neighborhoods in terms of housing stock, and President Lyndon Johnson referred to the residents of Mountain View as "people as poverty-ridden as I have seen in any part of the United States.":72, 133
Downtown revitalization efforts
Beginning in the 1950s, Knoxville made serious efforts to reinvigorate the downtown area. One of the city's first major renovation efforts involved the replacement of the large Market House on Market Square with a pedestrian mall.:116–8 The city also made numerous attempts to lure shoppers back to Gay Street, starting with the Downtown Promenade in 1960, in which walkways were constructed behind buildings along the street's eastern half, and continuing with the so-called "Gay Way," which included the widening of sidewalks and the installation of storefront canopies, in 1964.:118 Downtown retailers continued to slip, however, and with the completion of West Town Mall in 1972, the downtown retail market collapsed. Miller's, Kress's, and the three surviving downtown theaters had all closed by 1978.:118, 153
In 1962, Knoxville annexed several large communities, namely Fountain City and Inskip north of the city, and Bearden and West Hills west of the city. This brought large numbers of progressive voters into the city, diluting the influence of Cas Walker and his allies.:138–9 In the early 1970s, Mayor Kyle Testerman, backed by a more open city council, implemented the "1990 Plan," which essentially abandoned attempts to lure large retailers back to the downtown area, aiming instead to create a financial district accompanied by neighborhoods containing a mixture of residences, office space, and specialty shops.:147
In 1978, Knoxville and Knox County voters again voted on the issue of metropolitan government. In spite of support by U.T. president Edward Boling, Mayor Randy Tyree (Testerman's successor), Pilot president Jim Haslam, Knoxville Superintendent of Schools Mildred Doyle, and Knox County judge Howard Bozeman, the initiative again failed.:154 While a majority of Knoxvillians had voted in favor of consolidated government, a majority of Knox Countians had voted against it.:155
1982 World's Fair
In 1974, Downtown Knoxville Association president Stewart Evans, following a discussion with King Cole, president of the 1974 Spokane Exposition, raised the possibility of a similar international exposition for Knoxville.:157 Testerman and Tyree both embraced the fair, though the city council and Knoxvillians in general were initially lukewarm to the idea.:160–1 One key supporter of the fair was rogue banker Jake Butcher, who in 1975 seized control of Knoxville's largest bank, Hamilton National, and shook up the city's conservative banking community.:158 Following his failed gubernatorial campaign in 1978, Butcher turned his attention to the fair initiative, and helped the city raise critical funding.:159
To prepare for the World's Fair, the merged stretch of I-40 and I-75 in West Knoxville was widened, and I-640 was constructed.:162 The old L&N yard along Second Creek, home to a rough neighborhood known as "Scuffletown," was chosen for the fair site, largely for its redevelopment potential.:160 Three hotel chains— Radisson, Hilton, and Holiday Inn— built large hotels in the downtown area in anticipation of the influx of fair visitors.:162 The fair, officially named the International Energy Exposition, was open from May 1 to October 31, 1982, and drew over 11 million visitors.:162 Its success defied the expectations of the Wall Street Journal, which had derided Knoxville as a "scruffy little town," and had predicted the fair would fail.:162
While the fair was profitable, it nevertheless left Knoxville in debt, and failed to spark the redevelopment boom Testerman, Tyree, and the fair's promoters had envisioned.:173 Furthermore, on the day after the fair closed, the FDIC raided all of Butcher's banks, leading to the collapse of his banking empire, and threatening the city's financial stability.:167–8 Testerman replaced an embattled Tyree as mayor in 1983, and attempted to reinvigorate interest in his downtown redevelopment plans.:169–170
1980s, 1990s and 2000s
The second Testerman administration stabilized the city's finances, initiated urban renewal projects in Mechanicsville and East Knoxville, and consolidated Knoxville City and Knox County schools.:173 With the help of rising entrepreneur Chris Whittle, Testerman came up with an updated downtown redevelopment plan, the "1987 Downtown Plan.":171–3 This new plan called for further renovations to Market Square and the beautification of Gay Street.:173
Victor Ashe, Testerman's successor, continued redevelopment efforts, focusing mainly on parks and blighted areas of East and North Knoxville. As the city's westward expansion along Kingston Pike had been thwarted by the incorporation of Farragut as a town in 1980, Ashe, rather than focus on large-scale annexations, turned instead to "finger" annexations, which involved annexing small parcels of land at a time.:178–9 Ashe would make hundreds of such annexations during his 16-year tenure, effectively expanding the city by over 25 square miles.:182
Preservation efforts in Knoxville, which have preserved historic structures such as Blount Mansion, the Bijou Theatre, and the Tennessee Theatre, have intensified in recent years, prompting the designation of numerous historic overlay districts throughout the city. The efforts of developers such as Kristopher Kendrick and David Dewhirst, who have purchased and restored numerous dilapidated buildings, gradually helped lure residents back to the Downtown area. In the 2000s, Knoxville's planners turned their focus to the development of mixed residential and commercial neighborhoods (such as the Old City), cohesive, multipurpose shopping centers (such as Turkey Creek in West Knoxville), and a Downtown area with a mixture of unique retailers, restaurants, and cultural and entertainment venues, all with considerable success.
Historiography of Knoxville
The East Tennessee Historical Society's annual journal, published since 1929, contains numerous articles on Knoxville and Knoxville-area topics. The Society has also published two comprehensive histories of Knoxville and Knox County, The French Broad-Holston Country (1946), edited by Mary Utopia Rothrock, and Heart of the Valley (1976), edited by Lucile Deaderick. In 1982, the Society published a follow-up to Heart of the Valley, William MacArthur's Knoxville: Crossroads of the New South, which includes hundreds of historic photographs. Other comprehensive histories of the city include William Rule's Standard History of Knoxville (1900) and Ed Hooper's Knoxville (2003), the latter being part of Arcadia's "Images of America" series.
The Civil War is one of the most extensively covered periods of Knoxville's history. Two early first-hand accounts of the war in Knoxville are William G. Brownlow's Sketches of the Rise, Progress and Decline of Secession (1862) and the diary of Ellen Renshaw House, edited by Daniel Sutherland and published as A Very Violent Rebel: The Civil War Diary of Ellen Renshaw House (1996). First-hand accounts written after the war include William Rule's The Loyalists of Tennessee in the Late War (1887), Thomas Williams Humes's The Loyal Mountaineers of Tennessee (1888), Oliver Perry Temple's East Tennessee and the Civil War (1899), and Albert Chavannes's East Tennessee Sketches (1900). Modern works include Digby Gordon Seymour's Divided Loyalties: Fort Sanders and the Civil War (1963) and Robert McKenzie's Lincolnites and Rebels (2006).
Knoxville's history from the end of the Civil War to the modern period is covered in Knoxville, Tennessee: Continuity and Change in an Appalachian City (1983), written by Michael McDonald and Bruce Wheeler, and subsequently expanded by Wheeler as Knoxville, Tennessee: A Mountain City in the New South (2005). Mark Banker's Appalachians All (2010) discusses the development of three East Tennessee communities, Knoxville, Cades Cove, and the Clearfork Valley (in Campbell and Claiborne counties).
The history of Knoxville's African American community is covered in Robert Booker's Two Hundred Years of Black Culture in Knoxville, Tennessee: 1791 to 1991 (1994). Booker's The Heat of a Red Summer: Race Mixing, Race Rioting in 1919 Knoxville (2001) details the Riot of 1919. Merrill Proudfoot's Diary of a Sit-In (1962) provides an account of the 1960 Knoxville sit-ins. A significant portion of Charles Cansler's Three Generations: The Story of a Colored Family in Eastern Tennessee (1939) takes place in Knoxville. Native Knoxvillian James Herman Robinson describes his childhood in Knoxville in his autobiography, Road Without Turning (1950).
Since the early 1990s, Metro Pulse editor Jack Neely has written numerous articles (often for his column, "The Secret History") that recall some of the more colorful, odd, obscure, and forgotten aspects of the city's history. Neely's articles have been compiled into several books, including, The Secret History of Knoxville (1995), From the Shadow Side (2003), and Knoxville: This Obscure Prismatic City (2009). Arcadia has published several short books on local topics as part of its "Images of America" series, including Ed Hooper's WIVK (2008) and WNOX (2009), and 1982 World's Fair (2009) by Martha Rose Woodward. Other books on Knoxville topics include Wendy Lowe Besmann's Separate Circle: Jewish Life in Knoxville, Tennessee, which details the development of the city's Jewish community, and Sylvia Lynch's Harvey Logan in Knoxville (1998), which covers Kid Curry's time in the city.
The Junior League of Knoxville's Knoxville: 50 Landmarks (1976), provides descriptions of various historical buildings in the city. A more detailed overview of the city's architectural development is provided in "Historic and Architectural Resources of Knox County" (1994), a pamphlet written by Metropolitan Planning Commission preservationist Ann Bennett for the National Register of Historic Places. The National Register includes over 100 buildings and districts in Knoxville and Knox County, with extensive descriptions of the buildings provided in their respective nomination forms, which are being digitized for the Register's online database.
- Timeline of Knoxville, Tennessee
- History of Tennessee
- List of people from Knoxville, Tennessee
- National Register of Historic Places listings in Knox County, Tennessee
- East Tennessee Historical Society
- W. Bruce Wheeler, Knoxville, Tennessee Encyclopedia of History and Culture, 2009. Retrieved: 17 August 2011.
- Ask Doc Knox, "What's With All This Marble City Business?" Metro Pulse, 10 May 2010. Retrieved: 10 August 2011.
- Fletcher Jolly III, "40KN37: An Early Woodland Habitation Site in Knox County, Tennessee", Tennessee Archaeologist 31, nos. 1-2 (1976), 51.
- Frank H. McClung Museum, "Woodland Period." Retrieved: 25 March 2008.
- James Strange, "An Unusual Late Prehistoric Pipe from Post Oak Island (40KN23)", Tennessee Archaeologist 30, no. 1 (1974), 80.
- Richard Polhemus, The Toqua Site — 40MR6, Vol. I (Norris, Tenn.: Tennessee Valley Authority, 1987), 1240-1246.
- Jefferson Chapman, Tellico Archaeology: 12,000 Years of Native American History (Norris, Tenn.: Tennessee Valley Authority, 1985), p. 97.
- Charles Hudson, Knights of Spain, Warriors of the Sun (Athens, GA: University of Georgia Press, 1997), pp. 204-214.
- Charles Hudson, The Juan Pardo Expeditions: Explorations of the Carolinas and Tennessee, 1566-1568 (Tuscaloosa, Ala.: University of Alabama Press, 2005), pp. 36-45, 62-63.
- Cora Tula Watters, "Shawnee." The Encyclopedia of Appalachia (Knoxville, Tenn.: University of Tennessee Press, 2006), 278-279.
- Ima Stephens, "Creek." The Encyclopedia of Appalachia (Knoxville, Tenn.: University of Tennessee Press, 2006), 252-253.
- James Mooney, Myths of the Cherokee and Sacred Formulas of the Cherokee (Nashville: Charles Elder, 1972), 526.
- William MacArthur, Lucile Deaderick (ed.), "Knoxville's History: An Interpretation," Heart of the Valley: A History of Knoxville, Tennessee (Knoxville, Tenn.: East Tennessee Historical Society, 1976).
- Henry Timberlake, Samuel Williams (ed.), Memoirs, 1756-1765 (Marietta, Georgia: Continental Book Co., 1948), p. 54.
- Stanley Folmsbee and Lucile Deaderick, "The Founding of Knoxville," East Tennessee Historical Society Publications, Vol. 13 (1941), pp. 3-20.
- J.G.M. Ramsey, The Annals of Tennessee to the End of the Eighteenth Century (Johnson City, Tenn.: Overmountain Press, 1999).
- Mary Rothrock (ed.), The French Broad-Holston Country: A History of Knox County, Tennessee (Knoxville, Tenn.: East Tennessee Historical Society, 1946), map facing page 33.
- Samuel Heiskell, Andrew Jackson and Early Tennessee History (Nashville: Ambrose Publishing Company, 1918), pp. 46-81.
- C. E. Allred, et al., "Farming From the Beginning to 1860," The French Broad-Holston Country: A History of Knox County, Tennessee (Knoxville, Tenn.: East Tennessee Historical Society, 1972).
- Andre Michaux, Travels to the Westward of the Allegany Mountains in the States of Ohio, Kentucky, and Tennessee, In the Year 1802 (London: Barnard and Sultzer, 1805), p. 89.
- James Lee, Naphtali Luccock, and James Dixon, The Illustrious History of Methodism (New York: Methodist Magazine Publishing Company, 1900), p. 376.
- John Reynolds, Reynolds' History of Illinois (Chicago: Chicago Historical Society, 1879), pp. 65-66.
- Aelred Gray and Susan Adams, Lucile Deaderick (ed.), "Government," Heart of the Valley: A History of Knoxville, Tennessee (Knoxville, Tenn.: East Tennessee Historical Society, 1976).
- Stanley Folmsbee, Mary Rothrock (ed.), "Transportation Prior to the Civil War," The French Broad-Holston Country: A History of Knox County, Tennessee (Knoxville, Tenn.: East Tennessee Historical Society, 1972).
- Stanley Folmsbee, Sectionalism and Internal Improvements in Tennessee, 1796-1845 (Knoxville, Tenn.: East Tennessee Historical Society, 1939), pp. 28-32, 54-55, 83-86, 132, 161.
- Laura Luttrell, "One Hundred Years of a Female Academy: The Knoxville Female Academy, 1811–1846; The East Tennessee Female Institute, 1846–1911." East Tennessee Historical Society Publications, Vol. 17 (1945), p. 72.
- H. Ruffner, "Notes of a Tour From Virginia to Tennessee, In the Months of July and August, 1838," Southern Literary Messenger, Vol. 5, No. 4 (April 1839), p. 270.
- James Gray Smith, A Brief Historical, Statistical and Descriptive Review of East Tennessee, U.S.A. (London: J. Leath, 1842), p. 22.
- Dean Novelli, "On a Corner of Gay Street: A History of the Lamar House—Bijou Theater, Knoxville, Tennessee, 1817 – 1985." East Tennessee Historical Society Publications, Vol. 56 (1984), pp. 3-45.
- Jack Neely, Market Square: A History of the Most Democratic Place on Earth (Knoxville, Tenn.: Market Square District Association, 2009).
- Robert McKenzie, Lincolnites and Rebels: A Divided Town in the American Civil War (New York: Oxford University Press, 2006).
- E. Merton Coulter, William G. Brownlow: Fighting Parson of the Southern Highlands (Knoxville, Tenn.: University of Tennessee Press, 1999).
- William Gannaway Brownlow, Sketches of the Rise, Progress, and Decline of Secession (Philadelphia: G.W. Childs, 1862).
- McKenzie (p. 36) notes that Ramsey called for the reopening of the Atlantic slave trade. Brownlow revealed similar views in his debate with Abram Tyne in 1857.
- "Proceedings of the E.T. Convention: Held at Knoxville, May 30th and 31st, 1861 and at Greeneville, on the 17th day of June, 1861, and following days" (Knoxville, Tenn.: H. Barry's Book and Job Office, 1861).
- Oliver P. Temple, East Tennessee and the Civil War (Johnson City, Tenn.: Overmountain Press, 1995).
- Digby Gordon Seymour, Divided Loyalties: Fort Sanders and the Civil War in East Tennessee (Knoxville, Tenn.: University of Tennessee Press, 1963).
- Jerome Taylor, "The Extraordinary Life and Death of Joseph A. Mabry," East Tennessee Historical Society Publications, No. 44 (1972), pp. 41-70.
- Fred Brown, "Two Knox Combatants Carried Civil War Grudges Back Home," Knoxville News-Sentinel, 31 July 1994.
- Thomas Alexander, Thomas A. R. Nelson of East Tennessee (Nashville: Tennessee Historical Commission, 1956), pp. 152-166.
- John Wooldridge, George Mellen, William Rule (ed.), Standard History of Knoxville, Tennessee (Chicago: Lewis Publishing Company, 1900; reprinted by Kessinger Books, 2010).
- Edwin Patton, Lucile Deaderick (ed.), "Transportation Development," Heart of the Valley: A History of Knoxville, Tennessee (Knoxville, Tenn.: East Tennessee Historical Society, 1976).
- William Bruce Wheeler, Knoxville, Tennessee: A Mountain City in the New South (Knoxville, Tenn.: University of Tennessee Press, 2005).
- C. P. White, Mary Rothrock (ed.), "Commercial and Industrial Trends since 1865," The French Broad-Holston Country (Knoxville, Tenn.: East Tennessee Historical Society, 1972), p. 222.
- Ordinance No. 958, October 16, 1896, Knoxville Minute Book, Book L, pp. 370-371.
- Mark Banker, Appalachians All: East Tennessee and the Elusive History of an American Region (Knoxville, Tenn.: University of Tennessee Press, 2010).
- Ann Bennett, "Historic and Architectural Resources in Knoxville and Knox County, Tennessee," National Register of Historic Places Multiple Property Listing Registration Form, May 1994, Section E.
- Becky French Brewer and Douglas Stuart McDaniel, Park City (Arcadia Publishing, 2005), p. 38.
- Lewis Laska, William F. Yardley, Tennessee Encyclopedia of History and Culture, 2009. Retrieved: 10 August 2011.
- Cynthia Griggs Fleming, Knoxville College, Tennessee Encyclopedia of History and Culture, 2009. Retrieved: 10 August 2011.
- Ask Doc Knox, "Turn-of-the-Century Life in Knoxville's Bowery," Metro Pulse, 19 May 2010. Retrieved: 10 August 2011.
- Josh Flory, "Regas Restaurant's Closing Stirs Memories," Knoxville News Sentinel, 30 December 2010. Retrieved: 10 August 2011.
- Knoxville leaders born in rural Southern Appalachian counties include Cas Walker, John Duncan, Randy Tyree, Jake Butcher, and Chris Whittle.
- Lucile Deaderick, Heart of the Valley: A History of Knoxville, Tennessee (Knoxville, Tenn.: East Tennessee Historical Society, 1976), p. 494.
- Jack Neely, Knoxville's Secret History (Scruffy Books, 1995).
- Milton Klein, University of Tennessee, Tennessee Encyclopedia of History and Culture, 2009. Retrieved: 10 August 2011.
- Martha Ellison, Mary Rothrock (ed.), "The Library Movement After the Civil War," The French Broad-Holston Country: A History of Knox County, Tennessee (Knoxville, Tenn.: East Tennessee Historical Society, 1972), pp. 243-244.
- Paul Kelley, Lucile Deaderick (ed.), "Education," Heart of the Valley: A History of Knoxville, Tennessee (Knoxville, Tenn.: East Tennessee Historical Society, 1976), p. 243.
- Robert Lukens, Appalachian Exposition of 1910, Tennessee Encyclopedia of History and Culture, 2009. Retrieved: 10 August 2011.
- Jack Neely, "Detour de Knoxville," Metro Pulse, 28 May 2008. Retrieved: 10 August 2011.
- James B. Jones, Jr., "A Blood Feud in Nineteenth Century Knoxville, Tennessee," 27 July 2009. Retrieved: 10 August 2011.
- Jack Neely, "Knoxville's Oldest Bar," Metro Pulse, 20 August 2008. Retrieved: 10 August 2011.
- Matthew Lakin, "'A Dark Night': The Knoxville Race Riot of 1919," Journal of East Tennessee History, 72 (2000), pp. 1-29.
- Louis Brownlow, A Passion for Anonymity (University of Chicago Press, 1958).
- Jack Neely, From the Shadow Side: And Other Stories of Knoxville, Tennessee (Oak Ridge, Tenn.: Tellico Books, 2003), p. 125.
- Knoxville-Knox County Metropolitan Planning Commission, Preservation Works: Mayor's Task Force on Historic Preservation, 18 August 2000. Retrieved: 18 August 2011.
- Ronda Robinson, "Revered Developer Kendrick Dies," 4 May 2009. Retrieved: 18 August 2011.
- Josh Flory, "Dewhirst Plans to Convert Downtown Building into Apartments, Retail Shops," Knoxville News Sentinel, 8 August 2008. Retrieved: 18 August 2011.
- H. Blount Hunter, Downtown Knoxville Redevelopment Strategy, May 2007, pp. 3-6, 10-12.
Have you ever stored some piece of information in a computer and wondered how the computer represents that information? To you it might seem like a bunch of text or a bunch of numbers, but the computer sees this information differently. This is part of the reason a computer can perform more rigorous and complex computations than all human effort combined, and it is made possible through the concept of "data types".
Because the computer knows how to represent data in memory, it can manipulate that data efficiently. A human will tell you out of the box that 1 + 1 = 2, but a computer can't directly give you the answer without first checking how the expression is represented in memory. This is where the concept of data types comes into play.
Table of Contents
- What is a data type
- Basic Data types in Python
- Have the latest version of Python installed on your computer
What is a Data Type
A data type is an attribute of data that tells the computer how the data is to be used. The data type of a variable defines the meaning of the data, the operations that can be performed on it, and how it is stored in memory.
Basic Data Types in Python
Python provides a rich set of built-in data types that can be used in programs without creating them from scratch. However, you can also write your own data types in Python to add custom features not included in the ones Python provides.
This article discusses some of the basic data types in Python.
The three numeric data types Python provides are integers, floats, and complex numbers.
An integer is a whole number; it can be positive or negative. In Python there is no limit to how large an integer can be: it can have as many digits as you want, constrained only by the memory of your computer. Numbers like -1, 0, 3, and 99999999999999999 are a perfect fit for integers.
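As a quick sketch of this behaviour (the particular numbers and variable names here are arbitrary, chosen only for illustration):

```python
# Integers in Python have arbitrary precision: they grow as needed,
# limited only by available memory.
small = -1
big = 99999999999999999 ** 3   # a 51-digit integer, computed exactly with no overflow

print(type(small))    # <class 'int'>
print(len(str(big)))  # 51
```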
A float in Python is any number that has a decimal point; it designates a floating-point number. Floats can be positive or negative, and you can write them with as many digits as you like (though, as noted below, they are stored with limited precision). Numbers like -0.1, 0.0, 1.5, and 12345.678890000000 are floats. Note that a floating-point literal can contain only one decimal point; using more than one decimal point in a number will throw a SyntaxError. You can also append the letter E (or e) followed by an integer to write a number in scientific notation.
Python represents floating-point numbers as 64-bit double-precision values on almost all platforms. This implies that the largest value a floating-point number can take is approximately 1.8 × 10^308; any number greater than this is indicated by the string inf, meaning infinity. Similarly, the smallest positive value a float can represent is about 5.0 × 10^-324; any positive number smaller than this is effectively zero.
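A minimal sketch of these limits; the literals are illustrative, and sys.float_info reports the actual thresholds on your platform:

```python
import sys

f = 12345.6789   # a float literal with a decimal point
g = 1.5e3        # scientific notation: 1.5 x 10**3, i.e. 1500.0

print(f, g)
print(sys.float_info.max)   # roughly 1.7976931348623157e+308
print(1e308 * 10)           # exceeds the maximum, so it prints inf
print(5e-324)               # the smallest positive float
print(5e-324 / 2)           # anything smaller underflows to 0.0
```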
A complex number is a number containing a real and an imaginary part. A typical complex number looks like a+bj, where a is the real part and b is the imaginary part. You can create a complex number using the complex() function, which takes two arguments: the first is the real part and the second is the imaginary part.
You can alternatively create a complex number by just appending a j to the end of the imaginary part. This sets the real part to zero and the imaginary part to whatever value was specified.
Complex numbers also provide two attributes that you can use to get the real and imaginary parts of the number. If the variable a is a complex number, then you can access its real and imaginary parts as a.real and a.imag.
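A short sketch of both ways of building a complex number and of the real and imag attributes (the values are arbitrary examples):

```python
# Two ways to build the same complex number
c1 = complex(2, 3)   # complex(real, imag)
c2 = 2 + 3j          # literal form using the j suffix

print(c1 == c2)      # True
print(c1.real)       # 2.0
print(c1.imag)       # 3.0

# Appending j alone gives a purely imaginary number (real part 0)
print(4j)                # 4j
print(4j.real, 4j.imag)  # 0.0 4.0
```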
Boolean values in Python are used to represent truth values. They consist of the two constants True and False. Boolean values can also behave like integers when used in numeric contexts, where True evaluates to 1 and False evaluates to 0.
Python provides the built-in function bool(), which can be used to convert any value to a Boolean. The bool() function takes an input and returns either True or False depending on that input. By default, the output is True; it is False if the input falls under any of the following (illustrated in the sketch after this list):
- The constants None and False return False
- The zero value of any numeric data type (0, 0.0, 0j) returns False
- Empty sequences and collections ('', (), [], {}) return False
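A minimal sketch of bool() applied to a few representative values (the specific examples are illustrative):

```python
# Falsy values: zero of any numeric type, empty sequences/collections,
# and the constants None and False
print(bool(0), bool(0.0), bool(0j))   # False False False
print(bool(''), bool([]), bool(()))   # False False False
print(bool(None), bool(False))        # False False

# Everything else is truthy by default
print(bool(42), bool('hi'), bool([0]))  # True True True

# Booleans behave like integers in numeric contexts
print(True + True)    # 2
print(True * 10)      # 10
```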
Boolean values are handy when using conditionals and loops such as if … else, while, and for in your programs. The truth value of a boolean expression is what determines whether a particular section of code is executed or not in loops and conditionals.
In Python, a sequence is an ordered collection of data. The items can be of the same or of different data types. Some of the sequence types in Python include:
A string is a sequence of letters, spaces, punctuation marks, or even numbers; it is an immutable sequence of code points. Python handles any form of textual data using the str object. Unlike integers, strings are represented differently in the computer's memory, which is why they cannot be used to perform arithmetic operations.
Strings are represented using either single, double, or triple quotes. Python does not have a type for a single character. A single character is a string with a length of 1.
- Creating Strings
# Creating a string with a single quote
single = 'Hello, World!'
print(single)

# Creating a string with a double quote
double = "Hi, i'm Prince"
print(double)

# Creating a string with a triple quote
triple = """
The conversation went as follows:
Me: Hello
Her: Hey
"""
print(triple)

## Output
# Hello, World!
# Hi, i'm Prince
# The conversation went as follows:
# Me: Hello
# Her: Hey
Python represents strings in either single, double, or triple quotes. The starting quote of a string must be the same as the ending quote: a string cannot start with a single quote and end with a double quote; this will throw a SyntaxError. But a single-quoted string can contain double quotes inside it, and a double-quoted string can contain single quotes inside it, like the double variable in the above snippet.
A triple-quoted string can span multiple lines, making it possible to write multi-line strings.
- Accessing the elements of a string
In Python, the elements of a string can be accessed using indexing. Indexing lets us use the position of a character in the string to access it. The first index of a string is 0, the second is 1, and so on. The last index of a string is -1, the second to last is -2, and so on.
# Accessing elements of a string
message = "Hello, World!"
print(message[0])    # the first element of the string
print(message[-1])   # the last element of the string
print(message[50])   # any index past the end of the string raises an IndexError

## Output
# H
# !
# IndexError: string index out of range
When accessing string elements using indexing, ensure that the index is within the range of the string. If the index is out of range, Python will throw an IndexError.
A list is a mutable sequence used to store data. Lists in Python are similar to arrays in other languages, except that their elements don't have to be of the same type. A list can be created either by using a pair of square brackets containing zero or more elements, or by using the type constructor list(), which takes in an iterable. An iterable is any sequence or container that can be iterated over (looped through).
# Lists
# create an empty list
a = []
b = list()
print(a)
print(b)

# list with values of different types
mixed = [1, 2, 3, 4, True, False, 'a', 'b', 'c', 'd']
print(mixed)

# using the list() function
list_function = list(mixed)  # create a copy of the `mixed` list
print(list_function)

iterable = list(range(10))
print(iterable)

tuple_inside_function = list((1, 2, 3))
print(tuple_inside_function)

## Output
# []
# []
# [1, 2, 3, 4, True, False, 'a', 'b', 'c', 'd']
# [1, 2, 3, 4, True, False, 'a', 'b', 'c', 'd']
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
# [1, 2, 3]
A list can be either single-dimensional or multidimensional. A single-dimensional list contains a simple sequence of data. A multidimensional list contains other iterables, such as lists; a list containing lists is an example of a multidimensional list.
# List dimensions
# single-dimensional
alphanumeric = ['a', 'b', 'c', 'z', 0, 1, 2, 9]
print(alphanumeric)
print(alphanumeric[5])      # access a single element by index

# multidimensional
alphanumeric = [['a', 'b', 'c', 'd'], [0, 1, 2, 9]]
print(alphanumeric)
print(alphanumeric[0][2])   # access an element of an inner list

## Output
# ['a', 'b', 'c', 'z', 0, 1, 2, 9]
# 1
# [['a', 'b', 'c', 'd'], [0, 1, 2, 9]]
# c
The general pattern for a multidimensional list is a = [b, b, ...], where a is the outer list and each b is an inner list, or sublist.
Tuples in Python are immutable sequences of data that cannot be modified after they are created. Similar to lists, tuples can contain different data types and the elements are accessed using indexing.
Tuples are created using the built-in tuple() function, which takes in an iterable. They can also be created with or without parentheses around the tuple elements. A tuple can contain a single element, but that element must be followed by a trailing comma for it to be a tuple.
# Tuples
# empty tuple
empty = ()
print(empty)

# singleton tuple
single = 2,
print(single)

# tuple with strings
string_tuple = ('Hello',)
print(string_tuple)

# tuple from list
_list = [1, 2, 3]
list_tuple = tuple(_list)
print(list_tuple)

# nested tuple
tuple_a = ('a', 'b', 'c')
tuple_b = (1, 2, 3)
combined = (tuple_a, tuple_b)
print(combined)

## Output
# ()
# (2,)
# ('Hello',)
# (1, 2, 3)
# (('a', 'b', 'c'), (1, 2, 3))
Python has a number of other built-in types that you can check out in the official documentation. While these types are handy and provide basic functionality, Python also allows you to write your own data structures that can be used as types, with whatever custom functionality you need.
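As a hedged illustration of that last point, here is a tiny user-defined type; the Point class and its fields are invented for this example and are not part of Python's built-ins:

```python
class Point:
    """A minimal user-defined data type representing a 2D point."""

    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __add__(self, other):
        # Custom behaviour for the + operator
        return Point(self.x + other.x, self.y + other.y)

    def __repr__(self):
        return f"Point({self.x}, {self.y})"

p = Point(1, 2) + Point(3, 4)
print(p)          # Point(4, 6)
print(type(p))    # <class '__main__.Point'>
```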
This article discussed the basic data types needed to get you started on your Python journey. To learn more about Python's built-in data types, check out the official documentation.
Feel free to drop your thoughts, suggestions, and questions in the discussion box; I will be available to attend to them.
Top comments (14)
This is very uncommon. Possibly I've never seen this
Rather add the brackets.
This is incorrect and needs a comma.
Simply adding brackets is meaningless there.
Make sure to add the comma:
Thank you for the review on the string_tuple variable. It was a typo. I'll effect the change immediately.
I wrote a cheatsheet for Python data types and how to use them in case you are interested. I keep the focus on syntax and examples
Great cheatsheet over here @michaelcurrin . Is it open for contribution? I want to see how I can add to it.
Sure. Check the github link in the corner.
I'm accepting new content for the languages and tools covered.
Make a PR for a small change or an issue to discuss a bigger change first
You may be interested in the raw string.
Useful if you want literal text instead of special characters. Or for regex.
Yes, indeed True and False are just aliases for 1 and 0.
Python is implemented in C language which actually has no boolean type so 1 and 0 are used instead.
I've never seen anyone do True + True though, lol.
Yeah, true. You might not see someone use True + True in their code because of its limited practical use. But it is necessary for a beginner to know how these things work under the hood.
Well, when I was learning Python, I didn't know that True + True would evaluate to 2. So I think including it in an example will go a long way to help others who want to learn.
Anyway, thanks for your reviews. They are absolutely on point 🚀🚀🚀
What I mean is, when teaching a beginner syntax, get them to learn code that is best practice and common, i.e. followed by most Python coders, so they pick up good habits.
And then as a side note you can mention the alternative syntax without brackets as valid but not encouraged, and like you said something for den
In this case it is covered in the official style guide for Python in PEP8, which will be enforced by style and lint tools too.
It says brackets around the comma are recommended, and that leaving them out is less readable.
Yeah. Great point here @keonigarner . I think stating personal experience may sound biased.
My experience may indeed be different from others'. That's why I backed my comment with a link to the standard style Python programmers are recommended to follow. Not everyone follows it, or follows it completely, but it encourages quality (readable and debuggable code) and consistency (if you change codebases or change jobs you probably won't have to change styles, because you already follow the prevalent style, which is documented and agreed upon).
I also recommend using a tool like pylint, flake8, or black to give warnings and automated fixes for code style.
They are going to default to a prevalent standard to avoid debate between coders on what is right, as the tools enforce a standard for you.
You can also override with a config if you want lines 100 characters long instead of 79, or want to change how code is wrapped or variables are named, if you want to deviate from the style.
Music and mathematics
Music theory has no axiomatic foundation in modern mathematics yet the basis of musical sound can be described mathematically (in acoustics) and exhibits "a remarkable array of number properties". Elements of music such as its form, rhythm and metre, the pitches of its notes and the tempo of its pulse can be related to the measurement of time and frequency, offering ready analogies in geometry.
The attempt to structure and communicate new ways of composing and hearing music has led to musical applications of set theory, abstract algebra and number theory. Some composers have incorporated the golden ratio and Fibonacci numbers into their work.
Though ancient Chinese, Indians, Egyptians and Mesopotamians are known to have studied the mathematical principles of sound, the Pythagoreans (in particular Philolaus and Archytas) of ancient Greece were the first researchers known to have investigated the expression of musical scales in terms of numerical ratios, particularly the ratios of small integers. Their central doctrine was that "all nature consists of harmony arising out of numbers".
From the time of Plato harmony was considered a fundamental branch of physics, now known as musical acoustics. Early Indian and Chinese theorists show similar approaches: all sought to show that the mathematical laws of harmonics and rhythms were fundamental not only to our understanding of the world but to human well-being. Confucius, like Pythagoras, regarded the small numbers 1, 2, 3, 4 as the source of all perfection.
Time, rhythm and meter
Without the boundaries of rhythmic structure – a fundamental equal and regular arrangement of pulse repetition, accent, phrase and duration – music would not be possible. Modern musical use of terms like meter and measure also reflects the historical importance of music, along with astronomy, in the development of counting, arithmetic and the exact measurement of time and periodicity that is fundamental to physics.
The elements of musical form often build strict proportions or hypermetric structures (powers of the numbers 2 and 3).
Musical form is the plan by which a short piece of music is extended. The term "plan" is also used in architecture, to which musical form is often compared. Like the architect, the composer must take into account the function for which the work is intended and the means available, practicing economy and making use of repetition and order. The common types of form known as binary and ternary ("twofold" and "threefold") once again demonstrate the importance of small integral values to the intelligibility and appeal of music.
Frequency and harmony
A musical scale is a discrete set of pitches used in making or describing music. The most important scale in the Western tradition is the diatonic scale but many others have been used and proposed in various historical eras and parts of the world. Each pitch corresponds to a particular frequency, expressed in hertz (Hz), sometimes referred to as cycles per second (c.p.s.). A scale has an interval of repetition, normally the octave. The octave of any pitch refers to a frequency exactly twice that of the given pitch.
Succeeding superoctaves are pitches found at frequencies four, eight, sixteen times, and so on, of the fundamental frequency. Pitches at frequencies of half, a quarter, an eighth and so on of the fundamental are called suboctaves. There is no case in musical harmony where, if a given pitch be considered accordant, that its octaves are considered otherwise. Therefore, any note and its octaves will generally be found similarly named in musical systems (e.g. all will be called doh or A or Sa, as the case may be).
When expressed as a frequency bandwidth an octave A2–A3 spans from 110 Hz to 220 Hz (span=110 Hz). The next octave will span from 220 Hz to 440 Hz (span=220 Hz). The third octave spans from 440 Hz to 880 Hz (span=440 Hz) and so on. Each successive octave spans twice the frequency range of the previous octave.
Because we are often interested in the relations or ratios between the pitches (known as intervals) rather than the precise pitches themselves in describing a scale, it is usual to refer to all the scale pitches in terms of their ratio from a particular pitch, which is given the value of one (often written 1/1), generally a note which functions as the tonic of the scale. For interval size comparison, cents are often used.
Common name | Example note | Frequency (Hz) | Multiple of fundamental | Ratio within octave | Cents within octave
Fundamental | A2 | 110 | 1x | 1/1 = 1x | 0
Octave | A3 | 220 | 2x | 2/1 = 2x (2/2 = 1x) | 1200 (0)
Perfect Fifth | E4 | 330 | 3x | 3/2 = 1.5x | 702
Octave | A4 | 440 | 4x | 4/2 = 2x (4/4 = 1x) | 1200 (0)
Major Third | C♯5 | 550 | 5x | 5/4 = 1.25x | 386
Perfect Fifth | E5 | 660 | 6x | 6/4 = 1.5x | 702
Harmonic seventh | G5 | 770 | 7x | 7/4 = 1.75x | 969
Octave | A5 | 880 | 8x | 8/4 = 2x (8/8 = 1x) | 1200 (0)
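The cents figures in the table above follow directly from the frequency ratios; a short sketch of the conversion in Python, using the definition of 1200 cents per octave (the function name is illustrative):

```python
import math

def ratio_to_cents(ratio):
    """Size of an interval, given as a frequency ratio, in cents."""
    return 1200 * math.log2(ratio)

print(round(ratio_to_cents(2 / 1)))   # 1200  (octave)
print(round(ratio_to_cents(3 / 2)))   # 702   (just perfect fifth)
print(round(ratio_to_cents(5 / 4)))   # 386   (just major third)
```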
There are two main families of tuning systems: equal temperament and just tuning. Equal temperament scales are built by dividing an octave into intervals which are equal on a logarithmic scale, which results in perfectly evenly divided scales, but with ratios of frequencies which are irrational numbers. Just scales are built by multiplying frequencies by rational numbers, which results in simple ratios between frequencies, but with scale divisions that are uneven.
One major difference between equal temperament tunings and just tunings is differences in acoustical beat when two notes are sounded together, which affects the subjective experience of consonance and dissonance. Both of these systems, and the vast majority of music in general, have scales that repeat on the interval of every octave, which is defined as frequency ratio of 2:1. In other words, every time the frequency is doubled, the given scale repeats.
Below are Ogg Vorbis files demonstrating the difference between just intonation and equal temperament. You may need to play the samples several times before you can pick the difference.
- Two sine waves played consecutively – this sample has half-step at 550 Hz (C♯ in the just intonation scale), followed by a half-step at 554.37 Hz (C♯ in the equal temperament scale).
- Same two notes, set against an A440 pedal – this sample consists of a "dyad". The lower note is a constant A (440 Hz in either scale), the upper note is a C♯ in the equal-tempered scale for the first 1", and a C♯ in the just intonation scale for the last 1". Phase differences make it easier to pick the transition than in the previous sample.
5-limit tuning, the most common form of just intonation, is a system of tuning using tones that are regular number harmonics of a single fundamental frequency. This was one of the scales Johannes Kepler presented in his Harmonices Mundi (1619) in connection with planetary motion. The same scale was given in transposed form by Scottish mathematician and musical theorist, Alexander Malcolm, in 1721 in his 'Treatise of Musick: Speculative, Practical and Historical', and by theorist Jose Wuerschmidt in the 20th century. A form of it is used in the music of northern India.
American composer Terry Riley also made use of the inverted form of it in his "Harp of New Albion". Just intonation gives superior results when there is little or no chord progression: voices and other instruments gravitate to just intonation whenever possible. However, it gives two different whole tone intervals (9:8 and 10:9) because a fixed tuned instrument, such as a piano, cannot change key. To calculate the frequency of a note in a scale given in terms of ratios, the frequency ratio is multiplied by the tonic frequency. For instance, with a tonic of A4 (A natural above middle C), the frequency is 440 Hz, and a justly tuned fifth above it (E5) is simply 440×(3:2) = 660 Hz.
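A sketch of that calculation in Python, multiplying a tonic frequency by just ratios; the dictionary below is one common choice of 5-limit ratios for a major scale, not the only possible one:

```python
from fractions import Fraction

tonic = 440.0  # A4

# A common 5-limit just major scale, as ratios from the tonic
just_major = {
    "unison": Fraction(1, 1),
    "major second": Fraction(9, 8),
    "major third": Fraction(5, 4),
    "perfect fourth": Fraction(4, 3),
    "perfect fifth": Fraction(3, 2),
    "major sixth": Fraction(5, 3),
    "major seventh": Fraction(15, 8),
    "octave": Fraction(2, 1),
}

for name, ratio in just_major.items():
    print(f"{name:15s} {float(tonic * ratio):7.2f} Hz")
# e.g. the just fifth above A4 is 440 * 3/2 = 660.00 Hz
```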
Pythagorean tuning is tuning based only on the perfect consonances: the (perfect) octave, perfect fifth, and perfect fourth. Thus the major third is considered not a third but a ditone, literally "two tones", and is (9:8)² = 81:64, rather than the independent and harmonic just 5:4 = 80:64 directly below. A whole tone is a secondary interval, being derived from two perfect fifths reduced by an octave, (3:2)²/2 = 9:8.
The just major third, 5:4 and minor third, 6:5, are a syntonic comma, 81:80, apart from their Pythagorean equivalents 81:64 and 32:27 respectively. According to Carl Dahlhaus (1990, p. 187), "the dependent third conforms to the Pythagorean, the independent third to the harmonic tuning of intervals."
Western common practice music usually cannot be played in just intonation but requires a systematically tempered scale. The tempering can involve either the irregularities of well temperament or be constructed as a regular temperament, either some form of equal temperament or some other regular meantone, but in all cases will involve the fundamental features of meantone temperament. For example, the root of chord ii, if tuned to a fifth above the dominant, would be a major whole tone (9:8) above the tonic. If tuned a just minor third (6:5) below a just subdominant degree of 4:3, however, the interval from the tonic would equal a minor whole tone (10:9). Meantone temperament reduces the difference between 9:8 and 10:9. Their ratio, (9:8)/(10:9) = 81:80, is treated as a unison. The interval 81:80, called the syntonic comma or comma of Didymus, is the key comma of meantone temperament.
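Exact rational arithmetic makes the comma calculation easy to verify; a short sketch using Python's fractions module:

```python
from fractions import Fraction

major_whole_tone = Fraction(9, 8)    # root of ii tuned a fifth above the dominant
minor_whole_tone = Fraction(10, 9)   # root of ii tuned a just minor third below the subdominant

syntonic_comma = major_whole_tone / minor_whole_tone
print(syntonic_comma)                # 81/80

# The Pythagorean and just major thirds differ by the same comma
print(Fraction(81, 64) / Fraction(5, 4))   # 81/80
```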
Equal temperament tunings
In equal temperament, the octave is divided into equal parts on the logarithmic scale. While it is possible to construct an equal temperament scale with any number of notes (for example, the 24-tone Arab tone system), the most common number is 12, which makes up the equal-temperament chromatic scale. In Western music, a division into twelve intervals is commonly assumed unless it is specified otherwise.
For the chromatic scale, the octave is divided into twelve equal parts, each semitone (half-step) is an interval of the twelfth root of two so that twelve of these equal half steps add up to exactly an octave. With fretted instruments it is very useful to use equal temperament so that the frets align evenly across the strings. In the European music tradition, equal temperament was used for lute and guitar music far earlier than for other instruments, such as musical keyboards. Because of this historical force, twelve-tone equal temperament is now the dominant intonation system in the Western, and much of the non-Western, world.
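A short sketch of building the equal-tempered chromatic scale from the twelfth root of two, and of comparing its fifth with the just 3:2 (the starting pitch A4 = 440 Hz is just a convenient reference):

```python
semitone = 2 ** (1 / 12)   # twelfth root of two

a4 = 440.0
chromatic = [a4 * semitone ** n for n in range(13)]  # A4 up to A5
print([round(f, 2) for f in chromatic])

# Twelve equal semitones add up to exactly one octave
print(round(semitone ** 12, 10))   # 2.0

# The equal-tempered fifth (7 semitones) is slightly narrower than 3:2
print(semitone ** 7)               # 1.4983070768766815
print(3 / 2)                       # 1.5
```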
Equally tempered scales have been used and instruments built using various other numbers of equal intervals. The 19 equal temperament, first proposed and used by Guillaume Costeley in the 16th century, uses 19 equally spaced tones, offering better major thirds and far better minor thirds than normal 12-semitone equal temperament at the cost of a flatter fifth. The overall effect is one of greater consonance. 24 equal temperament, with 24 equally spaced tones, is widespread in the pedagogy and notation of Arabic music. However, in theory and practice, the intonation of Arabic music conforms to rational ratios, as opposed to the irrational ratios of equally tempered systems.
While any analog to the equally tempered quarter tone is entirely absent from Arabic intonation systems, analogs to a three-quarter tone, or neutral second, frequently occur. These neutral seconds, however, vary slightly in their ratios dependent on maqam, as well as geography. Indeed, Arabic music historian Habib Hassan Touma has written that "the breadth of deviation of this musical step is a crucial ingredient in the peculiar flavor of Arabian music. To temper the scale by dividing the octave into twenty-four quarter-tones of equal size would be to surrender one of the most characteristic elements of this musical culture."
The following graph reveals how accurately various equal-tempered scales approximate three important harmonic identities: the major third (5th harmonic), the perfect fifth (3rd harmonic), and the "harmonic seventh" (7th harmonic). [Note: the numbers above the bars designate the equal-tempered scale (i.e., "12" designates the 12-tone equal-tempered scale, etc.)]
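The comparison behind such a graph can be reproduced numerically: for an n-tone equal temperament, take the scale step closest to each just interval and measure the error in cents. A sketch along those lines (the particular divisions shown are an arbitrary selection):

```python
import math

def cents(ratio):
    """Size of an interval ratio in cents (1200 cents per octave)."""
    return 1200 * math.log2(ratio)

# Just intervals built from the 3rd, 5th and 7th harmonics (octave-reduced)
targets = {"perfect fifth (3:2)": 3 / 2,
           "major third (5:4)": 5 / 4,
           "harmonic seventh (7:4)": 7 / 4}

for n in (12, 19, 24, 31):
    step = 1200 / n
    errors = []
    for name, ratio in targets.items():
        just = cents(ratio)
        nearest = round(just / step) * step   # closest available scale step
        errors.append(f"{name}: {nearest - just:+.1f}")
    print(f"{n}-TET  " + ",  ".join(errors))
```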
| Note | Frequency (Hz) | Difference from previous note (Hz) | Log frequency (log₂ f) | Log frequency difference |
|---|---|---|---|---|
| A2 | 110.00 | N/A | 6.781 | N/A |
| A♯2 | 116.54 | 6.54 | 6.864 | 0.0833 (or 1/12) |
| B2 | 123.47 | 6.93 | 6.948 | 0.0833 |
| C3 | 130.81 | 7.34 | 7.031 | 0.0833 |
| C♯3 | 138.59 | 7.78 | 7.115 | 0.0833 |
| D3 | 146.83 | 8.24 | 7.198 | 0.0833 |
| D♯3 | 155.56 | 8.73 | 7.281 | 0.0833 |
| E3 | 164.81 | 9.25 | 7.365 | 0.0833 |
| F3 | 174.61 | 9.80 | 7.448 | 0.0833 |
| F♯3 | 185.00 | 10.39 | 7.531 | 0.0833 |
| G3 | 196.00 | 11.00 | 7.615 | 0.0833 |
| G♯3 | 207.65 | 11.65 | 7.698 | 0.0833 |
| A3 | 220.00 | 12.35 | 7.781 | 0.0833 |
Connections to Mathematics
Musical set theory uses the language of mathematical set theory in an elementary way to organize musical objects and describe their relationships. To analyze the structure of a piece of (typically atonal) music using musical set theory, one usually starts with a set of tones, which could form motives or chords. By applying simple operations such as transposition and inversion, one can discover deep structures in the music. Operations such as transposition and inversion are called isometries because they preserve the intervals between tones in a set.
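A minimal sketch of these two operations on pitch-class sets, with pitch classes taken modulo 12 (the function names and the example set are ours):

```python
def transpose(pcs, n):
    """Transposition T_n: add n semitones to each pitch class, mod 12."""
    return sorted((p + n) % 12 for p in pcs)

def invert(pcs, n=0):
    """Inversion I_n: map each pitch class p to (n - p) mod 12."""
    return sorted((n - p) % 12 for p in pcs)

c_major_triad = [0, 4, 7]           # C, E, G
print(transpose(c_major_triad, 2))  # [2, 6, 9]  -> D, F#, A (intervals preserved)
print(invert(c_major_triad))        # [0, 5, 8]  -> C, F, Ab: an F minor triad; interval content preserved
```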
Expanding on the methods of musical set theory, some theorists have used abstract algebra to analyze music. For example, the pitch classes in an equally tempered octave form an abelian group with 12 elements. It is possible to describe just intonation in terms of a free abelian group.
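To illustrate the last point, 5-limit just intervals can be written as integer exponent vectors over the primes 2, 3 and 5, so that stacking intervals becomes adding vectors; a sketch under that standard interpretation (the helper name is ours):

```python
from fractions import Fraction

PRIMES = (2, 3, 5)

def monzo(ratio):
    """Factor a 5-limit ratio into exponents of 2, 3 and 5 (an element of Z^3)."""
    num, den = ratio.numerator, ratio.denominator
    exps = []
    for p in PRIMES:
        e = 0
        while num % p == 0:
            num //= p
            e += 1
        while den % p == 0:
            den //= p
            e -= 1
        exps.append(e)
    assert num == den == 1, "ratio is not 5-limit"
    return tuple(exps)

fifth = Fraction(3, 2)
third = Fraction(5, 4)
print(monzo(fifth))          # (-1, 1, 0)
print(monzo(third))          # (-2, 0, 1)
print(monzo(fifth * third))  # (-3, 1, 1): stacking intervals is componentwise vector addition
```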
Transformational theory is a branch of music theory developed by David Lewin. The theory allows for great generality because it emphasizes transformations between musical objects, rather than the musical objects themselves.
Theorists have also proposed musical applications of more sophisticated algebraic concepts. The theory of regular temperaments has been extensively developed with a wide range of sophisticated mathematics, for example by associating each regular temperament with a rational point on a Grassmannian.
The chromatic scale has a free and transitive action of the cyclic group ℤ/12ℤ, with the action being defined via transposition of notes. So the chromatic scale can be thought of as a torsor for the group ℤ/12ℤ.
Real and complex analysis
- Mathematics and art
- Equal temperament
- Interval (music)
- Musical tuning
- Piano key frequencies
- 3rd bridge (harmonic resonance based on equal string divisions)
- The Glass Bead Game
- List of music software
- Non-Pythagorean scale
- Tonality Diamond
- Utonality and otonality
- Euclidean rhythms (traditional musical rhythms that are generated by Euclid's algorithm)
- Reginald Smith Brindle, The New Music, Oxford University Press, 1987, pp. 42–43
- Reginald Smith Brindle, The New Music, Oxford University Press, 1987, Chapter 6 passim
- "Eric - Math and Music: Harmonious Connections".
- Reginald Smith Brindle, The New Music, Oxford University Press, 1987, p. 42
- Purwins, Hendrik (2005). Profiles of pitch classes circularity of relative pitch and key-experiments, models, computational music analysis, and perspectives, pp. 22-24 (PDF).
- Plato, (Trans. Desmond Lee) The Republic, Harmondsworth Penguin 1974, page 340, note.
- Sir James Jeans, Science and Music, Dover 1968, p. 154.
- Alain Danielou, Introduction to the Study of Musical Scales, Mushiram Manoharlal 1999, Chapter 1 passim.
- Sir James Jeans, Science and Music, Dover 1968, p. 155.
- Arnold Whittall, in The Oxford Companion to Music, OUP, 2002, Article: Rhythm
- "Александр Виноград, Многообразие проявлений музыкального метра (LAP Lambert Academic Publishing, 2013)".
- Imogen Holst, The ABC of Music, Oxford 1963, p. 100
- Jeremy Montagu, in The Oxford Companion to Music, OUP 2002, Article: just intonation.
- Touma, Habib Hassan (1996). The Music of the Arabs. Portland, OR: Amadeus Press. pp. 22–24. ISBN 0-931340-88-8.
- "Algebra of Tonal Functions.".
- "Harmonic Limit".
- Database of all the possible 2048 musical scales in 12 note equal temperament and other alternatives in meantone tunings
- Music and Math by Thomas E. Fiore
- Twelve-Tone Musical Scale.
- Sonantometry or music as math discipline.
- Music: A Mathematical Offering by Dave Benson.
- Nicolaus Mercator use of Ratio Theory in Music at Convergence
- The Glass Bead Game Hermann Hesse gave music and mathematics a crucial role in the development of his Glass Bead Game.
- Harmony and Proportion. Pythagoras, Music and Space.
- "Linear Algebra and Music"
- Notefreqs — A complete table of note frequencies and ratios for midi, piano, guitar, bass, and violin. Includes fret measurements (in cm and inches) for building instruments. |
Just like fractions, operations with decimals can be intimidating for kids when all they’ve ever known are whole numbers. But with some intentional teaching and strategies, we can help kids understand that the basic underlying principles are the same. Although this should probably not be a starting point (when addition & subtraction with decimals is new), number lines can be a fantastic aid and visual. Read on for how to add & subtract decimals on a number line, and be sure to grab the FREE blank number line practice pages at the end!
Add & Subtract Decimals with Real World Examples
When you’re first beginning a study of decimal operations, I recommend starting with money. This is a natural, everyday use of decimals, and one kids will already be familiar with.
They may not know how to formally write out the problems, or how to use and write out formal algorithms, but they likely already know how to add up all their dollars and change.
This lesson, with variations for increasing difficulty, is a great place for kids to start.
Add & Subtract Decimals with Base Ten Blocks
I would then recommend building decimal quantities and modeling addition and subtraction with base ten blocks.
When using base ten blocks for decimals (rather than whole numbers), the hundred block represents one whole, the tens rods represent tenths, and the ones blocks represent hundredths.
This will not only give kids a visual representation of decimal numbers, but reinforce their understanding of place value.
I would begin by giving kids different decimal numbers and having them build each one with blocks.
Then you can begin adding whole numbers to show that the decimal portion doesn’t change, and vice versa.
And just as with whole numbers, regrouping works the same way and is easily modeled with base ten blocks.
For more information and examples, see this post from Rachel at You’ve Got This Math.
Add & Subtract Decimals on a Number Line
Once kids have a solid foundation and understand what decimal numbers represent, you can show them how to use a number line to solve addition & subtraction problems.
Before beginning, start by practicing labeling a number line with decimals rather than whole numbers.
I recommend starting by labeling increments of 0.1.
Label some number lines by starting at 0, and then practice with different starting values, but always labeling increments of 0.1.
Then you can challenge kids to label increments of 0.25 or 0.5. Once they have a few number lines labeled correctly, see if they can find and mark certain values on their number lines.
You can also give them fractions, and have them mark the decimal form on their number line.
Being familiar with how number lines work and how to find points and move around will be essential to actually using a number line to solve a math problem.
But once your kids are comfortable with them, you can give them decimal addition problems to solve.
Adding Decimals on a Number Line:
For example, let’s say you give kids the problem 3.6 + 0.8.
They would begin by labeling their number line with the starting value (3.6) and then labeling all the way across by tenths.
Once the number line is labeled, they can simply add on by counting 8 tenths. They mark the number line by showing their “jumps” as they count.
Once they’ve counted 8, they have found their answer.
When adding decimals that include whole numbers, they can begin by adding the whole number first.
For example, in the problem 5.2 + 1.4, kids can jump 1 whole (or ten tenths) to the number 6.2.
Then they can add 0.4 by jumping or counting by tenths and end on the answer: 6.6.
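If you'd like a quick way to check or generate answers, here is a small Python sketch of the same "jump" strategy, whole-number jumps first and then tenths (the function name is ours):

```python
def add_on_number_line(start, addend):
    """Add a decimal by jumping the whole part first, then counting on by tenths."""
    wholes = int(addend)                      # e.g. 1 whole in 1.4
    tenths = round((addend - wholes) * 10)    # e.g. 4 tenths in 1.4
    position = start + wholes                 # one big jump per whole
    jumps = [position]
    for _ in range(tenths):                   # then count on by tenths
        position = round(position + 0.1, 1)
        jumps.append(position)
    return jumps

print(add_on_number_line(5.2, 1.4))  # [6.2, 6.3, 6.4, 6.5, 6.6] -> answer 6.6
```

Running it on the example above prints the landing points 6.2 through 6.6, matching the jumps kids would draw on their number line.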
Subtracting Decimals on a Number Line
With subtraction problems, kids can solve in essentially the same way. The difference here is that they will want to start on the far right side of their number line and label it backwards by tenths so that they can subtract.
For example, given the problem 4.8 – 0.9, kids would start by labeling 4.8.
They then label backwards to fill in the number line.
To subtract, they simply count backwards by tenths 9 times until they reach the solution: 3.9.
Then as kids encounter more challenging problems, they can label their number lines accordingly and then either add or subtract.
Add & Subtract Decimals with an Open Number Line
Once kids are more confident working with both number lines and decimals, using an open number line may be more beneficial for them.
An open number line does not have any markers to delineate number values. It is simply a straight line.
With an open number line, kids can start wherever they’d like without having to worry about equal increments.
They can then use it to merely show their thinking as they add or subtract each problem bit by bit. This is especially useful if they have a good understanding of place value.
For example, to add 6.5 + 3.3, kids can start at 6.5 and add 3, using larger “jumps” to show whole numbers. They “land” at 9.5.
Then they can count on with smaller jumps to add the tenths. Now they’re at 9.8, the final solution.
Open number lines are especially helpful if kids are adding larger quantities, as there may not be adequate space on a number line that already has tenths marked.
It’s also helpful as they begin to add and subtract hundredths, as they don’t have to worry about finding the correct location on the number line; they can simply add each place value one at a time.
Using an open number line to aid and model their thinking is also a great way to help kids work towards mental math with decimals.
Want an easy practice page your kids can use to solve any decimal problem? This simple set includes 2 pages of number lines.
The first is marked with lines so you can practice labeling different increments and help kids become familiar with them.
The second is a page of open number lines.
There is also a quick overview page for the teacher with a few different ways you might use these number line pages and teach these concepts to your students.
More Decimal Resources:
- Decimals on a Number Line Partner Game
- Decimal Operations: Low-Prep Maze Practice
- Multiplying & Dividing Decimals Game
Never Run Out of Fun Math Ideas
If you enjoyed this post, you will love being a part of the Math Geek Mama community! Each week I send an email with fun and engaging math ideas, free resources and special offers. Join 46,000+ readers as we help every child succeed and thrive in math! |
How The Us Government Is Organized
The Constitution of the United States divides the federal government into three branches to make sure no individual or group will have too much power:
- Legislative: makes laws
- Executive: carries out laws
- Judicial: evaluates laws
Each branch of government can change acts of the other branches:
- The president can veto legislation created by Congress and nominates heads of federal agencies.
- Congress confirms or rejects the president’s nominees and can remove the president from office in exceptional circumstances.
- The Justices of the Supreme Court, who can overturn unconstitutional laws, are nominated by the president and confirmed by the Senate.
This ability of each branch to respond to the actions of the other branches is called the system of checks and balances.
Confirmation Process For Judges And Justices
Appointments for Supreme Court Justices and other federal judgeships follow the same basic process:
- The president nominates a person to fill a vacant judgeship.
- The Senate Judiciary Committee holds a hearing on the nominee and votes on whether to forward the nomination to the full Senate.
- If the nomination moves forward, the Senate can debate the nomination. Debate must end before the Senate can vote on whether to confirm the nominee. A Senator will request unanimous consent to end the debate, but any Senator can refuse.
- Without unanimous consent, the Senate must pass a cloture motion to end the debate. It takes a simple majority of votes (51 if all 100 Senators vote) to pass cloture and end debate about a federal judicial nominee.
- Once the debate ends, the Senate votes on confirmation. The nominee for the Supreme Court or any other federal judgeship needs a simple majority of votes (51 if all 100 Senators vote) to be confirmed.
Federal Government Of The United States
The federal government of the United States is the national government of the United States, a federal republic located primarily in North America, composed of 50 states, a city within a federal district (Washington, D.C.), five major self-governing territories and several island possessions. The federal government, sometimes simply referred to as Washington, is composed of three distinct branches: legislative, executive, and judicial, whose powers are vested by the U.S. Constitution in the Congress, the president and the federal courts, respectively. The powers and duties of these branches are further defined by acts of Congress, including the creation of executive departments and courts inferior to the Supreme Court.
Elected And Appointed Officials
The US has a strong tradition of local government with a large number of elected officials, such as state legislators, mayors, city council members and even special district officials. Within these governing entities, there are over 500,000 elected officials. And every state, county and municipality has its own set of laws, so understanding the structure of government in your area is important.
Cabinet Executive Departments And Agencies
The daily enforcement and administration of federal laws is in the hands of the various federal executive departments, created by Congress to deal with specific areas of national and international affairs. The heads of the 15 departments, chosen by the president and approved with the “advice and consent” of the U.S. Senate, form a council of advisers generally known as the president’s “Cabinet”. Once confirmed, these “cabinet officers” serve at the pleasure of the president. In addition to departments, a number of staff organizations are grouped into the Executive Office of the President. These include the White House staff, the National Security Council, the Office of Management and Budget, the Council of Economic Advisers, the Council on Environmental Quality, the Office of the U.S. Trade Representative, the Office of National Drug Control Policy, and the Office of Science and Technology Policy. The employees in these United States government agencies are called federal civil servants.
The Judiciary, under Article III of the Constitution, explains and applies the laws. This branch does this by hearing and eventually making decisions on various legal cases.
Census Bureau Reports There Are 89004 Local Governments In The United States
The U.S. Census Bureau today released preliminary counts of local governments as the first component of the 2012 Census of Governments.
In 2012, 89,004 local governments existed in the United States, down from 89,476 in the last census of governments conducted in 2007. Local governments included 3,031 counties, 19,522 municipalities, 16,364 townships, 37,203 special districts and 12,884 independent school districts.
Conducted every five years, the census of governments provides the only uniform source of statistics for all of the nation's state and local governments. These statistics allow for in-depth trend analysis of all individual governments and provide a complete, comprehensive and authoritative benchmark of state and local government activity.
The census of governments measures three components: organization, employment and finance. These components provide statistics on the number of governments that exist, the services they provide, the number of their employees and their financial activity. In addition to the information provided for states, cities, counties and townships, the census of governments also provides information on special districts and school districts.
Other Key Findings
Among the key findings in the 2012 Census of Governments preliminary counts:
Partisans Differ In Level Of Concern That Rights And Protections May Vary Across States
About four-in-ten U.S. adults say they are extremely or very concerned that the rights and protections a person has might be different depending on which state they live in, with an additional 35% saying they are somewhat concerned about this. About one-in-five are not too or not at all concerned about this possibility.
Democrats are more likely than Republicans to express this concern: 53% say they are concerned that the rights and protections a person has might be different depending on which state they live in, with a quarter saying they are extremely concerned. Roughly a third of Democrats say they are somewhat concerned that the rights and protections a person has might depend on where they live, and just 13% say they are not too or not at all concerned about this.
Among Democrats, there are differences across demographic groups in the level of concern that rights and protections might vary from state to state.
While similar shares of White and Black Democrats say they are extremely or very concerned that the rights and protections a person has may differ depending on which state they live in, Hispanic Democrats are less likely to say this. Hispanic Democrats are more likely than other Democrats to say they are somewhat concerned that individual rights and protections may differ by state.
Among Republicans, there are only modest differences in the level of concern that individual rights and protections may differ from state to state.
United States Supreme Court Building
The Supreme Court Building houses the Supreme Court of the United States. Also referred to as "The Marble Palace," the building serves as the official workplace of the chief justice of the United States and the eight associate justices. It is located at 1 First Street NE in Washington, D.C., in the block immediately east of the United States Capitol and north of the Library of Congress. The building is managed by the Architect of the Capitol. On May 4, 1987, the Supreme Court Building was designated a National Historic Landmark.
The proposal for a separate building for the Supreme Court was suggested in 1912 by William Howard Taft, who became Chief Justice in 1921. In 1929, Taft successfully argued for the creation of the new building, but did not live to see it built. Physical construction began in 1932 and was officially completed in 1935 under the guidance of Chief Justice Charles Evans Hughes, Taft’s successor. The building was designed by architect Cass Gilbert, a friend of Taft.
G. Personal Autonomy And Individual Rights
Do individuals enjoy freedom of movement, including the ability to change their place of residence, employment, or education? (Score: 4 out of 4)
There are no significant undue restrictions on freedom of movement within the United States, and residents are generally free to travel abroad without improper obstacles. A patchwork of temporary movement restrictions was imposed across the country in response to the COVID-19 pandemic, with states acting independently based on local conditions and strategies, though the rules were loosely enforced and relied mainly on voluntary compliance.
Are individuals able to exercise the right to own property and establish private businesses without undue interference from state or nonstate actors? (Score: 4 out of 4)
Property rights are widely respected in the United States. The legal and political environments are supportive of entrepreneurial activity and business ownership. President Trump's shifting and exemption-filled tariff policies prompted concern throughout his administration that political favoritism was distorting markets involving tariff-sensitive businesses. Similarly, perceived support for the administration allegedly influenced the awarding of government aid and contracts, including during the COVID-19 pandemic. Coronavirus-related business restrictions at the state and local levels caused significant disruption and confusion, prompting civil disobedience and public protests by some private business owners.
Relationships Between State And Federal Courts
Separate from, but not entirely independent of, this federal court system are the court systems of each state, each dealing with, in addition to federal law when not deemed preempted, a state's own laws, and having its own court rules and procedures. Although state governments and the federal government are legally dual sovereigns, the Supreme Court of the United States is in many cases the appellate court from the State Supreme Courts. The Supreme Courts of each state are by this doctrine the final authority on the interpretation of the applicable state's laws and Constitution. Many state constitutional provisions are equal in breadth to those of the U.S. Constitution, but are considered "parallel".
Levels Of Government: Federal State Local
Americans have long had a more favorable view of their state and local governments than the federal government, and this continues to be the case today.
About two-thirds say they have a favorable view of their local government, compared with 54% who have a favorable view of their state government and just 32% who have a favorable view of the federal government.
The share who say they have a favorable view of the federal government is identical to the share who said this three years ago, though there has been substantial movement within each party. Just over one-in-ten Republicans now hold a favorable view of the federal government, down from 41% in August 2019. And about half of Democrats now hold favorable views of the federal government, up from 26% in 2019.
Favorable views of both state and local governments are down slightly since 2019.
Both Republicans and Democrats tend to hold more favorable views of their state government if they live in a state where their party is currently in control.
Three-quarters of Republicans living in states with a Republican governor and Republican control of the state legislature have a very favorable or mostly favorable view of their state government. A nearly identical share of Republicans living in Democratic-controlled states have unfavorable views of their state government: 35% say they have a very unfavorable view while 41% have a mostly unfavorable view.
How Many Local Governments Are In The Usa
If you think there's too much government in the United States, you may be on to something. There are over 90,000 government units in the US, with over $3.4 trillion spent annually on direct expenditures for state and local governments. From state, county, local towns and villages all the way to special districts and independent school districts, that makes for a huge amount of bureaucracy.
A breakdown of the total number of local governments in the United States by state and government type can be found in the infographic below:
This data was compiled from the 2017 Census of Governments: Organization, published in 2019. In addition to the federal government and the 50 state governments, the Census Bureau recognizes five basic types of local governments. Three are general-purpose governments: county, municipal, and township governments. Legislative provisions for school district and special district governments are more diverse. Single-function and multiple-function districts, authorities, commissions, boards, and other entities have degrees of autonomy that vary by state.
Public Access To The Building
On May 3, 2010, citing security concerns and as part of the building's modernization project, the Supreme Court announced that the public would no longer be allowed to enter the building through the main door on top of the steps on the west side. Visitors must now enter through ground-level doors located at the plaza, leading to a reinforced area for security screening. The main doors at the top of the steps may still be used to exit the building. Justice Breyer released a statement, joined by Justice Ginsburg, expressing his opinion that although he recognizes the security concerns that led to the decision, he does not believe on balance that the closure is justified. Calling the decision "dispiriting", he said he was not aware of any Supreme Court in the world that had closed its main entrance to the public.
Since recording devices have been banned inside the courtroom, the fastest way for decisions of landmark cases to reach the press is through the "Running of the Interns".
Cabinet Of The United States
The Cabinet of the United States is a body consisting of the vice president of the United States and the heads of the executive branch's departments in the federal government of the United States. It is the principal official advisory body to the president of the United States. The president chairs the meetings but is not formally a member of the Cabinet. The heads of departments, appointed by the president and confirmed by the Senate, are members of the Cabinet, and acting department heads also participate in Cabinet meetings whether or not they have been officially nominated for Senate confirmation. The president may designate heads of other agencies and non-Senate-confirmed members of the Executive Office of the President as members of the Cabinet.
The Cabinet does not have any collective executive powers or functions of its own, and no votes need to be taken. There are 24 members: 15 department heads and nine Cabinet-level members, all of whom, except two, had received Senate confirmation. The Cabinet meets with the president in the Cabinet Room of the White House. The members sit in the order in which their respective department was created, with the earliest being closest to the president and the newest farthest away.
Smaller States And Bigger States
When the Constitution was ratified in 1787, the ratio of the populations of large states to small states was roughly twelve to one. The Connecticut Compromise gave every state, large and small, an equal vote in the Senate. Since each state has two senators, residents of smaller states have more clout in the Senate than residents of larger states. But since 1787, the population disparity between large and small states has grown; in 2006, for example, California had seventy times the population of Wyoming. Critics, such as constitutional scholar Sanford Levinson, have suggested that the population disparity works against residents of large states and causes a steady redistribution of resources from "large states to small states". Others argue that the Connecticut Compromise was deliberately intended by the Founding Fathers to construct the Senate so that each state had equal footing not based on population, and contend that the result works well on balance.
See The Data By State
According to these data sources, there are 519,682 politicians across the United States. Of these, 535 are Federal politicians, 18,749 are State Politicians, and 500,396 are local politicians. Note that the state data is slightly lower than the totals, as we do not include territories in the table.
We break this down further by each level:
Renamed Heads Of The Executive Departments
- Secretary of Foreign Affairs: created in July 1781 and renamed Secretary of State in September 1789.
- Secretary of War: created in 1789 and renamed Secretary of the Army by the National Security Act of 1947. The 1949 Amendments to the National Security Act of 1947 made the secretary of the Army a subordinate to the secretary of defense.
- Secretary of Commerce and Labor: created in 1903 and renamed Secretary of Commerce in 1913 when its labor functions were transferred to the new Department of Labor.
- Secretary of Health, Education, and Welfare: created in 1953 and renamed Secretary of Health and Human Services in 1979 when its education functions were transferred to the new Department of Education.
Vice President And The Heads Of The Executive Departments
The Cabinet permanently includes the vice president and the heads of the 15 executive departments, listed here according to their order of succession to the presidency. The speaker of the House and the president pro tempore of the Senate follow the vice president and precede the secretary of state in the order of succession, but both are in the legislative branch and are not part of the Cabinet.
Gravitational waves are said to be a consequence of Albert Einstein's theory of relativity, but it is difficult to find texts that argue why this relationship holds, beyond noting that the waves arise from the study of the Einstein field equations. Let's try to apply a little logic and some knowledge of relativity to this issue.
The key question is the following: is it possible to convey the effect of gravity faster than light?
Let's assume for a moment that it is possible. For simplicity, suppose that gravity acts instantaneously, that is, at infinite speed. That has a very important consequence: you could create an experiment to transmit information instantaneously through instantaneous changes of gravitational effects. This breaks the relativity of simultaneity and would allow us to define an "objective" simultaneity, which would lead us to be able to determine an "absolute frame of reference". The theory of relativity would no longer be valid in its claim that all inertial reference systems are equivalent and that we cannot differentiate between them in any way. We could determine the existence of a privileged frame of reference.
But if the principle of relativity is valid, then by reductio ad absurdum we must conclude that gravitational effects cannot be transmitted faster than light. Indeed, no effect of any kind can be transmitted faster than light.
Thus gravitational effects are transmitted at a finite speed which does not exceed that of light, and which is probably exactly equal to it.
So a sudden change of mass, or a movement of mass, will cause a change in the gravitational field at a point, which will propagate through space at the speed of light. Thus gravitational waves arise. At the end of 1916, Einstein showed that the field equations also admit solutions in the form of waves. These are gravitational waves.
For example, if two stars are orbiting each other at high speed at a distance not far from our solar system, the slight changes in the gravitational field that we perceive in our solar system should be received with a slight time difference, for example on Earth compared with Jupiter, and even with a few milliseconds of difference between one point on Earth and another, and you could design an experiment to detect this.
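To get a feel for the numbers in this thought experiment, here is a rough Python sketch (the distances are approximate average values, used only for illustration):

```python
# Rough light-travel times: gravitational effects propagate at (about) the speed of light.
c = 299_792_458                  # speed of light, m/s

earth_diameter   = 12.742e6      # m, approximate
earth_to_jupiter = 6.3e11        # m, very rough average Earth-Jupiter distance

print(f"Across the Earth: {earth_diameter / c * 1000:.0f} ms")       # ~42 ms
print(f"Earth to Jupiter: {earth_to_jupiter / c / 60:.0f} minutes")  # ~35 minutes
```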
By means of artificial satellites, it has been possible to detect small changes in the distance between a satellite and the Earth that could be attributed to gravitational waves, but we still have to wait for truly conclusive results from experiments of this kind.
In 1974 a binary pulsar was detected whose observation provided interesting data for relativity. Its periapsis advances by about four degrees per year, and in addition the orbit of the star is shrinking in a spiral and its period is decreasing. This shows a loss of energy which is attributed to the emission of intense gravitational waves.
New experiments, such as LISA, are being designed to detect gravitational waves.
[Via: gravitational waves in relatividad.org]
How the OSPF (Open Shortest Path First) Routing Protocol Is Implemented Using Dijkstra's Algorithm Behind the Scenes
What is OSPF(Open Shortest Path First)?
Open Shortest Path First(OSPF) is a standard routing protocol that’s been used in the world for many years. Supported by practically every routing vendor, as well as the open-source community, OSPF is one of the few protocols in the IT industry you can count on being available just about anywhere you might need it.
OSPF uses a shortest path first algorithm to build and calculate the shortest path to all known destinations. The shortest path is calculated with the use of the Dijkstra algorithm. The algorithm by itself is quite complicated. This is a very high-level, simplified way of looking at the various steps of the algorithm.
Working of OSPF:
- OSPF is based on a link-state routing algorithm in which each router holds information about the whole routing domain, and based on this information it computes the shortest paths using the Dijkstra algorithm. OSPF learns about every router and subnet within the entire network. A link-state routing protocol uses the concept of triggered updates, i.e., updates are sent only when a change is observed in the learned routing information.
- The way OSPF learns about other routers is by sending Link State Advertisements, or LSAs. An LSA contains information about subnets, routers, and some other network details. Once all the LSAs have been exchanged within the network, OSPF puts them in a database called the LSDB, i.e., the Link State Database. The main goal here is for each router to have the same information in its LSDB.
- OSPF maintains information in three tables. The "Neighbor Table" contains all discovered OSPF neighbors with whom routing information will be exchanged. The "Topology Table" contains the entire road map of the network, with all available OSPF routers and the calculated best and alternative paths. The "Routing Table" is where the current best working paths are stored; it is used to forward data traffic between neighbors.
What is Dijkstra Algorithm? How does OSPF use Dijkstra behind the scene?
- In Dijkstra’s Algorithm, you can find the shortest path between nodes in a graph. Particularly, you can find the shortest path from a node (called the “source node”) to all other nodes in the graph, producing a shortest-path tree.
- Dijkstra’s Algorithm basically starts at the node that you choose (the source node) and it analyzes the graph to find the shortest path between that node and all the other nodes in the graph.
- The algorithm keeps track of the currently known shortest distance from each node to the source node and it updates these values if it finds a shorter path.
- Once the algorithm has found the shortest path between the source node and another node, that node is marked as “visited” and added to the path.
- The process continues until all the nodes in the graph have been added to the path. This way, we have a path that connects the source node to all other nodes following the shortest path possible to reach each node.
- Dijkstra Algorithm is a very famous greedy algorithm. It is used for solving the single-source shortest path problem. It computes the shortest path from one particular source node to all other remaining nodes of the graph.
- So it's not that we run Dijkstra's algorithm once and it answers all of the best paths. We run it each time we have to get to a unique destination network. The way it works is that it assigns a cost to the links, and when it gets to a certain point it says, "I've got something better, I'm going to stop running that calculation because I've already established a better pathway to that destination." Dijkstra's algorithm is a complex algorithm, but ultimately it just tells us the best way to go. And where does that information go inside of our router? The path with the shortest metric to that destination network ends up in our routing table.
- The way through which OSPF chooses the best route is by a metric called cost. OSPF cost is the value to give to a link based on the bandwidth of that interface.
Cost = Reference Bandwidth / Interface Bandwidth, where reference bandwidth is 100 Mb/s.
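Putting the two ideas together, here is a small Python sketch: link costs come from the bandwidth formula above, and Dijkstra's algorithm then finds the lowest-cost path from a source router. The topology, router names, and bandwidths are made up purely for illustration:

```python
import heapq

REFERENCE_BW = 100_000_000  # 100 Mb/s reference bandwidth

def ospf_cost(interface_bw):
    """Cost = reference bandwidth / interface bandwidth (minimum cost is 1)."""
    return max(1, REFERENCE_BW // interface_bw)

# Hypothetical topology: router -> {neighbor: interface bandwidth in b/s}
links = {
    "R1": {"R2": 100_000_000, "R3": 10_000_000},
    "R2": {"R1": 100_000_000, "R4": 100_000_000},
    "R3": {"R1": 10_000_000, "R4": 100_000_000},
    "R4": {"R2": 100_000_000, "R3": 100_000_000},
}

def dijkstra(source):
    """Lowest total cost from source to every other router."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue                          # stale entry: a better path was already found
        for neighbor, bw in links[node].items():
            new_dist = d + ospf_cost(bw)
            if new_dist < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_dist
                heapq.heappush(queue, (new_dist, neighbor))
    return dist

print(dijkstra("R1"))   # {'R1': 0, 'R2': 1, 'R4': 2, 'R3': 3}
```

Running it from R1, the slow 10 Mb/s link to R3 is avoided: R3 is reached through R2 and R4 at a total cost of 3 instead of 10, which is exactly the kind of decision that ends up in the routing table.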
I Hope You Like It…
Thank You For Reading…
If you like it please clap it.😁🤗 |
What are number sequences? Number sequences are a pattern of numbers that follow a certain rule. All times tables are effectively number sequences. For example, the 2 times table is a sequence of numbers that increase by 2 each time: 2, 4, 6, 8, 10 etc.
Step 4: Number Sequences Homework Extension Year 5 Spring Block 2. Number Sequences Homework Extension provides additional questions which can be used as homework or an in-class extension for the Year 5 Number Sequences Resource Pack. These are differentiated for Developing, Expected and Greater Depth.
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice.
They range from a basic level for younger children to educational games that will provide more of a challenge for able Key Stage 1 children. Caterpillar Ordering. A flexible game for ordering numbers and for number sequences. Fantastic on an interactive whiteboard. Early levels involve ordering numbers up to 5. Not Flash. Helicopter Rescue. Great number square games which will help you to find.
Number and place value - Fun teaching resources, ideas, games and interactive programs for KS2 numeracy and maths. Ideal for years 3, 4, 5 and 6 lessons about place value, money, time, number facts, problem solving and the four operations.
These lower primary tasks are all based around the idea of counting and ordering numbers. Counting and Ordering KS1. Incey Wincey Age 3 to 5. In this game, children roll the dice and count how many steps to move the spider up or down the drainpipe. Number Book Age 3 to 5. Creating a 'Book of Four' provides an opportunity for children to collect.
Numbers for kids in Year 2 KS1. Maths homework help with place value, partitioning and ordering numbers. Number games for kids learning to count.
The sequence of numbers, forward and backward by 1, 10, 100, and 1000 to a million, and giving the number 1, 10, 100 or 1000 before or after a given number in the range 0 - 1,000,000. Number Madness Before and After - Big numbers: The sequence of decimal numbers in tenths and hundredths. Calculator Tenths Counting In Tenths: How to order unit fractions.
Day 1 Teaching: Count reliably up to 20 objects and write the numbers in numerals. Group Activities: Making cube snakes and re-ordering muddled numbers on a number line. Day 2 Teaching: Place numbers up to 20 on a washing line or a bead bar using the landmarks of 5, 10, 15, 20. Group Activities: Estimate the length of Plasticine worms and measure against bead strings.
In KS1 your child will be introduced to the idea of sequences used in writing algorithms. A sequence is a series of events that must be performed in order to achieve a task. A common example of a task given at KS1 might be to draw a character using an algorithm to show what body parts need to be added at what time.
Do your children need practice with missing number problems in addition and subtraction questions? Do they need additional materials to provide a little more practice, both in school and for homework? They will love this lesson! Solve missing number problems using the inverse. In math, addition and subtraction are inverse operations: when we add, we can undo it by subtracting, and vice versa.
What linear number sequence homework can I give my class? In these differentiated home learning tasks, children apply their learning of term to term rules to finding the distance jumped by different animals. As a challenge, children are then given a formula to use to find the distance of any term in the jumping sequence. What is a linear number sequence? A linear number pattern is a sequence that increases or decreases by the same amount each time.
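For anyone preparing answer keys, a linear (arithmetic) sequence can be generated from a start value and a term-to-term rule, and the nth term found directly; a tiny sketch (the example values are made up):

```python
def nth_term(first_term, step, n):
    """nth term of a linear sequence: a + (n - 1) * d."""
    return first_term + (n - 1) * step

# A sequence that starts at 2 and goes up by 2 each time (the 2 times table):
sequence = [nth_term(2, 2, n) for n in range(1, 6)]
print(sequence)            # [2, 4, 6, 8, 10]
print(nth_term(2, 2, 50))  # the 50th term: 100
```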
Free online maths games for kids. Primary or Elementary, Key Stage 1, ages 5-7 years, Numeracy, Math help activities and teacher resources to use in the classroom or for parents at home to help your child with maths. Perfect as part of a homeschool curriculum. Use on an IWB, PC or Mac. Learn math the easy way with kids' games.
Number Patterns. Recognize abstract patterns in number sequences. Finding the pattern in numbers is a skill that lays the foundation for data analysis abilities later. The numbers in these series range from simple addition or subtraction patterns (at the easy level) to rolling mixed computations (at the complex level). The highest level may prove to be a good challenge even to students who are further along.
Maths Homework Challenge Cards (SB3268) A set of 30 homework activity cards for children to take home. Includes tasks such as finding who has the largest hands in the family, adding numbers on a car number plate, counting activities, 2D shape hunts and much more.
Fun teaching resources, ideas, games and interactive programs for KS1 maths and numeracy. Ideal for class lessons about data handling, number, shape, space and measure. |
Shang dynasty kings ruled over a well-organized and civilized state in northern China between 1766 BCE and 1122 BCE. Under them, many of the key features of later Chinese civilization began to develop.
The Shang dynasty was, according to traditional Chinese histories, the second dynasty of ancient China, ruling from 1766 BCE to 1122 BCE. The dynasty was held to have been founded by a rebel king, Tang, who overthrew the last king of the Xia dynasty, the first of ancient China’s dynasties.
King Tang of Shang Dynasty
as imagined by by Song Dynasty painter Ma Lin
To date, no direct historical evidence for the existence of the Xia dynasty has come down to us. However, there is no reason why such a dynasty may not have existed in China’s pre-history. Given that the roots of Chinese civilization go much further back than the Shang, there are plenty of reasons to think that the Xia dynasty did indeed rule in northern China.
A Xia dynasty?
For centuries prior to the rise of the Shang dynasty, the farming culture of northern China had been advancing in social complexity and technological sophistication, for example with the introduction of wheel-thrown pottery. By c. 1800 BCE, knowledge of bronze casting had entered the Yellow River Valley from western China, as is shown by the bronze bells and other objects found in late 3rd millennium sites.
During this period, walled towns grew dramatically in size and density. The earliest-known city in ancient China appeared in c. 1800 BCE, at Erlitou, just south of the Yellow River. In the early 2nd millennium BCE, what had been a fairly typical walled town of the period suddenly mushroomed into a city some 300 hectares (741 acres) in area (about ten times the area of earlier settlements in the region). This order-of-magnitude change in settlement size was accompanied by other changes, such as the appearance of elaborate elite burials, ritual bronze vessels, and the presence of two palaces.
Erlitou is only one urban site amongst several dating from this ancient period of history. Sharp social divisions were increasingly apparent, and the power and status of rulers is shown by the occurrence of human sacrifice – always a sign of overwhelming, usually sacred authority – as well as by magnificent graves.
It is clear that warfare became much more prevalent during the early 2nd millennium BCE, presumably the result of population expansion. Throughout history, success in war has always been dependent upon good organization as much as military valour, and this was clearly an age in which tribal chiefdoms were being transformed into proper kingdoms. In these, the kings and their officials were able to exercise control over comparatively wide areas, collecting sufficient tribute from the farming population to pay for luxurious courts where power was concentrated. These courts were serviced by specialized and highly skilled craftsmen – the range and quality of the bronze vessels recovered from sites of the period is staggering, appearing so suddenly in the archaeological record. These court centres, with their temples and palaces, were surrounded by sizeable towns and cities, enclosed by stamped-earth walls and moats.
One of these early kingdoms – in all likelihood that of which Erlitou was the capital – was probably ruled by a dynasty which later generations called the Xia. This state would be the forebear to all the great dynasties of Chinese history that followed.
If traditional Chinese historiography is followed, the Xia kingdom was conquered in c. 1766 BCE by a ruler of a neighbouring state which was subordinate to the Xia. This ruler was the founder of the Shang dynasty. The resulting kingdom became the most powerful state in northern China.
The heartland of the Shang kingdom lay where the Yellow River leaves the mountains and enters the eastern plains. This is an extremely fertile area, due to the soil deposits brought down by the river system from the mountains. It is also conveniently situated near metal-rich deposits in the uplands.
Under the Shang, ancient China emerges into the light of history. Written records, mostly in the form of inscriptions on oracle bones, or on pottery or bronze vessels, shed light on the society and politics of the period. The Shang script was a fully developed system of writing, similar to that still in use in China today.
The Shang kings were, like many rulers of the ancient world, high priests as well as political and military leaders. As the oracle bone inscriptions reveal, one of the main functions falling to the king was to offer sacrifices to his royal ancestors; and he also led the worship of the high god, Di.
The king was assisted in his duties by a staff of literate scribes, headed by officers who had titles which reflected specific, departmentalised responsibilities. The Shang royal precinct, with its palaces and temples, functioned as much as a religious ceremonial centre as a centre of government. It lay at the centre of the royal capital, with its workshops and houses, and the whole surrounded by a stamped earth wall.
Outside the capital were the villages in which the majority of the people lived. Most of these people were peasants, farming small plots of land. The land was owned by the king or by local lords; in exchange for being allowed to farm these plots, the peasants had to give part of their crops as tribute to their lord, and were also required, when ordered, to follow their lord to war, or to work on a project which he wanted carrying out, such as digging a pond or canal.
Further away from the capital and the land which immediately surrounded it, which was controlled directly by the king, the Shang kingdom was partitioned amongst many such local lords. These lords all owed obedience to the king, and they had to see to it that the people in their area obeyed the law, and that tribute owing to the king was collected and sent to the capital. The lords also provided soldiers for the royal army from amongst their peasants, as well as labourers for the large projects which the king ordered to be carried out from time to time, such as building temples or royal palaces. The lords also had the duty of advising the king as to what policies he should follow.
This inner area of subordinate lordships was surrounded by an outer ring of semi-independent kingdoms and tribes who owed the Shang king some form of loyalty. Kingdoms and tribes which were defeated by the Shang had to submit to them. Sometimes the Shang king took some or all of their territory away from a defeated ruler and gave it to his own followers to control.
At other times defeated rulers were allowed to keep their land. However, they had to go to the Shang capital, and bow down to the Shang king as their overlord. In return, they received badges of office, and were allowed to govern their territories as long as they remained loyal to the Shang.
The Shang state therefore came to be made up of different zones. In the centre was the capital city itself, and its surrounding countryside. The Shang king controlled this area directly. Surrounding this central area was the rest of the Shang kingdom, which was divided into many localities, each controlled by a subordinate lord. Finally, there was a large outer territory of kingdoms and chiefdoms which acknowledged the Shang king as their overlord. When required, these would send soldiers to fight in the Shang army.
The Shang kingdom can best be seen, therefore, as a confederation of states clustered around a central domain, owing greater or lesser obedience to the Shang king. This confederation covered much of northern China, reaching at times into the Yangtze Valley. At its peak, it must have been a formidable power, with the Shang king able to muster troops from a very wide area, in addition to the army recruited from his own domain; and, waxing and waning in size, the Shang confederation endured for several centuries, witnessing to an impressive inner cohesion.
The Shang kings shifted their capital from time to time. Archaeologists identify the city of Zhengzhou with one of the first capitals of the Shang dynasty, probably the capital traditionally known as Bo or Ao. This city represents a step up from Erlitou in terms of size and material culture. With its extensive suburbs it covered 25 sq. km. (9.7 sq. miles). At its centre lay a moated palace, near richly-endowed graves for the ruling family which contained beautifully-crafted bronze ritual vessels and jade and bronze ornaments. The city centre was enclosed by stamped earth walls, reaching a width of 36 m (118 ft) at the base, and probably dating to c. 1650 BCE. Outside these walls were workshops for bronze, bone and ceramic work; homes for the specialist craftsmen employed in them; and other residential areas. The Shang period saw great advances in craftsmanship, a response to a demand for sophisticated goods from a wealthy and privileged governing elite.
Zhengzhou was superseded as capital by the city uncovered at Huanbei (probably ancient Xi’ang). Here, a huge palace precinct in the centre of the walled area incorporated at least 25 individual buildings.
The final Shang capital of Anyang was roughly the same size as Zhengzhou. The centre contained the normal palace precinct, complete with ancestral temples. In the royal graves, jade ornaments and ritual objects abound. Some of the graves contained complete light two-wheeled chariots.
The Shang army fought frequent wars with neighbouring barbarians, including nomads from the inner Asian steppes. Inscriptions on oracle bones repeatedly express anxieties about such barbarians, who lived beyond the frontiers of the civilized world of the Shang and their allies.
It seems that the Shang kings maintained a force of about a thousand troops at their capital and would personally lead this force into battle. When a larger force was required, the king called on his nobles to raise troops from their territories and contribute them to the royal army. The lords were obligated to furnish these troops with all the necessary equipment, armour and weapons. Subordinate kings and chiefs would also be asked to contribute contingents.
In this way the Shang could quickly muster armies more than ten thousand strong. Most of these troops would be peasants conscripted as soldiers for the duration of a campaign. They would fight as infantry, armed with a variety of stone and bronze weaponry including spears, pole-axes, composite bows, and leather helmets. Bronze weapons and armour would be limited to the nobility, as these were far too expensive to equip the ordinary troops with.
Military – A bronze axe of the Shang dynasty.
Reproduced under Creative Commons 2.5
The aristocrats would therefore have taken on a disproportionately large share of the fighting. This would have been accentuated in later Shang times by the introduction of chariots. The design of the light two-wheeled chariots shows overwhelming evidence for a western Asian origin for this new military technology, with the nomads of central Asia acting as the intermediaries.
Chariots, as well as being efficient weapons of war, functioned as mobile command units. They became indispensable weapons of war, against which infantry troops found it very hard to stand.
Fighting effectively as chariot-borne warriors required a great deal of training, the time for which only aristocrats could afford to give. Thus later Shang warfare would have depended even more upon the military skills of the nobility than in previous times.
The late Shang period is famous for the oracle bones used for divining the future. Some 100,000 of them have been recovered from the period. Kings, officials and others would have questions written onto bones or shells, which would then be heated. Priests would then study the cracks formed by the heat and interpret them to give the questioner an answer.
Oracle bones have been found from eras preceding the Shang, though without the questions and interpretations inscribed on them. The very existence of these oracles show that divination was already a well-established practice by the time literate civilization developed in China.
The interpretations shed invaluable light on Shang politics, religion and society. Hunting, war, rainfall, the success of the harvest, the health of members of the royal family - these were the predictable concerns of the Shang rulers. By the time of the oracle bone inscriptions which have been discovered so far, Shang political power was in decline, and a sense of anxiety and threat can be detected in many of the questions which had been put and the written interpretations given.
Oracle bone script, the earliest known form of Chinese.
Reproduced under licence 3.0
The texts also show that the veneration of ancestors was also already a key element within Chinese culture.
Another indication that many elements of modern Chinese thought were already present in Shang times was that the royal graves were aligned in accordance with the ideas of feng shui.
There were therefore clear continuities between thought and religious practices at this early stage in China’s history and with those of later periods. However, there were also differences. One feature of Shang religious practice (as elsewhere in the Bronze Age) was human sacrifice, which was practiced on a large scale by the Shang royal court. This fell out of common use under the Zhou dynasty.
Another difference was that the king sacrificed to the high god, Di, responsible for the rain, wind, and thunder. It seems that Di’s importance in the religion of this period was more important than it would later become, when the more impersonal concept of “Heaven” became dominant.
The economy of the Shang kingdom was, as with all pre-modern economies, based on agriculture. By far the majority of the population were peasants who worked by farming the land.
The peasants worked the land, but the land was controlled by lords to whom the peasants contributed a large share of the crops they grew. The peasants were also called upon to perform a range of tasks such as building and repairing dykes to prevent erosion or protect against flooding; digging ponds and channels for storing and directing water to where it was needed, and so on. To do these things the peasants had to work together in large groups, which had to be supervised by village headmen and overseers for the local lords. They also had to fight in the royal army or work on major royal projects should their lords require them to do so.
The lords lived in far greater luxury than the majority of the population. They spent their time in administering their lands, in training for war and fighting on campaigns, in attending the king at court, advising him in matters of government and assisting him in religious ceremonies, or in presiding over the rites to do with their own ancestors. They may also have been involved in the worship of local deities whose existence has long been forgotten.
Apart from the peasants and lords, there were also groups of people who lived in the capital and were attached to the service of the king and his court. One group was made up of scribes or priests. These played a key role in the religious rites of the kingdom, and must have enjoyed a high social status. Other groups were the bronze workers and other skilled craftsmen who produced the fine objects which adorned the palaces and temples; and the labourers – possibly slaves – involved in the mining, refining and transporting of the tin, copper and lead ores which went to make bronze.
The Shang kingdom lay at the centre of a network of long-distance trade routes. This is shown by the cowrie shells which served as a unit of currency. All were imported, some from tropical seas thousands of kilometres to the south. Turtle shells used for divination purposes, as well as the jade used in ornamental objects, were also imported from the south. Exactly how the traders fitted into society is unknown, but it is likely that some were closely associated with the royal court, perhaps even royal servants.
As we have seen, the Shang dynasty kingdom did not form a unified state structure covering a large area of China. Even in northern China there were many other kingdoms and tribes. Some of these were long-term allies of the Shang; others were at times allies, at times enemies, of the Shang. Others lay mostly beyond the reach of the Shang.
Shang cultural influences, however, spread far wider across China.
The rice-growing region of southern China, centred on the Yangtze Valley, had developed a level of civilization to rival that of the Shang. Large, wealthy urban centres had emerged, contemporary with Zhengzhou. They displayed a distinctive southern bronze work tradition, for example casting tigers onto the handles of bronze vessels. However, they do not seem to have used writing, either their own or imported from the north.
We have seen that the Shang political system formed a kind of confederation of states in which many semi-independent rulers acknowledged the overlordship of the Shang king. One of these states was the kingdom of Zhou, which lay on the western frontiers of the Shang-dominated area, and may not have been fully assimilated into it. At times it appears to have acknowledged Shang suzerainty, but at other times was amongst Shang’s enemies.
In c. 1122 BCE (1045 BCE by some reckonings), the powerful and ambitious king of Zhou sent his army, which according to traditional accounts included 300 chariots, to defeat the Shang army at the battle of Muye. The last Shang king committed suicide in his burning palace. The victor moved the capital to the city of Zengzhou, and the period known in ancient Chinese history as the Zhou dynasty had begun.
Earth acts as thermal mass, making it easier to maintain a steady indoor air temperature and therefore reducing energy costs for heating or cooling.
Earth sheltering became relatively popular after the mid 1970s, especially among environmentalists. However, the practice has been around for nearly as long as humans have been constructing their own shelters.
- "Earth-sheltering is [...] a generic term with the general meaning: building design in which soil plays an integral part." This definition is problematic however, since earth structures (e.g. rammed earth or cob) are not usually considered as earth shelters as they are above ground.
- "A building can be described as earth-sheltered when it has a thermally significant amount of soil or substrate in contact with its external envelope, where “thermally significant” means making a functional contribution to the thermal effectiveness of the building in question.
- "Structures built with the use of earth mass against building walls as external thermal mass, which reduces heat loss and maintains a steady indoor air temperature throughout the seasons."
- "A residence with an earth covering for its roof or walls."
- "Homes that have been built underground, either partially or completely."
- "The use of earth cover to moderate and improve living conditions in buildings."
Other terms include earth house, earth bermed house/home, and underground house/home.
Earth sheltering is one of the oldest forms of building. It is thought that from about 15,000 BC migratory hunters in Europe were using turf and earth to insulate simple round huts that were also sunk into the ground. Some form of earth sheltered construction is found across many cultures throughout history, distributed widely across the world. Typically, these cultures developed earth sheltered buildings without any knowledge of the construction method elsewhere. The structures take many different forms and are referred to by many different names. General terms include pit-house and dugout.
One of the oldest examples of berming, dating back some 5,000 years, can be found at Skara Brae in the Orkney Islands off northern Scotland. Another historical example of in-hill earth shelters is Mesa Verde, in the southwest USA. These buildings are constructed directly onto the ledges and caves on the face of the cliffs, with the front wall built up with local stone and earth to enclose the structure.
In North America, almost every native American group used earth sheltered structures to some extent. These structures are typically termed earth lodges (see also: Barabara). When Europeans colonized North America, sod houses ("soddies") were common on the Great Plains.
The 1973 Oil Crisis saw the price of oil dramatically increase, which influenced vast social, economic and political changes worldwide. Combined with growing interest in alternative lifestyles and the back-to-the-land movement, the public in the US and elsewhere became more interested in saving energy and protecting the environment.
As early as the 1960s in the US, some innovators were designing contemporary earth shelters. After the oil crisis and until the early 1980s there was a resurgence of interest in earth shelter/underground home construction, which has been termed the first wave of earth covered dwellings. In the UK in 1975, architect Arthur Quarmby finished an earth sheltered building in Holme, England. Named "Underhill", it is recorded in the Guinness World Records as the "first underground house" in the UK.
The majority of publications about earth sheltering date to this period, with dozens of books dedicated to the topic being published in the years leading up to 1983. The first International Conference on Earth-Sheltered Buildings was hosted in Sydney, Australia in 1983. A second conference was planned for 1986 in Minneapolis, USA.
In the last 30 years earth sheltered homes have become increasingly popular. The technique is more common in Russia, China and Japan. It is possible that Northern China has more earth shelters than any other region. It is estimated that approximately 10 million people live in underground homes in the region.
Some claim that thousands of people live underground in Europe and America. Notable European examples are the "Earth Houses" of Swiss architect Peter Vetsch. There are about 50 such earth shelters in Switzerland, including a residential estate of nine earth shelters (Lättenstrasse in Dietikon). Possibly the best known examples of modern earth sheltering in the English-speaking world are Earthships, the brand of passive solar earth shelters sold by Earthship Biotecture. Earthships are concentrated in New Mexico, USA, but are also found, less commonly, throughout the world. In other areas, such as the UK, earth sheltering remains uncommon.
Overall earth shelter construction is often viewed by architects, engineers, and the public as an unconventional method of building. Techniques of earth sheltering have not become common knowledge, and much of society is unaware of this type of building construction. Generally speaking, the cost of excavation, increased need for damp-proofing and the requirement for the structure to withstand greater weight relative to above grade houses means that earth sheltering remains relatively rare. In this respect, the Passive House (PassivHaus) energy performance standard applied to above grade airtight, superinsulated low carbon or zero carbon buildings has had much wider uptake in modern times. Over 20,000 buildings certified to PassivHaus standards have been constructed across Northern Europe. Some postulate that over time the reducing availability of building space, and the increasing need and interest for environmentally friendly housing will make earth shelters more common.
Three main types of earth shelter are described below. There is also great variation in the approach to earth sheltering in terms of materials used and expenditure. The "low tech" approach might involve natural building techniques, wooden posts and shed style roofs, recycling of materials, owner labor, hand excavation, etc. The "high tech" approach tends to be larger and uses concrete and steel. While typically more energy efficient once built, the high tech approach has higher embodied energy and significantly higher costs.
In the earth bermed (also termed "bunded") type, earth is banked against the exterior walls, sloping down away from the building. The berm can be partial or total. The polar facing wall may be bermed, leaving the equator-facing wall un-bermed (in temperate regions). Usually this type of earth shelter is built on, or only slightly below, the original grade. Because the building sits above the original ground level, fewer moisture problems are associated with earth berming than with underground/fully-recessed construction, and it costs less to construct. According to one report, earth berming provided 90-95% of the energy advantage of a completely below grade structure.
The in-hill (also termed "earth covered", or "elevational") construction is where the earth shelter is set into a slope or hillside, and earth covers the roof in addition to the walls. The most practical application is using a hill facing towards the equator (south in the Northern Hemisphere and north in the Southern Hemisphere), towards the aphelion (north) in the Tropics, or east just outside the Tropics. There is only one exposed wall in this type of earth sheltering, the wall facing out of the hill, all other walls are embedded within the earth/hill. This is the most popular and energy efficient form of earth shelter in cold and temperate climates.
The true underground (also termed "chambered" or "subterranean") earth shelter describes a house where the ground is excavated, and the house is set in below grade. They can feature an atrium or courtyard constructed in the middle of the shelter to provide adequate light and ventilation. The atrium is not always fully enclosed by raised ground, sometimes a U-shaped atrium is used, which is open on one side.
With an atrium earth shelter, the living spaces tend to be located around the atrium. The atrium arrangement provides a much less compact plan than that of the one or two-story bermed/in hill design; therefore it is commonly less energy efficient, in terms of heating needs. Therefore, atrium designs are found mainly in warmer climates. However, the atrium does tend to trap air within it which is then heated by the sun and helps reduce heat loss. Atrium designs are well suited to flat sites, and are fairly common.
Depending on what definition of earth sheltering is used, other types are sometimes included. In culvert homes ("Cut and Cover"), precast concrete containers and large diameter pipes are arranged into a connecting design to form a living space and then backfilled with earth. A project in Japan called Alice City will use a wide and deep cylindrical shaft sunk into the earth, with a domed skylight roof. The project will involve some residential areas. Constructed Caves are formed by tunnelling into the earth. Earth sheltering can be used for structures other than residential homes, such as greenhouses, schools, commercial centres, government buildings and other public buildings.
Active and passive solar
Earth sheltering is often combined with solar heating systems. Most commonly, passive solar design techniques are used in earth shelters. In most of the Northern Hemisphere, a south-facing structure with the north, east, and west sides covered with earth is the most effective arrangement for passive solar systems. A large double or triple glazed window spanning most of the length of the south wall is critical for solar heat gain. It is helpful to accompany the window with insulated drapes to protect against heat loss at night. During the summer months, an overhang or other shading device is used to block excess solar gain.
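As a rough illustration of sizing such a shading overhang, the sketch below uses simple solar-noon geometry; the latitude and window height are invented example values, not figures from the text.

```python
import math

def overhang_depth_m(window_height_m, latitude_deg):
    """Depth of a fixed overhang at the window head that just shades the whole
    window at solar noon on the summer solstice (simple geometry only)."""
    noon_altitude_deg = 90.0 - latitude_deg + 23.45    # solar altitude at noon
    return window_height_m / math.tan(math.radians(noon_altitude_deg))

# Example: a 2 m tall south-facing window at 45 degrees north latitude
print(round(overhang_depth_m(2.0, 45.0), 2))           # about 0.79 m
```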
Passive annual heat storage
Passive annual heat storage is a building concept theorized to create a year-round constant temperature in an earth shelter by means of direct gain passive solar heating and a thermal battery effect lasting several months. It is claimed that an earth shelter designed according to these principles would store the sun's heat in the summer and release it slowly over the winter months without need for other forms of heating. This method was first described by inventor and physicist John Hait in his 1983 book. The main component is an insulated and waterproof "umbrella" which extends out from the earth shelter for several meters in all directions, hence the term "umbrella house". The earth under this umbrella is kept warm and dry relative to the surrounding earth, which is subject to constant daily and seasonal temperature changes. This creates a large heat storage area of earth, effectively a huge thermal mass. Heat is gained via passive solar in the earth shelter and transferred to the surrounding earth by conduction. Thus, when the temperature in the earth shelter dips below the temperature in the surrounding earth, heat returns to the earth shelter. After a time, a stable temperature is reached which is an average of annual heat changes in the external environment. Some criticize the technique (along with earth sheltering as a whole), citing concerns including the difficulty and expense of construction, moisture, and lack of evidence.
Annualized geo solar
Earth tube ventilation
Passive cooling in which air is drawn, by a fan or by convection, through buried earth cooling tubes held at a nearly constant temperature and then into the living space. This also provides fresh air to occupants and the air exchange required by ASHRAE.
Comparison with standard housing
Claimed benefits of Oehler's style of earth-sheltered homes.
Three main factors influence overall cost of home construction, namely, design complexity, materials used, and whether the owner(s) carries out some or all of the construction or pays others to do it. Custom houses with complex designs tend to be more expensive and take longer to build than stock houses. Houses which use expensive materials will be more expensive than houses which use low cost materials. Owner labor can dramatically cut construction costs.
Both earth sheltered projects and construction of regular houses have significant variability in the design, materials and labor involved. As such it is difficult to make a general comparison of cost between the two. For example, a small "underground home" built in the style of Oehler would tend to be significantly cheaper than a regular house since this approach emphasized the owner(s) doing much of the excavation and work themselves and using recycled materials, e.g. for windows. So earth sheltered houses can be cheaper, with some claiming up to 30% less costs, but they can also be more expensive. A custom project with a complex design from a hired architect, with expensive materials and features, and constructed by a specialist contractor may be significantly more expensive than a regular house.
A particular factor that strongly influences the cost of an earth shelter is the amount of earth that covers it. The more earth covering the structure, the greater the expense needed for a structure capable of withstanding the load (see also: Roof). Another important cost factor that tends to be unique to earth shelters is site excavation and backfilling. Waterproofing is also more costly. On the other hand, earth shelters should have lower maintenance costs since they are mostly covered, with little exposed exterior.
Passive heating and cooling
Due to its density, compacted earth acts as thermal mass, meaning that it stores heat and releases it again slowly. Compacted soil is more a conductor of heat than an insulator: reported R-values are only around 0.08 to 0.25 per inch (roughly 0.03 to 0.1 per centimetre). Variations in the R-value of soil may be attributed to different soil moisture levels, with lower R-values as moisture increases. The most superficial layer of earth is typically less dense and contains the root systems of many plants, so it acts more like thermal insulation, reducing the rate at which heat flows through it.
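To illustrate why soil behaves as thermal mass rather than insulation, here is a quick calculation using the upper per-inch figure quoted above; the berm thickness is an invented example value.

```python
# Effective R-value of an earth berm compared with a typical insulated wall.
r_per_inch_soil = 0.25        # upper R-value per inch quoted for compacted soil
berm_thickness_in = 24        # assume roughly 24 in (about 60 cm) of earth
r_berm = r_per_inch_soil * berm_thickness_in
print(r_berm)                 # R-6: well below a typical R-13+ insulated stud wall
```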
Approximately 50% of the heat from the Sun is absorbed at the surface. Consequently, the temperature at the surface may vary considerably according to the day / night cycle, according to weather and particularly according to season. Underground, these temperature changes are blunted and delayed, termed thermal lag. The thermal properties of earth therefore mean that in Winter the temperature below the surface will be higher than the surface air temperature, and conversely in Summer the earth temperature will be lower than the surface air temperature.
Indeed, at a deep enough point underground, the temperature remains constant year round, and this temperature is approximately the mean of Summer and Winter temperatures. Sources vary in their stated values for the depth at which this constant temperature is reached. Reported values include 5–6 m (16–20 ft), 6 m (20 ft), 15 m (49 ft), 4.25 m (13.9 ft) for dry soil, and 6.7 m (22 ft) for wet soil. Below this level the temperature increases on average 2.6 °C (4.68 °F) every 100 m (330 ft) due to heat rising from the interior of the Earth.
Diurnal temperature changes between maximum and minimum temperatures can be modelled as a wave, as can seasonal temperature changes. In architecture, the relationship between the maximum fluctuation of external temperature and that of internal temperature is termed amplitude dampening (or temperature amplitude factor). Phase shifting is the time taken for the minimum external temperature to reach the interior.
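The amplitude dampening and phase shifting described above are commonly approximated with a one-dimensional periodic heat-conduction model; the sketch below uses that standard damped-wave form, with illustrative values for mean temperature, surface swing, and soil thermal diffusivity that are assumptions rather than figures from the text.

```python
import math

def ground_temp_c(depth_m, day_of_year, t_mean=10.0, surface_amplitude=12.0,
                  diffusivity_m2_per_day=0.05, coldest_surface_day=35):
    """Approximate soil temperature at a given depth using the classic damped,
    phase-shifted annual wave: T = Tmean - A * exp(-z/d) * cos(w*(t - t0) - z/d).
    All numeric defaults are illustrative assumptions."""
    omega = 2.0 * math.pi / 365.0
    d = math.sqrt(2.0 * diffusivity_m2_per_day / omega)                  # damping depth (m)
    dampening = math.exp(-depth_m / d)                                   # amplitude dampening
    phase = omega * (day_of_year - coldest_surface_day) - depth_m / d    # phase shifting
    return t_mean - surface_amplitude * dampening * math.cos(phase)

# On the coldest surface day, the ground a few metres down is still near the annual mean:
print(round(ground_temp_c(0.0, 35), 1))   # surface: about -2.0 C
print(round(ground_temp_c(4.0, 35), 1))   # 4 m deep: about +10.2 C
```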
Partially covering a building with earth adds to the thermal mass of the structure. Combined with insulation, this results in both amplitude dampening and phase shifting. In other words, earth sheltered structures receive a degree of cooling in Summer and heating in Winter. This reduces the need for other measures of heating and cooling, saving energy. A potential disadvantage of a thermally massive building in cooler climates is that after a prolonged period of cold, when the external temperature increases again, the structure's internal temperature tends to lag behind and take longer to warm up (assuming no other form of heating).
A further advantage is the higher air humidity of 50 to 70% compared to overheated rooms of conventional houses in winter. Furthermore, as earth houses are impermeable, they can be considered ideal for controlled air conditioning.
Wind and earthquake protection
The unique architecture of earth houses protects them against severe windstorms. They cannot be torn away or tipped over by strong winds. Structural engineering and, above all, the lack of corners and exposed parts (roof), eliminate vulnerable surfaces which would otherwise suffer from storm damage. Furthermore, earth houses benefit from improved stability due to the more natural shapes of arches.
Landscape protection and land use
Another benefit of underground sheltering is the efficient use of land. Many houses can sit below grade without spoiling the habitat above ground. Each site can contain both a house and a lawn/garden.
Compared to conventional buildings, earth houses blend into their surroundings. The soil-covered roofs help incorporate the building into the environment, protect the natural scenery, and contribute to the oxygen-nitrogen balance of the soil, which would otherwise be covered by the foundation of a traditional house, inhibiting nitrogen fixation and aeration of the soil. Unlike conventional roofs, earth-house roofs restore usable surface area to the environment. They can also be built as terraced structures if the slope is appropriate, thus using far less land area, because the structure can be built right up to the property boundary. Owing to the condensed means of construction, more green space remains available. Furthermore, earth-house structures can easily be built into hilly terrain, as opposed to conventional houses, which would require flat land.
Compared to buildings made of other materials, such as wood, earth houses offer efficient fire protection owing both to the use of concrete and to the insulation provided by the roof. Taking the example of Earthships, there is a reported case where the structure survived a fire better than other types of building.
The roof is covered using excavated material, which allows useful plants to be grown on it. Because the roof collects and retains most of the rainwater, rivers are relieved of sudden, large volumes of runoff.
Earth houses can be built using wide glass façades and dome-lights, allowing rooms to become bright and suffused with light. Dome-lights provide natural light for bathrooms and secondary rooms.
Structural resiliency and survivability
Due to the mass of the earth between the living area of an earth house and the surface grade, an earth home offers significant protection from impact/blast damage, or fallout associated with a nuclear bomb.
The advantages of earth sheltering include use of the earth as a thermal mass, extra protection from the natural elements, energy savings, substantial privacy in comparison to more conventional homes, efficient use of land in urban settings, low maintenance requirements, and the ability to take advantage of passive solar building design.
The reduction of air infiltration within an earth shelter can be highly beneficial. Because three walls of the structure are mainly surrounded by earth, very little surface area is exposed to the outside air. This alleviates the problem of warm air escaping the house through gaps around windows and doors. Furthermore, the earth walls protect against cold winter winds which might otherwise penetrate these gaps. However, this can also create a potential indoor air quality problem; healthy air circulation is key.
As a result of the increased thermal mass of the structure, the thermal lag of the earth, the protection against unwanted air infiltration and the combined use of passive solar techniques, the need for extra heating and cooling is minimal. Therefore, there is a drastic reduction in energy consumption required for the home compared to homes of typical construction.
Earth shelters may provide privacy from neighbors, as well as soundproofing. The ground provides acoustic protection against outside noise. This can be a major benefit in urban areas or near highways.
Overall, it is more technically challenging to design an earth shelter than a regular home. Because of the unorthodox design and construction of earth-sheltered homes, local building codes and ordinances may need to be researched and/or navigated. Many construction companies have limited or no experience with earth-sheltered construction, potentially compromising the physical construction of even the best designs. The specific architecture of earth houses usually leads to non-rectilinear, rounded walls, which can cause problems for interior decoration, especially regarding furniture and large paintings. However, these problems can be anticipated during the conceptual design of an earth house.
In Green building, four "lifetime" phases of a building are described, namely material sources, construction, in use, and deconstruction (life-cycle assessment). Terms carbon zero and negative carbon buildings refer to the net greenhouse gas emissions over these four phases. Questions therefore arise as to whether certain structures are truly environmentally friendly. For example, raw materials must be extracted from the earth, transported and then manufactured into building materials and transported again to be sold and finally transported to the build site. A lot of fossil fuels may be used during each of these stages.
Earth sheltering often requires heavier construction materials to resist the weight of the earth against the walls and/or roof. Reinforced concrete in particular tends to be used in larger quantities. The manufacture of the cement in concrete tends to release a lot of greenhouse gases.
The materials involved tend to be non-biodegradable. Because the materials must keep water out, they are often made of plastics. Concrete is another material used in great quantity. More sustainable products are being tested to replace the cement within concrete (such as fly ash), as well as alternatives to reinforced concrete. The excavation of a site is also drastically time- and labor-consuming. Over its whole life, however, the impact of construction may be comparable to conventional building, because an earth shelter requires minimal finishing and significantly less maintenance.
Moisture and indoor air quality
Problems of water seepage, internal condensation, bad acoustics, and poor indoor air quality can occur if an earth shelter has not been properly designed and ventilated. Very high humidity levels can allow mold or mildew growth, associated with a musty smell and potentially with health problems. The below-ground orientation of many earth-sheltered homes can allow accumulation of radon gas (which is known to increase the risk of lung cancer) or other undesirable materials (e.g. off gassing from construction materials).
The threat of water seepage occurs around areas where the waterproofing layers have been penetrated. Earth usually settles gradually. Vents and ducts emerging from the roof can cause specific problems due to the possibility of movement. Precast concrete slabs can have a deflection of 1/2 inch or more when the earth/soil is layered on top of them. If the vents or ducts are held rigidly in place during this deflection, the result is usually the failure of the waterproofing layer. To avoid this difficulty, vents can be placed on other sides of the building (besides the roof), or separate segments of pipes can be installed. A narrower pipe in the roof that fits snugly into a larger segment of the building can also be used. The threat of water seepage, condensation, and poor indoor air quality can all be overcome with proper waterproofing and ventilation.
Condensation and poor quality indoor air problems can be solved by using earth tubes, or what is known as a geothermal heat pump—a concept different from earth sheltering. With modification, the idea of earth tubes can be used for underground buildings: instead of looping the earth tubes, leave one end open downslope to draw in fresh air using the chimney effect by having exhaust vents placed high in the underground building.
Limited natural light
Despite large windows (usually facing south in the Northern Hemisphere), many earth-sheltered homes have dark areas in the areas opposite the windows. All natural light coming from one side of the home can give a "tunnel or cave effect". This may be alleviated by strategic use of skylights, solar tubes, or artificial light sources.
Risk of collapse
Reports of collapse seem to be rare. In one case, an author and proponent of earth sheltering died when an earth roof he designed collapsed on him.
Limited escape routes
Compared to above ground houses, earth shelters may have limited escape routes in case of emergency, which may fail to meet egress and fenestration requirements in building codes. An example would be a passive solar earth shelter with only one exposed side and earth covering the other three walls and the roof.
Design and construction
Earth sheltered homes are often constructed with energy conservation and savings in mind. Specific designs of earth shelters allow for maximum savings. For bermed or in-hill construction, a common plan is to place all the living spaces on the side of the house facing the equator (or north or east, depending on latitude; see "Topography"). This provides maximum solar radiation to bedrooms, living rooms, and kitchen spaces. Rooms that do not require natural daylight and extensive heating such as the bathroom, storage, and utility room are typically located on the opposite (or in-hill) side of the shelter. This type of layout can also be transposed to a double level house design with both levels completely underground. This plan has the highest energy efficiency of earth sheltered homes because of the compact configuration as well as the structure being submerged deeper in the earth. This provides it with a greater ratio of earth cover to an exposed wall than a one-story shelter would.
The soil type is one of the essential factors during site planning. The soil needs to provide adequate bearing capacity and drainage, and help to retain heat. With respect to drainage, the most suitable type of soil for earth sheltering is a mixture of sand and gravel. Well graded gravels have a large bearing capacity (about 8,000 pounds per square foot), excellent drainage and a low frost heave potential. Sand and clay can be susceptible to erosion. Clay soils, while least susceptible to erosion, often do not allow for proper drainage, and have a higher potential for frost heaves. Clay soils are also more susceptible to thermal shrinking and expanding. Being aware of the moisture content of the soil and the fluctuation of that content throughout the year will help prevent potential heating problems. Frost heaves can also be problematic in some soils. Fine grain soils retain moisture best and are most susceptible to heaving. Ways to protect against the capillary action responsible for frost heaves include placing foundations below the freezing zone or insulating the ground surface around shallow footings, replacing frost-sensitive soils with granular material, and interrupting the capillary draw of moisture by putting a drainage layer of coarser material in the existing soil.
Water can cause potential damage to earth shelters if it ponds around the shelter. Avoiding sites with a high water table is crucial. Drainage, both surface and subsurface, must be properly dealt with. Waterproofing applied to the building is essential.
Atrium designs have an increased risk of flooding, so the surrounding land should slope away from the structure on all sides. A drain pipe at the perimeter of the roof edge can help collect and remove additional water. For bermed homes, an interceptor drain at the crest of the berm along the edge of the rooftop is recommended. An interceptor drainage swale in the middle of the berm is also helpful or the back of the berm can be terraced with retaining walls. On sloping sites runoff may cause problems. A drainage swale or gully can be built to divert water around the house, or a gravel-filled trench with a drain tile can be installed along with footing drains.
Soil stability should also be considered, especially when evaluating a sloping site. These slopes may be inherently stable when left alone, but cutting into them can greatly compromise their structural stability. Retaining walls and backfills may have to be constructed to hold up the slope prior to shelter construction.
On land that is relatively flat, a fully recessed house with an open courtyard is the most appropriate design. On a sloping site, the house is set right into the hill. The slope will determine the location of the window wall; the most practical orientation in moderate to cold climates is a south-facing exposed wall in the Northern hemisphere (and north-facing in the Southern hemisphere) due to solar benefits. The most practical orientation in the Tropics nearest the equator is north-facing toward the aphelion (or perhaps northeast) to moderate the temperature extremes. Just outside the Tropics, the most practical way to avoid afternoon heat excess may be an east-facing house or, if near a west coast, exposure of the east end and the west end, with the two long sides embedded in the earth.
Depending on the region and site selected for earth-sheltered construction, the benefits and objectives of the earth shelter construction vary. For cool and temperate climates, objectives consist of retaining winter heat, avoiding infiltration, receiving winter sun, using thermal mass, shading and ventilating during the summer, and avoiding winter winds and cold pockets. For hot, arid climates objectives include maximizing humidity, providing summer shade, maximizing summer air movement, and retaining winter heat. For hot, humid climates objectives include avoiding summer humidity, providing summer ventilation, and retaining winter heat.
Regions with extreme daily and seasonal temperatures emphasize the value of earth as a thermal mass. In this way, earth sheltering is most effective in regions with high cooling and heating needs and high-temperature differentials. In regions such as the southeastern United States, earth sheltering may need additional care in maintenance and construction due to condensation problems in regard to the high humidity. The ground temperature of the region may be too high to permit earth cooling if temperatures fluctuate only slightly from day to night. Preferably, there should be adequate winter solar radiation and sufficient means for natural ventilation. Wind is a critical aspect to evaluate during site planning, for reasons regarding wind chill and heat loss, as well as ventilation of the shelter. In the Northern Hemisphere, south facing slopes tend to avoid cold winter winds typically blown in from the north. Fully recessed shelters also offer adequate protection against these harsh winds. However, atria within the structure have the ability to cause minor turbulence depending on the size. In the summer, it is helpful to take advantage of the prevailing winds. Because of the limited window arrangement in most earth shelters, and the resistance to air infiltration, the air within a structure can become stagnant if proper ventilation is not provided. By making use of the wind, natural ventilation can occur without the use of fans or other active systems. Knowing the direction, and intensity, of seasonal winds, is vital in promoting cross ventilation. Vents are commonly placed in the roof of bermed or fully recessed shelters to achieve this effect.
In earth-sheltered construction, there is often extensive excavation done on the building site. An excavation several feet larger than the walls' planned perimeter is made to allow for access to the outside of the wall for waterproofing and insulation.
Once the site is prepared and the utility lines installed, a foundation of reinforced concrete is poured. The walls are then installed. Usually, they are either poured in place or formed either on or off-site and then moved into place. Reinforced concrete is the most common choice. The process is repeated for the roof structure. If the walls, floor, and roof are all to be poured in place, it is possible to make them with a single pour. This can reduce the likelihood of there being cracks or leaks at the joints where the concrete has cured at different times. The foundation of the buildings designed by Vetsch are built conventionally.
Several different methods of external (load-bearing) wall construction in earth shelters have been used successfully. These include concrete block (either conventionally mortared or surface-bonded), stone masonry, cordwood masonry, poured concrete, and pressure-treated wood. Earthships classically use rammed earth tire walls, which are labor-intensive but recycle used tires. Mike Oehler described a very low budget method he termed "post, shoring and polyethylene" (PSP). This involves buried posts shored up with planks, with a waterproofing barrier of polyethylene sheet between the planks and the backfill.
Reinforced concrete is the most commonly used structural material in earth shelter construction. It is strong and readily available. Untreated wood rots within five years of use in earth shelter construction. Steel can be used but needs to be encased by concrete to keep it from direct contact with the soil which corrodes the metal. Bricks and CMUs (concrete masonry units) are also possible options in earth shelter construction but must be reinforced to keep them from shifting under vertical pressure unless the building is constructed with arches and vaults.
Unfortunately, reinforced concrete is not the most environmentally sustainable material. The concrete industry is working to develop products that are more earth-friendly in response to consumer demands. Products like Grancrete and Hycrete are becoming more readily available. They claim to be environmentally friendly and either reduce or eliminate the need for additional waterproofing. However, these are new products and have not been extensively used in earth shelter construction yet.
Some unconventional approaches are also proposed. One such method is a PSP method proposed by Mike Oehler. The PSP method uses wooden posts, plastic sheeting and non-conventional ideas that allow more windows and ventilation. This design also reduces some runoff problems associated with conventional designs. The method uses wood posts, a frame that acts like a rib to distribute settling forces, specific construction methods which rely on fewer pieces of heavy equipment, plastic sheeting, and earth floors with plastic and carpeting.
The roof of an earth shelter may not be covered by earth (earth berm only), or the roof may support a green roof with only a minimal thickness of earth. Alternatively a larger mass of earth might cover the roof. Such roofs must deal with significantly greater dead load and live load (e.g. increased weight of water in the earth after rain, or snow). This requires stronger and more substantial roof support structure. Some advise to have just enough thickness of earth on the roof to maintain a green roof (approximately 6 inches / 15 cm), since this means less load on the structure. Increasing the amount of earth on the roof past this gives only modest increases in the benefits while increasing costs significantly.
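As a rough check of the extra load implied by earth on the roof, the short calculation below combines the roughly 15 cm green-roof depth mentioned above with an assumed moist-soil density; the density is an illustrative assumption.

```python
# Dead load added by an earth cover on the roof, per square metre of roof area.
depth_m = 0.15               # about the 6 in / 15 cm green-roof cover mentioned above
density_kg_per_m3 = 1700.0   # assumed moist soil density
g = 9.81                     # gravitational acceleration, m/s^2
load_kpa = depth_m * density_kg_per_m3 * g / 1000.0
print(round(load_kpa, 2))    # about 2.5 kPa, before saturated soil or snow is added
```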
Despite being underground, drainage of water is still important. Therefore, earth shelters do not tend to have flat roofs. A flat roof is also less resistant to the weight of the earth. It is common for earth shelter designs to have arches and shallow domed roofs since this form resists vertical load well. One method uses finely meshed metal bent into the intended shape and welded to the supporting armature. Onto this mesh concrete is sprayed forming a roof. Terra-Dome (USA) is a company specializing in construction of earth-sheltered houses and sells a modular system of concrete domes intended to be covered by earth. Others advise the use of timber framed, gable roofs of pitch at least 1:12 to promote drainage. The roofs of Earthships tend to be mono-pitched, classically using vigas.
On the outside of the concrete, a waterproofing system is applied. The most frequently used waterproofing system includes a layer of liquid asphalt onto which a heavy grade waterproof membrane is affixed, followed by a final liquid water sealant which may be sprayed on. It is very important to make sure that all of the seams are carefully sealed, as it is very difficult to locate and repair leaks in the waterproofing system after the building is completed. Several layers are used for waterproofing in earth shelter construction. The first layer is meant to seal any cracks or pores in the structural materials, and also works as an adhesive for the waterproof membrane. The membrane layer is often a thick flexible sheet of EPDM, a synthetic rubber. EPDM is the material usually used in water garden, pond and swimming pool construction. This material also prevents roots from burrowing through the waterproofing. EPDM is very heavy to work with and can be chewed through by some common insects such as fire ants. It is also made from petrochemicals, making it less than perfectly environmentally friendly.
There are various cementitious coatings that can be used as waterproofing. The product is sprayed directly onto the unprotected surface. It dries and acts like a huge ceramic layer between the wall and earth. The challenge with this method is, if the wall or foundation shifts in any way, it cracks and water is able to penetrate through it easily.
Bituthene (a registered trade name) is very similar to the three-coat layering process, but applied in one step. It comes already layered in sheets and has a self-adhesive backing. The challenge with this is the same as with the manual layering method; in addition, it is sun sensitive and must be covered very soon after application.
Eco-Flex is an environmentally friendly waterproofing membrane that seems to work very well on foundations, but not much is known about its effectiveness in earth sheltering. It is among a group of liquid paint-on waterproofing products. The main challenges with these are they must be carefully applied, making sure that every area is covered to the right thickness, and that every crack or gap is tightly sealed.
Bentonite clay is the alternative that is closest to optimum on the environmental scale. It is naturally occurring and self-healing. The drawback to this system is that it is very heavy and difficult for the owner/builder to install and subject to termite damage.
Bi-membranes have been used extensively throughout Australia, where two membranes are paired together. Typically two coats of water-based epoxy act as a 'sealer' and stop the internal vapor pressure of the moist concrete from blowing bubbles of vapor up underneath the membrane when it is exposed to hot sun. The bond strength of epoxy to concrete is stronger than the internal bond strength of the concrete, so the membranes won't 'blow' off the wall in the sun. Epoxies are very brittle, so they are paired with an overcoat of a high-build flexible water-based acrylic membrane in multiple coats of different colors to ensure film coverage; this is reinforced with non-woven polypropylene textile in corners and changes in direction.
One or more layers of insulation board or foam are added on the outside of the waterproofing. If the insulation chosen is porous, a top layer of waterproofing is added. Unlike the conventional building, earth shelters require the insulation on the exterior of the building rather than inside the wall. One reason for this is that it provides protection for the waterproof membrane against freeze damage, another is that the earth shelter is able to better retain its desired temperature. There are two types of insulation used in earth shelter construction. The first is close-celled extruded polystyrene sheets. Two to three inches glued to the outside of the waterproofing is generally sufficient. The second type of insulation is a spray on foam (e.g. polyurethane solid foam insulation). This works very well where the shape of the structure is unconventional, rounded or difficult to get to. Foam insulation requires an additional protective top coat such as foil or fleece filter to help it resist water penetration.
In some low budget earth shelters, insulation may not be applied to the walls. These methods rely on the U factor, or thermal heat storage capacity, of the earth itself below the frost layer. Such designs are the exception, however, and risk frost heave damage in colder climates. The theory behind no-insulation designs relies on using the thermal mass of the earth to store heat, rather than relying on the heavy masonry or concrete inner structures that exist in a typical passive solar house. Cold temperatures may extend down into the earth above the frost line, making insulation necessary for higher efficiencies.
After the previous construction stages are complete, earth is backfilled against the external walls to create the berm. Depending on its drainage characteristics, the excavated earth may not be suitable to place in direct contact with the external wall. Some advise that topsoil and turf (sod) be set aside from the initial excavation and used for the grass roof and as the topmost layer on the berm.
In the earth houses designed by Vetsch, interior walls are furnished using loam rendering which provides superior humidity compensation. The loam rendering is finally coated with lime-white cement paint.
- Lättenstrasse estate ("Earth Homes") in Dietikon, by Peter Vetsch.
- Underhill, Holme, West Yorkshire. The first modern earth sheltered building in the UK.
- Hockerton Housing Project, a community of 5 homes in Nottinghamshire, England.
- "The Burrow" in Canterbury, UK designed by Patrick Kennedy-Sanigar.
- There are 2 earthships in the UK, at Fife, Scotland and the Earthship Brighton in England.
- "The Underground House" in Great Ormside, Cumbria. Two storey earth shelter built in a disused quarry.
- Malator, Pembrokeshire. Built for former Labour MP Bob Marshall-Andrews in 1998.
- Bill Gates' house, on the shore of Lake Washington (Medina, Washington, USA). This is a well-known example of an earth-sheltered home.
- Forestiere Underground Gardens in Fresno, California.
- Hôtel Sididriss in Matmata (Tunisia)
- Cave house in Louresse-Rochemenier (France)
- Earth sheltered rest area along Interstate 77 in Ohio, USA
- Cosanti—site of "Earth House" designed by architect Paolo Soleri
- Earl Young (architect)—works commonly referred to as gnome homes, mushroom houses, or Hobbit houses
What is ORP?
ORP stands for oxidation-reduction potential, which is a measure, in millivolts, of the tendency of a chemical substance to oxidize or reduce another chemical substance.
Oxidation is the loss of electrons by an atom, molecule, or ion. It may or may not be accompanied by the addition of oxygen, which is the origin of the term. Familiar examples are iron rusting and wood burning.
When a substance has been oxidized, its oxidation state increases. Many substances can exist in a number of oxidation states. A good example is sulfur, which can exhibit oxidation states of -2 (H2S), 0 (S), +4 (SO2), and +6 (SO4-2). Substances with multiple oxidation states can be sequentially oxidized from one oxidation state to the next higher. Adjacent oxidation states of a particular substance are referred to as redox couples. In the case below, the redox couple is Fe+2/Fe:

Fe = Fe+2 + 2e-
The chemical equation shown above is called the half-reaction for the oxidation, because, as will be seen, the electrons lost by the iron atom cannot exist in solution and have to be accepted by another substance in solution. So the complete reaction involving the oxidation of iron will have to include another substance, which will be reduced. The oxidation reaction shown for iron is, therefore, only half of the total reaction that takes place.
Reduction is the net gain of electrons by an atom, molecule, or ion.
When a chemical substance is reduced, its oxidation state is lowered. As was the case with oxidation, substances that can exhibit multiple oxidation states can also be sequentially reduced from one oxidation state to the next lower oxidation state.
The chemical equation shown below is the half-reaction for the reduction of chlorine:

Cl2 + 2e- = 2Cl-
The redox couple in the above case is Cl2/Cl- (chlorine/chloride).
Oxidation reactions are always accompanied by reduction reactions. The electrons lost in oxidation must have another substance as a destination, and the electrons gained in reduction reactions have to come from a source.
When two half-reactions are combined to give the overall reaction, the electrons lost in the oxidation reaction must equal the electrons gained in the reduction reaction. Combining the two half-reactions above gives:

Fe + Cl2 = Fe+2 + 2Cl-
In the reaction above, iron (Fe) reduces chlorine (Cl2) and is called a reductant or reducing agent.
Conversely, chlorine (Cl2) oxidizes iron (Fe) and is called an oxidant or oxidizing agent.
Oxidizing and Reducing Agents
How easily a substance is oxidized or reduced is given by the standard potential of its redox couple, symbolized by E°. The standard potentials of a large number of redox couples are tabulated in reference books, along with their half-reactions. All are referenced to the redox couple for hydrogen ion/hydrogen (H+/H2), which is assigned a standard potential of 0 millivolts. The standard potential refers to the half-reaction written as a reduction; the negative of the tabulated standard potential gives the standard potential for the oxidation half-reaction. For example, for the two couples used above (approximate values):

Fe+2 + 2e- = Fe      E° = -440 mV
Cl2 + 2e- = 2Cl-     E° = +1,360 mV
ORP in Solutions
The standard potential for a half reaction is based on the assumption that the concentrations of all the chemical substances shown in the half reaction are at 1 molar concentration. In a process, however, the concentrations can vary independently of one another. So, to arrive at the ORP of a particular solution, it is necessary to use the Nernst equation to calculate the ORP for each case.
Nernst Equation for ORP
The ORP of a general half-reaction can be written in terms of molar concentrations as follows. For the half-reaction

aA + bB + cC + … + ne- = xX + yY + zZ + …

the Nernst equation at 25 °C gives the potential as

E = E° - (59.16 mV / n) × log( [X]^x [Y]^y [Z]^z … / [A]^a [B]^b [C]^c … )

Hypochlorous acid (chlorine in water) provides a useful example. For the half-reaction

HOCl + H+ + 2e- = Cl- + H2O

the Nernst equation (taking the activity of water as 1) is

E = E° - (59.16 mV / 2) × log( [Cl-] / ( [HOCl] [H+] ) )
Examining the hypochlorous acid/chloride equation shows some important properties of ORP:
1. The ORP depends upon the concentrations of all the substances in the half-reaction (except water). Therefore, the ORP of hypochlorous acid depends as much on chloride ion (Cl-) and pH (H+) as it does on hypochlorous acid.
2. The ORP is a function of the logarithm of the concentration ratio.
3. The coefficient that multiplies this logarithm of concentration is equal to -59.16 mV divided by the number of electrons in the half-reaction (n). In this case, n = 2; therefore, the coefficient is -29.58 mV. A 10-fold change in the concentration of Cl-, HOCl, or H+ will only change the ORP by ±29.58 mV (see the sketch after this list).
4. There is no specific temperature dependence shown. Temperature can affect an ORP reaction in a variety of ways, so no general ORP temperature behavior can be characterized, as is the case with pH. Therefore, ORP measurements are almost never temperature compensated.
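To make properties 1–3 concrete, here is a minimal sketch in Python of the hypochlorous acid Nernst calculation above; the standard potential used is an approximate literature figure and an assumption, not a value taken from this text.

```python
import math

def orp_hocl(hocl_molar, chloride_molar, pH, e0_volts=1.48):
    """ORP (volts) of a hypochlorous acid solution from the Nernst equation
    for HOCl + H+ + 2e- = Cl- + H2O, treating the activity of water as 1.
    e0_volts is an approximate literature value for the standard potential."""
    n = 2                      # electrons transferred in the half-reaction
    slope = 0.05916 / n        # 59.16 mV per decade, divided by n
    h_molar = 10.0 ** (-pH)    # hydrogen ion concentration from pH
    log_ratio = math.log10(chloride_molar / (hocl_molar * h_molar))
    return e0_volts - slope * log_ratio

# A 10-fold change in any one concentration shifts the ORP by about 29.6 mV:
print(orp_hocl(1e-4, 1e-3, 7.0))   # baseline
print(orp_hocl(1e-3, 1e-3, 7.0))   # 10x more HOCl   -> about +29.6 mV
print(orp_hocl(1e-4, 1e-3, 6.0))   # one pH unit lower -> about +29.6 mV
```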
When checking the influence of an individual substance in the half reaction, the Nernst equation can be partitioned into individual logarithms for each substance, and the contribution of that substance calculated over its expected concentration range.
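As a brief illustration of that partitioning, the sketch below reports the per-substance ORP contribution over hypothetical concentration ranges; the ranges are invented for the example, not taken from the text.

```python
import math

SLOPE_MV = 59.16 / 2   # mV per decade for the two-electron HOCl half-reaction

# Hypothetical expected concentration ranges (mol/L) for each substance
expected_ranges = {
    "HOCl": (1e-5, 1e-3),
    "Cl-":  (1e-4, 1e-2),
    "H+":   (10.0 ** -8, 10.0 ** -6),   # i.e. pH 8 down to pH 6
}

for substance, (low, high) in expected_ranges.items():
    # ORP swing this substance alone can cause across its expected range
    swing_mv = SLOPE_MV * math.log10(high / low)
    print(f"{substance}: up to {swing_mv:.1f} mV")
```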
Measurement of ORP
An ORP sensor consists of an ORP electrode and a reference electrode, in much the same fashion as a pH measurement.
The principle behind the ORP measurement is the use of an inert metal electrode (platinum, sometimes gold), which, due to its low resistance, will give up electrons to an oxidant or accept electrons from a reductant. The ORP electrode will continue to accept or give up electrons until it develops a potential, due to the build-up of charge, which is equal to the ORP of the solution. The typical accuracy of an ORP measurement is ±5 mV.
Sometimes the exchange of electrons between the ORP electrode and certain chemical substances is hampered by a low rate of electron exchange (exchange current density). In these cases, ORP may respond more strongly to a second redox couple in the solution (like dissolved oxygen). This leads to measurement errors, and it is recommended that new ORP applications be checked out in the laboratory before going on-line.
The reference electrode used for ORP measurements is typically the same silver-silver chloride electrode used with pH measurements. In contrast with pH measurements, some offset in the reference is tolerable in ORP since, as will be seen, the mV changes measured in most ORP applications are large. In certain specific applications (for example, bleach production), an ORP sensor may use a silver billet as a reference, or even a pH electrode.
Due to its dependence upon the concentrations of multiple chemical substances, the application of ORP for many users has been a puzzling and often frustrating experience. When considering ORP for a particular application, it is necessary to know the half-reaction involved and the concentration range of all the substances appearing in the half-reaction. It is also necessary to use the Nernst equation to get an idea of the expected ORP behavior.
Concentration Measurement with ORP
ORP is often applied to a concentration measurement (chlorine in water for example) without a clear understanding of all the factors involved. When the equation for the ORP of a hypochlorous acid solution (in the previous section) is considered, the problems associated with a concentration measurement can be outlined:
1. The ORP depends upon chloride ion (Cl-) and pH (H+) as much as it does hypochlorous acid (chlorine in water). Any change in the chloride concentration or pH will affect the ORP. Therefore, to measure chlorine accurately, chloride ion and pH must be measured to a high accuracy or carefully controlled to constant values.
2. To calculate the hypochlorous acid concentration from the measured millivolts, the measured millivolts will appear as the exponent of 10. The typical accuracy of an ORP measurement is ±5 mV. This error alone will result in the calculated hypochlorous acid concentration being off by more than ±30% (see the worked figures after this list). Any drift in the reference electrode or the ORP analyzer will only add to this error.
3. Any change in the ORP with temperature is not compensated, further increasing the error in the derived concentration.
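To see where the ±30% figure comes from: with a slope of 29.58 mV per decade (n = 2), a ±5 mV measurement error corresponds to a concentration factor of

$$10^{\pm 5/29.58} = 10^{\pm 0.169} \approx 1.48 \ \text{or} \ 0.68,$$

that is, roughly +48% or -32% in the inferred hypochlorous acid concentration.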
In general, ORP is not a good technique to apply to concentration measurements. Virtually all ORP half-reactions involve more than one substance, and the vast majority have pH dependence. The logarithmic dependence of ORP on concentration multiplies any errors in the measured millivolts.
Applications that use ORP for monitoring and controlling oxidation-reduction reactions include cyanide destruction, dechlorination, chromate reduction, hypochlorite bleach production, and chlorine and chlorine dioxide scrubber monitoring using bisulfite.
Concentration measurement with ORP, as was seen, is problematic, but ORP can be used in some cases for leak detection, sensing the presence of an oxidant or reductant.
Finally, ORP is measured, in some instances, for the control of biological growth. The principle behind these applications is that a minimum ORP value will successfully destroy microorganisms. This approach has been used in the chlorination of swimming pools and cooling towers. It should be noted that both of these applications also include pH control.
The oxidation-reduction potential of a solution is a measure of the oxidizing or reducing power of the solution. Every oxidation or reduction can be characterized by a half-reaction, which gives all of the chemical substances participating in the reaction. The ORP of the solution depends upon the logarithm of the concentrations of the substances participating in the half-reaction. The ORP can be calculated using the Nernst equation. ORP is not a good method for measuring concentration due to its logarithmic dependence on concentration and its dependence on multiple solution components. The best use of an ORP measurement is in monitoring and controlling oxidation-reduction reactions.
Angle of Incidence and Reflection
The law of reflection states that the incident ray, the reflected ray, and the normal to the surface of the mirror all lie in the same plane, and that the angle of reflection equals the angle of incidence. The normal is drawn at 90° to the surface of the mirror, and the angle of incidence is measured from it. In a ray diagram, rays of light are drawn from the object to the mirror. The magnification, m, is defined as the ratio of the image height to the object height, and it is equal in magnitude to the ratio of the image distance to the object distance.
The angles of incidence and reflection are measured from a normal to the plane of the mirror as shown in Figure 1.
- The Law of Reflection
- All Things Equal
- Angle of Incidence
Reflection from a Diffuse Surface (Figure 2)
Some surfaces seem quite smooth; for example, a sheet of paper. However, we do not see reflections from it as we do with a plane mirror.
At the microscopic scale the law of reflection is obeyed, but the surface is irregular, which means the incident rays of light are reflected in many directions and the information contained in the light does not reach the eye in the correct order.
This is known as diffuse reflection.
Rotation of a Plane Mirror (Figure 3)
Sign Conventions
What does a positive or negative image height or image distance mean? To figure out what the signs mean, take the side of the mirror where the object is to be the positive side.
Any distances measured on that side are positive. Distances measured on the other side are negative. When the image distance is positive, the image is on the same side of the mirror as the object, and it is real and inverted. When the image distance is negative, the image is behind the mirror, so the image is virtual and upright. A negative m means that the image is inverted.
A positive m means an upright image.
Steps for Analyzing Mirror Problems
There are basically three steps to follow to analyze any mirror problem, which generally means determining where the image of an object is located and what kind of image it is (real or virtual, upright or inverted).
Step 1 - Draw a ray diagram. The more careful you are in constructing this, the better idea you'll have of where the image is. Step 2 - Apply the mirror equation to determine the image distance, or to find the object distance or the focal length, depending on what is given.
Step 3 - Make sure steps 1 and 2 are consistent with each other.
An Example
A Star Wars action figure, 8. Where is the image? How tall is the image? What are the characteristics of the image? The first step is to draw the ray diagram, which should tell you that the image is real, inverted, smaller than the object, and between the focal point and the center of curvature.
The location of the image can be found from the mirror equation. The image distance is positive, meaning that it is on the same side of the mirror as the object. This agrees with the ray diagram. Note that we don't need to worry about converting distances to meters; just make sure everything has the same units, and whatever unit goes into the equation is what comes out.
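A minimal sketch, with assumed object distance, focal length, and object height (not the figures from the original examples), of how the mirror equation and magnification are applied; it covers both the concave mirror here and the convex mirror of Example 2 below.

```python
def mirror_image(object_distance, focal_length, object_height):
    """Return (image_distance, magnification, image_height).

    Sign convention as described above: distances on the object's side of the
    mirror are positive; a negative image distance means a virtual image behind
    the mirror; a negative magnification means an inverted image.
    """
    image_distance = 1.0 / (1.0 / focal_length - 1.0 / object_distance)
    magnification = -image_distance / object_distance
    return image_distance, magnification, magnification * object_height

# Concave mirror (positive focal length): real, inverted, smaller image
print(mirror_image(object_distance=30.0, focal_length=10.0, object_height=8.0))
# Convex mirror (negative focal length): virtual, upright, smaller image
print(mirror_image(object_distance=30.0, focal_length=-10.0, object_height=8.0))
```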
Calculating the magnification and solving for the image height gives a negative value; the negative sign for the magnification, and for the image height, tells us that the image is inverted compared to the object. To summarize, the image is real, inverted, 6.
Example 2 - A Convex Mirror
The same Star Wars action figure, 8. Where is the image in this case, and what are the image characteristics? Again, the first step is to draw a ray diagram.
This should tell you that the image is located behind the mirror; that it is an upright, virtual image; that it is a little smaller than the object; and that the image is between the mirror and the focal point. The second step is to confirm all those observations.
The mirror equation, rearranged as in the first example, gives the image distance; solving for the magnification then gives an image height of 0. All of these results are consistent with the conclusions drawn from the ray diagram. The image is 5.
Refraction
When we talk about the speed of light, we're usually talking about the speed of light in a vacuum, which is 3.00 × 10^8 m/s.
When light travels through something else, such as glass, diamond, or plastic, it travels at a different speed.
The Law of Reflection
Light is known to behave in a very predictable manner. If a ray of light could be observed approaching and reflecting off of a flat mirror, then the behavior of the light as it reflects would follow a predictable law known as the law of reflection.
The diagram below illustrates the law of reflection. In the diagram, the ray of light approaching the mirror is known as the incident ray (labeled I in the diagram).
The ray of light that leaves the mirror is known as the reflected ray (labeled R in the diagram). At the point of incidence where the ray strikes the mirror, a line can be drawn perpendicular to the surface of the mirror. This line is known as a normal line (labeled N in the diagram). The normal line divides the angle between the incident ray and the reflected ray into two equal angles. The angle between the incident ray and the normal is known as the angle of incidence.
The angle between the reflected ray and the normal is known as the angle of reflection. These two angles are labeled with the Greek letter "theta" accompanied by a subscript, read as "theta-i" for angle of incidence and "theta-r" for angle of reflection.
Reflection from a Plane Mirror
The law of reflection states that when a ray of light reflects off a surface, the angle of incidence is equal to the angle of reflection.
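A small numerical check of this law in vector form (this formulation is not from the original lesson; the mirror orientation and the 30° angle are assumed): the reflected direction is r = d - 2(d · n)n for a unit normal n, and the computed angles of incidence and reflection come out equal.

```python
import math

def reflect(d, n):
    """Reflect direction d off a surface with unit normal n (2D)."""
    dot = d[0] * n[0] + d[1] * n[1]
    return (d[0] - 2 * dot * n[0], d[1] - 2 * dot * n[1])

def angle_to_normal(v, n):
    """Angle (degrees) between vector v and the normal n."""
    dot = abs(v[0] * n[0] + v[1] * n[1])
    return math.degrees(math.acos(dot / math.hypot(*v)))

normal = (0.0, 1.0)  # mirror lying along the x-axis
incident = (math.sin(math.radians(30)), -math.cos(math.radians(30)))  # 30° from the normal
reflected = reflect(incident, normal)
print(angle_to_normal(incident, normal), angle_to_normal(reflected, normal))  # both ≈ 30
```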
Reflection and the Locating of Images
It is common to observe this law at work in a Physics lab such as the one described in the previous part of Lesson 1. To view an image of a pencil in a mirror, you must sight along a line at the image location.
As you sight at the image, light travels to your eye along the path shown in the diagram below. The diagram shows that the light reflects off the mirror in such a manner that the angle of incidence is equal to the angle of reflection.
Physical Science Unit 1: How and Why Do We Science?
- To develop your understanding of the nature of science as it pertains to the physical world.
- To understand and describe, qualitatively and quantitatively, the nature of matter and energy and apply your understanding to natural phenomena you observe
- To experience the engineering design process and demonstrate the interrelationship with science.
- To investigate and understand the interactions between humans and the Earth.
Course Essential Questions
- How might I use scientific inquiry to investigate the natural world?
- How can I use my experience in science to learn to think and communicate clearly, logically, and critically in preparation for college and a career?
- How can I best assess my own learning and progress?
- How can I use technology in my learning and become a better digital citizen?
- How can I think more divergently, creatively, and innovatively as a scientist?
- How are a system’s characteristics, form, and function attributed to the quantity, type, and nature of its components?
Unit Essential Questions
- How are the basic concepts, skills, and understandings in science related to one another?
- In what ways is measurement used to describe the patterns in the natural world?
- In what ways can data be used to visualize, display, and share new information?
- Scientific measurement; SI system and why it is used - mass, length, volume, time
- Precision and accuracy in measurement
- Calculations are the justification for your results
- Measurements and observations are analyzed using mathematical processes to discover connections and trends.
Learning Targets: Skills and Measurement
Students will be able to....
- Measure and record length (meters and centimeters), volume (milliliters), and mass (grams).
- Identify testable questions and explain how each would be falsifiable
- Explain how to choose the best type of graph (bar, line, scatter plot) for a given set of data.
- Choose a type of graph that best communicates results and correctly construct the graph.
- Define independent variable and dependent variable and explain their relationship.
- Construct a graph (bar, line, scatter plot) for a set of data on paper or using a computer app (a minimal sketch follows this list).
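A minimal sketch of the computer-app option, using Python's matplotlib; the measurements and variable names below are made up for illustration only.

```python
import matplotlib.pyplot as plt

# Independent variable on the x-axis, dependent variable on the y-axis
mass_g = [10, 20, 30, 40, 50]              # independent variable: mass of sample (g)
volume_ml = [3.7, 7.5, 11.0, 14.9, 18.4]   # dependent variable: measured volume (mL)

plt.scatter(mass_g, volume_ml)
plt.xlabel("Mass (g)")
plt.ylabel("Volume (mL)")
plt.title("Volume vs. Mass")
plt.show()
```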
Learning Targets: Essential Question
Students will be able to.....
- State the essential question, and
- Explain how each topic of study is foundational to or otherwise related to our essential question.
Academic Vocabulary: Bricks
Academic Vocabulary: Mortar
- SI system
- significant figures (measurement)
- testable question
- particle model
- thermal energy
- pure substance
- list; state
- law (in science)
The landslide hazard causes severe loss of life, injury, damage to property, destruction of communication networks and loss of precious soil and land. Although the occurrence of landslides is declining all over the world due to greater scientific understanding and public awareness, in many areas the mounting pressure of population at the base of slopes, canyons and unstable borders of plateaus has led to an increase in dangers due to landslides. Landslides are universal phenomena, but more than being ‘natural hazards’, they are induced by human activity.
M.A. Carson and M.J. Kirkby (1972) divided hill slopes into (i) weathering-limited slopes and (ii) transport-limited slopes. In the former case, rock disintegrates in situ, whereas, in the latter case, slopes are covered by thick soil or disintegrated rock materials, known as regolith. Due to the presence of regolith, transport-limited slopes experience frequent landslides.
The term, ‘landslide’ encompasses falling, toppling, sliding, flowing and subsidence of soil and rock materials under the strong influence of gravity and other factors. Some geomorphologists thus prefer to use the term mass movement instead of landslides. The resultant landforms produced by mass movements are termed mass wasting. Mass movement occurs when the slope gradient exceeds its threshold angle of stability.
Factors Responsible for Landslides:
Slope instability may be caused by removal of lateral or underlying support, mainly by river erosion and road cuts, landfill dumping, faulting, tectonic movement or the creation of artificial slopes by constructional activities.
Weathering involves rock disintegration, causing weakening of soil and decreased resistance to shearing. A significant cause of landslides is increased water infiltration, which causes saturation of the soil. It may be due to ploughing or poor organisation of drainage on a sloping area that has undergone modification due to deforestation and urbanisation. Pore water pressure is increased by soil saturation, which results in a positive force on the slope.
Landslides due to slumping may occur where settlements are built on filled-up land that suffers from poor compaction or engineering. In forests, timber harvesting may negatively affect slope stability. Tractors, in general, cause immense damage, as runoff follows the wheelings.
Apart from the above-mentioned forces, the causes of slope failure may be distinguished as (i) immediate causes such as vibrations, earthquake tremors, heavy precipitation and freezing and thawing; and (ii) long-term causes such as the slow and progressive steepening of the slope.
R.U. Cooke and J.C. Doornkamp (1974) suggested a few factors that contribute to landslides.
(i) Factors leading to accelerated shear stress:
a. Surcharge i.e., loading of the crest of slopes with an additional load;
b. Undermining of slope;
c. Lateral pressure exerted on cracks due to factors like freezing.
(ii) Factors that cause reduced shear strength:
a. Characteristic of some soil particles like clay to swell and shrink alternatively in wet and dry periods;
b. Rock structure such as faults, joints, bedding etc.;
c. Pore-pressure effects;
d. Drying and desiccation;
e. Loss of capillary action;
f. Crumbling soil structure that leads to reduced cohesion in soil.
According to Cooke and Doornkamp, the process of movement which follows planes is called shear. Applied forces are called stresses. Slope failure takes place as a result of shear stresses operating along straight or curved shear planes.
Strain is the deformation caused by movement. If it is the result of shear stresses it is called shear strain. The amount of resistance offered by the slope to movement is measured by the strength of the slope. The component of this which is directed against shear stresses is termed the shear strength.
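This balance between driving stress and resisting strength is often summarised, as a standard geotechnical convention rather than a formula from Cooke and Doornkamp, by a factor of safety:

$$F = \frac{\text{shear strength}}{\text{shear stress}},$$

with slope failure expected when F falls below 1.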
Types of Landslides:
Landslides are extremely complicated and varied phenomena. They differ in terms of sliding, flowing, creeping, toppling or speed of movement so markedly that it is extremely difficult to combine all these diagnostic phenomena into a standard taxonomy.
Classifications of landslides have been attempted by T.H. Nilsen (1979), R.J. Blong (1973), A.J. Nemcock (1972), A.W. Skempton and J.N. Hutchinson (1964), and D.J. Varnes (1978).
The scheme advanced by Varnes has received widest acceptance:
1. Rotational slide:
It is a classic form of landslide. Some cases produce multiple regressive phenomena, as continued instability causes new head scarps to develop progressively up the slope.
2. Translational slide:
It involves relatively flat, planar movement following the surface. This type of movement is found along bedding planes in sedimentary or metamorphic rocks dipping in the direction of the slope.
3. Roto-translational slide:
It is a complex type in which a combination of slip along a circular arc and a flat plane is found.
4. Soil-slab failure:
In this case, a slab of saturated regolith is converted into a thick liquid, so the speed of the landslide can accelerate to as high as 10 m/sec.
5. Debris slide or avalanche:
It occurs in surface deposits of granular materials. The surface of rupture is almost parallel to the inclination of bedrock.
6. Debris flow:
It occurs when debris is saturated with water. When rigid solid also falls along with the sliding mass, the phenomenon is called plug flow.
7. Rockfall:
These take place through the air; for example, jointed, weathered rock falls from vertical cliffs.
8. Toppling:
After detachment from cliffs, the outward rotation of angular blocks and rock columns causes toppling.
9. Mudflow:
It contains 20 to 80 per cent fine sediments saturated with water. Friction is caused by viscous movement that generates enough power to carry even large boulders.
10. Soil creep:
It is the least destructive of landslide phenomena. Creep is slow and superficial.
P.E. Kent (1966) proposed a hypothesis based on fluidisation of rock mass. He said that accumulated stress within rock particles causes compression of air in the pore spaces. This results in a fast-moving stream of debris. A. Heim (1932) held elasto-mechanical collisions responsible for landslides. His emphasis was on the exchange of stresses between solid particles rather than fluids.
Methods to Minimise Damage:
R.U. Cooke (1984) and W.J. Kochelman (1986) have proposed some methods for reducing the landslide hazard.
1. Avoiding the hazard:
One way is to avoid landslides by controlling the location, timing and nature of development.
The measures include:
i. Bypassing unstable areas; putting restrictions on land use;
ii. Mapping of hazard-prone areas and land use zoning;
iii. Acquiring and restructuring of public property;
iv. Spreading social awareness among people;
v. Disclosing the nature of hazard to prospective property buyers;
vi. Promoting insurance against hazard;
vii. Giving financial assistance such as loans, tax credits, etc., to promote the reduction of the hazard.
2. Reducing shear stress:
One could reduce shear stress in the following ways:
i. Limit or reduce angles of slope, cut and fill;
ii. Limit or reduce unit lengths of slope;
iii. Remove unstable material.
3. Reducing shear stress and augmenting shear resistance:
This could be achieved through an improved drainage system which involves
i. Improving surface drainage that covers terrace drains and other drains;
ii. Improving subsurface drainage;
iii. Controlling unsustainable agriculture.
4. Increasing shear resistance:
This would be through
i. Retaining structures such as cribs or building retaining walls;
ii. Adoption of engineering methods by piling, tie-rods, anchors etc.;
iii. Building hard surface e.g., concrete surface;
iv. Controlling fill compaction.
Talmud and Midrash, commentative and interpretative writings that hold a place in the Jewish religious tradition second only to the Bible (Old Testament).
Definition of terms
The Hebrew term Talmud (“study” or “learning”) commonly refers to a compilation of ancient teachings regarded as sacred and normative by Jews from the time it was compiled until modern times and still so regarded by traditional religious Jews. In its broadest sense, the Talmud is a set of books consisting of the Mishna (“repeated study”), the Gemara (“completion”), and certain auxiliary materials. The Mishna is a collection of originally oral laws supplementing scriptural laws. The Gemara is a collection of commentaries on and elaborations of the Mishna, which in “the Talmud” is reproduced in juxtaposition to the Gemara. For present-day scholarship, however, Talmud in the precise sense refers only to the materials customarily called Gemara—an Aramaic term prevalent in medieval rabbinic literature that was used by the church censor to replace the term Talmud within the Talmudic discourse in the Basel edition of the Talmud, published 1578–81. This practice continued in all later editions.
The term Midrash (“exposition” or “investigation”; plural, Midrashim) is also used in two senses. On the one hand, it refers to a mode of biblical interpretation prominent in the Talmudic literature; on the other, it refers to a separate body of commentaries on Scripture using this interpretative mode.
Opposition to the Talmud
Despite the central place of the Talmud in traditional Jewish life and thought, significant Jewish groups and individuals have opposed it vigorously. The Karaite sect in Babylonia, beginning in the 8th century, repudiated the oral tradition and denounced the Talmud as a rabbinic fabrication. Medieval Jewish mystics declared the Talmud a mere shell covering the concealed meaning of the written Torah, and heretical messianic sects in the 17th and 18th centuries totally rejected it. The decisive blow to Talmudic authority came in the 18th and 19th centuries when the Haskala (the Jewish Enlightenment movement) and its aftermath, Reform Judaism, secularized Jewish life and, in doing so, shattered the Talmudic wall that had surrounded the Jews. Thereafter, modernized Jews usually rejected the Talmud as a medieval anachronism, denouncing it as legalistic, casuistic, devitalized, and unspiritual.
There is also a long-standing anti-Talmudic tradition among Christians. The Talmud was frequently attacked by the church, particularly during the Middle Ages, and accused of falsifying biblical meaning, thus preventing Jews from becoming Christians. The church held that the Talmud contained blasphemous remarks against Jesus and Christianity and that it preached moral and social bias toward non-Jews. On numerous occasions the Talmud was publicly burned, and permanent Talmudic censorship was established.
On the other hand, since the Renaissance there has been a positive response and great interest in rabbinic literature by eminent non-Jewish scholars, writers, and thinkers in the West. As a result, rabbinic ideas, images, and lore, embodied in the Talmud, have permeated Western thought and culture.
Content, style, and form
The Talmud is first and foremost a legal compilation. At the same time it contains materials that encompass virtually the entire scope of subject matter explored in antiquity. Included are topics as diverse as agriculture, architecture, astrology, astronomy, dream interpretation, ethics, fables, folklore, geography, history, legend, magic, mathematics, medicine, metaphysics, natural sciences, proverbs, theology, and theosophy.
This encyclopaedic array is presented in a unique dialectic style that faithfully reflects the spirit of free give-and-take prevalent in the Talmudic academies, where study was focussed upon a Talmudic text. All present participated in an effort to exhaust the meaning and ramifications of the text, debating and arguing together. The mention of a name, situation, or idea often led to the introduction of a story or legend that lightened the mood of a complex argument and carried discussion further.
This text-centred approach profoundly affected the thinking and literary style of the rabbis. Study became synonymous with active interpretation rather than with passive absorption. Thinking was stimulated by textual examination. Even original ideas were expressed in the form of textual interpretations.
The subject matter of the oral Torah is classified according to its content into Halakha and Haggada and according to its literary form into Midrash and Mishna. Halakha (“law”) deals with the legal, ritual, and doctrinal parts of Scripture, showing how the laws of the written Torah should be applied in life. Haggada (“narrative”) expounds on the nonlegal parts of Scripture, illustrating biblical narrative, supplementing its stories, and exploring its ideas. The term Midrash denotes the exegetical method by which the oral tradition interprets and elaborates scriptural text. It refers also to the large collections of Halakhic and Haggadic materials that take the form of a running commentary on the Bible and that were deduced from Scripture by this exegetical method. In short, it also refers to a body of writings. Mishna is the comprehensive compendium that presents the legal content of the oral tradition independently of scriptural text.
Modes of interpretation and thought
Midrash was initially a philological method of interpreting the literal meaning of biblical texts. In time it developed into a sophisticated interpretive system that reconciled apparent biblical contradictions, established the scriptural basis of new laws, and enriched biblical content with new meaning. Midrashic creativity reached its peak in the schools of Rabbi Ishmael and Akiba, where two different hermeneutic methods were applied. The first was primarily logically oriented, making inferences based upon similarity of content and analogy. The second rested largely upon textual scrutiny, assuming that words and letters that seem superfluous teach something not openly stated in the text.
The Talmud (i.e., the Gemara) quotes abundantly from all Midrashic collections and concurrently uses all rules employed by both the logical and textual schools; moreover, the Talmud’s interpretation of Mishna is itself an adaptation of the Midrashic method. The Talmud treats the Mishna in the same way that Midrash treats Scripture. Contradictions are explained through reinterpretation. New problems are solved logically by analogy or textually by careful scrutiny of verbal superfluity.
The strong involvement with hermeneutic exegesis—interpretation according to systematic rules or principles—helped develop the analytic skill and inductive reasoning of the rabbis but inhibited the growth of independent abstract thinking. Bound to a text, they never attempted to formulate their ideas into the type of unified system characteristic of Greek philosophy. Unlike the philosophers, they approached the abstract only by way of the concrete. Events or texts stimulated them to form concepts. These concepts were not defined but, once brought to life, continued to grow and change meaning with usage and in different contexts. This process of conceptual development has been described by some as “organic thinking.” Others use this term in a wider sense, pointing out that, although rabbinic concepts are not hierarchically ordered, they have a pattern-like organic coherence. The meaning of each concept is dependent upon the total pattern of concepts, for the idea content of each grows richer as it interweaves with the others.
Ezra the scribe, who, according to the Book of Ezra, reestablished and reformed the Jewish religion in the 5th century bce, began the “search in the Law . . . to teach in Israel statutes and ordinances.”
His work was continued by soferim (scribes), who preserved, taught, and interpreted the Bible. They linked the oral tradition to Scripture, transmitting it as a running commentary on the Bible. For almost 300 years they applied the Torah to changing circumstances, making it a living law. They also introduced numerous laws that were designated “words of the soferim” by Talmudic sources. By the end of this period, rabbinic Judaism—the religious system constructed by the scribes and rabbis—was strong enough to withstand pressure from without and mature enough to permit internal diversity of opinion.
At the beginning of the 2nd century bce, a judicial body headed by the zugot—pairs of scholars—assumed Halakhic authority. There were five pairs in all, between c. 150 and 30 bce. The first of the zugot also introduced the Mishnaic style of transmitting the oral tradition.
The making of the Mishna: 2nd–3rd centuries
Hillel and Shammai, the last of the zugot, ushered in the period of the tannaim—“teachers” of the Mishna—at the end of the 1st century bce. This era, distinguished by a continuous attempt to consolidate the fragmentary Midrashic and Mishnaic material, culminated in the compilation of the Mishna at the beginning of the 3rd century ce. The work was carried out in the academies of Hillel and Shammai and in others founded later. Most scholars believe that Halakhic collections existed prior to the fall of Jerusalem, in 70 ce. Other compilations were made at Yavne, a Palestinian town near the Mediterranean, as part of the effort to revitalize Judaism after the disaster of 70 ce. By the beginning of the 2nd century there were many such collections. Tradition has it that Rabbi Akiba organized much of this material into separate collections of Midrash, Mishna, and Haggada and introduced the formal divisions in tannaitic literature. His students and other scholars organized new compilations that were studied in the different academies.
After the rebellion of the Jews against Roman rule led by Simeon bar Kokhba in 132–135, when the Sanhedrin (the Jewish supreme court and highest academy) was revived, the Mishnaic compilation adopted by the Sanhedrin president became the official Mishna. The Sanhedrin reached its highest stature under the leadership of Judah ha-Nasi (Judah the Prince, or President); he was also called Rabbi, as the preeminent teacher.
It seems certain that the official Mishna studied during his presidency was the Mishna we know and that he was its editor. Judah aimed to include the entire content of the oral tradition. He drew heavily from the collections of Akiba’s pupils but also incorporated material from other compilations, including early ones. Nevertheless, the accumulation was such that selection was necessary. Thus almost no Midrash or Haggada was included. Colleagues and pupils of Judah not only made minor additions to the Mishna but tried to preserve the excluded material, the Baraitot (“Exclusions”), in separate collections. One of these was the Tosefta (“Addition”). Midrashic material was gathered in separate compilations, and later revisions of some of these are still extant. The language of all of the tannaitic literature is the new Hebrew developed during the period of the Second Temple (c. 6th century bce–1st century ce).
The making of the Talmuds: 3rd–6th century
The expounders of the Mishna were the amoraim (“interpreters”), and the two Talmuds—the Palestinian (or Jerusalem) and the Babylonian—consist of their explanations, discussions, and decisions. Both take the form of a running commentary on the Mishna.
The foundations for these two monumental works were begun by three disciples of Judah ha-Nasi: Joḥanan bar Nappaḥa, Rav (Abba Arika), and Samuel bar Abba, in their academies at Tiberias, in Palestine, and at Sura and Nehardea in Babylonia, respectively. Centres of learning where the Mishna was expounded existed also at Sepphoris, Caesarea, and Lydda in Palestine. In time new academies were established in Babylonia, the best known being those at Pumbedita, Mahoza and Naresh, founded by Judah bar Ezekiel, Rava, and Rav Pappa, respectively. The enrollment of these centres often numbered in the thousands, and students spent many years there. Those who no longer lived on the academy grounds returned twice annually for the kalla, a month of study in the spring and fall.
Academies differed in their methods of study. Pumbedita, for example, stressed casuistry, while Sura emphasized breadth of knowledge. Students often moved from one academy to another and even from Palestine to Babylonia or from Babylonia to Palestine. This kept open the channels of communication between the various academies and resulted in the inclusion of much Babylonian material in the Palestinian Talmud, and vice versa.
Despite the overwhelming similarity of the two Talmuds, however, they do differ in some ways. The Palestinian Talmud is written in the Western Aramaic dialect, the Babylonian in the Eastern. The former is invariably shorter, and, not having been subject to final redaction, its discussions are often incomplete. Its explanations tend to remain closer to the literal meaning of the Mishna, preferring textual emendation to casuistic interpretation. Finally, some of the legal concepts in the Babylonian Talmud reflect the influence of Persian law, for Babylonia was under Persian rule at the time.
The main endeavour of the amoraim was to thoroughly explain and exhaust the meaning of the Mishna and the Baraitot. Apparent contradictions were reconciled by such means as explaining that conflicting statements referred to different situations or by asserting that they stemmed from the Mishnayot (Mishnas) of different tannaim. The same techniques were used when amoraic statements contradicted the Mishna. These discussions took place for hundreds of years, and their content was passed on from generation to generation, until the compilation of the Talmud.
The portion of the Palestinian Talmud dealing with the three Bavot (“gates”)—i.e., the first three tractates of the fourth order of the Mishna (for orders and tractates, see Talmudic and Midrashic literature, below)—was compiled in Caesarea in the middle of the 4th century and is distinguished from the rest by its brevity and terminology. The remainder was completed in Tiberias some 50 years later. It seems likely that its compilation was a rescue operation designed to preserve as much of the Halakhic material collected in Palestinian academies as possible, for by that time the deterioration of the political situation had forced most Palestinian scholars to emigrate to Babylonia.
The Babylonian Talmud was compiled up to the 6th century. Some scholars suggest that the organization of the Talmud began early and that successive generations of amoraim added layer upon layer to previously arranged material. Others suggest that at the beginning a stratum called Gemara, consisting only of Halakhic decisions or short comments, was set forth. Still others theorize that no overall arrangement of Talmudic material was made until the end of the 4th century.
The statement in the tractate Bava metzia that “Rabina and Rav Ashi were the end of instruction” is most often understood as referring to the final redaction of the Talmud. Since at least two generations of scholars following Rav Ashi (died 427) are mentioned in the Talmud, most scholars suggest that “Rabina” refers to Rabina bar Huna (died 499) and that the redaction was a slow process lasting about 75 years to the end of the 5th century.
According to the tradition of the geonim—the heads of the academies at Sura and Pumbedita from the 6th to the 11th centuries—the Babylonian Talmud was completed by the 6th-century savoraim (“expositors”). But the extent of their contribution is not precisely known. Some attribute to them only short additions. Others credit them with creating the terminology linking the phases of Talmudic discussions. According to another view, they added comments and often decided between conflicting opinions. The proponents of the so-called Gemara theory noted above ascribe to them the entire dialectic portion of Talmudic discourse.
Talmudic and Midrashic literature
The Mishna is divided into six orders (sedarim), each order into tractates (massekhtot), and each tractate into chapters (peraqim). The six orders are Zeraʿim, Moʿed, Nashim, Neziqin, Qodashim, and Ṭohorot.
1. Zeraʿim (“Seeds”) consists of 11 tractates: Berakhot, Pea, Demai, Kilayim, Sheviʿit, Terumot, Maʿaserot, Maʿaser sheni, Ḥalla, ʿOrla, and Bikkurim. Except for Berakhot (“Blessings”), which treats of daily prayers and grace, this order deals with laws related to agriculture in Palestine. It includes prohibitions against mixtures in plants (hybridization), legislation relating to the sabbatical year (when land lies fallow and debts are remitted), and regulations concerning the portions of harvest given to the poor, the Levites, and the priests.
2. Moʿed (“Season” or “Festival”) consists of 12 tractates: Shabbat, ʿEruvin, Pesaḥim, Sheqalim, Yoma, Sukka, Betza, Rosh Hashana, Taʿanit, Megilla, Moʿed qaṭan, and Ḥagiga. This order deals with ceremonies, rituals, observances, and prohibitions relating to special days of the year, including the Sabbath, holidays, and fast days. Since the half-shekel Temple contribution was collected on specified days, tractate Sheqalim, regarding this practice, is included.
3. Nashim (“Women”) consists of seven tractates: Yevamot, Ketubbot, Nedarim, Nazir, Soṭa, Giṭṭin, and Qiddushin. This order deals with laws concerning betrothal, marriage, sexual and financial relations between husband and wife, adultery, and divorce. Since Nazirite (ascetic) and other vows may affect marital relations, Nedarim (“Vows”) and Nazir (“Nazirite”) are included here.
4. Neziqin (“Damages”) consists of 10 tractates, the first three of which were originally considered one (the Bavot): Bava qamma, Bava metzia, Bava batra, Sanhedrin, Makkot, Shevuʿot, ʿEduyyot, ʿAvoda zara, Avot, and Horayot. This order deals with civil and criminal law concerning damages, theft, labour relations, usury, real estate, partnerships, tenant relations, inheritance, court composition, jurisdiction and testimony, erroneous decisions of the Sanhedrin, and capital and other physical punishments. Since idolatry, in the literal sense of worship or veneration of material images, is punishable by death, ʿAvoda zara (“Idolatry”) is included. Avot (“Fathers”), commonly called “Ethics of the Fathers” in English, seems to have been included to teach a moral way of life that precludes the transgression of law.
5. Qodashim (“Sacred Things”) consists of 11 tractates: Zevaḥim, Menaḥot, Ḥullin, Bekhorot, ʿArakhin, Temura, Keretot, Meʿila, Tamid, Middot, and Qinnim. This order incorporates some of the oldest Mishnaic portions. It treats of the Temple and includes regulations concerning sacrifices, offerings, and donations. It also contains a detailed description of the Temple complex.
6. Ṭohorot (“Purifications”) consists of 12 tractates: Kelim, Ohalot, Negaʿim, Para, Ṭohorot, Miqwaʾot, Nidda, Makhshirin, Zavim, Ṭevul yom, Yadayim, and ʿUqtzin. This order deals with laws governing the ritual impurity of vessels, dwellings, foods, and persons, and with purification processes.
The Tosefta (“Addition”) closely resembles the Mishna in content and order. In its present form it at times supplements the Mishna, at other times comments on it, and often also opposes it. There is no Tosefta on the tractates Avot, Tamid, Middot, and Qinnim. The Talmud quotes from many other collections of Mishnayot and Baraitot: some are attributed to tannaim and predate the established Mishna; others, to amoraim. The original material is lost.
Although the entire Mishna was studied at the Palestinian and Babylonian academies, the Palestinian Talmud (Gemara) covers only the first four orders (except chapters 21–24 of Shabbat and chapter 3 of Makkot) and the first three chapters of Nidda in the sixth order. Most scholars agree that the Palestinian Talmud was never completed to the fifth and sixth orders of the Mishna and that the missing parts of the other orders were lost. A manuscript of chapter 3 of Makkot was, in fact, found and was published in 1946.
The Babylonian Talmud does not cover orders Zeraʿim (except Berakhot) and Ṭohorot (except Nidda) and tractates Tamid (except chapters 1, 2, and 4), Sheqalim, Middot, Qinnim, Avot, and ʿEduyyot. Scholars concur that the Talmud for these parts was never completed, possibly because their content was not relevant in Babylonia.
Halakhic Midrashim are exegetic commentaries on the legal content of Exodus, Leviticus, Numbers, and Deuteronomy. The five extant collections are Mekhilta, on Exodus; Mekhilta deRabbi Shimʿon ben Yoḥai, on Exodus; Sifra, on Leviticus; Sifre, on Numbers and Deuteronomy; Sifre zuṭa, on Numbers. (Mekhilta means “measure,” a norm or rule; Sifra, plural Sifre, means “writing” or “book.”) Critical analysis reveals that Mekhilta and Sifre on Numbers differ from the others in terminology and method. Most scholars agree that these two originated in the school of Ishmael and the others in that of Akiba. In their present form they also include later additions. Mention should also be made of Midrash tannaim on Deuteronomy, consisting of fragments recovered from the Yemenite anthology Midrash ha-gadol.
Haggadic Midrashim originated with the weekly synagogue readings and their accompanying explanations. Although Haggadic collections existed in tannaitic times, extant collections date from the 4th–11th centuries. Midrashic compilations were not authoritatively edited and tend to be coincidental and fragmentary.
Most notable among biblical collections is Midrash rabba (“Great Midrash”), a composite of commentaries on the Pentateuch and five Megillot (Song of Songs, Ruth, Ecclesiastes, Esther, Lamentations) differing in nature and age. Its oldest portion, the 5th-century Genesis rabba, is largely a verse-by-verse commentary, while the 6th-century Leviticus rabba consists of homilies and Lamentations rabba (end of 6th century) is mainly narrative. The remaining portions of Midrash rabba were compiled at later dates.
The Tanḥuma (after the late-4th-century Palestinian amora Tanḥuma bar Abba), of which two versions are extant, is another important Pentateuchal Midrash. Additional Midrashic compilations include those to the books of Samuel, Psalms, and Proverbs. Mention should also be made of Pesiqta (“Section” or “Cycles”) deRab Kahana (after a Babylonian amora) and Pesiqta rabbati (“The Great Cycle”), consisting of homilies on the Torah (Pentateuch) readings that occur on festivals and special Sabbaths.
Haggadic compilations independent of biblical text include Avot deRabbi Natan, Tanna deve Eliyyahu, Pirqe (“Chapters”) deRabbi Eliezer, and tractates Derekh eretz (“Correct Conduct”). These primarily deal with ethics, moral teachings, and biblical narrative.
Among the medieval anthologies are the Yalquṭ (“Compilation”) Shimoni (13th century), Yalquṭ ha-makhiri (14th century), and ʿEn Yaʿaqov (“Eye of Jacob,” 16th century). The two most important modern Haggadic anthologies are those of Wilhelm Bacher and Louis Ginzberg.
The Talmud’s dialectic style and organization are not those of a code of laws. Accordingly, codification efforts began shortly after the Talmud’s completion. The first known attempt was Halakhot pesuqot (“Decided Laws”), ascribed to Yehudai Gaon (8th century). Halakhot gedolot (“Great Laws”), by Simeon Kiyyara, followed 100 years later. Both summarize Talmudic Halakhic material, omitting dialectics but preserving Talmudic order and language. The later geonim concentrated on particular subjects, such as divorce or vows, introducing the monographic style of codification.
Codification literature gained impetus by the beginning of the 11th century. During the next centuries many compilations appeared in Europe and North Africa. The most notable, following Talmudic order, were the Hilkhot Harif, by Isaac Alfasi (11th century), and Hilkhot Harosh, by Asher ben Jehiel (13th–14th centuries). Though modelled after Halakhot gedolot, the Hilkhot Harif encompasses only laws applicable after the destruction of the Temple but includes more particulars. The Hilkhot Harosh closely follows Alfasi’s code but often also includes the reasoning underlying decisions.
The most important of the topically arranged codifications were: the Mishne Torah, Sefer ha-ṭurim, and Shulḥan ʿarukh. (1) The Mishne Torah (“The Torah Reviewed”) by Maimonides (12th century), is a monumental work, original in plan, language, and order; it encompasses all religious subject matter under 14 headings and includes theosophy, theology, and religion. (2) The Sefer ha-ṭurim (“Book of Rows,” or “ Parts”), by Jacob ben Asher (14th century), the son of Asher ben Jehiel, introduced new groupings, dividing subject matter into four major categories (ṭurim) reminiscent of the Mishnaic orders; it includes only laws applicable after the destruction of the Temple. (3) The Shulḥan ʿarukh (“The Prepared Table”) by Joseph Karo (16th century), the last of the great codifiers, is structured after the Sefer ha-ṭurim, but presents the Sefardic (Middle Eastern and North African) rather than the Ashkenazic (Franco-German and eastern European) tradition, with decisions largely following those of Alfasi, Maimonides, and Rabbi Asher. When the 16th-century Ashkenazic codifier Moses Isserles added his notes, this became the standard Halakhic code for all Jewry.
The interpretive literature on the Talmud began with the rise of academies in Europe and North Africa. The earliest known European commentary, though ascribed to Gershom ben Judah (10th–11th centuries), is actually an eclectic compilation of notes recorded by students of the Mayence (Mainz) Academy. Compilations of this kind, known as qunṭresim (“notebooks”), also developed in other academies. Their content was masterfully reshaped and reformulated in the renowned 11th-century commentary of Rashi (acronym of Rabbi Shlomo Yitzḥaqi), in which difficulties likely to be encountered by students are anticipated and detail after detail is clarified until a synthesized, comprehensible whole emerges.
The commentaries of Ḥananel ben Ḥushiel and Nissim ben Jacob ben Nissim, the first to appear in North Africa (11th century), are introductory in nature. They summarize the content of Talmudic discussions, assuming that details will be understood once the general idea becomes comprehensible. This style was later followed by the Spanish school, including Joseph ibn Migash and Maimonides. However, as Rashi’s work became known, it displaced all other commentaries. (Note its predominant role in the sample page of Talmud.)
A new phase in Talmudic literature was initiated by Rashi’s grandchildren, Rabbis Isaac, Samuel, and Jacob, the sons of Meir, who established the school of tosafot. (These medieval “additions” are not to be confused with the tannaitic Tosefta discussed above.) Reviving Talmudic dialectic, they treated the Talmud in the same way that it had treated the Mishna. They linked apparently unrelated statements from different Talmudic discourses and pointed out the fine distinctions between seemingly interdependent statements. This dialectic style was soon adopted in all European academies. Even the writings of Ravad (Abraham ben David), Zerahiah ha-Levi, and Yeshaya deTrani, three of the most original Talmudists (12th century), reflect the impact of Tosafist dialectic.
The works of Meir Abulafia and Menaḥem Meiri, although of the North African genre, include a strong dialectic element. In Spain such dialectic works were known as ḥiddushim or novellae (since they sought “new insights”), the most famous being those written by four generations (13th–14th centuries) of teacher and pupil: Ramban (Naḥmanides, or Moses ben Naḥman), Rashba (Solomon ben Adret), Ritba (Yomtov ben Abraham), and Ran (Nissim ben Reuben Gerondi).
A major role in establishing Talmudic authority was also played by the responsa literature, replies (responsa) to legal and religious questions. Beginning in the 7th century, when the Babylonian geonim responded in writing to questions concerning the Talmud, it developed into a branch of Talmudic literature that continued to the present. Then, as now, Talmudic authorities were approached for explanations and decisions. Among the geonim the best known were Sherira (10th century) and his son Hai. In the Middle Ages the most important were Alfasi, Ibn Migash (Joseph ibn Migash), Maimonides, Ravad (Abraham ben David of Posquières), Ramban, Rashba, Rosh (Asher ben Jehiel), Ran, and Ribash (Isaac ben Sheshet Perfet).
Writing and printing of the Talmuds
Study in the academies was always oral; hence the question of when the Mishna and Talmud were first committed to writing has been the subject of much discussion. According to some scholars, the process of writing began with Judah ha-Nasi. Others attribute it to the savoraim.
The Palestinian Talmud was first printed in Venice (1523–24). All later editions followed this one. Printing of the Babylonian Talmud was begun in Spain about 1482, and there have been more than 100 different editions since. The oldest extant full edition appeared in Venice (1520–23). This became the prototype for later printings, setting the type of page and pagination (a total of close to 5,500 folios). The standard edition was printed in Vilna beginning in 1886. It carries many commentaries and commentaries upon commentaries. On a standard printed page, the Mishna and the Gemara are placed in the centre column of the page and are printed in heavy type. The commentary of Rashi is always located in the inner column of the page and the tosafot in the outer column. Other commentaries and references to legal codes and to scriptural verses surround the major commentaries, in smaller type. Talmudic citations are made by tractate name, folio number, and side of the folio (a or b); for example, Berakhot 2a denotes the first side of the second folio of tractate Berakhot.
Reading Aloud to Children
Vocabulary Development During Read-Alouds: Primary Practices
By: Karen J. Kindle (2009)
Reading aloud is a common practice in primary classrooms and is viewed as an important vehicle for vocabulary development. Read-alouds are complex instructional interactions in which teachers choose texts, identify words for instruction, and select the appropriate strategies to facilitate word learning. This study explored the complexities by examining the read-aloud practices of four primary teachers through observations and interviews.
Reading storybooks aloud to children is recommended by professional organizations as a vehicle for building oral language and early literacy skills (International Reading Association & National Association for the Education of Young Children, 1998). Reading aloud is widely accepted as a means of developing vocabulary (Newton, Padak, & Rasinski, 2008), particularly in young children (Biemiller & Boote, 2006). Wide reading is a powerful vehicle for vocabulary acquisition for older and more proficient readers (Stanovich, 1986), but since beginning readers are limited in their independent reading to simple decodable or familiar texts, exposure to novel vocabulary is unlikely to come from this source (Beck & McKeown, 2007). Read-alouds fill the gap by exposing children to book language, which is rich in unusual words and descriptive language.
Much is known about how children acquire new vocabulary and the conditions that facilitate vocabulary growth. Less is known about how teachers go about the business of teaching new words as they read aloud. The effortless manner in which skilled teachers conduct read-alouds masks the complexity of the pedagogical decisions that occur. Teachers must select appropriate texts, identify words for instruction, and choose strategies that facilitate word learning. This study sheds light on the process by examining the strategies that teachers use to develop vocabulary as they read aloud to their primary classes.
What we know about vocabulary and read-alouds
Reading aloud to children provides a powerful context for word learning (Biemiller & Boote, 2006; Bravo, Hiebert, & Pearson, 2007). Books chosen for read-alouds are typically engaging, thus increasing both children's motivation and attention (Fisher, Flood, Lapp, & Frey, 2004) and the likelihood that novel words will be learned (Bloom, 2000). As teachers read, they draw students' attention to Tier 2 words, the "high frequency words of mature language users" (Beck, McKeown, & Kucan, 2002, p. 8). These words, which "can have a powerful effect on verbal functioning" (Beck et al., 2002, p. 8), are less common in everyday conversation, but appear with high frequency in written language, making them ideal for instruction during read-alouds. Tier 1 words, such as car and house, are acquired in everyday language experiences, seldom requiring instruction. Tier 3's academic language is typically taught within content area instruction.
During read-aloud interactions, word learning occurs both incidentally (Carey, 1978) and as the teacher stops and elaborates on particular words to provide an explanation, demonstration, or example (Bravo et al., 2007). Even brief explanations of one or two sentences, when presented in the context of a supportive text, can be sufficient for children to make initial connections between novel words and their meanings (Biemiller & Boote, 2006). Word learning is enhanced through repeated readings of text, which provide opportunities to revise and refine word meanings (Carey, 1978). These repetitions help students move to deeper levels of word knowledge from never heard it, to sounds familiar, to it has something to do with, to well known (Dale, 1965).
Incidental word learning through read-alouds
Carey (1978) proposed a two-stage model for word learning that involves fast and extended mapping. Fast mapping is a mechanism for incidental word learning, consisting of the connection made between a novel word and a tentative meaning. Initial understandings typically represent only a general sense of the word (Justice, Meier, & Walpole, 2005) and are dependent on students' ability to infer meaning from context (Sternberg, 1987).
Extended mapping is required to achieve complete word knowledge, because "initial learning of word meanings tends to be useful but incomplete" (Baumann, Kame'enui, & Ash, 2003, p. 755). Through additional exposures, the definition is revised and refined to reflect new information (Carey, 1978; Justice et al., 2005).
Adult mediation in read-alouds
The style of read-aloud interaction is significant to vocabulary growth (Dickinson & Smith, 1994; Green Brabham & Lynch-Brown, 2002), with reading styles that encourage child participation outperforming verbatim readings. Simply put, "the way books are shared with children matters" (McGee & Schickedanz, 2007, p. 742).
High-quality read-alouds are characterized by adult mediation. Effective teachers weave in questions and comments as they read, creating a conversation between the children, the text, and the teacher. To facilitate word learning, teachers employ a variety of strategies such as elaboration of student responses, naming, questioning, and labeling (Roberts, 2008).
Analysis of the literature on vocabulary learning through read-alouds leads to two conclusions. First, adult mediation facilitates word learning (i.e., Justice, 2002; Walsh & Blewitt, 2006). Biemiller and Boote (2006) concluded that "there are repeated findings that encouraging vocabulary acquisition in the primary grades using repeated reading combined with word meaning explanations works" (p. 46).
Second, the relative effectiveness of different types of mediation remains less clear. Adult explanations are clearly linked to greater word learning, but it is not evident which aspects of the explanations are the critical components: the context, a paraphrased sentence, or even the child's interest in the story (Brett, Rothlein, & Hurley, 1996; Justice et al., 2005). It is also possible that active involvement in discussions is more salient than the type of questions posed (Walsh & Blewitt, 2006).
Setting for the study
This study was conducted at a small private school in the south central United States. Westpark School (pseudonym) is located in an ethnically diverse, middle class neighborhood in a suburb of a large metropolitan area. Four of the six primary teachers at Westpark agreed to participate in the study: one kindergarten, one first-grade, and two second-grade teachers. Cindy, Debby, Patricia, and Barbara (all pseudonyms) varied in their years of experience. Debby, who had previously retired from public school teaching, was the most experienced with more than 20 years in the classroom. Barbara was also a veteran with 10 years of experience. At the other end of the spectrum, Patricia was in her third year of teaching, and Cindy was in her internship year of an alternative licensure program.
Observations and interviews
To determine the teachers' practices for developing vocabulary within read-alouds, the teachers' "own written and spoken words and observable behavior" (Bliss, Monk, & Ogborn, 1983, p. 4) provided the best sources of data. By constructing detailed, extensive descriptions of teacher practice within a single site, patterns of interaction and recurring themes can be identified (Merriam, 2001).
Carspecken's (1996) critical ethnography methodology was adapted and used to collect and analyze data. Observations were conducted to identify patterns of teacher-student interactions within read-alouds. Following preliminary coding, individual interviews were conducted. The combined data provide a rich description of the pedagogical context of vocabulary development during read-alouds.
Each teacher was observed four times over a six-week period. The teachers were asked to include a read-aloud during each observation and were informed that vocabulary development was the focus of this study. They were encouraged to "just do what they normally would do" when reading to their classes. The hour-long observations, scheduled at the teachers' convenience, were audiotaped and transcribed. Additional data, such as gestures, actions, and descriptions of student work, were recorded in field notes. Transcriptions and field notes were compiled in a thick record for analysis.
Following the observations and preliminary data coding, semistructured individual interviews were conducted. An interview protocol was developed and peer-reviewed. Topics for discussion included teaching experience, understanding of vocabulary development, use of read-alouds, and instructional strategies. Lead-off questions and possible follow-up questions were generated to ensure that key areas were adequately addressed in the interview. Transcripts of the interviews were coded, and the observation data were re-analyzed and peer-reviewed.
Vocabulary instruction during read-alouds
The determination that a particular word in a read-aloud is unfamiliar to students triggers a series of decisions. The teacher must decide both the extent and intent of instruction. How much time should be spent? What do students need to know about this word? Also, the teacher must select an appropriate instructional strategy from a wide range of possibilities. Which strategy will be most effective? What is the most efficient way to build word knowledge without detracting from the story? The teachers at Westpark used a variety of instructional strategies and levels of instructional foci in their read-alouds.
Categories of instructional focus emerged during data coding. Interactions centered on vocabulary differed in both extent and intent. The extent, or length, of interactions varied greatly. Typically, more instructional time was spent on words that were deemed critical to story comprehension or that students would be using in a subsequent activity. Pragmatic issues of time seemed to impact the extent of the interactions as well. The frequency and length of interactions tended to decrease through the course of the read-aloud as the time allotted came to an end or children's attention began to wane.
Levels of Instruction

| Level of instruction | Example | Explanation |
| --- | --- | --- |
| Incidental exposure | I don't know what I would have done. Curiosity might have gotten the better of me. | Teacher infuses a Tier 2 word into a discussion during the read-aloud. |
| Embedded instruction | And he's using a stick-an oar-to help move the raft [pointing to illustration]. | Teacher provides a synonym before the target term oar, pointing to the illustration. |
| Focused instruction | Let's get set means let's get ready [elicit examples of things students get ready for]. | Teacher leads a discussion on what it means to get set, including getting set for school and Christmas. |
As seen in the table, three different levels of instruction were identified in the data: incidental exposure, embedded instruction, and focused instruction. Incidental exposure occurred during the course of discussions before, during, and after reading and resulted from teachers' efforts to infuse rich vocabulary into class discourse. For example, during one discussion, Cindy commented that the character was humble; in another that she came bearing gifts. Even though no direct instruction was provided for these terms, the intent is instructional since Cindy deliberately infused less common words to build vocabulary knowledge through context clues.
Embedded instruction is defined as attention to word meaning, consisting of fewer than four teacher-student exchanges. The teachers used embedded instruction when the target word represented a familiar concept for the students or when it was peripheral to the story. Information was provided about word meaning with minimal disruption to the flow of the reading. Typically, teachers gave a synonym or a brief definition and quickly returned to the text.
Focused instruction occurred when target words were considered important to story comprehension or when difficulties arose communicating word meaning. These interactions varied greatly in length from 4 to 25 teacher-student exchanges. Focused instruction often took place before or after reading. In most cases, the teachers had identified keywords that they felt were important for students to learn, warranting additional time and attention. Other times, focused instruction appeared to be spontaneous, triggered by students' questions or "puzzled looks" during the reading.
Instruction also varied in its intent. Teachers sought to develop definitional, contextual, or conceptual word knowledge (Herman & Dole, 1988) based on the specific situation. The learning goal shaped the nature of the interactions.

The definitional approach was used when the underlying concept was familiar to the students or when the goal of instruction was simply to provide exposure to a word. Teachers either provided or elicited a synonym or phrase that approximated the meaning of the target word. This approach can be quite efficient, requiring little investment of time (Herman & Dole, 1988), thus allowing attention to be given to many words during the course of the read-aloud.

Teachers developed contextual knowledge when they referred students back to the text to determine word meaning. In such cases, the teacher might reread the sentence in which the target term occurred, helping students to confirm or disconfirm their thinking, as in this example from Sarah, Plain and Tall (MacLachlan, 1985):
Cindy: Wooly ragwort. Where is that? [looks through text] What was wooly ragwort? Do you remember? It was part of Caleb's song.
Cindy: It said-or Sarah said [reads from the text], "We don't have these by the sea. We have seaside goldenrod and wild asters and wooly ragwort."
Cindy's intent was for students to gain contextual knowledge using the information in the text to draw a tentative conclusion about word meaning. This example highlights one of the problems inherent with contextual strategies. Students, perhaps misled by the word sea in the text, suggested that wooly ragwort might be a seal, a bird, or a stone. Since they were unfamiliar with goldenrod and asters, they were unable to use these clues effectively to conclude that wooly ragwort was a plant. In this case, reminding students that the characters were picking wildflowers might have helped.
Learning a definition is seldom enough for children to develop deep word knowledge. Students need conceptual knowledge to make connections between new words, their prior experiences, and previously learned words and concepts (Newton et al., 2008). Cindy relayed an incident that taught her the importance of building conceptual knowledge when working with unfamiliar words. She had instructed her students to look up the word pollinate in the dictionary, write two or three sentences using the word, and then draw a picture illustrating its meaning. Unfortunately, the definition contained many words that the children did not know, such as pistil and stamen. It was obvious when she reviewed their work that her students "didn't get it." Cindy realized that the definition was not sufficient for them to understand the concept of pollination.

To build word meaning during their read-alouds, the teachers drew on a range of instructional strategies:
- Questioning
- Providing a definition
- Providing a synonym
- Providing examples
- Clarifying or correcting students' responses
- Extending a student-generated definition
- Labeling
- Imagery
- Morphemic analysis
Each of these strategies is described along with examples from the observation data.
Questioning. The most commonly used strategy was questioning. As the teachers read and encountered a word that they thought might be unfamiliar, they would simply stop and ask about it. This strategy usually occurred at the beginning of an instructional exchange. For example, after reading a section of Sarah, Plain and Tall (MacLachlan, 1985), Debby paused to ask her students about the word bonnet.
Debby: What's a bonnet? Do you all know what a bonnet is? What's a bonnet?
It is interesting to note that most of the teachers repeated the question several times in their initial utterance. This practice gives students time to formulate a response and also helps to establish a phonological representation of the new word, which is linked to word learning (Beck & McKeown, 2001).

Questioning was also used to assess the students' existing word knowledge and to determine if students had effectively used context clues. Once a correct response was given, the exchange ended and the teacher resumed reading, as seen in the following sequence.
Debby: [reads from The BFG, Dahl, 1982] "So I keep staring at her and in the end her head drops on to her desk and she goes fast to sleep and snorkels loudly." What is that?
Debby: [resumes reading] "Then in marches the head teacher."
Alternatively, the teacher might provide the definition and ask students to supply the term. For example, in an after-reading discussion, Patricia asked students to recall the meaning of research to review or assess word learning.
Patricia: And what was it called when they look in the encyclopedia for information? What was that word, John?
This strategy can prove difficult. John and several of his classmates made incorrect responses before the correct answer was given.

Providing the Definition. At times, teachers chose to provide a definition of a word. Word learning is enhanced when the explanation is made in simple, child-friendly language and the typical use of the word is discussed (Beck et al., 2002). This strategy was more commonly used in embedded instruction, as seen in the following example.
Barbara: [reading Duck for President (Cronin, 2008)] "On election day, each of the animals filled out a ballot and placed it in a box." Filled out a piece of paper. Wrote down who they wanted to vote-or who they wanted to win the election.
Barbara thought it unlikely that her students would be familiar with the word ballot, so she simply provided the definition in terms that kindergartners could understand.

Providing Synonyms. An expedient means of providing word meaning is to state a synonym for the word. This method was used often in conjunction with recasting. That is, the teacher repeated a sentence, replacing the target word with a synonym, as seen in this example.
Barbara: Let's get ready. Let's get set.
This strategy was used extensively by Barbara to reinforce word meanings. For example, in a postreading discussion, she went back and reviewed key events in the story, simultaneously reinforcing the meaning of the phrase a bit. Although her focus was comprehension, the students heard the target word alongside a recasting with a synonym many times.
Barbara: So remember, a bit of blue means-how much is she going to add?
Student: Um-a little bit?
Barbara: A little bit, right. Just a small amount.
Barbara: So what happened here? They mixed red, they mixed blue-but it's still red. But why? Why is that Sarah?
Student: Because Sal adds a bit of blue.
Barbara: Right, just a little bit of blue. Just a tiny small amount. But that wasn't enough to change the color, was it?
Barbara: Just a little bit, right.
Providing Examples. Word knowledge can be extended and clarified through examples that may be provided by the teacher or elicited from the students. Students learn how the target word is related to other known words and concepts and are given opportunities to use the target words, further strengthening word learning (Beck et al., 2002). Teachers help students make their own connections when they ask for examples of how or where students have heard the word used, or remind them of situations in which they might have encountered a specific word.
As Patricia introduced a folk tale, she wanted her students to be prepared for the regional language they would hear. Although she did not use the word dialect, she explained that the language in the story would sound different to them and asked them for examples from their own experiences.
Patricia: This is a story from Appalachia and they use a different kind of language. Uh, they speak in English, but they kind of talk — what do you call it — country. Have you ever heard people talk like that?
Student 1: Yeah.
Student 2: My grandma.
Patricia: They use different little sayings and maybe have a different accent to their voice.
Student 3: But they're still speaking English.
Student 4: Like New York?
Student 5: England, England!
Student 6: Kind of like cowboys
Two students demonstrated their understanding of the concept as they generated their examples of New York and English accents. Another student made the connection between dialect and the cowboy lingo the class had learned during a recent unit of study.
Clarification and Correction. Teacher guidance is an important part of the instructional process (Beck et al., 2002). At times, students suggest definitions for target words that reflect misconceptions or partial understandings. The teacher must then either correct or clarify students' responses. When Patricia asked her students for the meaning of the word glared, a student gave a response that was partially correct, but missed the essence of the meaning. Patricia's additional question helped the students to refine their understandings.
Patricia: What does it mean to glare at somebody?
Student: Stare at them?
Patricia: Yeah. Is it a friendly stare?
Student: No-like [makes an angry face].
Extension. Due to the gradual nature of word learning, students may provide definitions that are correct but simplistic. The teacher may elect to extend the definition, providing additional information that builds on the student's response. For example, when a student stated that a bonnet was something you wear on your head, Debby extended the definition by providing some historical information and describing its function or use.
Debby: They wore it a lot on in the prairie days because they traveled a lot and they got a lot of you-those wagon trains and the stagecoaches and all were kind of windy. And so they would keep their bonnets on-to keep their head-their hair from blowing all over the place. Very, very common to use-to wear bonnets back then.
Labeling. Labeling was most often used with picture book read-alouds. As the teacher named the unfamiliar item, she pointed to the illustration, connecting the word with the picture. Debby used this strategy while reading Leonardo and the Flying Boy (Anholt, 2007) to her second graders, pointing to the depictions of various inventions mentioned in the text. Thus, without interrupting the flow of the reading, word meaning was enhanced as children related novel terms with the visual images.
Barbara used the strategy extensively with her kindergartners. While reading Duck for President (Cronin, 2008), she pointed to the picture of the lawnmower as she described how a push mower is different from the more familiar power mowers. In another text, she reversed the process, providing the unfamiliar word raft for the boat pictured in the illustration.
Imagery. At times, teachers used facial expressions, sounds, or physical movements to demonstrate word meaning during the course of read-alouds. Gestures of this type occurred more frequently when the teachers were reading aloud from chapter books, perhaps due to the lack of illustrations to provide such visual support. In some cases, imagery appeared to be intrinsic to expressive reading, rather than a deliberate effort to enhance word meaning. For example, Debby lowered her head and looked sad as she read about a character hanging his head in shame. Although her intent was to create a dramatic reading, the addition of the simple actions would also serve to facilitate word learning if that particular expression was unknown to students. In the following example, Debby provided two imagery clues as she read the text.
Debby: [reads text] "There was a hiss of wind." [extends /s/ to create a hissing sound] "A sudden pungent smell." [holds her hand up to her nose].
The use of imagery was more common with embedded instruction than with the longer focused instructional exchanges. Typically, imagery was used to enhance students' understanding of the text without impeding the flow of the story, although in some instances, imagery was used after discussion as a means of reinforcing the stated definition. At times, however, the use of imagery was a more integral part of instruction and was even used by the children when they could demonstrate a word meaning more easily than put it in words. When Patricia asked her students about the meaning of the word pout, several responded nonverbally, sticking out their lower lips and looking sad. Cindy used the strategy to help her students understand the meaning of the word rustle. Although a student provided a synonym, Cindy used imagery to extend word learning.
Cindy: What does rustle mean?
Cindy: Movement. OK. What's a rustle sound like? Somebody rustle for me. [students begin moving their feet under their desks] Maybe like [shuffles her feet], like really soft sounds. Like a movement. They're not meaning to make a noise, but they are just kind of moving around in the grass and stuff.
Morphemic Analysis. Even young children need to become aware of how word parts are combined to make longer, more complex words. Children can be taught to "look for roots and/or familiar words when trying to figure out the meaning of an unfamiliar word" (Newton et al., 2008, p. 26). Instructional strategies that draw children's attention to structural analysis are an appropriate choice when the meaning of the root word is familiar. In the exchange that follows, Barbara drew attention to the prefix re-, affixed to the familiar word count.
Barbara: [reads text] "Farmer Brown demanded a recount." A recount is-do you know what a recount is, Jeremy?
Jeremy: Uh, no.
Barbara: A recount is-he said he wanted the votes to be counted again.
Multiple Strategies. Teachers often employed more than one strategy during focused instruction. Although questioning was commonly used to initiate instruction, the target word must be either partially known or appear in a very supportive context for this strategy to be effective. Questioning can lead to guessing, so "it is important to provide guidance if students do not quickly know the word's meaning" (Beck et al., 2002, p. 43). In cases where questioning yielded either an incorrect response or no response at all, teachers added additional strategies, such as providing the definition, examples, or imagery.
The practices of the teachers at Westpark are both unremarkable and remarkable. They are unremarkable in that their practices are consistent with the descriptions of read-alouds in the literature. The teachers selected appropriate texts, words for instruction, and strategies to teach unknown words. They engaged in discussions before, during, and after reading the texts. Practitioners and researchers alike will find familiarity in the descriptions of the read-alouds.
At the same time, their practices were remarkable. The intricate series of interactions between teacher, students, and text in a read-aloud reflects countless instructional decisions, underlying pedagogical beliefs, and the unique quality of the relationship that has been built between teacher and students. The data obtained from the observations and interviews provide a window into the processes of the read-aloud, providing brief but significant glimpses that have important implications.

There were many similarities noted in the read-aloud practices of the teachers in this study. With the exception of one performance-style reading, read-alouds were interactive with the children actively engaged. Attention to word meaning occurred in every read-aloud, providing evidence of the importance placed on vocabulary by the teachers. At the same time, individual differences were noted in the way the teachers went about developing word meaning. They varied in their use of incidental exposure, embedded instruction, and focused instruction. Cindy felt it was important for her students to be able to independently figure out word meaning from context. Consistent with that conviction, she most frequently used focused instruction with questioning and incidental exposures, with relatively few incidences of embedded instruction. In contrast, Barbara's pattern of interaction seems to reflect a preference for adult mediation over incidental learning, perhaps stemming from a belief that kindergarten children require more support to learn words during read-alouds than their older schoolmates.
In addition to variance in the level of instruction used by the teachers, they also exhibited differences in their use of instructional strategies. Some differences were directly related to the type of book being read. For example, labeling was common when reading picture books, but was seldom used with chapter books. Differences in strategy use may also reflect the teachers' perceptions of appropriate practice for a specific grade. Both second-grade teachers stressed the importance of context clues in teaching vocabulary. This conviction was evident in their frequent use of questioning and context strategies. Other strategies were only used when an adequate response was not obtained, or when a more extensive definition was required for comprehension. The increased use of multiple strategies seen in kindergarten and first grade may reflect the teachers' beliefs that vocabulary development was an important goal apart from story comprehension.
There may be a more pragmatic explanation as well. When reading chapter books, the teachers seemed to have a set stopping point in mind each day. Completing a chapter on time appeared to take precedence over vocabulary instruction. Shorter picture books seemed to afford teachers more time to develop words and employ more strategies within instructional sequences. This would suggest that text selection impacts strategy use in addition to word selection.
Individual differences in read-aloud practice are significant because they impact word learning. Even when scripts were used for read-alouds, Biemiller and Boote (2006) found that "some teachers were more effective than others in teaching vocabulary to children" (p. 51). They concluded that intangible qualities such as the teachers' attitudes about and enthusiasm for word learning could be a factor in the number of words children learn. Given the degree of variance in word learning, evident when teachers were constrained by a script, it would certainly be expected that differences would only increase when teachers are free to conduct read-alouds in their own manner.
Recommendations for practice
Read-alouds are instructional events and require the same advance planning as any other lesson. Although the teachers in this study used many strategies identified in the literature as effective, additional time and thought in advance of the reading would have decreased confusions, used time more efficiently, and ultimately increased learning. Books should be selected with vocabulary in mind, previewed, and practiced. Attention to student questions about word meaning that arise during reading is important but may result in extended discourse on words that are not critical to comprehension and can detract significantly from the read-aloud experience. Teachers should select target words in advance and plan instructional support based on those particular words. To increase word learning potential, the following five steps are recommended.
- Identify words for instruction. To maximize learning, words targeted for instruction should be identified in advance. Examine the text for words that are essential for comprehension and Tier 2 words (Beck et al., 2002) that will build reading vocabulary. Look for words that are interesting or fun to say. Narrow the list down to four or five words to target for more in-depth instruction, giving priority to those needed for comprehension.
- Consider the type of word learning required. Does the target word represent a new label for something familiar or an unfamiliar concept, or is it a familiar word used in a new way? Is the word critical for comprehension? These questions determine the appropriate level of instruction (incidental, embedded, or focused); whether instruction should occur before, during, or after reading; and strategy selection.
- Identify appropriate strategies. Select strategies that are consistent with your instructional goals. When the novel word represents a new label for a familiar term, a synonym or gesture may be adequate. Providing examples and questioning might be used to develop a new concept prior to reading, with a simple definition included during the reading to reinforce learning.
- Have a Plan B. If a strategy proves ineffective, be prepared to intervene quickly and provide correction or clarification. Have an easy-tounderstand definition at the ready. Be able to provide a synonym or an example.
- Infuse the words into the classroom. Find opportunities for the new words to be used in other contexts to encourage authentic use and deepen word learning.
Read-alouds can be viewed as microcosms of balanced instruction. This balance does not result from adherence to a prescribed formula, but rather from countless decisions made by teachers. These instructional decisions affect the balance between direct and incidental instruction and between planning in advance and seizing the teachable moment; they shape the quantity and quality of vocabulary instruction within the read-alouds and, ultimately, student learning. Teachers' perceptions of an appropriate balance are evident in their uses of read-alouds, styles of reading, text selection, and in the way that vocabulary is developed.
The read-aloud context has proven to be an effective vehicle for vocabulary instruction, but further research is needed to clarify the conditions that optimize word learning and to determine the most effective manner of adding elaborations and explanations during story reading without detracting from the pleasure of the reading itself. Identifying the practices that are commonly used by primary classroom teachers provides researchers with valuable information that can lead to the development of effective instructional strategies, inservice teachers' staff development, and preservice teacher training.
Baumann, J.F., Kame'enui, E.J., & Ash, G.E. (2003). Research on vocabulary instruction: Voltaire redux. In J. Flood, D. Lapp, J.R. Squire, & J.M. Jensen (Eds.), Handbook of research on teaching the English language arts (pp. 752-785). Mahwah, NJ: Erlbaum.
Beck, I.L., & McKeown, M.G. (2001). Text talk: Capturing the benefits of read-aloud experiences for young children. The Reading Teacher, 55(1), 10-20.
Beck, I.L., & McKeown, M.G. (2007). Different ways for different goals, but keep your eye on the higher verbal goals. In R.K. Wagner, A.E. Muse, & K.R. Tannenbaum (Eds.), Vocabulary acquisition: Implications for reading comprehension (pp. 182-204). New York: Guilford.
Beck, I.L., McKeown, M.G., & Kucan, L. (2002). Bringing words to life: Robust vocabulary instruction. New York: Guilford.
Biemiller, A., & Boote, C. (2006). An effective method for building meaning vocabulary in primary grades. Journal of Educational Psychology, 98(1), 44-62.
Bliss, J., Monk, M., & Ogborn, J. (1983). Qualitative data analysis for educational research: A guide to uses of systematic networks. London: Croom Helm.
Bloom, L. (2000). The intentionality model of word learning: How to learn a word, any word. In R.M. Golinkoff, K. Hirsh-Pasek, L. Bloom, L.B. Smith, A.L. Woodward, N. Akhtar, et al. (Eds.), Becoming a word learner: A debate on lexical acquisition (pp. 19-50). New York: Oxford University Press.
Bravo, M.A., Hiebert, E.H., & Pearson, P.D. (2007). Tapping the linguistic resources of Spanish/English bilinguals: The role of cognates in science. In R.K. Wagner, A.E. Muse, & K.R. Tannenbaum (Eds.), Vocabulary acquisition: Implications for reading comprehension (pp. 140-156). New York: Guilford.
Brett, A., Rothlein, L., & Hurley, M. (1996). Vocabulary acquisition from listening to stories and explanations of target words. The Elementary School Journal, 96(4), 415-422. doi:10.1086/461836
Carey, S. (1978). The child as word learner. In M. Halle, J. Bresnan, & G.A. Miller (Eds.), Linguistic theory and psychological reality (pp. 359-373). Cambridge, MA: MIT Press.
Carspecken, P.F. (1996). Critical ethnography in educational research: A theoretical and practical guide. New York: Routledge.
Dale, E. (1965). Vocabulary measurement: Techniques and major findings. Elementary English, 42, 82-88.
Dickinson, D., & Smith, M.W. (1994). Long-term effects of preschool teachers' book readings on low-income children's vocabulary and story comprehension. Reading Research Quarterly, 29(2), 104-122. doi:10.2307/747807
Fisher, D., Flood, J., Lapp, D., & Frey, N. (2004). Interactive readalouds: Is there a common set of implementation practices? The Reading Teacher, 58(1), 8-17. doi:10.1598/RT.58.1.1
Green Brabham, E., & Lynch-Brown, C. (2002). Effects of teachers' reading-aloud styles on vocabulary comprehension in the early elementary grades. Journal of Educational Psychology, 94(3), 465-473.
Herman, P.A., & Dole, J. (1988). Theory and practice in vocabulary learning and instruction. The Elementary School Journal, 89(1), 42-54. doi:10.1086/461561
International Reading Association & National Association for the Education of Young Children. (1998). Learning to read and write: Developmentally appropriate practices for young children. Newark, DE: International Reading Association.
Justice, L.M. (2002). Word exposure conditions and preschoolers' novel word learning during shared storybook reading. Reading Psychology, 23(2), 87-106. doi:10.1080/027027102760351016
Justice, L.M., Meier, J., & Walpole, S. (2005). Learning words from storybooks: An efficacy study with at-risk kindergartners. Language, Speech, and Hearing Services in Schools, 36(1), 17-32. doi:10.1044/0161-1461(2005/003)
McGee, L.M., & Schickedanz, J.A. (2007). Repeated interactive read-alouds in preschool and kindergarten. The Reading Teacher, 60(8), 742-751. doi:10.1598/RT.60.8.4
Merriam, S.B. (2001). Qualitative research and case study applications in education (2nd ed.). San Francisco: Jossey-Bass.
Newton, E., Padak, N.D., & Rasinski, T.V. (2008). Evidence-based instruction in reading: A professional development guide to vocabulary. Boston, MA: Pearson Education.
Roberts, T.A. (2008). Home storybook reading in primary or second language with preschool children: Evidence of equal effectiveness for second-language vocabulary acquisition. Reading Research Quarterly, 43(2), 103-130. doi:10.1598/RRQ.43.2.1
Stanovich, K.E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly, 21(4), 360-406. doi:10.1598/RRQ.21.4.1
Sternberg, R.J. (1987). Most vocabulary is learned from context. In M.C. McKeown & M.E. Curtis (Eds.), The nature of vocabulary acquisition (pp. 89-105). Hillsdale, NJ: Erlbaum.
Walsh, B.A., & Blewitt, P. (2006). The effect of questioning style during storybook reading on novel vocabulary acquisition of preschoolers. Early Childhood Education Journal, 33(4), 273-278. doi:10.1007/s10643-005-0052-0.
Anholt, L. (2007). Leonardo and the flying boy: A story about Leonardo da Vinci. Hauppauge, NY: Barron's Educational Books.
Cronin, D. (2008). Duck for president. New York: Atheneum.
Dahl, R. (1982). The BFG. New York: Puffin.
MacLachlan, P. (1985). Sarah, plain and tall. New York: HarperTrophy.
Kindle, K.J. (2009, November). Vocabulary development during read-alouds: Primary practices. The Reading Teacher, 63(3), 202-211.
Guide: Normality Test
A Normality Test is a statistical procedure that helps you determine if a given set of data follows a normal distribution or not. This is an important aspect of statistical analysis, quality control, and even machine learning models. Why? Many statistical techniques, such as t-tests, ANOVAs, and regression models, assume that the underlying data is normally distributed. If this assumption is violated, the results may be unreliable, leading to inaccurate conclusions and misguided decisions.
The concept of a “normal distribution” might sound complex, but it’s essentially the famous “bell curve” that many of us learned about in school. In a normal distribution, the majority of the data points cluster around the mean, and the frequencies taper off symmetrically towards both ends, forming a bell-like shape.
As illustrated above, the first graph represents a normal distribution, where the data is symmetrically distributed around the mean. The second graph, on the other hand, represents a non-normal (exponential in this case) distribution. You can see that the majority of data points are skewed towards one side.
Understanding whether your data follows a normal distribution is crucial in fields like manufacturing, logistics, and especially in methodologies like Lean Six Sigma, where data-driven decision-making is key. This guide aims to provide you with a comprehensive understanding of Normality Tests, from the theory behind them to practical ways of performing these tests using various software tools.
Now that you have a basic understanding of what a Normality Test is and why it’s important, let’s delve into the different methods for testing normality.
Why Normality Matters
Understanding the distribution of your data is not just an academic exercise; it has direct implications for how you interpret data and make decisions in various settings. Below are some key areas where the concept of normality plays a critical role:
1. Statistical Assumptions
Many statistical tests, such as t-tests, ANOVAs, and linear regression models, are based on the assumption that the data is normally distributed. If the data doesn’t meet this criterion, these tests can produce misleading results, which in turn may lead to incorrect conclusions.
2. Quality Control in Lean Six Sigma
In methodologies like Lean Six Sigma, ensuring that processes are stable and predictable is crucial. Understanding the distribution of data related to these processes can help you identify variations, anomalies, or trends that need to be addressed. For example, if a manufacturing process is assumed to be normally distributed, you can set control limits and identify outliers more reliably.
3. Predictive Modeling
Machine learning and predictive analytics models also often assume that the errors are normally distributed. Knowing the distribution of your data can help you choose the right model or make necessary adjustments to improve the model’s performance.
Graphical Example: Control Charts
Control charts are a staple in quality control and Lean Six Sigma projects. These charts often assume that the process data follows a normal distribution. Below is a simple control chart illustrating how data points are distributed around the control limits, with the assumption of normality.
You might also be interested in our Control Chart Tool.
By understanding the normality of your process data, you can set these control limits with a higher degree of confidence. This makes your quality control efforts more effective and reliable.
The control chart is a practical example of why understanding data normality is essential, particularly in methodologies like Lean Six Sigma where data-driven decision-making is vital.
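To make the idea concrete, here is a minimal sketch of computing three-sigma control limits, which presume approximate normality. The measurement values are invented, and sigma is taken as the plain sample standard deviation rather than the moving-range estimate a production individuals chart would normally use.

```python
import numpy as np

# Hypothetical process measurements, e.g., daily fill weights in grams
measurements = np.array([50.1, 49.8, 50.3, 50.0, 49.9, 50.2, 49.7, 50.1, 50.0, 49.9])

# Under an assumption of normality, roughly 99.7% of points should fall
# within three standard deviations of the mean.
center = measurements.mean()
sigma = measurements.std(ddof=1)  # simplified estimate for illustration

ucl = center + 3 * sigma  # upper control limit
lcl = center - 3 * sigma  # lower control limit
print(f"Center line: {center:.2f}, UCL: {ucl:.2f}, LCL: {lcl:.2f}")

# Flag any points outside the control limits
out_of_control = measurements[(measurements > ucl) | (measurements < lcl)]
print("Out-of-control points:", out_of_control)
```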
Common Methods for Testing Normality
Understanding the distribution of your dataset is a cornerstone of statistical analysis and is particularly important in methodologies like Lean Six Sigma. Testing for normality can be broadly categorized into two methods: Parametric Tests and Graphical Methods. Let’s delve into each in more detail:
Parametric Tests

Parametric tests are statistical tests that make certain assumptions about the parameters of the population distribution from which the samples are drawn. Here are the most commonly used parametric tests for checking normality:
Shapiro-Wilk Test

- When to Use: This test is most accurate when used on small sample sizes (n < 50).
- How It Works: The test calculates a statistic that represents a ratio: the squared sum of the differences between the observed and expected values of a normally distributed variable, divided by the sample variance. A statistic close to 1 indicates that the data is normally distributed.
- Limitations: The test is sensitive to sample size. As the sample size increases, the test may show that the data is not normal even if the deviation from normality is trivial.
Kolmogorov-Smirnov Test

- When to Use: This test is better suited for larger sample sizes.
- How It Works: It compares the empirical distribution function of the sample with the distribution expected if the sample were drawn from a normal population. The maximum difference between these two distributions is the statistic.
- Limitations: It is less powerful for identifying deviations from normality at the tails of the distribution.
Anderson-Darling Test

- When to Use: This test is a modified version of the Kolmogorov-Smirnov test and is used when more weight needs to be given to the tails.
- How It Works: It squares the differences between observed and expected values and gives more weight to the tails of the distribution.
- Limitations: Like the Shapiro-Wilk test, it is sensitive to sample size.
Lilliefors Test

- When to Use: This is an adaptation of the Kolmogorov-Smirnov test for small sample sizes.
- How It Works: It operates similarly to the Kolmogorov-Smirnov test but corrects for the bias caused by the estimation of parameters from the sample data itself.
- Limitations: It’s less commonly used and not as powerful as the Shapiro-Wilk test for very small sample sizes.
Graphical Methods

Graphical methods provide a visual approach to understanding the distribution of your data. Here are some commonly used graphical methods:
QQ-Plot (Quantile-Quantile Plot)
- A plot of the quantiles of the sample data against the quantiles of a standard normal distribution. A 45-degree line is often added as a reference. If the data points fall along this line, it suggests that the sample data is normally distributed.
P-P Plot (Probability-Probability Plot)
- Similar to a QQ-Plot but plots the cumulative probabilities of the sample data against a standard normal distribution. Useful when you are interested in the fit of different types of distributions, not just the normal.
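A brief sketch (with simulated data) of drawing a P-P plot via statsmodels' ProbPlot helper:

```python
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt

data = np.random.default_rng(1).normal(loc=5, scale=1.5, size=300)  # illustrative data

# fit=True estimates the location and scale of the reference normal from the data.
pp = sm.ProbPlot(data, fit=True)
pp.ppplot(line='45')  # points near the 45-degree line suggest a good fit
plt.title('P-P Plot')
plt.show()
```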
Histogram

- A bar graph that shows the frequency of data points in different ranges. If the data is normally distributed, the histogram will resemble a bell curve.
Box Plot

- Provides a visual representation of the data's spread, skewness, and potential outliers. A symmetric box indicates normality, while skewness or outliers suggest non-normality.
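Both plots are quick to produce with matplotlib; the sketch below uses simulated data purely for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

data = np.random.default_rng(2).normal(loc=0, scale=1, size=500)  # illustrative data

fig, (ax_hist, ax_box) = plt.subplots(1, 2, figsize=(10, 4))

# Histogram: roughly bell-shaped if the data is approximately normal
ax_hist.hist(data, bins=30, edgecolor='black')
ax_hist.set_title('Histogram')

# Box plot: a symmetric box with few outliers also points toward normality
ax_box.boxplot(data)
ax_box.set_title('Box plot')

plt.tight_layout()
plt.show()
```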
These are the most common methods for testing normality, each with its own advantages and limitations. Choosing the right method depends on your specific needs, the size of your dataset, and the importance of the tails in your analysis.
Step-by-Step Guide to Performing a Normality Test
After understanding the importance of normality and the methods available for testing it, the next step is to actually perform these tests. This section will guide you through conducting normality tests using popular software tools such as Minitab, SPSS, and R.
Using Software Tools
Software tools offer a convenient and efficient way to perform normality tests, particularly when dealing with large datasets. Below we’ll walk you through how to use each of these tools for this purpose.
Introduction to Minitab as a Statistical Software
Minitab is a widely-used statistical software package that offers a range of data analysis capabilities. It is particularly popular in industries like manufacturing and services where Lean Six Sigma methodologies are employed.
Steps for Conducting a Normality Test in Minitab
- Load Your Data: Import your dataset into Minitab.
- Navigate to the Test: Go to
- Select Variables: Choose the variable(s) you want to test for normality.
- Run the Test: Click OK to run the test. Minitab will generate an output with the results.
Interpretation of Minitab Output
- P-value: A P-value less than 0.05 generally indicates that the data is not normally distributed.
- Test Statistic: Look for the statistic in the case of a Shapiro-Wilk test.
- Graphical Output: Minitab also provides QQ-Plots and histograms for visual inspection.
How to Use SPSS for Normality Tests
SPSS is another comprehensive statistical software package used in various fields such as social sciences, healthcare, and market research.
- Load Data: Import your dataset into SPSS.
- Go to Test Option: Navigate to
- Select Variables: Add the variables you want to test.
- Run: Click OK to run the test and review the output for the Shapiro-Wilk or Kolmogorov-Smirnov statistics and P-values.
Using R for Normality Tests
R is a free and open-source software environment that is highly extensible and offers numerous packages for statistical analysis.
- Load Data: Use functions like read.csv() to load your data into R.
- Perform Test: Use functions such as shapiro.test() for the Shapiro-Wilk test or ks.test() for the Kolmogorov-Smirnov test.
- Interpret Output: A P-value less than 0.05 typically indicates non-normality.
Using Python for Normality Tests

Python, with its rich ecosystem of data science libraries, offers a powerful environment for conducting normality tests. Below, we'll explore two examples: the Shapiro-Wilk test and generating a QQ-Plot.
Example 1: Shapiro-Wilk Test
Python Code Snippet for Performing Shapiro-Wilk Test
You can use the scipy.stats library to perform the Shapiro-Wilk test. First, you'll need to import the library and then apply the shapiro() function to your dataset.
Here’s how you can do it:
```python
from scipy import stats

# Sample data
data = [your_data_here]

# Perform the Shapiro-Wilk test
statistic, p_value = stats.shapiro(data)

# Output the result
print("Shapiro-Wilk Statistic:", statistic)
print("P-value:", p_value)
```
Interpretation of Results
The output will consist of two values:

- Shapiro-Wilk Statistic: A value close to 1 indicates that the data is normally distributed.
- P-value: A value less than 0.05 generally indicates that the data is not normally distributed, while a value of 0.05 or greater means there is no evidence against normality.
Example 2: QQ-Plot
Python Code Snippet for Generating a QQ-Plot
You can use the statsmodels library to generate a QQ-Plot. The qqplot() function is used for this purpose.
Here’s a sample code snippet:
```python
import statsmodels.api as sm
import matplotlib.pyplot as plt

# Your data here
data = [your_data_here]

# Create QQ-Plot
fig, ax = plt.subplots(figsize=(10, 6))
sm.qqplot(data, line='45', ax=ax)
plt.title('QQ-Plot')
plt.show()
```
Points Along the 45-Degree Line: If the points fall along this line, it suggests that the data is normally distributed.
Points Deviating from the Line: If the points significantly deviate from the 45-degree line, especially at the tails, then the data is not normally distributed.
By using Python, you can easily perform normality tests and visualize the distribution of your dataset. The examples above provide you with the code snippets and interpretation guidelines to get you started.
Once you’re familiar with the basics of normality testing, you may encounter situations that require a more nuanced approach. This section delves into advanced topics such as dealing with non-normal data, data transformation techniques, non-parametric test alternatives, and the impact of sample size on test power.
Dealing with Non-Normal Data
Not all data sets are normally distributed, and that’s okay. The question is, what do you do when your data is not normal?
Check the Importance: First, consider how crucial the normality assumption is for your specific analysis or project. In some cases, slight deviations from normality may not significantly impact your results.
Use Robust Methods: Some statistical methods are robust to deviations from normality. These methods can often be used as a direct replacement for their non-robust counterparts.
Data Transformation Techniques
If normality is essential for your analysis, you might consider transforming your data to fit a normal distribution better. Common transformation techniques include:
Log Transformation: Useful for reducing right skewness.
Square Root Transformation: Effective for count data.
Box-Cox Transformation: A more generalized form that encompasses many other types of transformations.
```python
# Example using Python for Box-Cox Transformation
from scipy import stats

# Perform the transformation
transformed_data, lambda_value = stats.boxcox(original_data)
```
Non-Parametric Tests as Alternatives
Non-parametric tests don’t assume any specific distribution and can be a useful alternative when dealing with non-normal data. Examples include:
Mann-Whitney U Test: An alternative to the independent samples t-test.
Wilcoxon Signed-Rank Test: An alternative to the paired samples t-test.
Kruskal-Wallis Test: An alternative to the one-way ANOVA.
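A minimal sketch of running these alternatives with scipy.stats (the sample values below are invented for illustration):

```python
from scipy import stats

# Illustrative samples; replace with your own data
group_a = [12.1, 11.8, 13.4, 12.9, 11.5, 12.7]
group_b = [10.9, 11.2, 12.0, 10.7, 11.6, 11.1]
group_c = [13.0, 13.5, 12.8, 14.1, 13.7, 13.2]
before  = [7.2, 6.8, 7.5, 7.0, 6.9, 7.3]
after   = [6.9, 6.5, 7.4, 6.6, 6.7, 7.0]

# Mann-Whitney U: two independent samples
print(stats.mannwhitneyu(group_a, group_b))

# Wilcoxon signed-rank: two paired samples
print(stats.wilcoxon(before, after))

# Kruskal-Wallis: three or more independent samples
print(stats.kruskal(group_a, group_b, group_c))
```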
Power and Sample Size
How Sample Size Affects the Power of a Normality Test
Small Sample Sizes: Normality tests are generally less reliable with small sample sizes. They may not detect non-normality even when it exists.
Large Sample Sizes: On the other hand, with large sample sizes, the tests can detect even trivial deviations from normality, which might not be practically significant.
Understanding the relationship between sample size and test power can help you make more informed decisions when planning your data collection and analysis strategies.
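This effect is easy to see by simulation. In the sketch below (purely illustrative), samples of increasing size are drawn from a t-distribution with 10 degrees of freedom, which has only slightly heavier tails than a normal distribution, and each is tested with Shapiro-Wilk:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Mildly non-normal data: a t-distribution with 10 degrees of freedom
for n in (20, 200, 2000):
    sample = rng.standard_t(df=10, size=n)
    _, p_value = stats.shapiro(sample)
    print(f"n = {n:4d}  Shapiro-Wilk p-value = {p_value:.4f}")

# Typical pattern: the small sample rarely rejects normality, while the
# large sample often does, even though the deviation is practically trivial.
```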
Understanding the theoretical aspects of normality tests is crucial, but real-world applications provide valuable insights into their practical relevance. In this section, we will look at some case studies that demonstrate the importance and usage of normality tests in different scenarios and industries.
Real-World Example of Applying a Normality Test in a Lean Six Sigma Project
In a Lean Six Sigma project focused on reducing the defect rate in an automotive assembly line, a team used normality tests as a part of the Measure phase. The objective was to understand if the distribution of defects over time followed a normal distribution.
Data Collection: Data was collected on the number of defects observed each day for a month.
Normality Test: A Shapiro-Wilk test was conducted using Minitab to test the normality of the defect rates.
Outcome: The P-value was greater than 0.05, indicating that the defect rates were normally distributed.
Implication: The result allowed the team to proceed with parametric tests in the Analyze phase, like t-tests and ANOVAs, to identify the root causes of the defects confidently.
Application of Normality Tests in Various Industries like FMCG, Automotive, and Logistics
FMCG (Fast-Moving Consumer Goods): In quality control of product weights, normality tests are often used to ensure that deviations are random and not skewed in any particular direction.
Automotive: In crash test analyses, normality tests are applied to understand the distribution of impact forces, which helps in designing safer vehicles.
Logistics: For optimizing delivery times, companies often use normality tests to understand the distribution of delays, thereby helping them improve their time estimates and overall efficiency.
This comprehensive guide aimed to serve as a one-stop resource on the extensive topic of understanding and conducting normality tests, a cornerstone in statistical analyses and continuous improvement methodologies like Lean Six Sigma. Starting with the foundational principles, the guide navigated through various methods to test for normality, including parametric tests like Shapiro-Wilk and graphical methods such as QQ-Plots. Special attention was given to the practical application of these tests using popular software tools like Minitab, SPSS, R, and Python, providing step-by-step procedures and code snippets.
The guide also ventured into advanced topics, offering insights into handling non-normal data through transformations and non-parametric tests. The power dynamics influenced by sample size, often overlooked, were highlighted to ensure a more nuanced understanding. Real-world case studies from industries like automotive, FMCG, and logistics were included to bridge the gap between theory and practice.
Whether you’re a seasoned professional or a beginner in the realm of data analysis and continuous improvement, this guide aspires to equip you with the essential skills and knowledge to perform normality tests confidently and interpret their results effectively. Your journey towards mastering this critical aspect of data analysis begins here.
Frequently Asked Questions

Q: Why is it important to test for normality?

A: Normality tests are essential for determining whether your data follows a normal distribution, a foundational assumption in many statistical analyses. If your data is not normally distributed, using techniques that assume normality may lead to incorrect or misleading results. Normality tests help you validate this assumption before proceeding with further analyses.
Q: Can I still analyze my data if it is not normally distributed?

A: Yes, you can. There are non-parametric tests designed to analyze non-normal data. These tests do not assume any specific distribution and are often used as alternatives to their parametric counterparts. Additionally, you can transform your data to make it more normal-like and then apply parametric tests.
Q: How reliable are normality tests for small sample sizes?

A: Normality tests are generally less reliable for small sample sizes. With fewer data points, it's difficult to accurately determine the distribution of the dataset. Therefore, caution should be exercised when interpreting the results of a normality test on small samples.
Q: What is the difference between a QQ-Plot and a P-P Plot?

A: Both QQ-Plots and P-P Plots are graphical methods for assessing the distribution of a dataset. A QQ-Plot compares the quantiles of the sample data against a theoretical distribution, while a P-P Plot compares the cumulative probabilities. QQ-Plots are more sensitive to deviations in the tails, whereas P-P Plots focus on deviations across all data points.
Q: Can normality testing be automated?

A: Yes, software tools like Minitab and Python libraries offer functionalities to automate normality testing. In Minitab, you can use macros to run the test on multiple datasets, while in Python, you can use loops and functions to perform the tests programmatically. This is particularly useful when dealing with large datasets or running repetitive analyses.
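For instance, a small Python sketch (the column names and values are hypothetical) that loops a Shapiro-Wilk test over every numeric column of a DataFrame:

```python
import pandas as pd
from scipy import stats

# Hypothetical dataset with several numeric columns to screen
df = pd.DataFrame({
    "weight": [50.1, 49.8, 50.3, 50.0, 49.9, 50.2, 49.7, 50.1],
    "length": [12.0, 12.2, 11.9, 12.1, 12.3, 12.0, 11.8, 12.2],
})

# Run the Shapiro-Wilk test on every numeric column
for column in df.select_dtypes("number").columns:
    statistic, p_value = stats.shapiro(df[column])
    verdict = "looks non-normal" if p_value < 0.05 else "no evidence against normality"
    print(f"{column}: W = {statistic:.3f}, p = {p_value:.3f} -> {verdict}")
```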
Fallacy of Propositional Logic
Alias: Fallacy of sentential logic*
In logic, a proposition―or, "statement"―is a sentence that is either true or false. For instance, "it is raining" is a proposition. Some propositions contain other propositions as components, for example: "it is not raining" is a proposition as a whole which contains "it is raining." A proposition that contains one or more simpler propositions as components is called a "compound" proposition. So, "it is not raining" is a compound proposition. A proposition that is not compound, such as "it is raining", is called "simple".
Propositional logic is a system of formal logic that deals with the logical relations holding between propositions taken as a whole, and those compound propositions which are constructed from simpler ones with truth-functional connectives. For instance, consider the following proposition:
Today is Sunday and it's raining.
This is a compound proposition containing the simpler propositions:
- Today is Sunday.
- It's raining.
The word "and" which joins the two simpler sentences to make the compound one is a truth-functional connective, that is, the truth-value of the compound proposition is a function of the truth-values of its components. In other words, whether the whole sentence is true or false is determined by whether the simpler sentences that compose it are true or false. The truth-value of a conjunction―a compound proposition formed with "and"―is true if both of its components are true, and false otherwise. So, the compound sentence is true if "today is Sunday" and "it's raining" are both true, and false if one or both are false.
Propositional logic studies the logical relations which hold between propositions as a result of truth-functional combinations, for instance, the example conjunction logically implies that today is Sunday. In other words, if the whole sentence is true then it must also be true that today is Sunday. There are a number of other truth-functional connectives in English in addition to conjunction, and the ones most frequently studied by propositional logic are:
- or: Today is Sunday or today is Saturday.
- not: Today is not Sunday.
- only if: Today is Sunday only if yesterday was Saturday.
- if and only if: Today is Sunday if and only if yesterday was Saturday.
Since a validating argument form is one in which it is impossible for the premisses to be true and the conclusion false, you can use the truth-functions to determine which forms in propositional logic are validating. For instance, the earlier example involving conjunction is an instance of the following argument form:
p and q.
Therefore, p.
This form is validating because, no matter what propositions we put for p and q, if the premiss is true, then both p and q will be true, which means that the conclusion will also be true. Thus, to show that a propositional argument form is non-validating, all that you have to do is find an argument of that form which has true premisses and a false conclusion. Such an argument is called a "counter-example", and this method is used throughout the entries for the subfallacies, listed above, to show that the form of these fallacies is non-validating.
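As an aside, the counter-example search can be mechanized. The sketch below is my own illustration (the function name is invented, not part of the entry): it enumerates every assignment of truth-values to the variables of a form and reports whether the form is validating.

```python
from itertools import product

def is_validating(premisses, conclusion, variables):
    """Return True if no assignment makes all premisses true and the conclusion false."""
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(p(assignment) for p in premisses) and not conclusion(assignment):
            return False  # found a counter-example
    return True

# "p and q, therefore p" -- validating
print(is_validating([lambda v: v["p"] and v["q"]], lambda v: v["p"], ["p", "q"]))  # True

# "p or q, therefore p" -- non-validating (counter-example: p false, q true)
print(is_validating([lambda v: v["p"] or v["q"]], lambda v: v["p"], ["p", "q"]))   # False
```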
A type of argument is a fallacy of propositional logic when two conditions are met:
- Its propositional form is non-validating.
- Its propositional form is similar enough to a validating form to be confused with it.
Another way to put this is that a propositional fallacy is a non-validating propositional form that appears to be validating. For this reason, each entry for a specific propositional fallacy―see the Subfallacies, above―includes a "Similar Validating Form", which is a validating propositional form similar enough to the fallacious form to be confused with it.
This discussion of propositional logic is by necessity brief, since I am only trying to give the minimal background required to understand the subfallacies above. For a lengthier explanation of propositional logic, see the following:
- Howard Pospesel, Introduction to Logic: Propositional Logic (Third Edition) (Prentice Hall, 1998). A good textbook on propositional logic for beginners.
- Peter Suber, "Propositional Logic Terms and Symbols" (1997). A concise class handout for a course in symbolic logic.
*Note: Robert Audi, General Editor, The Cambridge Dictionary of Philosophy (1995), p. 316
Composite functions inquiry
Mathematical inquiry processes: test particular cases; make conjectures about relationships; generalise and prove. Conceptual field of inquiry: function notation; inverse functions; composite functions.
The prompt was designed for a year 10 class as the basis for a short inquiry into composite functions. In showing two functions in general form and then an equation involving two composite functions, the prompt invites students to test particular cases.
As function notation is arbitrary knowledge (see a discussion of Hewitt's distinction between arbitrary and necessary knowledge here), the teacher might choose to introduce two preparatory concepts before expecting students to pose questions and make observations about the prompt.
(1) Evaluating functions
The lesson might start with the teacher contrasting an equation to a function (such as y = 3x - 2 and f(x) = 3x - 2) before going on to evaluate, for example, f(4) and f(10) for the same function. A more challenging approach would be to evaluate rational functions in the form:
In a structured inquiry, students could evaluate f(1), f(10), f(100), f(1000) and f(1 000 000) in order to find the limit as x tends to infinity (see the PowerPoint in 'Resources' below.) Would the limit be the same for f(-1), f(-10), f(-100) and so on?
(2) Inverse functions
The next stage is to introduce the class to the concept of an inverse function and to a procedure for finding one. If f(x) = y and g(y) = x, then g(y) is the inverse of f(x) and g(y) can be written as f⁻¹(x). An example that leads into the form of the functions in the prompt is:
At this point, the teacher could prepare the ground for the composite functions in the prompt by emphasising that the relationship holds the other way round:
Later in the inquiry, the teacher could return to the example to show that inverse functions are a special case that satisfy the equation in the prompt:
Sense-making, exploration and proof
Students start the inquiry by trying to make sense of the prompt. They draw on their existing knowledge to question or comment:
You could replace a, b, c, and d with numbers.
f(x) could equal 4x + 1 and g(x) could equal 3x - 2.
Does f(g(x)) mean you multiply the two functions?
Does f(g(x)) mean you put the functions together?
How do you combine two functions?
Is g(f(x)) the inverse of f(g(x))?
The teacher focuses on the three questions about the meaning of composite functions, explaining that f(g(x)) and g(f(x)) involve substituting one function into another. For f(g(x)), g(x) is the inner function and f(x) is the outer function. So, for the functions suggested in the orientation phase, we get
f(g(x)) = f(3x - 2) = 4(3x - 2) + 1 = 12x - 7 and
g(f(x)) = g(4x + 1) = 3(4x + 1) - 2 = 12x + 1
Students immediately notice that the coefficient of x is the same, but not the constant. Would this always be the case? The class decides to explore with the aim of creating a pair of functions for which f(g(x)) = g(f(x)). For students who require more structure, the teacher offers them pairs of functions to test.
The pair at the bottom of each column satisfies the condition in the prompt - that is, the composite functions are equal.
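For teachers who want a quick way to generate or check such pairs, the short Python sketch below (an illustration, not part of the original lesson materials) compares the coefficients of f(g(x)) and g(f(x)) for f(x) = ax + b and g(x) = cx + d; the particular test values are arbitrary.
# Minimal sketch: test whether f(g(x)) = g(f(x)) for f(x) = ax + b and g(x) = cx + d.
def compose_coeffs(a, b, c, d):
    # f(g(x)) = a(cx + d) + b = (ac)x + (ad + b)
    # g(f(x)) = c(ax + b) + d = (ac)x + (bc + d)
    return (a * c, a * d + b), (a * c, b * c + d)

pairs = [(4, 1, 3, -2), (2, 3, 5, 12), (3, -4, 2, -2)]   # arbitrary coefficients to try
for a, b, c, d in pairs:
    fg, gf = compose_coeffs(a, b, c, d)
    print(f"a={a}, b={b}, c={c}, d={d}: f(g(x)) -> {fg}, g(f(x)) -> {gf}, equal: {fg == gf}")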
Students start to develop conjectures from their examples. Some look at their examples, list those that 'work' and try to generalise. One student contends that if ad + b = bc + d, then the constants are equal. She presents her reasoning on the board:
f(g(x)) = f(cx + d) = a(cx + d) + b = acx + (ad + b) and
g(f(x)) = g(ax + b) = c(ax + b) + d = acx + (bc + d)
As we want f(g(x)) = g(f(x)), the constants must be equal and, therefore, ad + b = cb + d. She goes on to explain how she could use her formula to find values of a, b, c and d that form functions for which f(g(x)) = g(f(x)).
The teacher concludes the lesson by leading students in the co-construction of a formal proof:
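The proof appears as an image on the original page; the LaTeX sketch below reconstructs the likely argument from the working above (a plausible reconstruction, not the class's actual board work).
Let $f(x) = ax + b$ and $g(x) = cx + d$. Then
\[
f(g(x)) = a(cx + d) + b = acx + (ad + b), \qquad
g(f(x)) = c(ax + b) + d = acx + (bc + d).
\]
The coefficients of $x$ agree for every choice of $a$, $b$, $c$ and $d$, so
\[
f(g(x)) = g(f(x)) \text{ for all } x \iff ad + b = bc + d.
\]
In particular, if $g = f^{-1}$ then $f(g(x)) = g(f(x)) = x$, so inverse pairs automatically satisfy the condition.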
Other lines of inquiry
1. Three functions
Are there three functions f(x) = ax + b, g(x) = cx + d and h(x) = ex + f such that f(g(h(x))) = h(g(f(x)))?
In this case the relationship between the variables simplifies to:
acf + ad + b = ecb + ed + f
2. Changing the degree of the functions
What would happen if one of the functions was quadratic, such as f(x) = ax + b and g(x) = cx² + dx?
f(g(x)) = f(cx² + dx) = a(cx² + dx) + b = acx² + adx + b
g(f(x)) = g(ax + b) = c(ax + b)² + d(ax + b) = a²cx² + 2abcx + b²c + adx + bd
By matching terms in x²,
acx² = a²cx², which leads to a = 1
By matching terms in x and substituting in a = 1,
adx = 2abcx + adx
d = 2bc + d which leads to bc = 0 and b = 0
(c cannot equal zero, otherwise the function would not be quadratic).
So, in order to satisfy the equality, f(x) = x. In this case, c and d can take any value because f(g(x)) = g(f(x)) = cx2 + dx.
3. Inverse functions
Another line of inquiry involves inverse functions. The teacher might use these additional prompts to encourage students to verify particular cases and prove the general result.
A proof for the prompt above:
Often when we want to make a point that nothing is sacred, we say, “one plus one does not equal two.” This is designed to shock us and attack our fundamental assumptions about the nature of the universe. Well, in this chapter on floating-point numbers, we will learn that “0.1 + 0.1 does not always equal 0.2” when we use floating-point numbers for computations.
In this chapter we explore the limitations of floating-point numbers and how you as a programmer can write code to minimize the effect of these limitations. This chapter is just a brief introduction to a significant field of mathematics called numerical analysis.
The real world is full of real numbers. Quantities such as distances, velocities, masses, and angles are all real numbers.1 A wonderful property of real numbers is that they have unlimited accuracy. For example, when considering the ratio of the circumference of a circle to its diameter, we arrive at a value of 3.141592.... The decimal value for pi does not terminate. Because real numbers have unlimited accuracy, even though we can’t write it down, pi is still a real number. Some real numbers are rational numbers because they can be represented as the ratio of two integers, such as 1/3. Not all real numbers are rational numbers. Not surprisingly, those real numbers that aren’t rational numbers are called irrational. You probably would not want to start an argument with an irrational number unless you have a lot of free time on your hands.
Unfortunately, on a piece of paper, or in a computer, we don’t have enough space to keep writing the digits of pi. So what do we do? We decide that we only need so much accuracy and round real numbers to a certain number of digits. For example, if we decide on four digits of accuracy, our approximation of pi is 3.142. Some state legislature attempted to pass a law that pi was to be three. While this is often cited as evidence for the IQ of governmental entities, perhaps the legislature was just suggesting that we only need one digit of accuracy for pi. Perhaps they foresaw the need to save precious memory space on computers when representing real numbers.
Given that we cannot perfectly represent real numbers on digital computers, we must come up with a compromise that allows us to approximate real numbers.2 There are a number of different ways that have been used to represent real numbers. The challenge in selecting a representation is the trade-off between space and accuracy and the trade-off between speed and accuracy. In the field of high performance computing we generally expect our processors to produce a floating-point result on every cycle of a 600-MHz clock. It is pretty clear that in most applications we aren’t willing to drop this by a factor of 100 just for a little more accuracy. Before we discuss the format used by most high performance computers, we discuss some alternative (albeit slower) techniques for representing real numbers.
Binary Coded Decimal
In the earliest computers, one technique was to use binary coded decimal (BCD). In BCD, each base-10 digit was stored in four bits. Numbers could be arbitrarily long with as much precision as there was memory:
123.45 0001 0010 0011 0100 0101
This format allows the programmer to choose the precision required for each variable. Unfortunately, it is difficult to build extremely high-speed hardware to perform arithmetic operations on these numbers. Because each number may be far longer than 32 or 64 bits, they did not fit nicely in a register. Many of the floating-point operations for BCD were done using loops in microcode. Even with the flexibility of accuracy on BCD representation, there was still a need to round real numbers to fit into a limited amount of space.
Another limitation of the BCD approach is that we store a value from 0–9 in a four-bit field. This field is capable of storing values from 0–15 so some of the space is wasted.
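The digit-per-nibble packing is easy to illustrate; the following Python sketch (an illustration, not from the original text) encodes the digits of the example above as four-bit fields.
# Minimal sketch: encode the digits of a decimal string as 4-bit BCD fields.
def to_bcd(text):
    # Each base-10 digit becomes its own 4-bit code; the decimal point is tracked separately.
    return " ".join(format(int(ch), "04b") for ch in text if ch.isdigit())

print(to_bcd("123.45"))   # 0001 0010 0011 0100 0101, matching the example above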
One intriguing method of storing real numbers is to store them as rational numbers. To briefly review mathematics, rational numbers are the subset of real numbers that can be expressed as a ratio of integer numbers. For example, 22/7 and 1/2 are rational numbers. Some rational numbers, such as 1/2 and 1/10, have perfect representation as base-10 decimals, and others, such as 1/3 and 22/7, can only be expressed as infinite-length base-10 decimals. When using rational numbers, each real number is stored as two integer numbers representing the numerator and denominator. The basic fractional arithmetic operations are used for addition, subtraction, multiplication, and division, as shown in [Figure 1].
The limitation that occurs when using rational numbers to represent real numbers is that the size of the numerators and denominators tends to grow. For each addition, a common denominator must be found. To keep the numbers from becoming extremely large, during each operation, it is important to find the greatest common divisor (GCD) to reduce fractions to their most compact representation. When the values grow and there are no common divisors, either the large integer values must be stored using dynamic memory or some form of approximation must be used, thus losing the primary advantage of rational numbers.
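Python's standard fractions module behaves exactly this way — it finds a common denominator on each addition and reduces by the GCD — so it makes a convenient illustration (a sketch, not something the chapter itself uses):
# Minimal sketch: exact rational arithmetic; results are reduced by the GCD automatically.
from fractions import Fraction

x = Fraction(1, 3) + Fraction(22, 7)   # common denominator 21 is found behind the scenes
print(x)                               # 73/21, stored exactly as numerator and denominator
print(float(x))                        # only the final conversion to floating-point is approximate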
For mathematical packages such as Maple or Mathematica that need to produce exact results on smaller data sets, the use of rational numbers to represent real numbers is at times a useful technique. The performance and storage cost is less significant than the need to produce exact results in some instances.
If the desired number of decimal places is known in advance, it’s possible to use fixed-point representation. Using this technique, each real number is stored as a scaled integer. This solves the problem that base-10 fractions such as 0.1 or 0.01 cannot be perfectly represented as a base-2 fraction. If you multiply 110.77 by 100 and store it as a scaled integer 11077, you can perfectly represent the base-10 fractional part (0.77). This approach can be used for values such as money, where the number of digits past the decimal point is small and known.
However, just because all numbers can be accurately represented it doesn’t mean there are not errors with this format. When multiplying a fixed-point number by a fraction, you get digits that can’t be represented in a fixed-point format, so some form of rounding must be used. For example, if you have $125.87 in the bank at 4% interest, your interest amount would be $5.0348. However, because your bank balance only has two digits of accuracy, they only give you $5.03, resulting in a balance of $130.90. Of course you probably have heard many stories of programmers getting rich depositing many of the remaining 0.0048 amounts into their own account. My guess is that banks have probably figured that one out by now, and the bank keeps the money for itself. But it does make one wonder if they round or truncate in this type of calculation.3
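The bank-balance example can be written as a scaled-integer computation directly; the sketch below (illustrative values only) stores dollars as an integer number of cents and rounds the interest once.
# Minimal sketch: fixed-point money as scaled integers (cents), with one explicit rounding step.
balance_cents = 12587                          # $125.87 stored as an integer number of cents
interest_cents = round(balance_cents * 0.04)   # $5.0348 of interest becomes 503 cents
balance_cents += interest_cents
print(balance_cents)                           # 13090, i.e. $130.90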
The floating-point format that is most prevalent in high performance computing is a variation on scientific notation. In scientific notation the real number is represented using a mantissa, base, and exponent: 6.02 × 10²³.
The mantissa typically has some fixed number of places of accuracy. The mantissa can be represented in base 2, base 16, or BCD. There is generally a limited range of exponents, and the exponent can be expressed as a power of 2, 10, or 16.
The primary advantage of this representation is that it provides a wide overall range of values while using a fixed-length storage representation. The primary limitation of this format is that the difference between two successive values is not uniform. For example, assume that you can represent three base-10 digits, and your exponent can range from –10 to 10. For numbers close to zero, the “distance” between successive numbers is very small. For the number 1.72 × 10⁻¹⁰, the next larger number is 1.73 × 10⁻¹⁰. The distance between these two “close” small numbers is 0.000000000001. For the number 6.33 × 10¹⁰, the next larger number is 6.34 × 10¹⁰. The distance between these “close” large numbers is 100 million.
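The same non-uniform spacing shows up in IEEE 32-bit floats; this NumPy sketch (an illustration, not the three-digit example in the text) prints the gap to the next representable value near a very small and a very large number.
# Minimal sketch: spacing between adjacent representable values in 32-bit floating-point (requires NumPy).
import numpy as np

small = np.float32(1.72e-10)
large = np.float32(6.33e10)
print(np.nextafter(small, np.float32(np.inf)) - small)   # a tiny gap near zero
print(np.nextafter(large, np.float32(np.inf)) - large)   # a gap of thousands near 6.33E10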
In [Figure 2], we use two base-2 digits with an exponent ranging from –1 to 1.
There are multiple equivalent representations of a number when using scientific notation; for example, 6.00 × 10⁵, 0.60 × 10⁶, and 0.06 × 10⁷ all represent the same value.
By convention, we shift the mantissa (adjust the exponent) until there is exactly one nonzero digit to the left of the decimal point. When a number is expressed this way, it is said to be “normalized.” In the above list, only 6.00 × 10⁵ is normalized. [Figure 3] shows how some of the floating-point numbers from [Figure 2] are not normalized.
While the mantissa/exponent has been the dominant floating-point approach for high performance computing, there were a wide variety of specific formats in use by computer vendors. Historically, each computer vendor had their own particular format for floating-point numbers. Because of this, a program executed on several different brands of computer would generally produce different answers. This invariably led to heated discussions about which system provided the right answer and which system(s) were generating meaningless results.4
When storing floating-point numbers in digital computers, typically the mantissa is normalized, and then the mantissa and exponent are converted to base-2 and packed into a 32- or 64-bit word. If more bits were allocated to the exponent, the overall range of the format would be increased, and the number of digits of accuracy would be decreased. Also the base of the exponent could be base-2 or base-16. Using 16 as the base for the exponent increases the overall range of exponents, but because normalization must occur on four-bit boundaries, the available digits of accuracy are reduced on the average. Later we will see how the IEEE 754 standard for floating-point format represents numbers.
Effects of Floating-Point Representation
One problem with the mantissa/base/exponent representation is that not all base-10 numbers can be expressed perfectly as a base-2 number. For example, 1/2 and 0.25 can be represented perfectly as base-2 values, while 1/3 and 0.1 produce infinitely repeating base-2 decimals. These values must be rounded to be stored in the floating-point format. With sufficient digits of precision, this generally is not a problem for computations. However, it does lead to some anomalies where algebraic rules do not appear to apply. Consider the following example:
      REAL*4 X,Y
      X = 0.1
      Y = 0
      DO I=1,10
        Y = Y + X
      ENDDO
      IF ( Y .EQ. 1.0 ) THEN
        PRINT *,'Algebra is truth'
      ELSE
        PRINT *,'Not here'
      ENDIF
      PRINT *,1.0-Y
      END
At first glance, this appears simple enough. Mathematics tells us ten times 0.1 should be one. Unfortunately, because 0.1 cannot be represented exactly as a base-2 decimal, it must be rounded. It ends up being rounded down to the last bit. When ten of these slightly smaller numbers are added together, it does not quite add up to 1.0. When X and Y are REAL*4, the difference is about 10⁻⁷, and when they are REAL*8, the difference is about 10⁻¹⁶.
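The same experiment is easy to reproduce outside of FORTRAN. The sketch below (assuming NumPy is available) accumulates 0.1 ten times in both 32-bit and 64-bit precision and prints how far each result lands from 1.0.
# Minimal sketch: sum 0.1 ten times in single and double precision (requires NumPy).
import numpy as np

for dtype in (np.float32, np.float64):
    x = dtype(0.1)            # 0.1 cannot be represented exactly in binary floating-point
    y = dtype(0.0)
    for _ in range(10):
        y = y + x             # the accumulation stays in the chosen precision
    print(dtype.__name__, y == dtype(1.0), float(dtype(1.0) - y))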
One possible method for comparing computed values to constants is to subtract the values and test to see how close the two values become. For example, one can rewrite the test in the above code to be:
      IF ( ABS(1.0-Y) .LT. 1E-6 ) THEN
        PRINT *,'Close enough for government work'
      ELSE
        PRINT *,'Not even close'
      ENDIF
The type of the variables in question and the expected error in the computation that produces Y determine the appropriate value used to declare that two values are close enough to be considered equal.
Another area where inexact representation becomes a problem is the fact that algebraic inverses do not hold with all floating-point numbers. For example, using REAL*4, the value (1.0/X) * X does not evaluate to 1.0 for 135 values of X from one to 1000. This can be a problem when computing the inverse of a matrix using LU-decomposition. LU-decomposition repeatedly does division, multiplication, addition, and subtraction. If you do the straightforward LU-decomposition on a matrix with integer coefficients that has an integer solution, there is a pretty good chance you won’t get the exact solution when you run your algorithm. Discussing techniques for improving the accuracy of matrix inverse computation is best left to a numerical analysis text.
More Algebra That Doesn't Work
While the examples in the preceding section focused on the limitations of multiplication and division, addition and subtraction are not, by any means, perfect. Because of the limitation of the number of digits of precision, certain additions or subtractions have no effect. Consider the following example using REAL*4 with 7 digits of precision:
      X = 1.25E8
      Y = X + 7.5E-3
      IF ( X.EQ.Y ) THEN
        PRINT *,'Am I nuts or what?'
      ENDIF
While both of these numbers are precisely representable in floating-point, adding them is problematic. Prior to adding these numbers together, their decimal points must be aligned as in [Figure 4].
Unfortunately, while we have computed the exact result, it cannot fit back into a REAL*4 variable (7 digits of accuracy) without truncating the 0.0075. So after the addition, the value in Y is exactly 1.25E8. Even sadder, the addition could be performed millions of times, and the value for Y would still be 1.25E8.
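Here is the same absorption effect as a quick NumPy sketch (an illustration, not the chapter's FORTRAN): in 32-bit precision the 7.5E-3 disappears entirely, while 64-bit precision has enough digits to keep it.
# Minimal sketch: a small addend is absorbed by a large value in single precision (requires NumPy).
import numpy as np

x32 = np.float32(1.25e8)
y32 = x32 + np.float32(7.5e-3)
print(x32 == y32)                 # True: the 0.0075 is lost to rounding

x64 = np.float64(1.25e8)
y64 = x64 + np.float64(7.5e-3)
print(x64 == y64, y64 - x64)      # False, and the difference is approximately 0.0075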
Because of the limitation on precision, not all algebraic laws apply all the time. For instance, the answer you obtain from X+Y will be the same as Y+X, as per the commutative law for addition. Whichever operand you pick first, the operation yields the same result; they are mathematically equivalent. It also means that you can choose either of the following two forms and get the same answer:
(X + Y) + Z (Y + X) + Z
However, this is not equivalent:
(Y + Z) + X
The third version isn’t equivalent to the first two because the order of the calculations has changed. Again, the rearrangement is equivalent algebraically, but not computationally. By changing the order of the calculations, we have taken advantage of the associativity of the operations; we have made an associative transformation of the original code.
To understand why the order of the calculations matters, imagine that your computer can perform arithmetic significant to only five decimal places.
Also assume that the values of X, Y, and Z are .00005, .00005, and 1.0000, respectively. This means that:
(X + Y) + Z = .00005 + .00005 + 1.0000 = .0001 + 1.0000 = 1.0001
(Y + Z) + X = .00005 + 1.0000 + .00005 = 1.0000 + .00005 = 1.0000
The two versions give slightly different answers. When adding Y+Z+X, the sum of the smaller numbers was insignificant when added to the larger number. But when computing X+Y+Z, we add the two small numbers first, and their combined sum is large enough to influence the final answer. For this reason, compilers that rearrange operations for the sake of performance generally only do so after the user has requested optimizations beyond the defaults.
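The five-digit thought experiment translates directly into 32-bit arithmetic; in this NumPy sketch (illustrative values chosen to sit near the rounding threshold of 1.0) the grouping determines whether the two small values survive at all.
# Minimal sketch: floating-point addition is commutative but not associative (requires NumPy).
import numpy as np

x = np.float32(5.0e-8)
y = np.float32(5.0e-8)
z = np.float32(1.0)

print((x + y) + z)   # about 1.0000001: the two small values combine first and survive rounding
print((y + z) + x)   # exactly 1.0: each small value on its own is absorbed by 1.0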
For these reasons, the FORTRAN language is very strict about the exact order of evaluation of expressions. To be compliant, the compiler must ensure that the operations occur exactly as you express them.5
For Kernighan and Ritchie C, the operator precedence rules are different. Although the precedences between operators are honored (i.e., * comes before +, and evaluation generally occurs left to right for operators of equal precedence), the compiler is allowed to treat a few commutative operations (+, *, &, ^ and |) as if they were fully associative, even if they are parenthesized. For instance, you might tell the C compiler:
a = x + (y + z);
However, the C compiler is free to ignore you, and combine x, y, and z in any order it pleases.
Now armed with this knowledge, consider the following harmless-looking code segment:
      REAL*4 SUM,A(1000000)
      SUM = 0.0
      DO I=1,1000000
        SUM = SUM + A(I)
      ENDDO
It begins to look like a nightmare waiting to happen. The accuracy of this sum depends on the relative magnitudes and order of the values in the array A. If we sort the array from smallest to largest and then perform the additions, we have a more accurate value. There are other algorithms for computing the sum of an array that reduce the error without requiring a full sort of the data. Consult a good textbook on numerical analysis for the details on these algorithms.
If the range of magnitudes of the values in the array is relatively small, the straight-forward computation of the sum is probably sufficient.
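The effect of ordering is easy to demonstrate with synthetic data; in the sketch below (assuming NumPy; the values are contrived so that the error is obvious) a single huge element causes every subsequent 1.0 to be absorbed unless the small values are summed first.
# Minimal sketch: summation order changes the accumulated error in single precision (requires NumPy).
import numpy as np

a = np.ones(100_000, dtype=np.float32)
a[0] = np.float32(1.0e8)                          # one huge leading value widens the range of magnitudes

reference = float(np.sum(a, dtype=np.float64))    # exact answer: 100,099,999

def running_sum(values):
    total = np.float32(0.0)
    for v in values:
        total = np.float32(total + v)             # accumulate in 32-bit, like the FORTRAN loop above
    return float(total)

print(abs(running_sum(a) - reference))            # large error: every 1.0 is absorbed by the 1.0E8
print(abs(running_sum(np.sort(a)) - reference))   # small error: the 1.0s are summed before the 1.0E8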
Improving Accuracy Using Guard Digits
In this section we explore a technique to improve the precision of floating-point computations without using additional storage space for the floating-point numbers.
Consider the following example of a base-10 system with five digits of accuracy performing the following subtraction:
10.001 - 9.9993 = 0.0017
All of these values can be perfectly represented using our floating-point format. However, if we only have five digits of precision available while aligning the decimal points during the computation, the results end up with significant error as shown in [Figure 5].
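Since [Figure 5] is not reproduced here, the following worked example (assuming the shifted operand is rounded to the working five digits before subtracting) sketches how the error arises.
\[
1.0001 \times 10^{1} - 0.99993 \times 10^{1}
\;\rightarrow\;
1.0001 \times 10^{1} - 0.9999 \times 10^{1}
= 0.0002 \times 10^{1} = 0.002,
\]
whereas the exact answer is 0.0017. Carrying one extra digit during the subtraction preserves the shifted operand:
\[
1.00010 \times 10^{1} - 0.99993 \times 10^{1} = 0.00017 \times 10^{1} = 0.0017.
\]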
To perform this computation and round it correctly, we do not need to increase the number of significant digits for stored values. We do, however, need additional digits of precision while performing the computation.
The solution is to add extra guard digits which are maintained during the interim steps of the computation. In our case, if we maintained six digits of accuracy while aligning operands, and rounded before normalizing and assigning the final value, we would get the proper result. The guard digits only need to be present as part of the floating-point execution unit in the CPU. It is not necessary to add guard digits to the registers or to the values stored in memory.
It is not necessary to have an extremely large number of guard digits. At some point, the difference in the magnitude between the operands becomes so great that lost digits do not affect the addition or rounding results.
History of IEEE Floating-Point Format
Prior to the RISC microprocessor revolution, each vendor had their own floating- point formats based on their designers’ views of the relative importance of range versus accuracy and speed versus accuracy. It was not uncommon for one vendor to carefully analyze the limitations of another vendor’s floating-point format and use this information to convince users that theirs was the only “accurate” floating- point implementation. In reality none of the formats was perfect. The formats were simply imperfect in different ways.
During the 1980s the Institute of Electrical and Electronics Engineers (IEEE) produced a standard for the floating-point format. The title of the standard is “IEEE 754-1985 Standard for Binary Floating-Point Arithmetic.” This standard provided the precise definition of a floating-point format and described the operations on floating-point values.
Because IEEE 754 was developed after a variety of floating-point formats had been in use for quite some time, the IEEE 754 working group had the benefit of examining the existing floating-point designs and taking the strong points, and avoiding the mistakes in existing designs. The IEEE 754 specification had its beginnings in the design of the Intel i8087 floating-point coprocessor. The i8087 floating-point format improved on the DEC VAX floating-point format by adding a number of significant features.
The near universal adoption of the IEEE 754 floating-point format has occurred over a 10-year time period. The high performance computing vendors of the mid 1980s (Cray, IBM, DEC, and Control Data) had their own proprietary floating-point formats that they had to continue supporting because of their installed user base. They really had no choice but to continue to support their existing formats. In the mid to late 1980s the primary systems that supported the IEEE format were RISC workstations and some coprocessors for microprocessors. Because the designers of these systems had no need to protect a proprietary floating-point format, they readily adopted the IEEE format. As RISC processors moved from general-purpose integer computing to high performance floating-point computing, the CPU designers found ways to make IEEE floating-point operations operate very quickly. In 10 years, IEEE 754 has gone from a standard for floating-point coprocessors to the dominant floating-point standard for all computers. Because of this standard, we, the users, are the beneficiaries of a portable floating-point environment.
IEEE Floating-Point Standard
The IEEE 754 standard specified a number of different details of floating-point operations, including:
- Storage formats
- Precise specifications of the results of operations
- Special values
- Specified runtime behavior on illegal operations
Specifying the floating-point format to this level of detail ensures that when a computer system is compliant with the standard, users can expect repeatable execution from one hardware platform to another when operations are executed in the same order.
IEEE Storage Format
The two most common IEEE floating-point formats in use are 32- and 64-bit numbers. [Table below] gives the general parameters of these data types.
|IEEE 754||FORTRAN||C||Bits||Exponent Bits||Mantissa Bits|
|Single||REAL*4||float||32||8||24|
|Double||REAL*8||double||64||11||53|
In FORTRAN, the 32-bit format is usually called REAL, and the 64-bit format is usually called DOUBLE PRECISION. However, some FORTRAN compilers double the sizes for these data types. For that reason, it is safest to declare your FORTRAN variables as REAL*4 or REAL*8. The double-extended format is not as well supported in compilers and hardware as the single- and double-precision formats. The bit arrangement for the single and double formats is shown in [Figure 6].
Based on the storage layouts in [Table 1], we can derive the ranges and accuracy of these formats, as shown in [Table 2].
|IEEE754||Minimum Normalized Number||Largest Finite Number||Base-10 Accuracy|
|Single||1.2E-38||3.4 E+38||6-9 digits|
|Double||2.2E-308||1.8 E+308||15-17 digits|
|Extended Double||3.4E-4932||1.2 E+4932||18-21 digits|
Converting from Base-10 to IEEE Internal Format
We now examine how a 32-bit floating-point number is stored. The high-order bit is the sign of the number. Numbers are stored in a sign-magnitude format (i.e., not 2’s - complement). The exponent is stored in the 8-bit field biased by adding 127 to the exponent. This results in an exponent ranging from -126 through +127.
The mantissa is converted into base-2 and normalized so that there is one nonzero digit to the left of the binary point, adjusting the exponent as necessary. The digits to the right of the binary point are then stored in the low-order 23 bits of the word. Because all numbers are normalized, there is no need to store the leading 1.
This gives a free extra bit of precision. Because this bit is dropped, it’s no longer proper to refer to the stored value as the mantissa. In IEEE parlance, this mantissa minus its leading digit is called the significand.
[Figure 7] shows an example conversion from base-10 to IEEE 32-bit format.
The 64-bit format is similar, except the exponent is 11 bits long, biased by adding 1023 to the exponent, and the stored significand is 52 bits long.
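A quick way to look at this layout is to pack a value as an IEEE 32-bit float and pull the fields back out; the Python sketch below (using the standard struct module, not part of the original text) prints the sign, the unbiased exponent, and the 23 stored significand bits.
# Minimal sketch: unpack the sign, exponent, and significand fields of an IEEE 32-bit float.
import struct

def ieee32_fields(x):
    bits = int.from_bytes(struct.pack(">f", x), "big")   # big-endian IEEE 754 single precision
    sign = bits >> 31
    biased_exponent = (bits >> 23) & 0xFF                 # stored exponent = true exponent + 127
    significand = bits & 0x7FFFFF                         # 23 stored bits; the leading 1 is implicit
    return sign, biased_exponent, significand

sign, exponent, significand = ieee32_fields(-0.1)
print(sign, exponent - 127, format(significand, "023b"))  # sign bit, unbiased exponent, stored bits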
The IEEE standard specifies how computations are to be performed on floating-point values for the following operations:
- Addition, subtraction, multiplication, and division
- Square root
- Remainder (modulo)
- Conversion to/from integer
- Conversion to/from printed base-10
These operations are specified in a machine-independent manner, giving flexibility to the CPU designers to implement the operations as efficiently as possible while maintaining compliance with the standard. During operations, the IEEE standard requires the maintenance of two guard digits and a sticky bit for intermediate values. The guard digits are extra bits of precision carried through the computation, and the sticky bit is used to indicate whether any of the bits beyond the second guard digit are nonzero.
In [Figure 8], we have five bits of normal precision, two guard digits, and a sticky bit. Guard bits simply operate as normal bits — as if the significand were two bits wider. Guard bits participate in rounding as the extended operands are added. The sticky bit is set to 1 if any of the bits beyond the guard bits is nonzero in either operand.6 Once the extended sum is computed, it is rounded so that the value stored in memory is the closest possible value to the extended sum including the guard digits. [Table 3] shows all eight possible values of the two guard digits and the sticky bit and the resulting stored value with an explanation as to why.
|Extended Sum||Stored Value||Why|
|1.0100 000||1.0100||Truncated based on guard digits|
|1.0100 001||1.0100||Truncated based on guard digits|
|1.0100 010||1.0100||Rounded down based on guard digits|
|1.0100 011||1.0100||Rounded down based on guard digits|
|1.0100 100||1.0100||Rounded down based on sticky bit|
|1.0100 101||1.0101||Rounded up based on sticky bit|
|1.0100 110||1.0101||Rounded up based on guard digits|
|1.0100 111||1.0101||Rounded up based on guard digits|
The first priority is to check the guard digits. Never forget that the sticky bit is just a hint, not a real digit. So if we can make a decision without looking at the sticky bit, that is good. The only decision we are making is to round the last storable bit up or down. When that stored value is retrieved for the next computation, its guard digits are set to zeros. It is sometimes helpful to think of the stored value as having the guard digits, but set to zero.
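The decision procedure behind [Table 3] can be written out directly; the sketch below (an illustration, not text from the standard) applies round-to-nearest-even given the two guard bits and the sticky bit, and reproduces every row of the table.
# Minimal sketch: round-to-nearest-even using two guard bits and a sticky bit.
def round_kept_bits(kept, guard, sticky):
    # kept: integer holding the bits that will be stored; guard: the two guard bits (0-3);
    # sticky: 1 if any bit beyond the guard bits is nonzero, otherwise 0.
    if guard < 0b10:                 # below the halfway point: truncate
        return kept
    if guard > 0b10 or sticky:       # above the halfway point: round up
        return kept + 1
    return kept + (kept & 1)         # exactly halfway: round to the nearest even value

kept = 0b10100                       # the bit pattern 1.0100 from [Table 3]
for extra in range(8):               # every combination of two guard bits and the sticky bit
    guard, sticky = extra >> 1, extra & 1
    stored = format(round_kept_bits(kept, guard, sticky), "05b")
    print(f"1.0100 {extra:03b} -> {stored[0]}.{stored[1:]}")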
Two guard digits and the sticky bit in the IEEE format ensure that operations yield the same rounding as if the intermediate result were computed using unlimited precision and then rounded to fit within the limits of precision of the final computed value.
At this point, you might be asking, “Why do I care about this minutiae?” At some level, unless you are a hardware designer, you don’t care. But when you examine details like this, you can be assured of one thing: when they developed the IEEE floating-point standard, they looked at the details very carefully. The goal was to produce the most accurate possible floating-point standard within the constraints of a fixed-length 32- or 64-bit format. Because they did such a good job, it’s one less thing you have to worry about. Besides, this stuff makes great exam questions.
In addition to specifying the results of operations on numeric data, the IEEE standard also specifies the precise behavior on undefined operations such as dividing by zero. These results are indicated using several special values. These values are bit patterns that are stored in variables that are checked before operations are performed. The IEEE operations are all defined on these special values in addition to the normal numeric values. [Table 4] summarizes the special values for a 32-bit IEEE floating-point number.
|Special Value||Exponent||Significand|
|+ or – 0||00000000||0|
|+ or – Denormalized Number||00000000||nonzero|
|NaN (Not a Number)||11111111||nonzero|
|+ or – Infinity||11111111||0|
The value of the exponent and significand determines which type of special value this particular floating-point number represents. Zero is designed such that integer zero and floating-point zero are the same bit pattern.
Denormalized numbers can occur at some point as a number continues to get smaller, and the exponent has reached the minimum value. We could declare that minimum to be the smallest representable value. However, with denormalized values, we can continue by setting the exponent bits to zero and shifting the significand bits to the right, first adding the leading “1” that was dropped, then continuing to add leading zeros to indicate even smaller values. At some point the last nonzero digit is shifted off to the right, and the value becomes zero. This approach is called gradual underflow where the value keeps approaching zero and then eventually becomes zero. Not all implementations support denormalized numbers in hardware; they might trap to a software routine to handle these numbers at a significant performance cost.
At the top end of the biased exponent value, an exponent of all 1s can represent the Not a Number (NaN) value or infinity. Infinity occurs in computations roughly according to the principles of mathematics. If you continue to increase the magnitude of a number beyond the range of the floating-point format, once the range has been exceeded, the value becomes infinity. Once a value is infinity, further additions won’t increase it, and subtractions won’t decrease it. You can also produce the value infinity by dividing a nonzero value by zero. If you divide a nonzero value by infinity, you get zero as a result.
The NaN value indicates a number that is not mathematically defined. You can generate a NaN by dividing zero by zero, dividing infinity by infinity, or taking the square root of -1. The difference between infinity and NaN is that the NaN value has a nonzero significand. The NaN value is very sticky. Any operation that has a NaN as one of its inputs always produces a NaN result.
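These rules are easy to observe; the NumPy sketch below (NumPy is used so that dividing by zero yields IEEE special values rather than a Python exception) produces infinity and NaN and shows how they propagate.
# Minimal sketch: IEEE special values in 32-bit arithmetic (requires NumPy).
import numpy as np

with np.errstate(divide="ignore", invalid="ignore"):
    inf = np.float32(1.0) / np.float32(0.0)    # nonzero / zero -> infinity
    nan = np.float32(0.0) / np.float32(0.0)    # zero / zero    -> NaN

print(inf + np.float32(1.0e30))   # still inf: further additions do not increase infinity
print(np.float32(1.0) / inf)      # 0.0: a finite value divided by infinity
print(nan + np.float32(1.0))      # nan: any operation with a NaN input produces a NaN
print(nan == nan)                 # False: a NaN even compares unequal to itself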
Exceptions and Traps
In addition to defining the results of computations that aren’t mathematically defined, the IEEE standard provides programmers with the ability to detect when these special values are being produced. This way, programmers can write their code without adding extensive IF tests throughout the code checking for the magnitude of values. Instead they can register a trap handler for an event such as underflow and handle the event when it occurs. The exceptions defined by the IEEE standard include:
- Overflow to infinity
- Underflow to zero
- Division by zero
- Invalid operation
- Inexact operation
According to the standard, these traps are under the control of the user. In most cases, the compiler runtime library manages these traps under the direction from the user through compiler flags or runtime library calls. Traps generally have significant overhead compared to a single floating-point instruction, and if a program is continually executing trap code, it can significantly impact performance.
In some cases it’s appropriate to ignore traps on certain operations. A commonly ignored trap is the underflow trap. In many iterative programs, it’s quite natural for a value to keep reducing to the point where it “disappears.” Depending on the application, this may or may not be an error situation so this exception can be safely ignored.
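In a high-level language these controls are usually reached through the runtime library or compiler flags; as one concrete (if unrelated to FORTRAN) illustration, NumPy lets you switch the floating-point error state between ignoring and raising.
# Minimal sketch: toggling how an overflow exception is handled with NumPy's error state.
import numpy as np

huge = np.array([3.0e38], dtype=np.float32)

with np.errstate(over="ignore"):
    print(huge * 10.0)                 # quietly overflows to [inf]

try:
    with np.errstate(over="raise"):
        huge * 10.0                    # the same overflow now raises an exception
except FloatingPointError as err:
    print("trapped:", err)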
If you run a program and, when it terminates, you see a message such as:
Overflow handler called 10,000,000 times
it probably means that you need to figure out why your code is exceeding the range of the floating-point format. It probably also means that your code is executing more slowly because it is spending too much time in its error handlers.
The IEEE 754 floating-point standard does a good job describing how floating-point operations are to be performed. However, we generally don’t write assembly language programs. When we write in a higher-level language such as FORTRAN, it’s sometimes difficult to get the compiler to generate the assembly language you need for your application. The problems fall into two categories:
- The compiler is too conservative in trying to generate IEEE-compliant code and produces code that doesn’t operate at the peak speed of the processor. On some processors, to fully support gradual underflow, extra instructions must be generated for certain operations. If your code will never underflow, these instructions are unnecessary overhead.
- The optimizer takes liberties rewriting your code to improve its performance, eliminating some necessary steps. For example, if you have the following code:
      Z = X + 500
      Y = Z - 200
The optimizer may replace it with Y = X + 300. However, in the case of a value for X that is close to overflow, the two sequences may not produce the same result.
Sometimes a user prefers “fast” code that loosely conforms to the IEEE standard, and at other times the user will be writing a numerical library routine and need total control over each floating-point operation. Compilers have a challenge supporting the needs of both of these types of users. Because of the nature of the high performance computing market and benchmarks, often the “fast and loose” approach prevails in many compilers.
While this is a relatively long chapter with a lot of technical detail, it does not even begin to scratch the surface of the IEEE floating-point format or the entire field of numerical analysis. We as programmers must be careful about the accuracy of our programs, lest the results become meaningless. Here are a few basic rules to get you started:
- Look for compiler options that relax or enforce strict IEEE compliance and choose the appropriate option for your program. You may even want to change these options for different portions of your program.
- Use REAL*8 for computations unless you are sure REAL*4 has sufficient precision. Given that REAL*4 has roughly 7 digits of precision, if the bottom digits become meaningless due to rounding and computations, you are in some danger of seeing the effect of the errors in your results. REAL*8, with 15 or more digits of precision, makes this much less likely to happen.
- Be aware of the relative magnitude of numbers when you are performing additions.
- When summing up numbers, if there is a wide range, sum from smallest to largest.
- Perform multiplications before divisions whenever possible.
- When performing a comparison with a computed value, check to see if the values are “close” rather than identical.
- Make sure that you are not performing any unnecessary type conversions during the critical portions of your code.
An excellent reference on floating-point issues and the IEEE format is “What Every Computer Scientist Should Know About Floating-Point Arithmetic,” written by David Goldberg, in ACM Computing Surveys magazine (March 1991). This article gives examples of the most common problems with floating-point and outlines the solutions. It also covers the IEEE floating-point format very thoroughly. I also recommend you consult Dr. William Kahan’s home page (http://www.cs.berkeley.edu/~wkahan/) for some excellent materials on the IEEE format and challenges using floating-point arithmetic. Dr. Kahan was one of the original designers of the Intel i8087 and the IEEE 754 floating-point format.
Run the following code to count the number of inverses that are not perfectly accurate:
      REAL*4 X,Y,Z
      INTEGER I
      I = 0
      DO X=1.0,1000.0,1.0
        Y = 1.0 / X
        Z = Y * X
        IF ( Z .NE. 1.0 ) THEN
          I = I + 1
        ENDIF
      ENDDO
      PRINT *,'Found ',I
      END
Change the type of the variables to REAL*8 and repeat. Make sure to keep the optimization at a sufficiently low level (-O0) to keep the compiler from eliminating the computations.
Write a program to determine the number of digits of precision for REAL*4 and REAL*8.
Write a program to demonstrate how summing an array forward to backward and backward to forward can yield a different result.
Assuming your compiler supports varying levels of IEEE compliance, take a significant computational code and test its overall performance under the various IEEE compliance options. Do the results of the program change?
- In high performance computing we often simulate the real world, so it is somewhat ironic that we use simulated real numbers (floating-point) in those simulations of the real world.
- Interestingly, analog computers have an easier time representing real numbers. Imagine a “water- adding” analog computer which consists of two glasses of water and an empty glass. The amount of water in the two glasses are perfectly represented real numbers. By pouring the two glasses into a third, we are adding the two real numbers perfectly (unless we spill some), and we wind up with a real number amount of water in the third glass. The problem with analog computers is knowing just how much water is in the glasses when we are all done. It is also problematic to perform 600 million additions per second using this technique without getting pretty wet. Try to resist the temptation to start an argument over whether quantum mechanics would cause the real numbers to be rational numbers. And don’t point out the fact that even digital computers are really analog computers at their core. I am trying to keep the focus on floating-point values, and you keep drifting away!
- Perhaps banks round this instead of truncating, knowing that they will always make it up in teller machine fees.
- Interestingly, there was an easy answer to the question for many programmers. Generally they trusted the results from the computer they used to debug the code and dismissed the results from other computers as garbage.
- Often even if you didn’t mean it.
- If you are somewhat hardware-inclined and you think about it for a moment, you will soon come up with a way to properly maintain the sticky bit without ever computing the full “infinite precision sum.” You just have to keep track as things get shifted around.
3/7/2019· Calcium is a silver-to-gray solid metal that develops a pale yellow tint. It is element atomic number 20 on the periodic table with the symbol Ca. Unlike most transition metals, calcium and its compounds exhibit a low toxicity. The element is essential for human
Write a balanced chemical equation for the reaction between aluminum metal and chlorine gas. Write a balanced chemical equation for the reaction between lithium metal and liquid water. Write a balanced chemical equation for the reaction between gaseous hydrogen
19/8/2020· Chemical reaction - Energy considerations: Energy plays a key role in chemical processes. According to the modern view of chemical reactions, bonds between atoms in the reactants must be broken, and the atoms or pieces of molecules are reassembled into products by forming new bonds. Energy is absorbed to break bonds, and energy is evolved as bonds are made. In …
The reaction of calcium and steam will be the same as between calcium and water. Following is the chemical equation showing the reaction of calcium and water: Ca(s) + 2H₂O(l) → Ca(OH)₂(aq) + H₂(g). Also, calcium is sufficiently reactive and reacts readily
Steps Explanation Equation Write the unbalanced equation. This is a double displacement reaction, so the cations and anions swap to create new products. Ca(OH)₂(aq) + HNO₃(aq) → Ca(NO₃)₂(aq) + H₂O(ℓ) Balance the equation. Because there are two OH⁻ ions in the formula for Ca(OH)₂, we need two moles of HNO₃ to provide H⁺ ions
milk, sea water and various solid materials. It can also be used to determine the total hardness of fresh water provided the solutions used are diluted. The combined concentration of calcium and magnesium ions is considered to be the measure of water hardness.
26/7/2020· Calcium hydroxide forms when it reacts with water, but calcium oxide forms when it reacts with steam. Reaction of metals with dilute acids When a metal reacts with a dilute acid , a salt and
Calcium metal burns in the presence of bromine gas to form calcium bromide: Ca(s) + Br₂(g) → CaBr₂(s). As a final note, the aqueous state suggests that the compound is dissolved in water. Therefore, you cannot use the aqueous state unless
2AgCl(s) → 2Ag(s) (grey) + Cl₂(g). Photographic paper has a coat of silver chloride, which turns grey when exposed to sunlight. It happens because silver chloride is colourless while silver is a grey metal. (iii) Displacement Reaction: The chemical reactions in which a more reactive element displaces a less reactive element from a compound are known as displacement reactions.
Answer (1 of 2): The reaction of zinc metal and copper(II) chloride is a single replacement reaction. In a single replacement reaction, one component of a compound is replaced by another element. When zinc metal reacts with copper(II) chloride, zinc chloride is
Write a balanced chemical equation for the reaction between lithium metal and liquid water. Express your answer as a chemical equation. Identify all of the phases in your answer. Part C Write a balanced chemical equation for the reaction between gaseous
22/10/2019· Water systems using groundwater as a source are concerned with water hardness, since as water moves through soil and rock it dissolves small amounts of naturally-occurring minerals and carries them into the groundwater supply.Water is a great solvent for calcium and magnesium, so if the minerals are present in the soil around a water-supply well, hard water may be delivered to homes.
When calcium oxide (chemical formula: CaO) reacts with water (chemical formula: H₂O), the following reaction takes place: CaO + H₂O → Ca(OH)₂. The product of this reaction is calcium hydroxide
Write a balanced equation for the following reactions. Aluminum metal is oxidized by oxygen (from the air) to form aluminum oxide. Sodium oxide reacts with carbon dioxide to form sodium carbonate. Calcium metal reacts with water to form calcium hydroxide
Once ignited, calcium metal burns in air to give a mixture of white calcium oxide, CaO, and calcium nitride, Ca 3 N 2. Calcium oxide is more normally made by heating calcium carbonate. Calcium, immediately below magnesium in the periodic table is more reactive with air than magnesium.
Fluorine gas is placed in contact with calcium metal at high temperatures to produce calcium fluoride powder. What is the formula equation for this - 5123751 The formula equation for any reaction is the chemical equation which involves the chemical formulas of the
Word equations are used to describe chemical reactions. Look at the word equations below. In each case complete the word equation by adding the name of the missing substance. 1. nitric acid + potassium hydroxide → _____ + water
For the chemical reaction between an acid and a metal hydroxide (base), the products are a salt and water: metal hydroxide + acid → salt + water. If we are given the following word equation: calcium hydroxide + hydrochloric acid → calcium chloride + water
Chemistry I Dr. Saulmon 2014-15 School Year Unit 4: Chemical Reactions Problem Set 7 Wednesday, October 01, 2014 1. Pennies in the United States now consist of a zinc disk that is coated with a thin layer of copper. If a penny is scratched and then soaked
21/8/2020· Water softening, the process of removing the dissolved calcium and magnesium salts that cause hardness in water. Softened water does not form insoluble scale or precipitates in pipes and tanks or interfere with cleaners such as soap. It is thus indispensable in
4/5/2015· A novel process for obtaining magnesium from sea water involves several reactions. Write a balanced chemical equation for each step of the process. (a) The first step is the decomposition of solid calcium carbonate from seashells to form solid calcium oxide and
Example 1: Balancing Reactions Hydrogen and nitrogen react together in order to produce ammonia gas, write the chemical equation of this reaction. Solution Step 1: Write each product and reactant using its chemical formula. \[H_2 + N_2 \rightarrow NH_3\] Step
13/5/2015· Write the balanced chemical equation for the following and identify the type of reaction. Q2: Write the balanced equation …
Here, it explains the representation of chemical reactions in word equation form. Further, it deals with writing a balanced chemical equation which is extensively covered in a stepwise manner. Apart from this, writing the symbols of physical states of reactants and products involved in the chemical reactions in addition to its examples are also discussed in the chapter, Chemical Reactions and
Between the orbits of Mars and Jupiter lies the Solar System’s Main Asteroid Belt. Consisting of millions of objects that range in size from hundreds of kilometers in diameter (like Ceres and Vesta) to one kilometer or more, the Asteroid Belt has long been a source of fascination for astronomers. Initially, they wondered why the many objects that make it up did not come together to form a planet. But more recently, human beings have been eyeing the Asteroid Belt for other purposes.
Whereas most of our efforts are focused on research – in the hopes of shedding additional light on the history of the Solar System – others are looking to tap for its considerable wealth. With enough resources to last us indefinitely, there are many who want to begin mining it as soon as possible. Because of this, knowing exactly how long it would take for spaceships to get there and back is becoming a priority.
Distance from Earth:
The distance between the Asteroid Belt and Earth varies considerably depending on where we measure to. Based on its average distance from the Sun, the distance between Earth and the edge of the Belt that is closest to it can be said to be between 1.2 to 2.2 AUs, or 179.5 and 329 million km (111.5 and 204.43 million mi).
However, at any given time, part of the Asteroid Belt will be on the opposite side of the Sun, relative to Earth. From this vantage point, the distance between Earth and the Asteroid Belt ranges from 3.2 to 4.2 AU – 478.7 to 628.3 million km (297.45 to 390.4 million mi). To put that in perspective, the distance between Earth and the Asteroid Belt ranges from slightly more than the distance between the Earth and the Sun (1 AU) to the same as the distance between Earth and Jupiter (4.2 AU) when they are at their closest.
But of course, for reasons of fuel economy and time, asteroid miners and exploration missions are not about to take the long way! As such, we can safely assume that the distance between Earth and the Asteroid Belt when they are at their closest is the only measurement worth considering.
The Asteroid Belt is so thinly populated that several unmanned spacecraft have been able to move through it on their way to the outer Solar System. In more recent years, missions to study larger Asteroid Belt objects have also used this to their advantage, navigating between the smaller objects to rendezvous with bodies like Ceres and Vesta. In fact, due to the low density of materials within the Belt, the odds of a probe running into an asteroid are now estimated at less than one in a billion.
The first spacecraft to make a journey through the asteroid belt was the Pioneer 10 spacecraft, which entered the region on July 16th, 1972 (a journey of 135 days). As part of its mission to Jupiter, the craft successfully navigated through the Belt and conducted a flyby of Jupiter (in December of 1973) before becoming the first spacecraft to achieve escape velocity from the Solar System.
At the time, there were concerns that the debris would pose a hazard to the Pioneer 10 space probe. But since that mission, 11 additional spacecraft have passed through the Asteroid Belt without incident. These included Pioneer 11, Voyager 1 and 2, Ulysses, Galileo, NEAR, Cassini, Stardust, New Horizons, the ESA’s Rosetta, and most recently, the Dawn spacecraft.
For the most part, these spacecraft were on missions to the outer Solar System, where opportunities to photograph and study asteroids were brief. Only the Dawn, NEAR and JAXA’s Hayabusa missions have studied asteroids for a protracted period in orbit and at the surface. Dawn explored Vesta from July 2011 to September 2012, and is currently orbiting Ceres (and sending back data on the dwarf planet’s gravity) and is expected to remain there until 2017.
Fastest Mission to Date:
The fastest mission humanity has ever mounted was the New Horizons mission, which was launched from Earth on Jan. 19th, 2006. The mission began with a speedy launch aboard an Atlas V rocket, which accelerated it to a speed of about 16.26 km per second (58,536 km/h; 36,373 mph). At this speed, the probe reached the Asteroid Belt by the following summer, and made a close approach to the tiny asteroid 132524 APL by June 13th, 2006 (145 days after launching).
However, even this pales in comparison to Voyager 1, which was launched on Sept. 5th, 1977 and reached the Asteroid Belt on Dec. 10th, 1977 – a total of 96 days. The Voyager 2 probe, which actually launched a little over two weeks earlier (on Aug. 20th, 1977) on a slower trajectory, likewise crossed into the Belt within a few months of leaving Earth.
Not bad as travel times go. At these speeds, a spacecraft could make the trip to the Asteroid Belt, spend several weeks conducting research (or extracting ore), and then make it home in just over six months’ time. However, one has to take into account that in all these cases, the mission teams did not decelerate the probes to make a rendezvous with any asteroids.
Ergo, a mission to the Asteroid Belt would take longer as the craft would have to slow down to achieve orbital velocity. And they would also need some powerful engines of their own in order to make the trip home. This would drastically alter the size and weight of the spacecraft, which would inevitably mean it would be bigger, slower and a heck of a lot more expensive than anything we’ve sent so far.
Another possibility would be to use ionic propulsion (which is much more fuel efficient) and pick up a gravity assist by conducting a flyby of Mars – which is precisely what the Dawn mission did. However, even with a boost from Mars’ gravity, the Dawn mission still took over three years to reach the asteroid Vesta – launching on Sept. 27th, 2007, and arriving on July 16th, 2011, (a total of 3 years, 9 months, and 19 days). Not exactly good turnaround!
Proposed Future Methods:
A number of possibilities exist that could drastically reduce both travel time and fuel consumption to the Asteroid Belt, many of which are currently being considered for a number of different mission proposals. One possibility is to use spacecraft equipped with nuclear engines, a concept which NASA has been exploring for decades.
In a Nuclear Thermal Propulsion (NTP) rocket, uranium or deuterium reactions are used to heat liquid hydrogen inside a reactor, turning it into ionized hydrogen gas (plasma), which is then channeled through a rocket nozzle to generate thrust. A Nuclear Electric Propulsion (NEP) rocket involves the same basic reactor converting its heat and energy into electrical energy, which would then power an electrical engine.
In both cases, the rocket would rely on nuclear fission or fusion to generate propulsion rather than chemical propellants, which have been the mainstay of NASA and all other space agencies to date. According to NASA estimates, the most sophisticated NTP concept would have a maximum specific impulse of 5000 seconds (50 kN·s/kg).
Using this engine, NASA scientists estimate that it would take a spaceship only 90 days to get to Mars when the planet was at “opposition” – i.e. as close as 55,000,000 km from Earth. Adjusted for a distance of 1.2 AU, that means that a ship equipped with an NTP/NEP propulsion system could make the trip in about 293 days (about nine months and three weeks). A little slow, but not bad considering the technology exists.
Another proposed method of interstellar travel comes in the form of the Radio Frequency (RF) Resonant Cavity Thruster, also known as the EM Drive. Originally proposed in 2001 by Roger K. Shawyer, a UK scientist who started Satellite Propulsion Research Ltd (SPR) to bring it to fruition, this drive is built around the idea that electromagnetic microwave cavities can allow for the direct conversion of electrical energy to thrust.
According to calculations based on the NASA prototype (which yielded a power estimate of 0.4 N/kilowatt), a spacecraft equipped with the EM drive could make the trip to Mars in just ten days. Adjusted for a trip to the Asteroid Belt, a spacecraft equipped with an EM drive would take an estimated 32.5 days to get there.
Impressive, yes? But of course, that is based on a concept that has yet to be proven. So let’s turn to yet another radical proposal, which is to use ships equipped with an antimatter engine. Created in particle accelerators, antimatter is the most dense fuel you could possibly use. When atoms of matter meet atoms of antimatter, they annihilate each other, releasing an incredible amount of energy in the process.
According to the NASA Institute for Advanced Concepts (NIAC), which is researching the technology, it would take just 10 milligrams of antimatter to propel a human mission to Mars in 45 days. Based on this estimate, a craft equipped with an antimatter engine and roughly twice as much fuel could make the trip to the Asteroid Belt in roughly 147 days. But of course, the sheer cost of creating antimatter – combined with the fact that an engine based on these principles is still theoretical at this point – makes it a distant prospect.
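All three of the adjusted figures above come from the same simple proportional scaling: take the quoted Mars travel time and multiply by the ratio of the Earth-to-belt distance to the Earth-Mars distance at opposition. Here is a rough sketch of that arithmetic, assuming travel time scales linearly with distance (i.e. a constant average speed), which is only a first-order approximation for real trajectories:

```python
# Rough proportional scaling of quoted Mars travel times to the Asteroid Belt.
# Assumes travel time scales linearly with distance (constant average speed).

MARS_OPPOSITION_AU = 55_000_000 / 149_597_871  # ~0.37 AU, Mars at closest approach
BELT_DISTANCE_AU = 1.2                         # approximate Earth-to-inner-belt distance

mars_times_days = {
    "Nuclear Thermal/Electric (NTP/NEP)": 90,
    "EM Drive (unproven concept)": 10,
    "Antimatter engine (theoretical)": 45,
}

for concept, days_to_mars in mars_times_days.items():
    days_to_belt = days_to_mars * BELT_DISTANCE_AU / MARS_OPPOSITION_AU
    print(f"{concept}: ~{days_to_belt:.1f} days to the Asteroid Belt")
```

Running this reproduces the figures quoted above to within rounding (roughly 294, 33 and 147 days respectively).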
Basically, getting to the Asteroid Belt takes quite a bit of time, at least when it comes to the concepts we currently have available. Using theoretical propulsion concepts, we are able to cut down on the travel time, but it will take some time (and lots of money) before those concepts are a reality. However, compared to many other proposed missions – such as to Europa and Enceladus – the travel time is shorter, and the dividends quite clear.
As already stated, there are enough resources – in the form of minerals and volatiles – in the Asteroid Belt to last us indefinitely. And, should we someday find a cost-effective way to send spacecraft there rapidly, we could tap that wealth and begin to usher in an age of post-scarcity! But as with so many other proposals and mission concepts, it looks like we'll have to wait for the time being.
We have written many articles about the asteroid belt for Universe Today. Here’s Where Do Asteroids Come From?, Why the Asteroid Belt Doesn’t Threaten Spacecraft, and Why isn’t the Asteroid Belt a Planet?.
- NASA: Solar System Exploration – Asteroids
- The Planets – Asteroid Belt Facts
- Cornell University, Dept. of Astronomy – Asteroid Belt
- Sol Station – Main Asteroid Belt
- NASA Institute for Advanced Concepts
- NASA Spaceflight – Evaluating NASA’s Futuristic EM Drive
- SPR Ltd.
Chapter 14 Cosmic Samples and the Origin of the Solar System
1: A friend of yours who has not taken astronomy sees a meteor shower (she calls it a bunch of shooting stars). The next day she confides in you that she was concerned that the stars in the Big Dipper (her favorite star pattern) might be the next ones to go. How would you put her mind at ease?
2: In what ways are meteorites different from meteors? What is the probable origin of each?
3: How are comets related to meteor showers?
4: What do we mean by primitive material? How can we tell if a meteorite is primitive?
5: Describe the solar nebula, and outline the sequence of events within the nebula that gave rise to the planetesimals.
6: Why do the giant planets and their moons have compositions different from those of the terrestrial planets?
7: How do the planets discovered so far around other stars differ from those in our own solar system? List at least two ways.
8: Explain the role of impacts in planetary evolution, including both giant impacts and more modest ones.
9: Why are some planets and moons more geologically active than others?
10: Summarize the origin and evolution of the atmospheres of Venus, Earth, and Mars.
11: Why do meteors in a meteor shower appear to come from just one point in the sky?
12: What methods do scientists use to distinguish a meteorite from terrestrial material?
13: Why do iron meteorites represent a much higher percentage of finds than of falls?
14: Why is it more useful to classify meteorites according to whether they are primitive or differentiated rather than whether they are stones, irons, or stony-irons?
15: Which meteorites are the most useful for defining the age of the solar system? Why?
16: Suppose a new primitive meteorite is discovered (sometime after it falls in a field of soybeans) and analysis reveals that it contains a trace of amino acids, all of which show the same rotational symmetry (unlike the Murchison meteorite). What might you conclude from this finding?
17: How do we know when the solar system formed? Usually we say that the solar system is 4.5 billion years old. To what does this age correspond?
18: We have seen how Mars can support greater elevation differences than Earth or Venus. According to the same arguments, the Moon should have higher mountains than any of the other terrestrial planets, yet we know it does not. What is wrong with applying the same line of reasoning to the mountains on the Moon?
19: Present theory suggests that giant planets cannot form without condensation of water ice, which becomes vapor at the high temperatures close to a star. So how can we explain the presence of jovian-sized exoplanets closer to their star than Mercury is to our Sun?
20: Why are meteorites of primitive material considered more important than other meteorites? Why have most of them been found in Antarctica?
Figuring for Yourself
21: How long would material take to go around if the solar nebula in [link] became the size of Earth’s orbit?
22: Consider the differentiated meteorites. We think the irons are from the cores, the stony-irons are from the interfaces between mantles and cores, and the stones are from the mantles of their differentiated parent bodies. If these parent bodies were like Earth, what fraction of the meteorites would you expect to consist of irons, stony-irons, and stones? Is this consistent with the observed numbers of each? (Hint: You will need to look up what percent of the volume of Earth is taken up by its core, mantle, and crust.)
23: Estimate the maximum height of the mountains on a hypothetical planet similar to Earth but with twice the surface gravity of our planet.
Have you ever thought of using a clock to identify mineral deposits or concealed water resources within the Earth? An international team headed by astrophysicists Philippe Jetzer and Ruxandra Bondarescu from the University of Zurich is convinced that ultraprecise portable atomic clocks will make this a reality in the next decade. The scientists argue that these atomic clocks have already reached the necessary degree of precision to be useful for geophysical surveying. They say that such clocks will provide the most direct measurement of the geoid – the Earth's true physical form. It will also be possible to combine atomic clock measurements with existing geophysical methods to explore the interior of the Earth.
Determining geoid from general relativity
Today, the Earth's geoid – the surface of constant gravitational potential that extends mean sea level – can only be determined indirectly. On continents, the geoid can be calculated by tracking the altitude of satellites in orbit. Picking the right surface is a complicated, multivalued problem, and the spatial resolution of the geoid computed this way is low – approximately 100 km.
Using atomic clocks to determine the geoid is an idea based on general relativity that has been discussed for the past 30 years. Clocks located at different distances from a heavy body like our Earth tick at different rates. Similarly, the closer a clock is to a heavy underground structure, the slower it ticks – a clock positioned over an iron ore deposit will tick slower than one that sits above an empty cave. "In 2010, ultraprecise atomic clocks measured the time difference between two clocks, one positioned 33 centimeters above the other," explains Bondarescu, before adding: "Local mapping of the geoid to an equivalent height of 1 centimeter with atomic clocks seems ambitious, but within the reach of atomic clock technology."
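To see why centimetre-level geoid mapping demands such extreme clock precision, one can use the standard weak-field relation for gravitational time dilation, Δf/f ≈ gΔh/c². A minimal sketch of that estimate (the numbers are illustrative and not taken from the study itself):

```python
# Fractional rate difference between two clocks separated by a height dh
# near Earth's surface, using the weak-field approximation df/f ~ g*dh/c^2.

g = 9.81          # m/s^2, surface gravitational acceleration
c = 299_792_458   # m/s, speed of light

def fractional_shift(dh_m: float) -> float:
    """Approximate fractional frequency shift for a height difference dh_m (metres)."""
    return g * dh_m / c**2

print(fractional_shift(0.33))  # ~3.6e-17, the 33 cm experiment mentioned above
print(fractional_shift(0.01))  # ~1.1e-18, the 1 cm geoid target
```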
Geophysical surveying with atomic clocks
According to Bondarescu, if an atomic clock is placed at sea level, i.e., at the exact altitude of the geoid, a second clock could be positioned anywhere on the continent as long as it is synchronized with the first clock. The connection between the clocks can be made with fiber optic cable or via telecommunication satellite, provided that the transmission is reliable enough. The second clock will tick faster or slower, depending on whether it is above or beneath the geoid. The local measurement of the geoid can then be combined with other geophysical measurements, such as those from gravimeters, which measure the acceleration of the gravitational field, to get a better idea of the underground structure.
Mappings possible to great depths
In principle, atomic clock surveying is possible to great depth, provided that the heavy underground structure to be studied is large enough to affect the tick rates of clocks in a measurable manner. The smallest structure that atomic clocks accurate to 1 centimeter in geoid height can detect is a buried sphere with a radius of about 1.5 kilometers at a depth of 2 kilometers below the surface, provided it has a density contrast of about 20% with the surrounding upper crust. The scientists estimate that the same clocks would also be sensitive to a buried sphere with a radius of 4 kilometers at a depth of about 30 kilometers for the same density contrast.
Currently, ultraprecise atomic clocks only work in labs. In other words, they are not transportable and thus cannot be used for measurements in the field. However, this is all set to change in the next few years: various companies and research institutes, including the Centre Suisse d'Electronique et de Microtechnique CSEM based in Neuchâtel, are already working on the development of portable ultraprecise atomic clocks. "By 2022 at the earliest, one such ultraprecise portable atomic clock will fly into space on board an ESA satellite," says Professor Philippe Jetzer, the Swiss delegate for the STE-QUEST satellite mission aimed at testing the general relativity theory very precisely. As early as 2014 or 2015, the "Atomic Clock Ensemble in Space ACES" is to be taken to the International Space Station ISS. ACES is an initial prototype that does not yet have the precision of STE-QUEST.
The Antikythera mechanism is a 2,100-year-old computer. One hundred sixteen years ago (1902), divers found a chunk of bronze off a Greek island, and it has radically changed our understanding of human history. An archaeologist sifting through objects found in the wreck of a 2,000-year-old vessel off the Greek island of Antikythera found, among the wreck's treasures (fine vases and pots, jewellery and, fittingly enough, a bronze statue of an ancient philosopher), a peculiar contraption consisting of a series of brass gears and dials mounted in a case the size of a mantel clock. Archaeologists dubbed the instrument the Antikythera mechanism.

The genius, and the mystery, of this piece of ancient Greek technology is that arguably it is the world's first computer. If we gaze inside the machine, we find clear evidence of at least two dozen gears, laid neatly on top of one another, calibrated with the precision of a master-crafted Swiss watch. This was a level of technology that archaeologists would usually date to the sixteenth century AD. But a mystery remained: what was this contraption used for? To archaeologists, it was immediately apparent that the mechanism was some sort of clock, calendar or calculating device, but they had no idea what it was for. For decades, they debated. Was the Antikythera mechanism a toy model of the planets, or was it a kind of early astrolabe, a device which calculates latitude?

At long last, in 1959, Princeton science historian Derek J. de Solla Price provided the most convincing scientific analysis of this amazing device to date. After a meticulous study of the gears, he deduced that the mechanism was used to predict the position of the planets and stars in the sky depending on the calendar month. The single primary gear would move to represent the calendar year, and would, in turn, activate many separate smaller gears to represent the motions of the planets, sun and moon. So you could set the main gear to the calendar date and get close approximations for where those celestial objects would be in the sky on that date. And Price declared in the pages of Scientific American that it was a computer: "The mechanism is like a great astronomical clock ... or like a modern analogue computer which uses mechanical parts to save tedious calculation."

It was a computer in the sense that you, as a user, could input a few simple variables and it would yield a flurry of complicated mathematical calculations. Today the programming of computers is written in digital code, a series of ones and zeros. This ancient analog clock had its code written into the mathematical ratios of its gears. All the user had to do was enter the main date on one gear, and through a series of subsequent gear revolutions, the mechanism could calculate variables such as the angle of the sun crossing the sky. As a point of reference, mechanical calculators using gear ratios to add and subtract didn't surface in Europe until the 1600s.

Since Price's assessment, modern X-ray and 3D mapping technology have allowed scientists to peer deeper into the remains of the mechanism to learn even more of its secrets. In the early 2000s, researchers discovered text, in the guise of an instruction manual that had never been seen before, inscribed on parts of the mechanism. The text, written in tiny but legible ancient Greek, helped them complete the puzzle of what the machine did and how it was operated.
The mechanism had several dials and clock faces, each of which served a different function for measuring movements of the sun, moon, stars, and planets, but they were all operated by just one main crank. Small stone or glass orbs moved across the machine's face to show the motion of Mercury, Venus, Mars, Saturn, and Jupiter in the night sky, and the position of the sun and moon relative to the 12 constellations of the zodiac. Another dial would forecast solar and lunar eclipses and even, amazingly enough, make predictions about their colour. Today, researchers surmise that different coloured eclipses were considered omens of the future. After all, the ancient Greeks, like all ancients, were a little superstitious.

The mechanism consisted of:
- a solar calendar, charting the 365 days of the year
- a lunar calendar, counting a 19-year lunar cycle
- a tiny pearl-sized ball that rotated to illustrate the phase of the moon
- another dial that counted down the days to regularly scheduled sporting events around the Greek isles, like the Olympics.

The mechanics of this device are absurdly complicated. In 2006, a paper in the journal Nature plotted out a highly complex schematic of the mechanics that connect all the gears. Researchers are still not sure who exactly used it. Did philosophers, scientists and even mariners build it to assist them in their calculations? Or was it a type of teaching tool, to show students the math that held the cosmos together? Was it unique? Or are there more similar devices yet to be discovered? To date, none others have been found. Its assembly remains another mystery. How the ancient Greeks accomplished this astonishing feat is unknown to this day.

Whatever it was used for and however it was built, we know this: its discovery has forever changed our understanding of human history, and it reminds us that flashes of genius are possible in every human era. "Nothing like this instrument is preserved elsewhere. Nothing comparable to it is known from any ancient scientific text or literary allusion," Price wrote in 1959. "It is a bit frightening, to know that just before the fall of their great civilization the ancient Greeks had come so close to our age, not only in their thought, but also in their scientific technology." Fully operational modern reconstructions of the Antikythera mechanism have since been built.
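To make the idea of "code written into gear ratios" concrete: Greek astronomers knew the Metonic relation, in which 19 solar years contain 235 synodic months and 254 sidereal months, so a gear train whose overall tooth ratio is 254:19 can drive a Moon pointer directly from a year crank. The toy sketch below illustrates only this principle; the function shown is not a reconstruction of the actual gear train.

```python
from fractions import Fraction

# The Metonic relation: 19 solar years ~ 235 synodic months ~ 254 sidereal months.
# A gear train whose overall ratio is 254/19 turns a Moon pointer 254 times
# for every 19 turns of the year crank.
SIDEREAL_MONTHS_PER_19_YEARS = Fraction(254, 19)

def moon_pointer_revolutions(crank_turns_in_years: int) -> Fraction:
    """Revolutions of the Moon pointer after turning the year crank N full years."""
    return SIDEREAL_MONTHS_PER_19_YEARS * crank_turns_in_years

print(moon_pointer_revolutions(19))        # 254 -- one full Metonic cycle
print(float(moon_pointer_revolutions(1)))  # ~13.37 sidereal months per year
```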
Earth-shattering linguistic data from the Movie, Arrival (2016)
Not too long ago, I had the distinct pleasure of watching what is undoubtedly the most intellectually challenging movie of my lifetime. The movie is unique; nothing even remotely like it has ever before been screened. It chronicles the arrival of 12 apparent UFOs, but they are actually much more than just that. They are, as I just said, a unique phenomenon. Or more to the point, they were, are and always will be just that. What on earth can this mean?

The ships, if that is what we want to call them, appear out of thin air, like clouds unfolding into substantial material objects ... or so it would appear. They are approximately the shape of a saucer (as in cup and saucer) but with a top on it. They hang vertically in the atmosphere. But there is no motion in them or around them. They leave no footprint. The air is undisturbed around them. There is no radioactivity. There is no activity. There are 12 ships altogether dispersed around the globe, but in no logical pattern.

A famous female linguist, Dr. Louise Banks (played by Amy Adams), is enlisted by the U.S. military to endeavour to unravel the bizarre signals emanating from within. Every 18 hours on the mark the ship opens up at the bottom (or is it on its right side, given that it is perpendicular?) and allows people inside. Artificial gravity and breathable air are created for the humans. A team of about six enter the ship and are transported up an immensely long black hallway to a dark chamber with a dazzlingly bright screen. There, out of the mist, appear two heptapods, octopus-like creatures, but with seven and not eight tentacles. They stand upright on their seven tentacles and they walk on them.

At first, the humans cannot communicate with them at all. But the ink-like substance the heptapods squirt onto the thick window between them and the humans always resolves itself into circles with distinct patterns. Eventually, the humans figure out what the language means, if you can call it that, because the meanings of the circles do not relate in any way to the actions of the heptapods. Our heroine finally discovers what their mission is: to save humankind along with themselves. They tell us... there is no time. And we are to take this literally.

I extracted all of the linguistic data I could (which was almost all of it) from the film, and it runs as follows, with phrases and passages I consider of great import italicized.

1. "Language is the foundation of civilization; it is the glue that holds it together. It is the first weapon drawn in a conflict" vs. "The cornerstone of civilization is not language. It is science."
2. Kangaroo... means "I don't understand." (Watch the movie to figure this one out!)
3. Apart from being able to see them and hear them, the heptapods leave absolutely no footprint.
4. There is no correlation between what the heptapods say and what they write.
5. Unlike all written languages, the writing is semasiographic. It conveys meaning. It doesn't represent sound. Perhaps they view our form of writing as a wasted opportunity.
6. How heptapods write: ... because unlike speech, a logogram is free of time. Like their ship, their written language has no forward or backward direction. Linguists call this non-linear orthography, which raises the question, is this how they think? Imagine you wanted to write a sentence using two hands, starting from either side. You would have to know each word you wanted to use as well as how much space it would occupy.
A heptapod can write a complex sentence in 2 seconds effortlessly.
7. There is no time.
8. You approach language like a mathematician.
9. When you immerse yourself in a foreign language, you can actually rewire your brain. It is the language you speak that determines how you think.
10. He (the Chinese general) is saying that they are offering us advanced technology. God, are they using a game to converse with... (us). You see the problem. If all I ever gave you was a hammer, everything is a nail. That doesn't say "offer weapon", it says "offer tool". We don't know whether they understand the difference. It (their language) is a weapon and a tool. A culture is messy sometimes. It can be both (cf. Sanskrit).
11. They (masses, 10Ks of circles) cannot be random.
12. We (ourselves and the heptapods) make a tool and we both get something out of it. It's a compromise. Both sides are happy... like a win-win (a non-zero-sum game).
13. It (their language) seems to be talking about time... everywhere... there are too many gaps; nothing's complete. Then it dawned on me. Stop focusing on the 1s and focus on the 0s. How much of this is data, and how much is negative space?... massive data... 0.08333 recurring. 0.91666667 = 1 of 12. What they're saying here is that this is (a huge paradigm). 10Ks = 1 of 12. Part of a layer adds up to a whole. It (their language) says that each of the pieces fits together. Many become THERE IS NO TIME. It is a non-zero-sum game. Everyone wins. NOTE: there are 12 ships, and the heptapods have 7 tentacles. 7 x 12 = 84. 8 + 4 = 12.
14. When our heroine is taken up into the ship in the capsule, these are the messages she reads: 1. Abbott (one of the two heptapods) is death process. 2. Louise has a weapon. 3. Use weapon. 4. We need humanity help. (Q. from our heroine: How can you know the future?) 5. Louise sees future. 6. Weapon opens time.
15. (Her daughter asks in her dream:) Why is my name Hannah? Your name is very special. It is a palindrome. It reads the same forward and backward. (Cf. Silver Pin, Ayios Nikolaos Museum and Linear A tablet pendant, Troullous.)
16. Our heroine says: I can read it. I know what it is. It is not a weapon. It is a gift. The weapon (= gift) IS their language. They gave it all to us. If you learn it, when you REALLY learn it, you begin to perceive the way that they do. So you can see what's to come (in time). It is the same for them. It is non-linear. WAKE UP, MOMMY!

Then the heptapods disappear, dissolving into mere clouds, the same way they appeared out of nowhere in clouds, only in the opposite fashion. There is no time. They do not exist in time. The implications of this movie for the further decipherment of Linear A and Linear B (or for any unknown language) are profound, as I shall explain in greater detail in upcoming posts.
Topic 7: Nucleic Acids and Proteins - Assessment Statements and Outline
Describe the structure of DNA, including the antiparallel strands, 3'-5' linkages and hydrogen bonding between purines and pyrimidines
DNA is composed of 2 antiparallel strands (they run in opposite directions). Each strand is composed of nucleotides, which contain a phosphate group, a deoxyribose sugar and a nitrogenous base. Each deoxyribose molecule contains 5 carbons. The fifth carbon always bonds with a phosphate group through a phosphodiester bond. The third carbon also bonds with a phosphate group contained in another nucleotide. At one end of each strand there is a free 5th carbon bonded with a phosphate group, and at the other end there is a free 3rd carbon. The nitrogenous bases contained in the nucleotides form base pairs: A with T, and G with C. A and G are double-ring purines, and C and T are single-ring pyrimidines. A purine always pairs with a pyrimidine. The base pairs are connected by hydrogen bonds: A and T form 2 hydrogen bonds, and C and G form 3 hydrogen bonds.
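The antiparallel, complementary pairing described above can be made concrete with a small illustrative sketch (not part of the syllabus material): given one strand read 5' to 3', the partner strand is its reverse complement.

```python
# Complementary base pairing: A-T and G-C. Because the strands are antiparallel,
# the partner of a strand read 5'->3' is the reverse complement.

PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand_5_to_3: str) -> str:
    """Return the partner strand, also written 5'->3'."""
    return "".join(PAIRS[base] for base in reversed(strand_5_to_3))

print(reverse_complement("ATTGCACC"))  # GGTGCAAT
```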
Outline the structure of nucleosomes
Nucleosomes are composed of two molecules each of four different histone proteins, which form the core. DNA then wraps around the 8 histone proteins twice. The DNA and histones are attracted by opposite charges (DNA is negative, histones are positive). The nucleosome is held together by an H1 histone. Once part of the nucleosome, the DNA cannot be transcribed.
State that nucleosomes help supercoil chromosomes and help to regulate transcription
Nucleosomes form the smallest component of a supercoiled DNA other than the DNA itself. Nucleosomes hold the DNA in place. Many nucleosomes will hold a single string of DNA. A group of nucleosomes containing a single strand of DNA are further supercoiled to form chromosomes. With the DNA wrapped so tightly around the histones, the DNA is inaccessible to transcription enzymes. This inaccessibility helps regulate transcription of the DNA molecule.
Distinguish between unique or single-copy genes and highly repetitive sequences in nuclear DNA
DNA contains repetitive sequences and single-copy genes. Repetitive sequences comprise 5-45% of the total genome and contain 5-300 base pairs per sequence. There can be up to 100,000 copies of a certain type per genome. Repetitive DNA is usually dispersed throughout the genome and does not appear to have any coding function. These sequences are considered to be transposable elements that can be moved from one location to another. Single-copy genes, on the other hand, have coding functions and provide the base sequences needed to produce proteins. Less than 2% of a chromosome contains coding genes.
State that eukaryotic genes can contain exons and introns
Eukaryotic genes are made up of numerous fragments of protein-encoding information called exons and numerous non-coding fragments called introns. Introns and exons are mixed together within the gene.
nucleic acid composed of nucleotides, one of the largest biomolecules known, large number of covalent and hydrogen bonds, negatively charged
Covalent bond that connects the phosphate and deoxyribose molecules
A and G
T and C
coding fragments of genes
non-coding fragments of genes
Research into genomes (whole sets of genes)
clustered sections of repetitive DNA
comprised of numerous fragments of protein-encoding information mixed with non-coding fragments
Highly coiled, does not have a coding function. Located around centromere and near telomeres.
non-functional copies of genes, rendered non-functional by mutations (have internal "stop" codons)
carries base sequences from nucleus to ribosome. the transcript that carries the code of DNA
State that DNA replication occurs in a 5' to 3' direction
DNA replication occurs in a 5' to 3' direction due to the fact that DNA polymerase III adds new nucleotides to the new strand of DNA in a 5' to 3' direction
Explain the process of DNA replication in prokaryotes, including the role of enzymes (helicase, DNA polymerase, RNA primase and DNA ligase), Okazaki fragments and deoxynucleoside triphosphates
Prokaryotes have circular DNA and therefore have a single origin of replication. Replication is bidirectional, starting at this point. At the origin of replication, helicase separates the DNA molecule into two strands. There are helicase enzymes, or replication forks, at either end of the origin. The enzyme primase forms a primer that bonds to the strand that runs 3' to 5'. DNA polymerase III adds nucleotides to the primer in the same direction. DNA polymerase I then removes the primer and replaces it with DNA. The new DNA strand formed on the old strand that runs 5' to 3' forms slightly differently: instead of forming continuously, it uses the same enzymes to form fragments moving away from the helicase enzyme. These fragments are called Okazaki fragments. The Okazaki fragments are then joined by DNA ligase, which completes the backbone by adding a phosphate group. The nucleotides that are joined to the old strand are called deoxynucleoside triphosphates, because they have 2 extra phosphate groups that are lost and used as energy when they are bonded to the old strand.
State that DNA replication is initiated at many points in eukaryotic chromosomes
Eukaryotic chromosomes, having extremely long, linear DNA molecules, need multiple origins of replication in order to be efficient. These multiple origins of replication allow replication to be faster and more efficient.
One strand is from the old molecule and one strand is new nucleotides
Origins of Replication
Sites where the replication of a DNA molecule begins.
The two strands run in opposite directions: One strand goes from 5' to 3', the other goes from 3' to 5'
an enzyme that untwists the double helix at the replication forks, separating the two parental strands and making them available as template strands
An enzyme that adds nucleoside triphosphates on the lagging strand to form an RNA primer, using the parental DNA strand as a template.
DNA Polymerase III
An enzyme that synthesizes new strands by adding nucleotides onto the primer
DNA Polymerase I
An Enzyme that removes RNA primers and replaces them with the appropriate nucleotides during DNA replication.
an enzyme that eventually joins the sugar-phosphate backbones of the Okazaki fragments
the new strand of DNA that is synthesized in the same direction as the unzipping
the new strand of DNA that is synthesized in the opposite direction to the unzipping. It is made by joining the Okazaki fragments together.
Small fragments of DNA produced on the lagging strand during DNA replication, joined later by DNA ligase to form a complete strand.
Free nucleotides in the nucleus, contain a deoxyribose, a nitrogenous base, and 3 phosphate groups. 2 phosphate groups are lost when bonded and used as energy for the bonding
process in which part of the nucleotide sequence of DNA is copied into a complementary sequence in RNA
Enzyme that adds nucleoside triphosphates using base pairing to the DNA template
State that Transcription is carried out in a 5' to 3' direction
Just like in replication, when nucleotides are added during transcription the 5' end of the incoming RNA nucleotide is bonded to the 3' end of the growing RNA molecule.
Distinguish between the sense and antisense strands of DNA
The sense strand carries the genetic code and has the same base sequence as the new RNA molecule, except that it has thymine where the RNA molecule has uracil. The antisense strand is the strand being transcribed and contains the complementary base pairs of the sense strand.
Explain the process of transcription in prokaryotes, including the role of the promoter region, RNA polymerase, nucleoside triphosphates and the terminator
During transcription, the enzyme RNA polymerase attaches to the promoter region and begins to unwind the DNA molecule. A transcription bubble forms where the RNA polymerase is. This bubble contains the antisense strand, the RNA polymerase, and the new RNA molecule, which has begun to form. The RNA molecule forms because the RNA polymerase helps bind nucleoside triphosphates that are in the nucleoplasm to the growing strand. When the RNA polymerase reaches the terminator sequence at the end of the gene, it detaches from the DNA and the completed RNA molecule is released.
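A small sketch of the base-pairing logic just described, using a made-up template sequence: each base of the antisense (template) strand is paired with its RNA complement, so the mRNA ends up matching the sense strand except that uracil replaces thymine.

```python
# Transcription base-pairing: the antisense (template) strand is read and each
# base is paired with its RNA complement, so the mRNA matches the sense strand
# except that uracil (U) replaces thymine (T). (5'/3' orientation is simplified.)

DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(antisense_strand: str) -> str:
    """Return the mRNA produced from a template (antisense) strand."""
    return "".join(DNA_TO_RNA[base] for base in antisense_strand)

antisense = "TACGGCTAA"       # hypothetical template sequence
print(transcribe(antisense))  # AUGCCGAUU
```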
State that eukaryotic RNA needs the removal of introns to form mature mRNA
In eukaryotic RNA there are sections of bases that are non-coding. These sequences are called introns and need to be removed from the RNA molecule in order for the molecule to become functional mRNA.
DNA strand that carries the genetic code, has the same base sequence as the new RNA molecule, but the RNA molecule has uracil instead of thymine
DNA strand being copied during transcription; it is complementary to the RNA strand
determines which DNA strand is the antisense strand, short sequence of bases that is not transcribed. A specific sequence of DNA bases at the start of a gene to which RNA polymerase binds
A sequence of bases that, when reached, causes the RNA polymerase to detach from the DNA molecule; transcription stops. A specific sequence of DNA bases marking the end of the transcription process.
contains 3 phosphate groups, ribose, and a nitrogenous base
Explain that each tRNA molecule is recognized by a tRNA-activating enzyme that binds a specific amino acid to the tRNA, using ATP for energy
Each tRNA bonds to only one specific amino acid. The amino acid and tRNA molecule are bonded by a specific enzyme. There is a specific enzyme for each of the 20 amino acid-tRNA pairs. The bond between the tRNA molecule and the amino acid requires energy, and this energy is supplied by ATP. The bonded amino acid and tRNA molecule form a structure called an activated amino acid.
Outline the structure of ribosomes, including protein and RNA composition, large and small subunits, three tRNA binding sites and mRNA binding sites
Ribosomes are composed of ribosomal RNA molecules (rRNA) and many distinct proteins. The molecules of the ribosome are constructed in the nucleus. Ribosomes contain two subunits, one large and one small; both are composed of rRNA and proteins. The ribosome contains 3 different tRNA binding sites held between the two subunits. The A site holds the tRNA carrying the next amino acid to be added to the polypeptide chain. The P site holds the tRNA carrying the growing polypeptide chain. The E site is the site from which the tRNA molecule that has lost its amino acid is discharged. The mRNA binding site is also located between the two subunits.
State that translation consists of initiation, elongation, translocation and termination
The process of translation consists of four phases which are initiation, elongation, translocation and termination.
State that translation occurs in a 5' to 3' direction
The start codon of all mRNA molecules is located toward the 5' end and the stop codon toward the 3' end. This means that translation takes place in a 5' to 3' direction, from the start codon to the stop codon.
Draw and label the structure of a peptide bond between two amino acids
Explain the process of translation, including ribosomes, polysomes, start codons and stop codons
Translation takes place within a ribosome. First, the start codon (AUG) of the mRNA molecule binds to the ribosome. Next, the tRNA molecule with the anticodon UAC is bonded to the amino acid methionine by the tRNA-activating enzyme, using ATP. The tRNA with the activated amino acid bonds to the start codon of the mRNA molecule located in the ribosome. More tRNA molecules carrying activated amino acids bond to the mRNA molecule in the order of the codons. Once a tRNA molecule has bonded to the mRNA and its amino acid has been bonded to the polypeptide chain, the tRNA molecule is released from the E site. Several ribosomes can translate the same mRNA molecule at once, forming a polysome. When the stop codon reaches the A site, a release factor fills the A site and frees the tRNA molecule and polypeptide chain from the ribosome.
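The codon-by-codon reading described above can be sketched as follows, using only a tiny, hand-picked subset of the genetic code (a real table has 64 codons):

```python
# Minimal sketch of translation: find the start codon, then read codons in
# order until a stop codon is reached. Only a handful of codons are included.

CODON_TABLE = {
    "AUG": "Met",  # start codon, methionine
    "CCG": "Pro",
    "AUU": "Ile",
    "GCU": "Ala",
    "UAA": "STOP",
    "UAG": "STOP",
    "UGA": "STOP",
}

def translate(mrna: str):
    start = mrna.find("AUG")
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "???")
        if amino_acid == "STOP":
            break
        peptide.append(amino_acid)
    return peptide

print(translate("GGAUGCCGAUUGCUUAAGG"))  # ['Met', 'Pro', 'Ile', 'Ala']
```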
State that free ribosomes synthesize proteins for use primarily within the cell and that bound ribosomes synthesize proteins primarily for secretion or for lysosomes
Polypeptides synthesized by free ribosomes are primarily used within the cell and polypeptides synthesized by ribosomes connected to the rough endoplasmic reticulum are secreted out of the cell or are used in lysosomes.
change in language from DNA to the language of protein
Contains 2 subunits both containing rRNA molecules and many distinct proteins
holds the next amino acid to be added to the polypeptide chain
holds the tRNA carrying the growing polypeptide chain
site from which tRNA that has lost its amino acid is discharged
Phases of Translation
Initiation, elongation, translocation, termination
AUG, codes for the amino acid methionine
sequence at open end of tRNA molecules
GTP (Guanosine triphosphate)
energy- rich compound, joins the two subunits of the ribosome
bind the tRNA to the exposed mRNA codons at the A site
protein that catalyses the hydrolysis of the bond linking the tRNA in the P site with the polypeptide chain
Explain the four levels of protein structure, indicating the significance of each level
The four levels of protein structure are primary, secondary, tertiary, and quaternary. Primary structure is the unique chain or sequence of amino acids held together by peptide bonds. Secondary structure is created by hydrogen bonds between the oxygen of a carboxyl group and the hydrogen of an amine group, which produce either an alpha-helix or a beta-pleated sheet. Tertiary structure refers to the 3-dimensional structure created by different bonds and forces between the R-groups of the amino acids; tertiary structure is important in determining the specificity of enzymes. Quaternary structure involves multiple polypeptide chains which combine to form a single structure.
Outline the difference between fibrous and globular proteins, with reference to two examples of each protein type
Fibrous proteins are composed of many polypeptide chains in a long, narrow shape and are insoluble in water. Examples of fibrous proteins are collagen and actin. Globular proteins are more 3 dimensional and are mostly water soluble. Examples of globular proteins are haemoglobin and insulin.
Explain the significance of polar and non-polar amino acids
Non-polar amino acids are usually found in hydrophobic areas. Polar amino acids are usually found in hydrophilic areas that are exposed to water. Both polar and non-polar amino acids are found in membrane proteins, which gives them their unique structure. The polarity of amino acids is also important in determining the specificity of an enzyme.
State four functions of proteins, giving a named example of each
haemoglobin-protein containing iron that transports oxygen from the lungs to all parts of the body in vertebrates
insulin- hormone secreted by pancreas that aids in maintaining blood glucose level
immunoglobulins-group of proteins that act as antibodies to fight bacteria and viruses
amylase- digestive enzyme that catalyses the hydrolysis of starch
protein containing iron that transports oxygen from the lungs to all parts of the body in vertebrates
Actin and myosin
proteins that interact to bring about muscle movement in animals
hormone secreted by pancreas that aids in maintaining blood glucose level
group of proteins that act as antibodies to fight bacteria and viruses
digestive enzyme that catalyses the hydrolysis of starch
a fibrous protein that plays a structural role in the connective tissue in humans
Contains carbon, hydrogen, oxygen, and nitrogen. A source of energy. Needed by tissue for repair and growth. Made up of 20 amino acids.
the unique sequence of amino acids held together by peptide bonds in each protein
created by the formation of hydrogen bonds between the oxygen from the carboxyl group of one amino acid and the hydrogen from the amino group of another; does not involve side chains, R groups. Common structures are alpha-helix and beta-pleated.
polypeptide bends and folds over itself due to interactions among R-groups and the peptide backbone.
1. H bonds between polar side chains.
2. ionic bonds between +/- side chains
3. Van der Waals interactions among hydrophobic side chains of the amino acids
4. Disulfide bonds (covalent) or bridges between sulfur atoms
involves multiple polypeptide chains which combine to form a single structure. Not all proteins are quaternary.
A compound, such as hemoglobin, made up of a protein molecule and a nonprotein prosthetic group.
The iron-containing prosthetic group (non-polypeptide group) found in haemoglobin.
A spiral shape constituting one form of the secondary structure of proteins, arising from a specific hydrogen-bonding structure.
One form of the secondary structure of proteins in which the polypeptide chain folds back and forth, or where two regions of the chain lie parallel to each other and are held together by hydrogen bonds.
A non-protein, but organic, molecule (such as vitamin) that is covalently bound to an enzyme as part of the active site.
long and narrow in shape and are mostly insoluble
rounded 3-D shape and are mostly soluble in water
Functions of proteins
Structural, transport, Movement, Defense
State that metabolic pathways consist of chains and cycles of enzyme-catalyzed reactions
Metabolic reactions are catalyzed by enzymes and occur in a specific sequence. Metabolic reactions often take place in either chains, cycles or both. In all sequences of reactions a substance is changed by a reaction with an enzyme, and then another, and so on until it forms the final product.
Describe the induced-fit model
In the induced-fit model, the enzyme changes shape to conform to the specific substrate that it is binding to. This results from a change in the R-groups of the amino acids.
Explain that enzymes lower the activation energy of the chemical reactions that they catalyze
Enzymes lower the amount of energy required to start a specific reaction. This decreases the amount of time necessary to carry out the reaction. They do not alter the proportion of reactants to products.
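One way to make the effect of a lower activation energy concrete is the Arrhenius relation, k = A·exp(-Ea/RT): at a fixed temperature, reducing Ea raises the rate constant exponentially without changing the equilibrium. A quick sketch with purely illustrative numbers:

```python
import math

# Arrhenius relation: k = A * exp(-Ea / (R * T)).
# Lowering the activation energy Ea at constant temperature raises the rate
# constant exponentially, which is how enzymes speed up reactions without
# changing the reactant/product equilibrium.

R = 8.314   # J/(mol*K), gas constant
T = 310.0   # K, roughly body temperature
A = 1.0     # pre-exponential factor (arbitrary units for comparison)

def rate_constant(ea_joules_per_mol: float) -> float:
    return A * math.exp(-ea_joules_per_mol / (R * T))

k_uncatalysed = rate_constant(75_000)  # e.g. 75 kJ/mol without enzyme (illustrative)
k_catalysed = rate_constant(50_000)    # e.g. 50 kJ/mol with enzyme (illustrative)

print(k_catalysed / k_uncatalysed)     # ~1.6e4-fold faster
```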
Explain the difference between competitive and non-competitive inhibition, with reference to one example of each
Competitive inhibition takes place when an inhibitor competes directly for the active site of an enzyme; for example, sulfanilamide competes with PABA and blocks the enzyme in bacteria. Non-competitive inhibition takes place when an inhibitor binds to another site on the enzyme and changes the enzyme's shape; mercury is a non-competitive inhibitor that binds to sulfur groups. Both types of inhibition can be reversible or irreversible.
Explain the control of metabolic pathways by end-product inhibition, including the role of allosteric sites
End-product inhibition prevents the cell from wasting chemical resources and energy by making more of a substance than it needs. Once the desired amount of product has been produced, the end product binds to the allosteric site of an enzyme earlier in the pathway, preventing the creation of excess product.
sum of all chemical reactions that occur within a living organism
reaction that uses energy to build complex organic molecules from simpler ones, endergonic, biosynthetic
reaction that breaks down complex organic molecules with release of energy, exergonic, degradative
Example: cellular respiration
specialized proteins that speed up chemical reactions
reactant of an enzyme-catalyzed reaction
The specific portion of an enzyme that attaches to the substrate by means of weak chemical bonds.
an enzyme molecule together with the molecule on which it acts, correctly arranged at the active site of the enzyme
Mechanism of Enzyme action
1. Substrate contacts the active site of the enzyme
2. Enzyme changes shape (induced fit)
3. Enzyme-substrate complex forms temporarily
4. Activation energy is lowered
5. Transformed substrate (product) is released
6. Unchanged enzyme is freed
E + S <-> ES <-> E + P
Activation Energy AE
the minimum amount of energy required to start a chemical reaction
pH, temperature, substrate concentration. These affect the active site; the activity of the enzyme may be altered because of them.
two active sites, one for a substrate and one for an inhibitor
Factors that Affect Enzymatic Activity
pH, temperature, substrate concentration, inhibition
The model of the enzyme that shows the substrate fitting perfectly into the active site
change in the shape of an enzyme's active site that enhances the fit between the active site and its substrate(s)
The process of a substance reducing the activity of an enzyme by entering the active site in place of the substrate whose structure it mimics.
No competition (allosteric inhibition): the inhibitor binds to a site other than the active site, causing a change in the enzyme's shape that makes it non-functional
a negative feedback process which regulates the reaction rate: if too much product accumulates, the pathway produces less of it; if the product becomes scarce, the pathway produces more
an enzyme which contains a region to which small regulatory molecules may bind, in addition to and separate from the substrate binding site, thereby affecting catalytic activity
High school physics - NGSS
Newton's second law of motion states that F = ma, or net force is equal to mass times acceleration. A larger net force acting on an object causes a larger acceleration, and objects with larger mass require more force to accelerate. Both the net force acting on an object and the object's mass determine how the object will accelerate. Created by Sal Khan.
Want to join the conversation?
- Why is it valuable to recognize scalar and vector values? I understand the difference between them, but I don't understand the practicality of it. Thanks.(47 votes)
- let's say you're driving North at 50 mph for an hour (which is a vector because it has a magnitude, 50 mph, and a direction, North); then you know you went 50 miles North, rather than just 50 miles in any direction, and if you're like me then you might want to know which direction you're driving in.(71 votes)
- what exactly is a vector force?(19 votes)
- You might want to watch this video on vector and scalars:
Hope this helps,
- I understand the whole math part of the formula (it's pretty simple), but can anyone tell me what he means by 5 m/s^2? is it just saying that this object of mass is moving at a speed of 5 meters per second? Why is seconds squared?(11 votes)
- 5 meters per second is a rate, but acceleration is a change in rate, so 5 meters per second per second. this would look like 5m/sec/sec. If you apply algebra to this, that would be the same as 5m/sec *1/sec, because dividing is the same as multiplying by the reciprocal. multiply it out and you get 5m/sec^2.(19 votes)
- how do objects hit the floor at the same time(5 votes)
- HI Jorge Garcia,
This only stands true when there is no air resistance present.
Suppose that a bowling ball and a tennis ball are dropped off a cliff at the same time. To understand this we must use Newton's second law - the law of acceleration (acceleration = force/mass). Newton's second law states that the acceleration of an object is directly related to the net force and inversely related to its mass. Acceleration of an object depends on two things: force and mass. The bowling ball experiences a much greater gravitational force, but because of its large mass, it resists acceleration more. Even though a bowling ball may experience 100 times the force of a tennis ball, it also has 100 times the mass. So the force/mass ratio (from the equation acceleration = force/mass) is the same for each. Therefore, the acceleration is the same and they reach the ground at the same time.
Hope that helps!
- JK(13 votes)
- hi there , I had a doubt in newtons laws of motion could you pls help me .....
a person kicks a 1kg football to score a goal. When he kicks a 1kg brick , his foot gets hurt .give a reason for it. thank you(4 votes)
- What happens to the shape of the football and the brick when kicked? The football deforms and then elastically rebounds, whereas the brick is rigid and doesn't deform.
The deformation of the football increases the amount of time that the force of the kick is spread out so to transfer the momentum from the foot to the football is done at a slower rate over a longer time requiring a lower force.
The brick being rigid the momentum transfer has to occur quicker so there is more force on the foot and brick making it more painful and more likely to cause damage to the foot.(16 votes)
- Am I correct?
F ∝ M & F ∝ A & Multiplication represents proportionality, and therefore F = M * A.
A better way to visualize everything is through A = F / M. Logically, doubling the force upon an object will double the acceleration of the object.
The unit kg * m/s^2 cannot be comprehended as kg * m/s^2 because you have created a new unit out of two independent properties: mass & acceleration. Kg * m/s^2 is a new unit that represents force, right?(6 votes)
- You are correct. a = F / m is just an easier alternate form, because mass typically doesn't change in a lot of force problems. kg * m / s^2 is the unit of force called Newton. Just to slightly nitpick, it's usually better to write acceleration as lowercase a, to avoid confusion with area (A).(7 votes)
- why is force=massxacceleration(6 votes)
- can we find what the mass of an object is if we know the force and the acceleration of that object just like how we found the acceleration because we knew the force and mass of that object?(4 votes)
- Yes! If you know two parts of an equation with three variables, you can find the remaining variable's value.(8 votes)
- I don't get one thing.
In the 1d motion I learnt that 2 objects irrespective of their mass will fall with the same velocity. But, according to the 2nd law of motion i.e. F=ma, force on a body is directly proportional to it's mass. And more the force, the greater the velocity of the object.
Please explain.(5 votes)
- F = mg (this says that the pull is stronger on a heavier object)
F = ma (this says it takes more pull to accelerate heavier object)
So ma = mg
m cancels out
a = g(6 votes)
- so, to clarify, the direction of net force and acceleration will always be the same?(4 votes)
- That is correct - it is the force that produces the acceleration. The velocity might be in any direction, but the acceleration will be in the same direction as the net force. (I'm racking my brain to see if there might be a counter example and - well - nothing so far :-)(6 votes)
Newton's First Law tells us that an object at rest will stay at rest, and object with a constant velocity will keep having that constant velocity unless it's affected by some type of net force. Or you actually could say an object with constant velocity will stay having a constant velocity unless it's affected by net force. Because really, this takes into consideration the situation where an object is at rest. You could just have a situation where the constant velocity is zero. So Newton's First Law, you're going to have your constant velocity. It could be zero. It's going to stay being that constant velocity unless it's affected, unless there's some net force that acts on it. So that leads to the natural question, how does a net force affect the constant velocity? Or how does it affect of the state of an object? And that's what Newton's Second Law gives us. So Newton's Second Law of Motion. And this one is maybe the most famous. They're all kind of famous, actually. I won't pick favorites here. But this one gives us the famous formula force is equal to mass times acceleration. And acceleration is a vector quantity, and force is a vector quantity. And what it tells us-- because we're saying, OK, if you apply a force it might change that constant velocity. But how does it change that constant velocity? Well, let's say I have a brick right here, and it is floating in space. And it's pretty nice for us that the laws of the universe-- or at least in the classical sense, before Einstein showed up-- the laws of the universe actually dealt with pretty simple mathematics. What it tells us is if you apply a net force, let's say, on this side of the object-- and we talk about net force, because if you apply two forces that cancel out and that have zero net force, then the object won't change its constant velocity. But if you have a net force applied to one side of this object, then you're going to have a net acceleration going in the same direction. So you're going to have a net acceleration going in that same direction. And what Newton's Second Law of Motion tells us is that acceleration is proportional to the force applied, or the force applied is proportional to that acceleration. And the constant of proportionality, or to figure out what you have to multiply the acceleration by to get the force, or what you have to divide the force by to get the acceleration, is called mass. That is an object's mass. And I'll make a whole video on this. You should not confuse mass with weight. And I'll make a whole video on the difference between mass and weight. Mass is a measure of how much stuff there is. Now, that we'll see in the future. There are other things that we don't normally consider stuff that does start to have mass. But for our classical, or at least a first year physics course, you could really just imagine how much stuff there is. Weight, as we'll see in a future video, is how much that stuff is being pulled down by the force of gravity. So weight is a force. Mass is telling you how much stuff there is. And this is really neat that this formula is so simple, because maybe we could have lived in a universe where force is equal to mass squared times acceleration times the square root of acceleration, which would've made all of our math much more complicated. But it's nice. It's just this constant of proportionality right over here. It's just this nice simple expression. 
And just to get our feet wet a little bit with computations involving force, mass, and acceleration, let's say that I have a force. And the unit of force is appropriately called the newton. So let's say I have a force of 10 newtons. And just to be clear, a newton is the same thing as a kilogram meter per second squared. And that's good that a newton is the same thing as kilogram meters per second squared, because that's exactly what you get on this side of the formula. So let's say I have a force of 10 newtons, and it is acting on a mass. Let's say that the mass is 2 kilograms. And I want to know the acceleration. And once again, in this video, these are vector quantities. If I have a positive value here, we're going to make the assumption that it's going to the right. If I had a negative value, then it would be going to the left. So implicitly I'm giving you not only the magnitude of the force, but I'm also giving you the direction. I'm saying it is to the right, because it is positive. So what would be the acceleration? Well we just use f equals ma. You have, on the left hand side, 10. I could write 10 newtons here, or I could write 10 kilogram meters per second squared. And that is going to be equal to the mass, which is 2 kilograms times the acceleration. And then to solve for the acceleration, you just divide both sides by 2 kilograms. So let's divide the left by 2 kilograms. Let me do it this way. Let's divide the right by 2 kilograms. That cancels out. The 10 and the 2, 10 divided by 2 is 5. And then you have kilograms canceling with kilograms. Your left hand side, you get 5 meters per second squared. And then that's equal to your acceleration. Now just for fun, what happens if I double that force? Well then I have 20 newtons. Well, I'll actually work it out. Then I have 20 kilogram meters per second squared is equal to-- I'll have to color code-- 2 kilograms times the acceleration. Divide both sides by 2 kilograms, and what do we get? Cancels out. 20 divided by 2 is 10. Kilograms cancel kilograms. And so we have the acceleration, in this situation, is equal to 10 meters per second squared. So when we doubled the force-- we went from 10 newtons to 20 newtons-- the acceleration doubled. We went from 5 meters per second squared to 10 meters per second squared. So we see that they are directly proportional, and the mass tells you how proportional they are. And so you could imagine what happens if we double the mass. If we double the mass in this situation with 20 newtons, then we won't be dividing by 2 kilograms anymore. We'll be dividing by 4 kilograms. And so then we'll have 20 divided by 4, which would be 5 meters per second squared. So if you make the mass larger, if you double it, then your acceleration would be half as much. So the larger the mass you have, the more force you need to accelerate it. Or for a given force, the less that it will accelerate it, the harder it is to change its constant velocity.
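The arithmetic in this worked example is just a = F/m. Here is the same calculation in a short sketch, using the values from the video:

```python
# Newton's second law: F = m * a, so a = F / m.
# Units: newtons (kg*m/s^2) divided by kilograms give m/s^2.

def acceleration(net_force_newtons: float, mass_kg: float) -> float:
    return net_force_newtons / mass_kg

print(acceleration(10, 2))   # 5.0 m/s^2  (the example in the video)
print(acceleration(20, 2))   # 10.0 m/s^2 (double the force -> double the acceleration)
print(acceleration(20, 4))   # 5.0 m/s^2  (double the mass -> half the acceleration)
```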
Constructing proteins is busy work. Every second of the day, your body's cells are making proteins based on information stored in your DNA. The sequence of nucleotide bases in DNA encodes the structure of different proteins. This information is "copied" in the form of mRNA during a process called transcription. The information stored in the mRNA is later "read" in a process called translation, and the encoded proteins are constructed. This entire sequence of going from DNA to mRNA to protein is called gene expression.
Translation is a complex process and requires some specialized molecular machinery. Ribosomes are intracellular structures that serve as the site of RNA translation. Ribosomes are the specific locations in the cell where proteins are actually constructed. The primary function of ribosomes is to construct proteins from information encoded in mRNA. Ribosomes do this by organizing the components of translation and catalyzing the reaction that binds amino acids into larger polypeptide chains. Ribosomes contain two distinct parts: a smaller subunit that reads the information in RNA, and a larger subunit that catalyzes the reaction that binds amino acids into a polypeptide chain.
Strictly speaking, ribosomes are NOT organelles. Ribosomes are not organelles because they are not membrane-enclosed units; they are free-standing particles. Additionally, ribosomes are present in prokaryotes, and prokaryotes do not have organelles. In eukaryotes, ribosomes are either bound to the rough endoplasmic reticulum or free-standing in the cytosol.
Ribosomes are complex particles made out of special ribosomal RNA (rRNA) and various proteins. The number of proteins varies between species. The actual physical structure of ribosomes is extraordinarily complex and consists of a deep intertwining web of RNA molecules and proteins. These molecules and proteins are arranged into two distinct ribosomal subunits of different size, known as the small and large subunit. The small and large subunit work together to synthesize proteins. The small subunit reads the information stored in RNA, and the large subunit organizes and catalyzes the reaction that binds amino acids together into a polypeptide chain.
Eukaryotic ribosomes are in general 20-30 nm in diameter and have an rRNA to ribosomal protein ratio of about 1. Prokaryotic ribosomes tend to contain more rRNA than protein, roughly a 65:35 ratio. Otherwise, the structures of eukaryotic and prokaryotic ribosomes are similar. Most kinds of ribosomes share a core structure consisting of tightly coiled and crisscrossing loops. It seems that the bulk of ribosome functioning is due to the activity of the rRNA, while the proteins help stabilize the structure and assist in catalysis.
The exact number of ribosomes per cell differs depending on the tissue and cell type. In general, the more proteins that a given set of cells produces, the more ribosomes, on average, those cells will have. Cells that are required to produce lots of protein products have a lot of ribosomes. For instance, the pancreas synthesizes a lot of enzymes for digestion, so pancreatic cells tend to have a large number of ribosomes.
Ribosomes are the main site of protein translation and biosynthesis. Ribosomes do this by pairing the codons in the mRNA strand with the appropriate tRNA anticodons. Amino acids are encoded in the form of codons, triplets of nitrogenous bases in mRNA. When mRNA is read by a ribosome, the ribosome matches each mRNA codon with the appropriate tRNA anticodon. The tRNA carries with it the amino acid that the mRNA codon encodes.
After transcription, there exists a strand of complete mRNA ready to have its information read. The two ribosomal subunits enclose a strand of mRNA, almost like the two buns of a hamburger. The smaller subunit attaches to the mRNA strand and begins to “walk” along its length, looking for the start codon (AUG). The large subunit contains 3 main slots for tRNA molecules, dubbed the A, P, and E sites.
As the small and large subunits walk along the length of the mRNA strand codon-by-codon, tRNAs containing the appropriate anticodon and amino acid fit into the slots of the large subunit. The large subunit then catalyzes the condensation (dehydration) reaction that joins the amino group of one amino acid to the carboxyl group of another. This process continues down the mRNA strand until the small subunit reads a stop codon and halts translation. The recognition of a stop codon initiates the activity of release factors, which dissociate the ribosomal subunits, freeing the polypeptide chain.
Imagine mRNA as a strand of tape with instructions on it. During translation, the mRNA tape is fed into the ribosomal machine. The machine reads the tape until it sees the instruction to begin translation (the AUG codon). The ribosomal subunits go down the strand carrying out the instructions, piecing together the parts in the order specified by the mRNA. Once the ribosome machine reads a stop codon, it stops attaching parts and detaches the constructed chain. The almost-complete protein is then transferred elsewhere in the cell for some last-minute post-translational modifications.
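The "tape reading" behavior described above can be sketched in a few lines of Python. This is only an illustration of the scanning logic, not a biological model: the codon table below is a tiny, hand-picked subset of the real 64-entry table, and the example mRNA string is made up.

```python
# Illustrative sketch of ribosomal "tape reading" (not a biological model).
CODON_TABLE = {
    "AUG": "Met",  # start codon, also encodes methionine
    "UUU": "Phe",
    "GGC": "Gly",
    "AAA": "Lys",
    "GCU": "Ala",
}
STOP_CODONS = {"UAA", "UAG", "UGA"}

def translate(mrna):
    """Scan for the start codon AUG, then read codon-by-codon until a stop codon."""
    start = mrna.find("AUG")          # the small subunit locates the start codon
    if start == -1:
        return []
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        if codon in STOP_CODONS:      # release factors end translation here
            break
        peptide.append(CODON_TABLE.get(codon, "???"))
    return peptide

# Example: a short leader, the start codon, four codons, then a stop codon
print(translate("GGAUGUUUGGCAAAGCUUAA"))  # ['Met', 'Phe', 'Gly', 'Lys', 'Ala']
```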
Most proteins specified by the human genome take about 1 minute to translate from mRNA. The longer the protein, the longer it takes ribosomes to make it. For example, titin is a protein consisting of a sequence of over 30,000 amino acids. One unit of titin takes approximately 1 hour to translate. In general, ribosomal subunits are reusable. As soon as one protein is finished and released, another mRNA strand enters the mix and protein translation begins again.
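As a rough back-of-the-envelope check, using only the approximate figures quoted above (30,000+ amino acids in about an hour), the implied elongation rate for titin comes out to a few amino acids per second:

```python
# Rough estimate of the elongation rate implied by the titin figures above.
titin_residues = 30_000        # approximate length in amino acids
translation_time_s = 60 * 60   # roughly one hour, in seconds

rate = titin_residues / translation_time_s
print(f"~{rate:.1f} amino acids per second")  # ~8.3 amino acids per second
```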
How Are Ribosomes Made?
Ribosomes are essential for the construction of proteins, but where do ribosomes themselves come from? In eukaryotes, ribosomal subunits are assembled in the nucleolus, a specialized region inside the nucleus. Although it might sound paradoxical, the protein components of ribosomes are themselves constructed by other ribosomes. Some chromosomes have sequences that encode ribosomal RNA and ribosomal proteins. The rRNA strands are transcribed from this DNA, while the r-proteins are constructed by existing ribosomes via translation. The rRNA and r-proteins then conglomerate to form the ribosomal subunits and are shuttled out of the nucleolus into the cytoplasm to do their job.
Because ribosomes are so complex, a single error in construction can compromise the functioning of the entire complex. As such, the body has extensive surveillance mechanisms in place to detect non-functional rRNA strands and non-functional mature ribosomes. At this time, it is not fully understood how exactly the body checks for all the possible errors in ribosomal structure.
Evolutionarily speaking, ribosomes are very old as they seem to have been present in the first forms of life to exist on Earth. The exact story behind the evolution of ribosomes is not known. Some have theorized that ribosomes evolved out of self-replicating RNA molecules that only later gained the ability to synthesize proteins once amino acids became common. Ribosomes could then trace their origin back to a time before the emergence of DNA and protein-based life, a hypothetical time period dubbed the “RNA-world” by evolutionary biologists.
Diseases involving abnormalities in the structure or functioning of ribosomes are called ribosomopathies. One example is Treacher Collins syndrome, a rare genetic condition characterized by facial deformities. Treacher Collins syndrome is caused by mutations in three specific genes that encode proteins that play a role in the early development of the face. These mutations cause decreased production of rRNA and ribosomal subunits, which can lead to the premature death of cells involved in the development of facial bones and tissue. The deformed facial bones can cause breathing and hearing problems.
Viruses and Ribosomes
Many known viruses are capable of taking over normal ribosomes to make their own viral proteins. Viruses are microscopic quasi-organisms that infect host cells and hijack their cellular machinery to reproduce. Viruses, strictly speaking, are not considered living because they cannot reproduce on their own. Instead, viruses use the host cell's machinery to reproduce and assemble themselves inside the cell. Viruses are constructed, in part, of proteins, so they use host cell ribosomes to make those proteins.
Viruses attach themselves to healthy cells and inject their genetic material into them. The injected DNA or RNA takes over the cell’s functioning and begins to direct the construction of the proteins required to assemble new viruses. Because viruses do not have cells, ribosomes, or organelles of their own, they cannot produce these proteins themselves. Viral genetic information, in the form of mRNA, is fed into the host cell’s ribosomes to construct the proteins required for viral replication. The virus continues to replicate and assemble until the cell bursts and dies.
To sum up, ribosomes are intracellular units that assist in the construction of proteins during RNA translation. Ribosomes help make proteins by organizing the mRNA, providing sites for amino acid-carrying tRNA to bind to the mRNA, and catalyzing the condensation reactions that link amino acids into a polypeptide chain. All living organisms, eukaryotes and prokaryotes, have ribosomes of some kind.
Ribosomes themselves are made of a special kind of ribosomal RNA and ribosomal proteins, existing together in a complex tangle. Ribosomes are further divided into two main parts: a smaller subunit that reads the information encoded in mRNA, and a larger subunit that organizes tRNA and binds amino acids to each other.
Gender equality, also known as sexual equality or equality of the sexes, is the state of equal ease of access to resources and opportunities regardless of gender, including economic participation and decision-making; and the state of valuing different behaviors, aspirations and needs equally, regardless of gender.
Gender equality is the goal, while gender neutrality and gender equity are practices and ways of thinking that help in achieving the goal. Gender parity, which is used to measure gender balance in a given situation, can aid in achieving gender equality but is not the goal in and of itself. Gender equality is more than equal representation; it is strongly tied to women's rights, and often requires policy changes. As of 2017, the global movement for gender equality has not incorporated the proposition of genders besides women and men, or gender identities outside of the gender binary.
UNICEF says gender equality "means that women and men, and girls and boys, enjoy the same rights, resources, opportunities and protections. It does not require that girls and boys, or women and men, be the same, or that they be treated exactly alike."[a]
On a global scale, achieving gender equality also requires eliminating harmful practices against women and girls, including sex trafficking, femicide, wartime sexual violence, and other oppression tactics. UNFPA stated that, "despite many international agreements affirming their human rights, women are still much more likely than men to be poor and illiterate. They have less access to property ownership, credit, training and employment. They are far less likely than men to be politically active and far more likely to be victims of domestic violence."
As of 2017, gender equality is the fifth of seventeen sustainable development goals of the United Nations. Gender inequality is measured annually by the United Nations Development Programme's Human Development Reports.
Christine de Pizan, an early advocate for gender equality, states in her 1405 book The Book of the City of Ladies that the oppression of women is founded on irrational prejudice, pointing out numerous advances in society probably created by women.
The Shakers, an evangelical group, which practiced segregation of the sexes and strict celibacy, were early practitioners of gender equality. They branched off from a Quaker community in the north-west of England before emigrating to America in 1774. In America, the head of the Shakers' central ministry in 1788, Joseph Meacham, had a revelation that the sexes should be equal. He then brought Lucy Wright into the ministry as his female counterpart, and together they restructured the society to balance the rights of the sexes. Meacham and Wright established leadership teams where each elder, who dealt with the men's spiritual welfare, was partnered with an eldress, who did the same for women. Each deacon was partnered with a deaconess. Men had oversight of men; women had oversight of women. Women lived with women; men lived with men. In Shaker society, a woman did not have to be controlled or owned by any man. After Meacham's death in 1796, Wright became the head of the Shaker ministry until her death in 1821.
Shakers maintained the same pattern of gender-balanced leadership for more than 200 years. They also promoted equality by working together with other women's rights advocates. In 1859, Shaker Elder Frederick Evans stated their beliefs forcefully, writing that Shakers were "the first to disenthrall woman from the condition of vassalage to which all other religious systems (more or less) consign her, and to secure to her those just and equal rights with man that, by her similarity to him in organization and faculties, both God and nature would seem to demand". Evans and his counterpart, Eldress Antoinette Doolittle, joined women's rights advocates on speakers' platforms throughout the northeastern U.S. in the 1870s. A visitor to the Shakers wrote in 1875:
Each sex works in its own appropriate sphere of action, there being a proper subordination, deference and respect of the female to the male in his order, and of the male to the female in her order [emphasis added], so that in any of these communities the zealous advocates of "women’s rights" may here find a practical realization of their ideal.
The Shakers were more than a radical religious sect on the fringes of American society; they put equality of the sexes into practice. It has been argued that they demonstrated that gender equality was achievable and how to achieve it.
In wider society, the movement towards gender equality began with the suffrage movement in Western cultures in the late-19th century, which sought to allow women to vote and hold elected office. This period also witnessed significant changes to women's property rights, particularly in relation to their marital status. (See for example, Married Women's Property Act 1882.)
Since World War II, the women's liberation movement and feminism have created a general movement towards recognition of women's rights. The United Nations and other international agencies have adopted several conventions which promote gender equality. These conventions have not been uniformly adopted by all countries, and include:
- The Convention against Discrimination in Education was adopted in 1960, and came into force in 1962 and 1968.
- The Convention on the Elimination of All Forms of Discrimination against Women (CEDAW) was adopted in 1979 by the United Nations General Assembly. It has been described as an international bill of rights for women, which came into force on 3 September 1981.
- The Vienna Declaration and Programme of Action, a human rights declaration adopted by consensus at the World Conference on Human Rights on 25 June 1993 in Vienna, Austria. Women's rights are addressed at para 18.
- The Declaration on the Elimination of Violence Against Women was adopted by the United Nations General Assembly in 1993.
- In 1994, the twenty-year Cairo Programme of Action was adopted at the International Conference on Population and Development (ICPD) in Cairo. This non binding programme-of-action asserted that governments have a responsibility to meet individuals' reproductive needs, rather than demographic targets. As such, it called for family planning, reproductive rights services, and strategies to promote gender equality and stop violence against women.
- Also in 1994, in the Americas, the Inter-American Convention on the Prevention, Punishment and Eradication of Violence against Women, known as the Belém do Pará Convention, called for the end of violence and discrimination against women.
- At the end of the Fourth World Conference on Women, the UN adopted the Beijing Declaration on 15 September 1995 - a resolution adopted to promulgate a set of principles concerning gender equality.
- The United Nations Security Council Resolution 1325 (UNSRC 1325), which was adopted on 31 October 2000, deals with the rights and protection of women and girls during and after armed conflicts.
- The Maputo Protocol guarantees comprehensive rights to women, including the right to take part in the political process, to social and political equality with men, to control their reproductive health, and an end to female genital mutilation. It was adopted by the African Union in the form of a protocol to the African Charter on Human and Peoples' Rights and came into force in 2005.
- The EU directive Directive 2002/73/EC - equal treatment of 23 September 2002 amending Council Directive 76/207/EEC on the implementation of the principle of equal treatment for men and women as regards access to employment, vocational training and promotion, and working conditions states that: "Harassment and sexual harassment within the meaning of this Directive shall be deemed to be discrimination on the grounds of sex and therefore prohibited."
- The Council of Europe's Convention on preventing and combating violence against women and domestic violence, the first legally binding instrument in Europe in the field of violence against women, came into force in 2014.
- The Council of Europe's Gender Equality Strategy 2014-2017, which has five strategic objectives.
Such legislation and affirmative action policies have been critical to bringing changes in societal attitudes. A 2015 Pew Research Center survey of citizens in 38 countries found that majorities in 37 of those 38 countries said that gender equality is at least "somewhat important," and a global median of 65% believe it is "very important" that women have the same rights as men. In many countries, most occupations are now equally available to men and women.[i]
Similarly, men are increasingly working in occupations which in previous generations had been considered women's work, such as nursing, cleaning and child care. In domestic situations, the role of parenting or child rearing is more commonly shared or not as widely considered to be an exclusively female role, so that women may be free to pursue a career after childbirth. For further information, see Shared earning/shared parenting marriage.
Another manifestation of the change in social attitudes is that women no longer automatically take their husband's surname on marriage.
A highly contentious issue relating to gender equality is the role of women in religiously orientated societies.[ii][iii] Some Christians or Muslims believe in Complementarianism, a view that holds that men and women have different but complementing roles. This view may be in opposition to the views and goals of gender equality.
In addition, there are also non-Western countries of low religiosity where the contention surrounding gender equality remains. In China, a cultural preference for a male child has resulted in a shortfall of women in the population. The feminist movement in Japan has made many strides which resulted in the Gender Equality Bureau, but Japan still remains low in gender equality compared to other industrialized nations.
The notion of gender equality, and of its degree of achievement in a certain country, is very complex because there are countries that have a history of a high level of gender equality in certain areas of life but not in other areas.[iv][v] Indeed, there is a need for caution when categorizing countries by the level of gender equality that they have achieved. According to Mala Htun and S. Laurel Weldon "gender policy is not one issue but many" and:
When Costa Rica has a better maternity leave than the United States, and Latin American countries are quicker to adopt policies addressing violence against women than the Nordic countries, one at least ought to consider the possibility that fresh ways of grouping states would further the study of gender politics.
Not all beliefs relating to gender equality have been popularly adopted. For example, topfreedom, the right to be bare breasted in public, frequently applies only to males and has remained a marginal issue. Breastfeeding in public is now more commonly tolerated, especially in semi-private places such as restaurants.
Gender equality is the vision that men and women should be treated equally in social, economic and all other aspects of society, and not be discriminated against on the basis of their gender.[vi] Gender equality is one of the objectives of the United Nations Universal Declaration of Human Rights. World bodies have defined gender equality in terms of human rights, especially women's rights, and economic development. The United Nations' Millennium Development Goals Report states that their goal is to "achieve gender equality and the empowerment of women". Despite economic struggles in developing countries, the United Nations is still trying to promote gender equality, as well as help create a sustainable living environment in all its nations. Their goals also include giving women who work certain full-time jobs equal pay to men with the same jobs.
There has been criticism from some feminists towards the political discourse and policies employed to achieve the above items of "progress" in gender equality. Critics argue that these gender equality strategies are superficial, in that they do not seek to challenge social structures of male domination and only aim at improving the situation of women within the societal framework of subordination of women to men. They also argue that official public policies (such as state policies or the policies of international bodies) are questionable, as they are applied in a patriarchal context and are directly or indirectly controlled by agents of a system which is for the most part male. One criticism of gender equality policies, in particular those of the European Union, is that they disproportionately focus on policies integrating women into public life, but do not seek to genuinely address the deep oppression in the private sphere.
A further criticism is that a focus on the situation of women in non-Western countries, while often ignoring the issues that exist in the West, is a form of imperialism and of reinforcing Western moral superiority, and a way of "othering" domestic violence by presenting it as something specific to outsiders - the "violent others" - and not to the allegedly progressive Western cultures. These critics point out that women in Western countries often face similar problems, such as domestic violence and rape, as in other parts of the world. They also cite the fact that women faced de jure legal discrimination until just a few decades ago; for instance, in some Western countries such as Switzerland, Greece, Spain, and France, women obtained equal rights in family law only in the 1980s.[vii][viii][ix][x]

Another criticism is that there is a selective public discourse with regard to different types of oppression of women, with some forms of violence, such as honor killings (most common in certain geographic regions such as parts of Asia and North Africa), being frequently the object of public debate, while other forms of violence, such as the lenient punishment for crimes of passion across Latin America, do not receive the same attention in the West.[xi] It is also argued that the criticism of particular laws of many developing countries ignores the influence of colonialism on those legal systems.[xii] There has been controversy surrounding the concepts of Westernization and Europeanisation, due to their reminder of past colonialism, and also due to the fact that some Western countries, such as Switzerland, have themselves been very slow to give women legal rights.

There have also been objections to the way Western media present women from various cultures, creating stereotypes such as that of 'submissive' Asian or Eastern European women, a stereotype closely connected to the mail order brides industry. Such stereotypes are often blatantly untrue: for instance, women in many Eastern European countries occupy a high professional status. Feminists in many developing countries have been strongly opposed to the idea that women in those countries need to be 'saved' by the West. There are also questions about how exactly gender equality should be measured, and whether the West is indeed "best" at it: a study in 2010 found that among the top 20 countries for female university graduates in the science fields, most were countries considered internationally to score very low on women's rights, with the top 3 being Iran, Saudi Arabia and Oman, and only 5 European countries making it into that top 20: Romania, Bulgaria, Italy, Georgia and Greece.
Controversy regarding Western cultural influence in the world is not new; in the late 1940s, when the Universal Declaration of Human Rights was being drafted, the American Anthropological Association warned that the document would be defining universal rights from a Western perspective which could be detrimental to non-Western countries, and further argued that the West's history of colonialism and forceful interference with other societies made them a problematic moral representative for universal global standards.
There has been criticism that international law, international courts, and universal gender neutral concepts of human rights are at best silent on many of the issues important to women and at worst male centered; considering the male person to be the default. Excessive gender neutrality can worsen the situation of women, because the law assumes women are in the same position as men, ignoring the biological fact that in the process of reproduction and pregnancy there is no 'equality', and that apart from physical differences there are socially constructed limitations which assign a socially and culturally inferior position to women - a situation which requires a specific approach to women's rights, not merely a gender neutral one. In a 1975 interview, Simone de Beauvoir talked about the negative reactions towards women's rights from the left that was supposed to be progressive and support social change, and also expressed skepticism about mainstream international organizations.
Efforts to fight inequality
In 2010, the European Union opened the European Institute for Gender Equality (EIGE) in Vilnius, Lithuania to promote gender equality and to fight sex discrimination. In 2015 the EU published the Gender Action Plan 2016–2020.
Gender equality is part of the national curriculum in Great Britain and many other European countries. By presidential decree, the Republic of Kazakhstan created a Strategy for Gender Equality 2006–2016 to chart the subsequent decade of gender equality efforts. Personal, Social and Health Education, religious studies and language acquisition curricula tend to address gender equality issues as a very serious topic for discussion and analysis of its effect in society.
A large and growing body of research has shown how gender inequality undermines health and development. To overcome gender inequality the United Nations Population Fund states that, "Women's empowerment and gender equality requires strategic interventions at all levels of programming and policy-making. These levels include reproductive health, economic empowerment, educational empowerment and political empowerment."
Health and safety
The effect of gender inequality on health
Social constructs of gender (that is, cultural ideals of socially acceptable masculinity and femininity) often have a negative effect on health. The World Health Organization cites the example of women not being allowed to travel alone outside the home (to go to the hospital), and women being prevented by cultural norms from asking their husbands to use a condom, in cultures which simultaneously encourage male promiscuity, as social norms that harm women's health. Teenage boys suffering accidents due to social expectations of impressing their peers through risk taking, and men dying at much higher rates from lung cancer due to smoking, in cultures which link smoking to masculinity, are cited by the WHO as examples of gender norms negatively affecting men's health. The World Health Organization has also stated that there is a strong connection between gender socialization and both the transmission and the inadequate management of HIV/AIDS.
Certain cultural practices, such as female genital mutilation (FGM), negatively affect women's health. Female genital mutilation is the ritual cutting or removal of some or all of the external female genitalia. It is rooted in inequality between the sexes, and constitutes a form of discrimination against women. The practice is found in Africa, Asia and the Middle East, and among immigrant communities from countries in which FGM is common. UNICEF estimated in 2016 that 200 million women have undergone the procedure.
According to the World Health Organization, gender equality can improve men's health. The study shows that traditional notions of masculinity have a big impact on men's health. Among European men, non-communicable diseases, such as cancer, cardiovascular diseases, respiratory illnesses, and diabetes, account for the vast majority of deaths of men aged 30–59 in Europe. These deaths are often linked to unhealthy diets, stress, substance abuse, and other habits, which the report connects to behaviors often stereotypically seen as masculine, like heavy drinking and smoking. Traditional gender stereotypes that keep men in the role of breadwinner, and systematic discrimination preventing women from equally contributing to their households and participating in the workforce, can put additional stress on men, increasing their risk of health issues. In addition, men, bolstered by cultural norms, tend to take more risks and engage in interpersonal violence more often than women, which can result in fatal injuries.
Violence against women
Violence against women is a technical term used to collectively refer to violent acts that are primarily or exclusively committed against women.[xiii] This type of violence is gender-based, meaning that the acts of violence are committed against women expressly because they are women, or as a result of patriarchal gender constructs.[xiv] Violence and mistreatment of women in marriage has come to international attention during the past decades. This includes both violence committed inside marriage (domestic violence) as well as violence related to marriage customs and traditions (such as dowry, bride price, forced marriage and child marriage).
According to some theories, violence against women is often caused by the acceptance of violence by various cultural groups as a means of conflict resolution within intimate relationships. Studies of intimate partner violence victimization among ethnic minorities in the United States have consistently revealed that immigrants are a high-risk group for intimate partner violence.
In countries where gang murders, armed kidnappings, civil unrest, and other similar acts are rare, the vast majority of murdered women are killed by partners/ex-partners.[xv] By contrast, in countries with a high level of organized criminal activity and gang violence, murders of women are more likely to occur in a public sphere, often in a general climate of indifference and impunity. In addition, many countries do not have adequate comprehensive data collection on such murders, aggravating the problem.
In some parts of the world, various forms of violence against women are tolerated and accepted as parts of everyday life.[xvi]
In most countries, it is only in more recent decades that domestic violence against women has received significant legal attention. The Istanbul Convention acknowledges the long tradition of European countries of ignoring this form of violence.[xvii][xviii]
In some cultures, acts of violence against women are seen as crimes against the male 'owners' of the woman, such as her husband, father or male relatives, rather than against the woman herself. This leads to practices where men inflict violence upon women in order to get revenge on male members of the women's family. Such practices include payback rape, a form of rape specific to certain cultures, particularly the Pacific Islands, which consists of the rape of a female, usually by a group of several males, as revenge for acts committed by members of her family, such as her father or brothers, with the rape being meant to humiliate the father or brothers, as punishment for their prior behavior towards the perpetrators.
Richard A. Posner writes that "Traditionally, rape was the offense of depriving a father or husband of a valuable asset — his wife's chastity or his daughter's virginity". Historically, rape was seen in many cultures (and is still seen today in some societies) as a crime against the honor of the family, rather than against the self-determination of the woman. As a result, victims of rape may face violence, in extreme cases even honor killings, at the hands of their family members. Catharine MacKinnon argues that in male dominated societies, sexual intercourse is imposed on women in a coercive and unequal way, creating a continuum of victimization, where women have few positive sexual experiences.[xix] Socialization within rigid gender constructs often creates an environment where sexual violence is common.[xx] One of the challenges of dealing with sexual violence is that in many societies women are perceived as being readily available for sex, and men are seen as entitled to their bodies, until and unless women object.[xxi]
Violence against trans women
In 2009, United States data showed that transgender people are likely to experience a broad range of violence in the entirety of their lifetime. Violence against trans women in Puerto Rico started to make headlines after being treated as "An Invisible Problem" decades before. It was reported at the 58th Convention of the Puerto Rican Association that many transgender women face institutional, emotional, and structural obstacles. Most trans women don't have access to health care for STD prevention and are not educated on violence prevention, mental health, and social services that could benefit them.
Trans women in the United States have encountered the subject of anti-trans stigma, which includes criminalization, dehumanization, and violence against those who identify as transgender. From a societal standpoint, a trans person can be a victim of this stigma due to lack of family support, issues with health care and social services, police brutality, discrimination in the workplace, cultural marginalisation, poverty, sexual assault, assault, bullying, and mental trauma. The Human Rights Campaign tracked over 128 cases that ended in fatality against transgender people in the US from 2013–2018, of which eighty percent involved a trans woman of color. In the US, high rates of intimate partner violence affect trans women differently because they face discrimination from police and health providers, and alienation from family. In 2018, it was reported that 77 percent of transgender people who were linked to sex work and 72 percent of transgender people who were homeless were victims of intimate partner violence.
Reproductive and sexual health and rights
The importance of women having the right and possibility to have control over their body, reproduction decisions, and sexuality, and the need for gender equality in order to achieve these goals are recognized as crucial by the Fourth World Conference on Women in Beijing and the UN International Conference on Population and Development Program of Action. The World Health Organization (WHO) has stated that promotion of gender equality is crucial in the fight against HIV/AIDS.
Maternal mortality is a major problem in many parts of the world. UNFPA states that countries have an obligation to protect women's right to health, but many countries do not do that. Maternal mortality is considered today not just an issue of development but also an issue of human rights.[xxii]
The right to reproductive and sexual autonomy is denied to women in many parts of the world, through practices such as forced sterilization, forced/coerced sexual partnering (e.g. forced marriage, child marriage), criminalization of consensual sexual acts (such as sex outside marriage), lack of criminalization of marital rape, violence in regard to the choice of partner (honor killings as punishment for 'inappropriate' relations).[xxiii] The sexual health of women is often poor in societies where a woman's right to control her sexuality is not recognized.[xxiv]
Adolescent girls have the highest risk of sexual coercion, sexual ill health, and negative reproductive outcomes. The risks they face are higher than those of boys and men; this increased risk is partly due to gender inequity (different socialization of boys and girls, gender based violence, child marriage) and partly due to biological factors.[xxv]
Family planning and abortion
Family planning is the practice of freely deciding the number of children one has and the intervals between their births, particularly by means of contraception or voluntary sterilization. Abortion is the induced termination of pregnancy. Abortion laws vary significantly by country. The availability of contraception, sterilization and abortion is dependent on laws, as well as social, cultural and religious norms. Some countries have liberal laws regarding these issues, but in practice it is very difficult to access such services due to doctors, pharmacists and other social and medical workers being conscientious objectors. Family planning is particularly important from a women's rights perspective, as having very many pregnancies, especially in areas where malnutrition is present, can seriously endanger women's health. UNFPA writes that "Family planning is central to gender equality and women’s empowerment, and it is a key factor in reducing poverty".
Family planning is often opposed by governments who have strong natalist policies. During the 20th century, such examples have included the aggressive natalist policies from communist Romania and communist Albania. State mandated forced marriage was also practiced by some authoritarian governments as a way to meet population targets: the Khmer Rouge regime in Cambodia systematically forced people into marriages, in order to increase the population and continue the revolution. By contrast, the one child policy of China (1979–2015) included punishments for families with more than one child and forced abortions. Some governments have sought to prevent certain ethnic or social groups from reproduction. Such policies were carried out against ethnic minorities in Europe and North America in the 20th century, and more recently in Latin America against the Indigenous population in the 1990s; in Peru, President Alberto Fujimori (in office from 1990 to 2000) has been accused of genocide and crimes against humanity as a result of a sterilization program put in place by his administration targeting indigenous people (mainly the Quechuas and the Aymaras).
Investigation and prosecution of crimes against women and girls
Human rights organizations have expressed concern about the legal impunity of perpetrators of crimes against women, with such crimes being often ignored by authorities. This is especially the case with murders of women in Latin America. In particular, there is impunity in regard to domestic violence.[xxvi]
Women are often, in law or in practice, unable to access legal institutions. UN Women has said that: "Too often, justice institutions, including the police and the courts, deny women justice". Often, women are denied legal recourse because the state institutions themselves are structured and operate in ways incompatible with genuine justice for women who experience violence.[xxvii]
Harmful traditional practices
"Harmful traditional practices" refer to forms of violence which are committed in certain communities often enough to become cultural practice, and accepted for that reason. Young women are the main victims of such acts, although men can be affected. They occur in an environment where women and girls have unequal rights and opportunities. These practices include, according to the Office of the United Nations High Commissioner for Human Rights:
female genital mutilation (FGM); forced feeding of women; early marriage; the various taboos or practices which prevent women from controlling their own fertility; nutritional taboos and traditional birth practices; son preference and its implications for the status of the girl child; female infanticide; early pregnancy; and dowry price
Son preference refers to a cultural preference for sons over daughters, and manifests itself through practices such as sex selective abortion; female infanticide; or abandonment, neglect or abuse of girl-children.
Nutritional abuses include taboos regarding certain foods, which result in poor nutrition for women and may endanger their health, especially if they are pregnant.
The caste system in India which leads to untouchability (the practice of ostracizing a group by segregating them from the mainstream society) often interacts with gender discrimination, leading to a double discrimination faced by Dalit women. In a 2014 survey, 27% of Indians admitted to practicing untouchability.
Traditional customs regarding birth sometimes endanger the mothers. Births in parts of Africa are often attended by traditional birth attendants (TBAs), who sometimes perform rituals that are dangerous to the health of the mother. In many societies, a difficult labour is believed to be a divine punishment for marital infidelity, and such women face abuse and are pressured to "confess" to the infidelity.
Tribal traditions can be harmful to males; for instance, the Satere-Mawe tribe use bullet ants as an initiation rite. Men must wear gloves with hundreds of bullet ants woven in for ten minutes: the ants' stings cause severe pain and paralysis. This experience must be completed twenty times for boys to be considered "warriors".
Female genital mutilation
UNFPA and UNICEF regard the practice of female genital mutilation as "a manifestation of deeply entrenched gender inequality. It persists for many reasons. In some societies, for example, it is considered a rite of passage. In others, it is seen as a prerequisite for marriage. In some communities – whether Christian, Jewish, Muslim – the practice may even be attributed to religious beliefs."
An estimated 125 million women and girls living today have undergone FGM in the 29 countries where data exist. Of these, about half live in Egypt and Ethiopia. It is most commonly carried out on girls between infancy and 15 years old.
Forced marriage and child marriage
Early marriage, child marriage or forced marriage is prevalent in parts of Asia and Africa. The majority of victims seeking advice are female and aged between 18 and 23. Such marriages can have harmful effects on a girl's education and development, and may expose girls to social isolation or abuse.
The 2013 UN Resolution on Child, Early and Forced Marriage calls for an end to the practice, and states that "Recognizing that child, early and forced marriage is a harmful practice that violates, abuses, or impairs human rights and is linked to and perpetuates other harmful practices and human rights violations, that these violations have a disproportionately negative impact on women and girls [...]". Despite a near-universal commitment by governments to end child marriage, "one in three girls in developing countries (excluding China) will probably be married before they are 18." UNFPA states that, "over 67 million women 20–24 year old in 2010 had been married as girls. Half were in Asia, one-fifth in Africa. In the next decade 14.2 million girls under 18 will be married every year; this translates into 39,000 girls married each day. This will rise to an average of 15.1 million girls a year, starting in 2021 until 2030, if present trends continue."
Bride price (also called bridewealth or bride token) is money, property, or other form of wealth paid by a groom or his family to the parents of the bride. This custom often leads to women having reduced ability to control their fertility. For instance, in northern Ghana, the payment of bride price signifies a woman's requirement to bear children, and women using birth control face threats, violence and reprisals. The custom of bride price has been criticized as contributing to the mistreatment of women in marriage, and preventing them from leaving abusive marriages. UN Women recommended its abolition, and stated that: "Legislation should [...] State that divorce shall not be contingent upon the return of bride price but such provisions shall not be interpreted to limit women’s right to divorce; State that a perpetrator of domestic violence, including marital rape, cannot use the fact that he paid bride price as a defence to a domestic violence charge."
The custom of bride price can also curtail the free movement of women: if a wife wants to leave her husband, he may demand back the bride price that he had paid to the woman's family; and the woman's family often cannot or does not want to pay it back, making it difficult for women to move out of violent husbands' homes.
Economy and public policy
Economic empowerment of women
Gender discrimination often results in women obtaining low-wage jobs and being disproportionately affected by poverty, discrimination and exploitation.[xxx] A growing body of research documents what works to economically empower women, from providing access to formal financial services to training on agricultural and business management practices, though more research is needed across a variety of contexts to confirm the effectiveness of these interventions.
Gender biases also exist in product and service provision. The term "Women's Tax", also known as "Pink Tax", refers to gendered pricing in which products or services marketed to women are more expensive than similar products marketed to men. Gender-based price discrimination involves companies selling almost identical units of the same product or service at comparatively different prices, as determined by the target market. Studies have found that women pay about $1,400 a year more than men due to gendered discriminatory pricing. Although the "pink tax" of different goods and services is not uniform, overall women pay more for commodities that result in visual evidence of feminine body image.[xxxi]
Gendered arrangements of work and care
Since the 1950s, social scientists as well as feminists have increasingly criticized gendered arrangements of work and care and the male breadwinner role. Policies are increasingly targeting men as fathers as a tool for changing gender relations. Shared earning/shared parenting marriage, that is, a relationship where the partners collaborate in sharing their responsibilities inside and outside of the home, is often encouraged in Western countries.
Western countries with a strong emphasis on women fulfilling the role of homemakers, rather than a professional role, include parts of German speaking Europe (i.e. parts of Germany, Austria and Switzerland); as well as the Netherlands and Ireland.[xxxii][xxxiii][xxxiv][xxxv] In the computer technology world of Silicon Valley in the United States, New York Times reporter Nellie Bowles has covered harassment and bias against women as well as a backlash against female equality.
A key issue in ensuring gender equality in the workplace is respect for the maternity rights and reproductive rights of women. Different countries have different rules regarding maternity leave, paternity leave and parental leave.[xxxvi] Another important issue is ensuring that employed women are not de jure or de facto prevented from having a child.[xxxvii] In some countries, employers ask women to sign formal or informal documents stipulating that they will not get pregnant, or else face legal punishment. Women often face severe violations of their reproductive rights at the hands of their employers; the International Labour Organization classifies forced abortion coerced by an employer as labour exploitation.[xxxviii] Other abuses include routine virginity tests of unmarried employed women.
Freedom of movement
The degree to which women can participate (in law and in practice) in public life varies by culture and socioeconomic characteristics. Seclusion of women within the home was a common practice among the upper classes of many societies, and this still remains the case today in some societies. Before the 20th century it was also common in parts of Southern Europe, such as much of Spain.
Women's freedom of movement continues to be legally restricted in some parts of the world. This restriction is often due to marriage laws.[xxxix] In some countries, women must legally be accompanied by their male guardians (such as the husband or male relative) when they leave home.
The Convention on the Elimination of all Forms of Discrimination Against Women (CEDAW) states at Article 15 (4) that:
4. States Parties shall accord to men and women the same rights with regard to the law relating to the movement of persons and the freedom to choose their residence and domicile.
In addition to laws, women's freedom of movement is also restricted by social and religious norms.[xl] Restrictions on freedom of movement also exist due to traditional practices such as baad, swara, or vani.[xli]
Girls' access to education
In many parts of the world, girls' access to education is very restricted. In developing parts of the world women are often denied opportunities for education as girls and women face many obstacles. These include: early and forced marriages; early pregnancy; prejudice based on gender stereotypes at home, at school and in the community; violence on the way to school, or in and around schools; long distances to schools; vulnerability to the HIV epidemic; school fees, which often lead to parents sending only their sons to school; lack of gender sensitive approaches and materials in classrooms. According to OHCHR, there have been multiple attacks on schools worldwide during the period 2009–2014 with "a number of these attacks being specifically directed at girls, parents and teachers advocating for gender equality in education". The United Nations Population Fund says:
About two thirds of the world's illiterate adults are women. Lack of an education severely restricts a woman's access to information and opportunities. Conversely, increasing women's and girls' educational attainment benefits both individuals and future generations. Higher levels of women's education are strongly associated with lower infant mortality and lower fertility, as well as better outcomes for their children.
Political participation of women
Women are underrepresented in most countries' national parliaments. The 2011 UN General Assembly resolution on women's political participation called for female participation in politics, and expressed concern about the fact that "women in every part of the world continue to be largely marginalized from the political sphere".[xlii] Only 22 percent of parliamentarians globally are women, and therefore men continue to occupy most positions of political and legal authority. As of November 2014, women accounted for 28% of members of the single or lower houses of parliaments in the European Union member states.[XLVII]
In 2015, 61.3% of Rwanda's Lower House of Parliament were women, the highest proportion anywhere in the world, but worldwide that was one of only two such bodies where women were in the majority, the other being Bolivia's Lower House of Parliament. (See also Gender equality in Rwanda).
Marriage, divorce and property laws and regulations
Equal rights for women in marriage, divorce, and property/land ownership and inheritance are essential for gender equality. The Convention on the Elimination of all Forms of Discrimination Against Women (CEDAW) has called for the end of discriminatory family laws. In 2013, UN Women stated that "While at least 115 countries recognize equal land rights for women and men, effective implementation remains a major challenge".
The legal and social treatment of married women has been often discussed as a political issue from the 19th century onwards.[xliv][xlv] Until the 1970s, legal subordination of married women was common across European countries, through marriage laws giving legal authority to the husband, as well as through marriage bars.[xlvi][xlvii] In 1978, the Council of Europe passed Resolution (78) 37 on the equality of spouses in civil law. Switzerland was one of the last countries in Europe to establish gender equality in marriage: married women's rights in that country were severely restricted until 1988, when legal reforms providing for gender equality in marriage and abolishing the legal authority of the husband came into force (these reforms had been approved in 1985 by voters in a referendum, who narrowly voted in favor, with 54.7% approving). In the Netherlands, it was only in 1984 that full legal equality between husband and wife was achieved: prior to 1984 the law stipulated that the husband's opinion prevailed over the wife's regarding issues such as decisions on children's education and the domicile of the family.
In the United States, a wife's legal subordination to her husband was fully ended by the case of Kirchberg v. Feenstra, 450 U.S. 455 (1981), a United States Supreme Court case in which the Court held a Louisiana Head and Master law, which gave sole control of marital property to the husband, unconstitutional.
There has been, and in some places continues to be, unequal treatment of married women in various aspects of everyday life. For example, in Australia, until 1983 a husband had to authorize an application for an Australian passport for a married woman. Other practices have included, and in many countries continue to include, a requirement for a husband's consent for an application for bank loans and credit cards by a married woman, as well as restrictions on the wife's reproductive rights, such as a requirement that the husband consent to the wife's acquiring contraception or having an abortion. In some places, although the law itself no longer requires the consent of the husband for various actions taken by the wife, the practice continues de facto, with the authorization of the husband being asked for in practice.
Laws regulating marriage and divorce continue to discriminate against women in many countries.[xlix] In Iraq husbands have a legal right to "punish" their wives, with paragraph 41 of the criminal code stating that there is no crime if an act is committed while exercising a legal right.[l] In the 1990s and the 21st century there has been progress in many countries in Africa: for instance in Namibia the marital power of the husband was abolished in 1996 by the Married Persons Equality Act; in Botswana it was abolished in 2004 by the Abolition of Marital Power Act; and in Lesotho it was abolished in 2006 by the Married Persons Equality Act. Violence against a wife continues to be seen as legally acceptable in some countries; for instance in 2010, the United Arab Emirates Supreme Court ruled that a man has the right to physically discipline his wife and children as long as he does not leave physical marks. The criminalization of adultery has been criticized as being a prohibition, which, in law or in practice, is used primarily against women; and incites violence against women (crimes of passion, honor killings).[li]
Social and ideological
Political gender equality
Two recent movements in countries with large Kurdish populations have implemented political gender equality. One has been the Kurdish movement in southeastern Turkey led by the Democratic Regions Party (DBP) and the Peoples' Democratic Party (HDP), from 2006 or before. The mayorships of 2 metropolitan areas and 97 towns are led jointly by a man and a woman, both called co-mayors. Party offices are also led by a man and a woman. Local councils were formed, which also had to be co-presided over by a man and a woman together. However, in November 2016 the Turkish government cracked down on the HDP, jailing ten of its members of Parliament, including the party's male and female co-leaders.
A movement in northern Syria, also Kurdish, has been led by the Democratic Union Party (PYD). In northern Syria all villages, towns and cities governed by the PYD were co-governed by a man and a woman. Local councils were formed where each sex had to have 40% representation, and minorities also had to be represented.
Gender stereotypes arise from the socially approved roles of women and men in the private or public sphere, at home or in the workplace. In the household, women are typically seen as mother figures, which usually places them into the typical classification of being "supportive" or "nurturing". Women are expected to want to take on the role of a mother and take on primary responsibility for household needs. Their male counterparts are seen as "assertive" or "ambitious", as men are usually associated with the workplace and the role of primary breadwinner for the family. Due to these views and expectations, women often face discrimination in the public sphere, such as the workplace. Women are stereotyped as less productive at work because they are believed to focus more on family when they get married or have children. A gender role is a set of societal norms dictating the types of behaviors which are generally considered acceptable, appropriate, or desirable for people based on their sex. Gender roles are usually centered on conceptions of femininity and masculinity, although there are exceptions and variations.
Portrayal of women in the media
The way women are represented in the media has been criticized as perpetuating negative gender stereotypes. The exploitation of women in mass media refers to the criticisms levied against the use or objectification of women in the mass media, when such use or portrayal aims at increasing the appeal of media or a product to the detriment of, or without regard to, the interests of the women portrayed, or of women in general. Concerns include the fact that all forms of media have the power to shape the population's perceptions and to promote unrealistic, stereotypical images of women, portraying them either as submissive housewives or as sex objects, and that the media emphasizes traditional domestic or sexual roles that normalize violence against women. The many studies conducted on the portrayal of women in the media have shown that women are often depicted as irrational, fragile, unintelligent, submissive and subservient to men. Research has shown that such stereotyped images negatively affect the mental health of many female viewers who feel bound by these roles, causing, among other problems, self-esteem issues, depression and anxiety.
According to a study, the way women are often portrayed by the media can lead to: "Women of average or normal appearance feeling inadequate or less beautiful in comparison to the overwhelming use of extraordinarily attractive women"; "Increase in the likelihood and acceptance of sexual violence"; "Unrealistic expectations by men of how women should look or behave"; "Psychological disorders such as body dysmorphic disorder, anorexia, bulimia and so on"; and "The importance of physical appearance is emphasized and reinforced early in most girls' development. Studies have found that nearly half of females ages 6–8 have stated they want to be slimmer (Striegel-Moore & Franko, 2002)".
Statistics on women's representation in the media
- Women have won only a quarter of Pulitzer prizes for foreign reporting and only 17 per cent of awards of the Martha Gellhorn Prize for Journalism. In 2015 the African Development Bank began sponsoring a category for Women’s Rights in Africa, designed to promote gender equality through the media, as one of the prizes awarded annually by One World Media.
- Created in 1997, the UNESCO/Guillermo Cano World Press Freedom Prize is an annual award that honors a person, organization or institution that has made a notable contribution to the defense and/or promotion of press freedom anywhere in the world. Nine out of 20 winners have been women.
- The Poynter Institute since 2014 has been running a Leadership Academy for Women in Digital Media, expressly focused on the skills and knowledge needed to achieve success in the digital media environment.
- The World Association of Newspapers and News Publishers (WAN-IFRA), which represents more than 18,000 publications, 15,000 online sites and more than 3,000 companies in more than 120 countries, leads the Women in the News (WIN) campaign together with UNESCO as part of their Gender and Media Freedom Strategy. In their 2016 handbook, WINing Strategies: Creating Stronger Media Organizations by Increasing Gender Diversity, they highlight a range of positive action strategies undertaken by a number of their member organizations from Germany to Jordan to Colombia, with the intention of providing blueprints for others to follow.
Informing women of their rights
While in many countries the problem lies in the lack of adequate legislation, in others the principal problem is not so much the lack of a legal framework as the fact that most women do not know their legal rights. This is especially the case as many of the laws dealing with women's rights are of recent date. This lack of knowledge enables abusers to lead victims (explicitly or implicitly) to believe that their abuse is within their rights. It may apply to a wide range of abuses, ranging from domestic violence to employment discrimination. The United Nations Development Programme states that, in order to advance gender justice, "Women must know their rights and be able to access legal systems".
The 1993 UN Declaration on the Elimination of Violence Against Women states at Art. 4 (d) [...] "States should also inform women of their rights in seeking redress through such mechanisms". Enacting protective legislation against violence has little effect if women do not know how to use it, or do not know which acts are illegal. For example, a study of Bedouin women in Israel found that 60% did not know what a restraining order was; and a report by Amnesty International on Hungary showed that, in a public opinion poll of nearly 1,200 people in 2006, a total of 62% did not know that marital rape was illegal (it was outlawed in 1997), and therefore the crime was rarely reported. Ensuring women have a minimum understanding of health issues is also important: lack of access to reliable medical information, and to the medical procedures to which they are entitled, harms women's health.
Gender mainstreaming is described as the public policy of assessing the different implications for women and men of any planned policy action, including legislation and programmes, in all areas and levels, with the aim of achieving gender equality. The concept of gender mainstreaming was first proposed at the 1985 Third World Conference on Women in Nairobi, Kenya. The idea has been developed in the United Nations development community. Gender mainstreaming "involves ensuring that gender perspectives and attention to the goal of gender equality are central to all activities".
According to the Council of Europe definition: "Gender mainstreaming is the (re)organization, improvement, development and evaluation of policy processes, so that a gender equality perspective is incorporated in all policies at all levels and at all stages, by the actors normally involved in policy-making."
An integrated gender mainstreaming approach is "the attempt to form alliances and common platforms that bring together the power of faith and gender-equality aspirations to advance human rights." For example, "in Azerbaijan, UNFPA conducted a study on gender equality by comparing the text of the Convention on the Elimination of All Forms of Discrimination against Women with some widely recognized Islamic references and resources. The results reflect the parallels between the Convention and many tenets of Islamic scripture and practice. The study showcased specific issues, including VAW, child marriage, respect for the dignity of women, and equality in the economic and political participation of women. The study was later used to produce training materials geared towards sensitizing religious leaders."
See also
- Coloniality of gender
- Equal opportunity
- Gender empowerment
- Men's rights
- Right to equal protection
- Sex and gender distinction
- Sex industry
- Sex ratio
- Special Measures for Gender Equality in the United Nations (UN)
- Bahá'í Faith and gender equality
- Female education
- Gender Parity Index (in education)
- Gender polarization
- Gender sensitization
- Mixed-sex education
- Quaker Testimony of Equality
- Shared parenting (after divorce)
- Women in Islam
- 2009 Danish Act of Succession referendum
- Anti-discrimination law
- Equal Pay Act of 1963 (United States)
- Equality Act 2006 (UK)
- Equality Act 2010 (UK)
- European charter for equality of women and men in local life
- Gender Equality Duty in Scotland
- Gender Equity Education Act (Taiwan)
- Lilly Ledbetter Fair Pay Act (United States, 2009)
- List of gender equality lawsuits
- Paycheck Fairness Act (in the US)
- Title IX of the Education Amendments of 1972 (United States)
- Uniform civil code (India)
- Women's Petition to the National Assembly (France, 1789)
Organizations and ministries
- Afghan Ministry of Women Affairs (Afghanistan)
- Christians for Biblical Equality, an organization that opposes gender discrimination within the church
- Committee on Women's Rights and Gender Equality (European Parliament)
- Equal Opportunities Commission (United Kingdom)
- Gender Empowerment Measure, a metric used by the United Nations
- Gender Equity and Reconciliation International, an organization that supports women and men to collaborate on creating gender equality
- Gender-related Development Index, a metric used by the United Nations
- The Girl Effect, an organization to help girls, worldwide, toward ending poverty
- Government Equalities Office (UK)
- International Center for Research on Women
- Ministry of Integration and Gender Equality (Sweden)
- Ministry of Women, Family and Community Development (Malaysia)
- Philippine Commission on Women (Philippines)
- UN Women, United Nations entity working for the empowerment of women
Notes
- The ILO similarly defines gender equality as "the enjoyment of equal rights, opportunities and treatment by men and women and by boys and girls in all spheres of life"
- For example, many countries now permit women to serve in the armed forces, the police forces and to be fire fighters – occupations traditionally reserved for men. Although these continue to have a male majority, an increasing number of women are now active, especially in directive fields such as politics, and occupy high positions in business.
- For example, the Cairo Declaration on Human Rights in Islam declared that women have equal dignity but not equal rights, and this was accepted by many predominantly Muslim countries.
- In some Christian churches, the practice of churching of women may still have elements of ritual purification and the Ordination of women to the priesthood may be restricted or forbidden.
- An example is Finland, which has offered very high opportunities to women in public/professional life but has had a weak legal approach to the issue of violence against women, with the situation in this country having been called a paradox.[I][II]"Finland is repeatedly reminded of its widespread problem of violence against women and recommended to take more efficient measures to deal with the situation. International criticism concentrates on the lack of measures to combat violence against women in general and in particular on the lack of a national action plan to combat such violence and on the lack of legislation on domestic violence. (...) Compared to Sweden, Finland has been slower to reform legislation on violence against women. In Sweden, domestic violence was already illegal in 1864, while in Finland such violence was not outlawed until 1970, over a hundred years later. In Sweden the punishment of victims of incest was abolished in 1937 but not until 1971 in Finland. Rape within marriage was criminalised in Sweden in 1962, but the equivalent Finnish legislation only came into force in 1994 — making Finland one of the last European countries to criminalise marital rape. In addition, assaults taking place on private property did not become impeachable offences in Finland until 1995. Only in 1997 did victims of sexual offences and domestic violence in Finland become entitled to government-funded counselling and support services for the duration of their court cases."[III]
- Denmark received harsh criticism for inadequate laws in regard to sexual violence in a 2008 report produced by Amnesty International,[III] which described Danish laws as "inconsistent with international human rights standards".[IV] This led to Denmark reforming its sexual offenses legislation in 2013.[V][VI][VII]
- "Mainstreaming a gender perspective is the process of assessing the implications for women and men of any planned action, including legislation, policies or programmes, in all areas and at all levels. It is a strategy for making women's as well as men's concerns and experiences an integral dimension of the design, implementation, monitoring and evaluation of policies and programmes in all political, economic and societal spheres so that women and men benefit equally and inequality is not perpetuated. The ultimate goal is to achieve gender equality."[VIII]
- In Switzerland in 1985, a referendum guaranteed women legal equality with men within marriage.[IX][X] The new reforms came into force in January 1988.
- In Greece in 1983, legislation was passed guaranteeing equality between spouses, abolishing dowry, and ending legal discrimination against illegitimate children.[XI][XII]
- In 1981, Spain abolished the requirement that married women must have their husbands’ permission to initiate judicial proceedings[XIII]
- Although married women in France obtained the right to work without their husbands' permission in 1965,[XIV] and the paternal authority of a man over his family was ended in 1970 (before that parental responsibilities belonged solely to the father who made all legal decisions concerning the children), it was only in 1985 that a legal reform abolished the stipulation that the husband had the sole power to administer the children's property.[XV]
- In 2002, Widney Brown, advocacy director for Human Rights Watch, pointed out that "crimes of passion have a similar dynamic [to honor killings] in that the women are killed by male family members and the crimes are perceived [in those relevant parts of the world] as excusable or understandable".
- Especially of the French Napoleonic Code,[XVI] which was extremely powerful in its influence over the world (historian Robert Holtman regards it as one of the few documents that have influenced the whole world[XVII]) and which designated married women a subordinate role, and provided for leniency with regard to 'crimes of passion' (which was the case in France until 1975[XVIII])
- Forms of violence against women include Sexual violence (including War Rape, Marital rape, Date rape by drugs or alcohol, and Child sexual abuse, the latter often in the context of Child marriage), Domestic violence, Forced marriage, Female genital mutilation, Forced prostitution, Sex trafficking, Honor killing, Dowry killing, Acid attacks, Stoning, Flogging, Forced sterilisation, Forced abortion, violence related to accusations of witchcraft, mistreatment of widows (e.g. widow inheritance). Fighting against violence against women is considered a key issue for achieving gender equality. The Council of Europe adopted the Convention on preventing and combating violence against women and domestic violence (Istanbul Convention).
- The UN Declaration on the Elimination of Violence Against Women defines violence against women as "any act of gender-based violence that results in, or is likely to result in, physical, sexual or psychological harm or suffering to women, including threats of such acts, coercion or arbitrary deprivation of liberty, whether occurring in public or in private life" and states that:"violence against women is a manifestation of historically unequal power relations between men and women, which have led to domination over and discrimination against women by men and to the prevention of the full advancement of women, and that violence against women is one of the crucial social mechanisms by which women are forced into a subordinate position compared with men."[XIX]
- As of 2004–2009, former and current partners were responsible for more than 80% of all cases of murders of women in Cyprus, France, and Portugal.
- According to UNFPA:[XX]
- "In some developing countries, practices that subjugate and harm women – such as wife-beating, killings in the name of honour, female genital mutilation/cutting and dowry deaths – are condoned as being part of the natural order of things."
- In its explanatory report at para 219, it states:
- "There are many examples from past practice in Council of Europe member states that show that exceptions to the prosecution of such cases were made, either in law or in practice, if victim and perpetrator were, for example, married to each other or had been in a relationship. The most prominent example is rape within marriage, which for a long time had not been recognised as rape because of the relationship between victim and perpetrator."[XXI]
- In Opuz v Turkey, the European Court of Human Rights recognized violence against women as a form of discrimination against women: "[T]he Court considers that the violence suffered by the applicant and her mother may be regarded as gender-based violence which is a form of discrimination against women."[XXII] This is also the position of the Istanbul Convention which reads: "Article 3 – Definitions, For the purpose of this Convention: a "violence against women" is understood as a violation of human rights and a form of discrimination against women [...]".[XXIII]
- She writes "To know what is wrong with rape, know what is right about sex. If this, in turn, is difficult, the difficulty is as instructive as the difficulty men have in telling the difference when women see one. Perhaps the wrong of rape has proved so difficult to define because the unquestionable starting point has been that rape is defined as distinct from intercourse, while for women it is difficult to distinguish the two under conditions of male dominance."[XXIV]
- According to the World Health Organization: "Sexual violence is also more likely to occur where beliefs in male sexual entitlement are strong, where gender roles are more rigid, and in countries experiencing high rates of other types of violence."[XXV]
- Rebecca Cook wrote in Submission of Interights to the European Court of Human Rights in the case of M.C. v. Bulgaria, 12 April 2003: "The equality approach starts by examining not whether the woman said 'no', but whether she said 'yes'. Women do not walk around in a state of constant consent to sexual activity unless and until they say 'no', or offer resistance to anyone who targets them for sexual activity. The right to physical and sexual autonomy means that they have to affirmatively consent to sexual activity."
- UNFPA says that, "since 1990, the world has seen a 45 per cent decline in maternal mortality – an enormous achievement. But in spite of these gains, almost 800 women still die every day from causes related to pregnancy or childbirth. This is about one woman every two minutes."[XXVI] According to UNFPA:
- "Preventable maternal mortality occurs where there is a failure to give effect to the rights of women to health, equality, and non-discrimination. Preventable maternal mortality also often represents a violation of a woman’s right to life."
- Amnesty International’s Secretary General has stated that: "It is unbelievable that in the twenty-first century some countries are condoning child marriage and marital rape while others are outlawing abortion, sex outside marriage and same-sex sexual activity – even punishable by death."[XXVII]
- High Commissioner for Human Rights Navi Pillay has called for full respect and recognition of women's autonomy and sexual and reproductive health rights, stating:
- "Violations of women's human rights are often linked to their sexuality and reproductive role. Women are frequently treated as property, they are sold into marriage, into trafficking, into sexual slavery. Violence against women frequently takes the form of sexual violence. Victims of such violence are often accused of promiscuity and held responsible for their fate, while infertile women are rejected by husbands, families, and communities. In many countries, married women may not refuse to have sexual relations with their husbands, and often have no say in whether they use contraception."[XXVIII]
- Females' risk of acquiring sexually transmitted infections during unprotected sexual relations is two to four times that of males'.[XXIX]
- High Commissioner for Human Rights, Navi Pillay, has stated on domestic violence against women: "The reality for most victims, including victims of honor killings, is that state institutions fail them and that most perpetrators of domestic violence can rely on a culture of impunity for the acts they commit – acts which would often be considered as crimes, and be punished as such, if they were committed against strangers."[XXX]
- According to Amnesty International, "Women who are victims of gender-related violence often have little recourse because many state agencies are themselves guilty of gender bias and discriminatory practices."[XXXI]
- For example, nations of the Arab world that deny equality of opportunity to women were warned in a 2008 United Nations-sponsored report that this disempowerment is a critical factor crippling these nations' return to the first rank of global leaders in commerce, learning, and culture.[XXXII] That is, Western bodies are less likely to conduct commerce with nations in the Middle East that retain culturally accepted attitudes towards the status and function of women in their society, in an effort to pressure them to change these attitudes in the face of their relatively underdeveloped economies.
- UN Women states that: "Investing in women’s economic empowerment sets a direct path towards gender equality, poverty eradication and inclusive economic growth."
- The UN Population Fund says that, "Six out of 10 of the world’s poorest people are women. Economic disparities persist partly because much of the unpaid work within families and communities falls on the shoulders of women, and because women continue to face discrimination in the economic sphere."
- For example, studies have shown that women are charged more for services, especially tailoring, hair cutting and laundering.
- In 2011, Jose Manuel Barroso, then president of the European Commission, stated "Germany, but also Austria and the Netherlands, should look at the example of the northern countries [...] that means removing obstacles for women, older workers, foreigners and low-skilled job-seekers to get into the workforce".[XXXIII]
- The Netherlands and Ireland have been among the last Western countries to accept women as professionals; despite the Netherlands having an image as progressive on gender issues, women in the Netherlands work less in paid employment than women in other comparable Western countries. In the early 1980s, the Commission of the European Communities report, Women in the European Community, found that the Netherlands and Ireland had the lowest labour participation of married women and the most public disapproval of it.[XXXIV]
- In Ireland, until 1973, there was a marriage bar.[XXXV]
- In the Netherlands, from the 1990s onwards, the numbers of women entering the workplace have increased, but with most of the women working part time. As of 2014, the Netherlands and Switzerland were the only OECD members where most employed women worked part-time,[XXXVI] while in the United Kingdom, women made up two-thirds of workers on long term sick leave, despite making up only half of the workforce and even after excluding maternity leave.[XXXVII]
- In the European Union (EU) the policies vary significantly by country, but the EU members must abide by the minimum standards of the Pregnant Workers Directive and Parental Leave Directive.[XXXVIII]
- For example, some countries have enacted legislation explicitly outlawing or restricting what they view as abusive clauses in employment contracts regarding reproductive rights (such as clauses which stipulate that a woman cannot get pregnant during a specified time) rendering such contracts void or voidable.[XXXIX]
- Being the victim of a forced abortion compelled by the employer was ruled a ground for obtaining political asylum in the US.[XL]
- For instance, in Yemen, marriage regulations stipulate that a wife must obey her husband and must not leave home without his permission.[XLI]
- For example, purdah, a religious and social practice of female seclusion prevalent among some Muslim communities in Afghanistan and Pakistan as well as upper-caste Hindus in Northern India, such as the Rajputs, which often leads to the minimizing of the movement of women in public spaces and restrictions on their social and professional interactions;[XLII] or namus, a cultural concept strongly related to family honor.
- Common especially among Pashtun tribes in Pakistan and Afghanistan, whereby a girl is given from one family to another (often through a marriage) in order to settle disputes and feuds between the families. The girl, who now belongs to the second family, has very little autonomy and freedom, her role being to serve the new family.[XLIII][XLIV][XLV][XLVI]
- The Council of Europe states that:
- "Pluralist democracy requires balanced participation of women and men in political and public decision-making. Council of Europe standards provide clear guidance on how to achieve this."
- Notably in Switzerland, where women gained the right to vote in federal elections in 1971;[XLVIII] but in the canton of Appenzell Innerrhoden women obtained the right to vote on local issues only in 1991, when the canton was forced to do so by the Federal Supreme Court of Switzerland.[XLIX]
- John Stuart Mill, in The Subjection of Women (1869) compared marriage to slavery and wrote that: "The law of servitude in marriage is a monstrous contradiction to all the principles of the modern world, and to all the experience through which those principles have been slowly and painfully worked out."[L]
- In 1957, James Everett, then Minister for Justice in Ireland, stated: "The progress of organised society is judged by the status occupied by married women".[LI]
- In France, married women obtained the right to work without their husband's consent in 1965;[LII] while the paternal authority of a man over his family was ended in 1970 (before that parental responsibilities belonged solely to the father who made all legal decisions concerning the children); and a new reform in 1985 abolished the stipulation that the father had the sole power to administer the children's property.[LIII]
- In Austria, the marriage law was overhauled between 1975 and 1983, abolishing the restrictions on married women's right to work outside the home, providing for equality between spouses, and for joint ownership of property and assets.[LIV]
- For example, in Greece dowry was removed from family law only in 1983, through legal changes which reformed marriage law and provided for gender equality in marriage.[LV] These changes also addressed the practice of women changing their surnames to that of the husband upon marriage, a practice which has been outlawed or restricted in some jurisdictions because it is seen as contrary to women's rights. Accordingly, women in Greece are required to keep their birth names for their whole life.[LVI]
- For example, in Yemen, marriage regulations state that a wife must obey her husband and must not leave home without his permission.[XLI]
- Examples of legal rights include: "The punishment of a wife by her husband, the disciplining by parents and teachers of children under their authority within certain limits prescribed by law or by custom".[LVII]
- A Joint Statement by the United Nations Working Group on discrimination against women in law and in practice in 2012 stated:[LVIII] "the United Nations Working Group on discrimination against women in law and in practice is deeply concerned at the criminalization and penalization of adultery whose enforcement leads to discrimination and violence against women." UN Women also stated that "Drafters should repeal any criminal offenses related to adultery or extramarital sex between consenting adults".[LIX]
References
- Clarke, Kris (August 2011). "The paradoxical approach to intimate partner violence in Finland". International Perspectives in Victimology. 6 (1): 9–19. doi:10.5364/ipiv.6.1.19 (inactive 2020-05-21). Available through academia.edu.
- McKie, Linda; Hearn, Jeff (August 2004). "Gender-neutrality and gender equality: comparing and contrasting policy responses to 'domestic violence' in Finland and Scotland". Scottish Affairs. 48 (1): 85–107. doi:10.3366/scot.2004.0043.
- Danish, Swedish, Finnish and Norwegian sections of Amnesty International (March 2010), "Rape and human rights in Finland", in Danish, Swedish, Finnish and Norwegian sections of Amnesty International (ed.), Case closed: rape and human rights in the Nordic countries, Amnesty International, pp. 89–91, archived from the original on 2017-10-17, retrieved 2015-12-02.
- Amnesty International (May 2011). Denmark: human rights violations and concerns in the context of counter-terrorism, immigration-detention, forcible return of rejected asylum-seekers and violence against women (PDF). Amnesty International. Amnesty International submission to the UN Universal Periodic Review, May 2011.
- "Ny voldtægtslovgivning er en sejr for danske kvinders retssikkerhed". Amnesty.dk - Amnesty International. Retrieved 14 June 2015.
- "Slut med "konerabat" for voldtægt". www.b.dk. 3 June 2013. Retrieved 14 June 2015.
- "Straffeloven - Bekendtgørelse af straffeloven". Retsinformation.dk. Retrieved 14 June 2015.
- United Nations. Report of the Economic and Social Council for 1997. A/52/3.18 September 1997, p 28.
- "Switzerland profile - Timeline". Bbc.com. 28 December 2016. Retrieved 14 November 2017.
- Markus G. Jud, Lucerne, Switzerland. "The Long Way to Women's Right to Vote in Switzerland: a Chronology". History-switzerland.geschichte-schweiz.ch. Retrieved 14 November 2017.
- Reuters (26 January 1983). "AROUND THE WORLD; Greece Approves Family Law Changes". The New York Times. Retrieved 14 November 2017.
- Demos, Vasilikie. (2007) "The Intersection of Gender, Class and Nationality and the Agency of Kytherian Greek Women." Paper presented at the annual meeting of the American Sociological Association. August 11.
- "Women Business and the Law 2014 Key Findings" (PDF). Archived from the original (PDF) on 2014-08-24. Retrieved 2014-08-25.
- "Modern and Contemporary France: Women in France" (PDF). Archived from the original (PDF) on 2016-03-04. Retrieved 2016-04-03.
- Ferrand, Frédérique. "National Report: France" (PDF). Parental Responsibilities. Commission on European Family Law.
- Raja., Rhouni (2010-01-01). Secular and Islamic feminist critiques in the work of Fatima Mernissi. Brill. p. 52. ISBN 9789004176164. OCLC 826863738.
- Holtman, Robert B. (1979). The Napoleonic revolution. Louisiana State University Press. ISBN 9780807104873. OCLC 492154251.
- Rheault, Magali; Mogahed, Dalia (May 28, 2008). "Common Ground for Europeans and Muslims Among Them". Gallup Poll. Gallup, Inc.
- "Declaration on the Elimination of Violence against Women". United Nations General Assembly. Retrieved 14 June 2015.
- "Gender equality". United Nations Population Fund. Retrieved 14 June 2015.
- "Explanatory Report to the Council of Europe Convention on preventing and combating violence against women and domestic violence (CETS No. 210)". Conventions.coe.int. Retrieved 14 June 2015.
- "Case of Opuz v. Turkey". European Court of Human Rights. September 2009. Retrieved 14 June 2015.
- Council of Europe. "Convention on preventing and combating violence against women and domestic violence (CETS No. 210)". Conventions.coe.int. Retrieved 14 June 2015.
- Toward a Feminist Theory of the State, by Catharine A. MacKinnon, pp 174
- "World report on violence and health: summary" (PDF). World Health Organization. 2002.
- "Maternal health: UNFPA – United Nations Population Fund". Unfpa.org. Retrieved 14 June 2015.
- "Sexual and reproductive rights under threat worldwide". Amnesty International. March 6, 2014.
- Pillay, Navi (May 15, 2012). "Valuing Women as Autonomous Beings: Women's sexual and reproductive rights" (PDF). University of Pretoria, Centre for Human Rights.
- "Giving Special Attention to Girls and Adolescents". Unfpa.org. Retrieved 14 June 2015.
- "High Commissioner speaks out against domestic violence and "honour killing" on occasion of International Women's Day"". Ohchr.org. Retrieved 14 November 2017.
- "Violence Against Women Information". Amnesty International USA.
- "Gender equality in Arab world critical for progress and prosperity, UN report warns". UN News Service Section. 2006-12-07. Retrieved 2017-03-28.
- "Germany's persistently low birthrate gets marginal boost". Deutsche Welle. 18 August 2011. Retrieved 14 November 2017.
- "it is in the Netherlands (17.6%) and in Ireland (13.6%) that we see the smallest numbers of married women working and the least acceptance of this phenomenon by the general public". (p. 14);
- "Martindale Center – Lehigh Business" (PDF). Martindale.cc.lehigh.edu. Retrieved 14 November 2017.
- "Archived copy" (PDF). Archived from the original (PDF) on March 4, 2016. Retrieved April 4, 2016.CS1 maint: archived copy as title (link)
- Watts, Joseph (11 February 2014). "Women make up two-thirds of workers on long-term sick leave". London Evening Standard. p. 10.
- "Professional, private and family life – European Commission". Ec.europa.eu. Retrieved 14 November 2017.
- "US asylum rule on forced abortion". News.bbc.co.uk. Retrieved 14 November 2017.
- "Yemen's Dark Side: Discrimination and violence against women and girls" (PDF). 2.ohchr.org. Retrieved 14 November 2017.
- Papanek, Hanna (1973). "Purdah: Separate Worlds and Symbolic Shelter". Comparative Studies in Society and History. 15 (3): 289–325. doi:10.1017/S001041750000712X.
- United Nations High Commissioner for Refugees. "Afghan Girls Suffer for Sins of Male Relatives". Refworld.
- "Vani: Pain of child marriage in our society". News Pakistan. 2011-10-26.
- Nasrullah, M.; Muazzam, S.; Bhutta, Z. A.; Raj, A. (2013). "Girl Child Marriage and Its Effect on Fertility in Pakistan: Findings from Pakistan Demographic and Health Survey, 2006–2007". Maternal and Child Health Journal: 1–10.
- Vani a social evil Anwar Hashmi and Rifat Koukab, The Fact (Pakistan), (July 2004)
- "Gender balance in decision-making positions". Ec.europa.eu. Retrieved 14 November 2017.
- "The Long Way to Women's Right to Vote in Switzerland: a Chronology". History-switzerland.geschichte-schweiz.ch. Retrieved 2011-01-08.
- "United Nations press release of a meeting of the Committee on the Elimination of Discrimination against Women (CEDAW), issued on 14 January 2003". Un.org. Retrieved 2011-09-02.
- "The Subjection of Women by John Stuart Mill". Marxists.org. Retrieved 14 June 2015.
- "Married Women's Status Bill, 1956—Second Stage: Minister for Justice (Mr. Everett)". Oireachtas. 16 January 1957.
- "Archived copy" (PDF). Archived from the original (PDF) on 2016-03-04. Retrieved 2016-04-03.CS1 maint: archived copy as title (link)
- "National Report: France" (PDF). Ceflonline.net. Retrieved 14 November 2017.
- Women and Politics in Contemporary Ireland: From the Margins to the Mainstream, by Yvonne Galligan, pp.90
- Demos, Vasilikie. (2007) "The Intersection of Gender, Class and Nationality and the Agency of Kytherian Greek Women." Paper presented at the annual meeting of the American Sociological Association. August 11.
- Long, Heather (2013-10-06). "Should women change their names after marriage? Ask a Greek woman". The Guardian.
- "Archived copy" (PDF). Archived from the original (PDF) on October 21, 2012. Retrieved October 21, 2012.CS1 maint: archived copy as title (link)
- "Statement by the United Nations Working Group on discrimination against women in law and in practice". Archived from the original on 2015-03-06.
- "Decriminalization of adultery and defenses". Endvawnow.org. Retrieved 14 June 2015.
This article incorporates text from a free content work licensed under CC BY-SA 3.0 IGO. License statement: World Trends in Freedom of Expression and Media Development Global Report 2017/2018, p. 202, University of Oxford, UNESCO.
- LeMoyne, Roger (2011). "Promoting Gender Equality: An Equity-based Approach to Programming" (PDF). Operational Guidance Overview in Brief. UNICEF. Retrieved 2011-01-28.
- "Gender equality". United Nations Population Fund. UNFPA. Archived from the original on 20 May 2019. Retrieved 14 June 2015.
- Riane Eisler (2007). The Real Wealth of Nations: Creating a Caring Economics. p. 72.
- de Pizan, Christine, "From The Book of the City of Ladies (1404)", Available Means, University of Pittsburgh Press, pp. 33–42, ISBN 978-0-8229-7975-3, retrieved 2020-05-30
- Evans, Frederick William (1859). Shakers: Compendium of the Origin, History, Principles, Rules and Regulations, Government, and Doctrines of the United Society of Believers in Christ's Second Appearing. New York: D. Appleton & Co. p. 34.
- Glendyne R. Wergland, Sisters in the Faith: Shaker Women and Equality of the Sexes (Amherst: University of Massachusetts Press, 2011).
- Wendy R. Benningfield, Appeal of the Sisterhood: The Shakers and the Woman's Rights Movement (University of Kentucky Lexington doctoral dissertation, 2004), p. 73.
- United Nations High Commissioner for Refugees. "Vienna Declaration and Programme of Action". United Nations High Commissioner for Refugees. Retrieved 14 June 2015.
- Organization of American States (August 2009). "Follow-up Mechanism to the Belém do Pará Convention (MESECVI): About the Belém do Pará Convention". Organization of American States. Retrieved 14 June 2015.
- Directive 2002/73/EC of the European Parliament and of the Council of European Communities (PDF). EUR-Lex Access to European Union law. 9 February 1976.
- The Convention of Belém do Pará and the Istanbul Convention: a response to violence against women worldwide (PDF). Organization of American States, Council of Europe, Permanent Mission of France to the United Nations and Permanent Mission of Argentina to the United Nations. March 2014. CSW58 side event flyer 2014.
- Council of Europe, Committee of Ministers, CM document (CM). "Committee of Ministers - Gender Equality Commission (GEC) - Gender Equality Strategy 2014-2017 [1183 meeting]". Wcd.coe.int. Retrieved 14 June 2015.
- Zainulbhai, Hani (2016-03-08). "Strong global support for gender equality, especially among women". Pew Research. Retrieved 2016-08-12.
- Coulombeau, Sophie (1 November 2014). "Why should women change their names on getting married?". BBC News. BBC. Retrieved 14 June 2015.
- Featherstone, Brid; Rivett, Mark; Scourfield, Jonathan (2007). Working with men in health and social care. pp. 27. ISBN 9781412918503.
- Htun, Mala; Weldon, S. Laurel (2007). "When and why do governments promote women's rights? Toward a comparative politics of states and sex equality". Delivery at the Conference Toward a Comparative Politics of Gender: Advancing the Discipline Along Interdisciplinary Boundaries, Case Western Reserve University, Cleveland, Ohio, October. Work in progress pdf. Paper prepared for delivery at the American Political Science Association, Annual Meeting, Chicago, 29 August - 2 September 2007.
- Jordan, Tim (2002). Social Change (Sociology and society). Blackwell. ISBN 978-0-631-23311-4.
- "Universal Declaration of Human Rights" (PDF). Wwda.org. United Nations. December 16, 1948. Retrieved October 31, 2016.
- World Bank (September 2006). "Gender Equality as Smart Economics: A World Bank Group Gender Action Plan (Fiscal years 2007–10)" (PDF).
- United Nations Millennium Campaign (2008). "Goal #3 Gender Equity". United Nations Millennium Campaign. Retrieved 2008-06-01.
- Sheila, Jeffreys (2012-01-01). Man's dominion : religion and the eclipse of women's rights in world politics. Routledge. p. 94. ISBN 9780415596732. OCLC 966913723.
- Lombardo, Emanuela (1 May 2003). "EU Gender Policy: Trapped in the 'Wollstonecraft Dilemma'?". European Journal of Women's Studies. 10 (2): 159–180. doi:10.1177/1350506803010002003.
- Lombardo, Emanuela; Jalušiè, Vlasta; Maloutas, Maro Pantelidou; Sauer, Birgit (2007). "III. Taming the Male Sovereign? Framing Gender Inequality in Politics in the European Union and the Member States". In Verloo, Mieke (ed.). Multiple meanings of gender equality : a critical frame analysis of gender policies in Europe. New York: Central European University Press Budapest. pp. 79–108. ISBN 9786155211393. OCLC 647686058.
- Montoya, Celeste; Rolandsen Agustín, Lise (1 December 2013). "The Othering of Domestic Violence: The EU and Cultural Framings of Violence against Women". Soc Polit. 20 (4): 534–557. doi:10.1093/sp/jxt020.
- Alison, Stone (2008). An introduction to feminist philosophy. Polity Press. pp. 209–211. ISBN 9780745638836. OCLC 316143234.
- Schreir, Sally, ed. (1988). Women's movements of the world : an international directory and reference guide. Cartermill International. p. 254. ISBN 9780582009882. OCLC 246811744.
- Mayell, Hillary (February 12, 2002). "Thousands of Women Killed for Family "Honor"". National Geographic News. National Geographic Society. Retrieved 14 June 2015.
- "Non! Nein! No! A Country That Wouldn't Let Women Vote Till 1971". News.nationalgeographic.com. 26 August 2016. Retrieved 14 November 2017.
- "Swiss suffragettes were still fighting for the right to vote in 1971". Independent.co.uk. 26 September 2015. Retrieved 14 November 2017.
- Squires, Nick (21 March 2017). "Italian TV programme axed after portraying Eastern European women as submissive sex objects". The Telegraph. Retrieved 14 November 2017.
- "Women in business 2015 results". Grant Thornton International Ltd. Home. Retrieved 14 November 2017.
- Fiscutean, Andrada. "Women in tech: Why Bulgaria and Romania are leading in software engineering - ZDNet". Zdnet.com. Retrieved 14 November 2017.
- Transmediterranean: Diasporas, Histories, Geopolitical Spaces, edited by Joseph Pugliese pg.60-61
- Magazine, Contexts. "What Gender Is Science? - Contexts". contexts.org. Retrieved 14 November 2017.
- Raday, F. (30 March 2012). "Gender and democratic citizenship: the impact of CEDAW". International Journal of Constitutional Law. 10 (2): 512–530. doi:10.1093/icon/mor068.
- EU Non-Discrimination Law in the Courts: Approaches to Sex and Sexualities, Discrimination in the EU law, by Jule Mulder, pg 35-39
- Philosophy Matters (9 January 2017). "Why I Am A Feminist : An Interview with Simone de Beauvoir (1975)". YouTube. Retrieved 14 November 2017.
- "The European Union's new Gender Action Plan 2016–2020: gender equality and women's empowerment in external relations". odi.org.
- "Strategy for Gender Equality in Kazakhstan 2006–2016" (PDF). NDI.org.
- "Engaging Men and Boys: A Brief Summary of UNFPA Experience and Lessons Learned". UNFPA: United Nations Population Fund. 2013. Retrieved 2017-03-28.
- "WHO: World Health Organization". Who.int. Retrieved 14 June 2015.
- "WHO: World Health Organization". Who.int. Retrieved 14 June 2015.
- "Female genital mutilation".
- "Gender equality could help men in Europe live longer: report". Euronews. September 20, 2018.
- "Countries where men hold the power are really bad for men's health". Quartz. September 17, 2018.
- "Gender Inequality Is Bad for Men's Health, Report Says". Global Citizen. September 18, 2018.
- "Men's health and well-being in the WHO European Region". WHO. 2019-06-06.
- Bachman, Ronet (January 1994). Violence Against Women: A National Crime Victimization Survey Report (PDF)(Report). U.S. Department of Justice.
- Hellemans, Sabine; Loeys, Tom; Buysse, Ann; De Smet, Olivia (1 November 2015). "Prevalence and Impact of Intimate Partner Violence (IPV) Among an Ethnic Minority Population". Journal of Interpersonal Violence. 30 (19): 3389–3418. doi:10.1177/0886260514563830. hdl:1854/LU-5815751. PMID 25519236.
- "Femicide: A Global Problem" (PDF). Small Arms Survey. Research Notes: Armed Violence. February 2012.
- "Supplement to the Handbook for Legislation on Violence Against Women: Harmful Practices Against Women" (PDF). UN Women. 2012.
- "Many Voices One Message: Stop Violence Against Women in PNG" (PDF). Activist Toolkit, Amnesty International. 2009–2010.
- Sex and Reason, by Richard A. Posner, page 94.
- "Ethics: Honour crimes". Bbc.co.uk. Retrieved 14 June 2015.
- Harter, Pascale (2011-06-14). "Libya rape victims 'face honour killings'". BBC News. Retrieved 14 June 2015.
- "Rape and Sexual Violence: Human rights law and standards in the international criminal court" (PDF). Amnesty International. 1 March 2011. Retrieved 14 June 2015.
- "Hungary: Cries Unheard: The Failure To Protect Women From Rape And Sexual Violence In The Home" (PDF). Amnesty International. 2007. Retrieved 14 June 2015.
- Rodríguez-Madera, Sheilla L.; Padilla, Mark; Varas-Díaz, Nelson; Neilands, Torsten; Guzzi, Ana C. Vasques; Florenciani, Ericka J.; Ramos-Pibernus, Alíxida (2017-01-28). "Experiences of Violence Among Transgender Women in Puerto Rico: An Underestimated Problem". Journal of Homosexuality. 64 (2): 209–217. doi:10.1080/00918369.2016.1174026. ISSN 0091-8369. PMC 5546874. PMID 27054395.
- Campaign, Human Rights. "A National Epidemic: Fatal Anti-Transgender Violence in America". Human Rights Campaign. Retrieved 2019-02-25.
- Country Comparison: Maternal Mortality Rate in The CIA World Factbook.
- Hunt, Paul; Mezquita de Bueno, Julia (2010). Reducing Maternal Mortality: The contribution of the right to the highest attainable standard of health (PDF). United Nations Population Fund: University of Essex.
- Duncan, Stephanie Kirchgaessner Pamela; Nardelli, Alberto; Robineau, Delphine (11 March 2016). "Seven in 10 Italian gynaecologists refuse to carry out abortions". The Guardian. Retrieved 14 November 2017.
- "Doctors' Refusal to Perform Abortions Divides Croatia". Balkan Insight. 2017-02-14. Retrieved 14 November 2017.
- "Family planning: UNFPA – United Nations Population Fund". Unfpa.org. Retrieved 14 November 2017.
- Natalae Anderson (September 22, 2010). "Documentation Center of Cambodia, Memorandum: Charging Forced Marriage as a Crime Against Humanity," (PDF). D.dccam.org. Retrieved 14 November 2017.
- "Mass sterilisation scandal shocks Peru". News.bbc.co.uk. 24 July 2002. Retrieved 14 November 2017.
- "Impunity for violence against women is a global concern". Ohchr.org. Retrieved 14 November 2017.
- "Femicide and Impunity in Mexico: A context of structural and generalized violence" (PDF). 2.ohchr.org. Retrieved 14 November 2017.
- "Femicide in Latin America". Unwomen.org. Retrieved 14 June 2015.
- "Central America: Femicides and Gender-Based Violence". Cgrs.uchastings.edu. Retrieved 14 June 2015.
- "Progress of the World's Women 2015–2016". My Favorite News. Retrieved 14 June 2015.
- "Prevalence of FGM/C". UNICEF. 2014-07-22. Archived from the original on 15 July 2015. Retrieved 18 August 2014.
- "National Gender Based Violence & Health Programme". Gbv.scot.nhs.uk. Retrieved 14 June 2015.
- "Fact Sheet No.23, Harmful Traditional Practices Affecting the Health of Women and Children" (PDF). Ohchr.org. Retrieved 14 November 2017.
- "CASTE DISCRIMINATION AGAINST DALITS OR SO-CALLED UNTOUCHABLES IN INDIA" (PDF). 2.ohchr.org. Retrieved 14 November 2017.
- "Biggest caste survey: One in four Indians admit to practising untouchability". The Indian Express. 29 November 2014. Retrieved 14 June 2015.
- Backshall, Steve (6 January 2008). "Bitten by the Amazon". The Sunday Times. London. Retrieved 13 July 2013.
- Newman Wadesango; Symphorosa Rembe; Owence Chabaya. "Violation of Women's Rights by Harmful Traditional Practices" (PDF). Krepublishers.com. Retrieved 14 November 2017.
- "The impact of harmful traditional practices on the girl child" (PDF). Un.org. Retrieved 14 November 2017.
- "Breast Ironing... A Harmful Practice That Has Been Silenced For Too Long" (PDF). Ohchr.org. Retrieved 14 November 2017.
- "Exchange on HIV/AIDS, Sexuality and Gender". 2008. Archived from the original on 19 August 2014. Retrieved 14 November 2017.
- "Female genital mutilation: UNFPA – United Nations Population Fund". Unfpa.org. Retrieved 14 November 2017.
- "UNFPA-UNICEF Joint Programme on Female Genital Mutilation/Cutting: Accelerating Change". Unfpa.org. Retrieved 4 April 2017.
- "Child marriage". UNICEF. 22 October 2014. Retrieved 14 June 2015.
- "Child Marriage". Human Rights Watch. Retrieved 14 June 2015.
- "Resolution adopted by the General Assembly : 69/XX. Child, Early and Forced Marriage" (PDF). Who.int. Retrieved 14 November 2017.
- "End Child Marriage". UNFPA – United Nations Population Fund. Retrieved 14 June 2015.
- "Women's Fears and Men's Anxieties : The Impact of Family Planning on Gender Relations in Northern Ghana" (PDF). Popcouncil.org. Retrieved 14 November 2017.
- "Equality Now (2007) Protecting the girl child: Using the law to end child, early and forced marriage and related human rights violations" (PDF). Equalitynow.org. Retrieved 14 November 2017.
- Lelieveld, M. (2011). "Child protection in the Somali region of Ethiopia. A report for the BRIDGES project Piloting the delivery of quality education services in the developing regional states of Ethiopia" (PDF). Savethechildren.org.uk. Archived from the original (PDF) on 24 September 2015. Retrieved 17 April 2015.
- Stange, Mary Zeiss; Oyster, Carol K.; Sloan, Jane E. (2011). Encyclopedia of Women in Today's World, Volume 1. SAGE. p. 496. ISBN 9781412976855.
- "The situation in the EU". European Commission. Retrieved July 12, 2011.
- "What we do: Economic empowerment: UN Women – Headquarters". headQuarters. Retrieved 14 June 2015.
- "Roadmap for Promoting Women's Economic Empowerment". Womeneconroadmap.org. Retrieved 14 November 2017.
- Harvard Law Review Association (May 1996), Civil rights – gender discrimination: California prohibits gender-based pricing
- Duesterhaus, Megan; Grauerholz, Liz; Weichsel, Rebecca; Guittar, Nicholas A. (2011). "The Cost of Doing Femininity: Gendered Disparities in Pricing of Personal Care Products and Services". Gender Issues. 28 (4): 175–191. doi:10.1007/s12147-011-9106-3.
- Bjørnholt, M. (2014). "Changing men, changing times; fathers and sons from an experimental gender equality study" (PDF). The Sociological Review. 62 (2): 295–315. doi:10.1111/1467-954X.12156.
- Vachon, Marc and Amy (2010). Equally Shared Parenting. United States: Perigree Trade. ISBN 978-0-399-53651-9.; Deutsch, Francine (April 2000). Halving It All: How Equally Shared Parenting Works. Harvard University Press. ISBN 978-0-674-00209-8.; Schwartz, Pepper (September 1995). Love Between Equals: How Peer Marriage Really Works. Touchstone. ISBN 978-0-02-874061-4.
- Nellie Bowles, September 23, 2017, The New York Times, Push for Gender Equality in Tech? Some Men Say It’s Gone Too Far: After revelations of harassment and bias in Silicon Valley, a backlash is growing against the women in tech movement., Retrieved June 17, 2018, "...Silicon Valley has for years accommodated a fringe element of men who say women are ruining the tech world.... backlash against the women in technology movement ... surveys show there is no denying the travails women face in the male-dominated industry ..."
- Thacher Schmid, March 12, 2018, Willamette Week, While Startups Increasingly Move to Portland, a New York Times Reporter Warns That There’s a “Gender Problem” in Tech: Nellie Bowles will be in Portland next month to speak at TechfestNW on the inclusivity, or lack thereof, in tech culture., Retrieved June 17, 2018, "...Bowles has written a number of groundbreaking stories on the "gender problem" in tech, including a profile of a "contrarian" fringe element of men leading a backlash against women asserting their rights...."
- "Modern workplaces, maternity rights, and gender equality". Fawcett Society. November 2012. Archived (PDF) from the original on 2016-05-09. Retrieved 2016-04-26.
- For example, "Law n. 202/2002, Art. 10 (4) and Art. 37". Romanian Law Online (in Romanian).
- "Details of indicators for labour exploitation" (PDF). Ilo.org. Retrieved 14 November 2017.
- "HRW calls on Indonesia to scrap 'virginity tests' for female police". Dw.com. Retrieved 14 November 2017.
- "THE CONVENTION ON THE ELIMINATION OF ALL FORMS OF DISCRIMINATION AGAINST WOMEN (CEDAW)" (PDF). Igfm-muenchen.de. Retrieved 14 November 2017.
- Liberating Women's History:Theoretical and Critical Essays, edited by Berenice A. Carroll, pp. 161–2
- "Why can't women drive in Saudi Arabia?". BBC. 27 October 2013. Retrieved 14 June 2015.
- "CEDAW 29th Session 30 June to 25 July 2003". Archived from the original on April 1, 2011. Retrieved 14 June 2015.
- Ahsan, I. (2009). PANCHAYATS AND JIRGAS (LOK ADALATS): Alternative Dispute Resolution System in Pakistan. Strengthening Governance Through Access To Justice
- "Global issues affecting women and girls". National Union of Teachers. Archived from the original on 29 April 2015. Retrieved 14 June 2015.
- "Global Campaign For Education United States Chapter". Retrieved 14 June 2015.
- "Progress and Obstacles to Girls' Education in Africa". Plan International. 16 July 2015.
- "Attacks against girls' education occurring with "increasing regularity" – UN human rights report". Ohchr.org. 9 February 2015. Retrieved 2017-03-27.
- "Gender equality". United Nations Population Fund. Archived from the original on 20 May 2019. Retrieved 17 January 2015.
- "Women in Parliaments: World and Regional Averages". Ipu.org. Retrieved 14 November 2017.
- "A/RES/66/130 Women and Political Participation". United Nations. 2012-03-19. Archived from the original on 3 March 2018. Retrieved 14 November 2017.
- "Gender Equality Strategy 2014-2017". Council of Europe. Retrieved 14 November 2017.
- Inter-Parliamentary Union (1 August 2015). "Women in national parliaments". Retrieved 31 August 2015.
- "Equality in family relations: recognizing women's rights to property". Ohchr.org. Retrieved 14 November 2017.
- "Women's land rights are human rights, says new UN report". UN Women. 11 November 2013.
- "RESOLUTION (78) 37 ON EQUALITY OF SPOUSES IN CIVIL LAW". Council of Europe. 27 September 1978. Archived from the original on 21 January 2016.
- Times, Special to the New York (23 September 1985). "SWISS GRANT WOMEN EQUAL MARRIAGE RIGHTS". The New York Times.
- "Switzerland Profile: Timeline". BBC News. 21 December 2017.
- Markus G. Jud, Lucerne, Switzerland. "The Long Way to Women's Right to Vote in Switzerland: a Chronology". History-switzerland.geschichte-schweiz.ch. Retrieved 14 November 2017.
- The Economics of Imperfect Labor Markets: Second Edition, by Tito Boeri, Jan van Ours, pp. 105
- "Dutch gender and LGBT-equality policy 2013-2016". Archived from the original on 6 September 2017.
- "2015 Review Report of the Netherlands Government in the context of the twentieth anniversary of the Fourth World Conference on Women and the adoption of the Beijing Declaration and Platform for Action" (PDF). Unece.org. Retrieved 14 November 2017.
- "Kirchberg v. Feenstra :: 450 U.S. 455 (1981) :: Justia U.S. Supreme Court Center". Justia Law.
- "The History of Passports in Australia". Archived from the original on 14 June 2006. Retrieved 14 November 2017.
- "Women's Lives Women's Rights: Campaigning for maternal health and sexual and reproductive rights" (PDF). Amnesty.ca. Retrieved 14 November 2017.
- "Left without a choice : Barriers to reproductive health in Indonesia" (PDF). 2.ohchr.org. Retrieved 14 November 2017.
- Rao, D. Bhaskara (2004). Education For Women. Discovery Publishing House. p. 161. ISBN 9788171418732.
- Buhle Angelo Dube (February 2008). "The Law and Legal Research in Lesotho". Archived from the original on 2010-06-20. Retrieved 2010-07-04.
- "Court in UAE says beating wife, child OK if no marks are left". Edition.cnn.com. Retrieved 14 June 2015.
- Nordland, Rod (2016-12-07). "Crackdown in Turkey Threatens a Haven of Gender Equality Built by Kurds". The New York Times. ISSN 0362-4331. Retrieved 2018-01-23.
- Mogelson, Luke (2017-10-30). "Dark Victory in Raqqa". The New Yorker. ISSN 0028-792X. Retrieved 2018-01-23.
- Shaw, Susan M.; Lee, Janet. Women's Voices, Feminist Visions. p. 450: "Women are expected to want to be mothers".
- "How Does Gender Bias Really Affect Women in the Workplace?". 2016-03-24. Retrieved 2016-09-23.
- Durbin, Susan (2010). "Gender inequality in employment: Editors' introduction". Equality, Diversity and Inclusion. 29 (3): 221–238. doi:10.1108/02610151011028831.
- "Women and Girls as Subjects of Media's Attention and Advertisement Campaigns : The Situation in Europe, Best Practices and Legislations" (PDF). Europarl.europa.eu. Retrieved 14 November 2017.
- Acevedo et al. (2010). 'A Content Analysis of the Roles Portrayed by Women in Commercials: 1973–2008', Revista Brasileira de Marketing, Vol. 9. Universidade Nove de Julho, São Paulo.
- "The Myriad: Westminster's Interactive Academic Journal". Archived from the original on 2016-04-28.
- Gretchen Kelly (November 23, 2015). "The Thing All Women Do That You Don't Know About". Huffington Post. Retrieved 14 November 2017.
- Asquith, Christina (2016-03-07). "Why Don't Female Journalists Win More Awards?". The Atlantic. Retrieved 2019-08-21.
- Bank, African Development (2019-02-13). "African Development Bank promotes gender equality in the media through 'Women's Rights in Africa' Award". African Development Bank - Building today, a better Africa tomorrow. Retrieved 2019-08-21.
- World Trends in Freedom of Expression and Media Development Global Report 2017/2018. http://www.unesco.org/ulis/cgi-bin/ulis.pl?catno=261065&set=005B2B7D1D_3_314&gp=1&lin=1&ll=1: UNESCO. 2018. p. 202.CS1 maint: location (link)
- "WINning Strategies – Creating Stronger News Media Organizations by Increasing Gender Diversity (2018 update) - WAN-IFRA". www.wan-ifra.org. Retrieved 2019-08-21.
- "Help is available if you or someone you know is a victim of Domestic Violence" (PDF). 2.gov.bc.ca. Retrieved 14 November 2017.
- "Know your rights – get your rights!". Maternityaction.org.uk. 2015-01-14. Retrieved 14 November 2017.
- "Eight Point Agenda for Women's Empowerment and Gender Equality". Undp.org. Archived from the original on 9 May 2017. Retrieved 14 November 2017.
- Assembly, United Nations General. "A/RES/48/104 – Declaration on the Elimination of Violence against Women – UN Documents: Gathering a body of global agreements". Un-documents.net. Retrieved 14 November 2017.
- Khoury, Jack (30 April 2012). "Study: Most Bedouin Victims of Domestic Violence Believe It's a 'Decree From God'". Haaretz. Retrieved 14 November 2017.
- "Hungary : Cries unheard : The failure to protect women from rape and sexual violence in the home" (PDF). Refworld.org. Retrieved 14 November 2017.
- "Hungary law 'fails rape victims'". BBC. 10 May 2007. Retrieved 14 November 2017.
- "Women and health : today's evidence tomorrow's agenda" (PDF). Who.int. Retrieved 14 November 2017.
- Booth, C.; Bennett (2002). "Gender Mainstreaming in the European Union". European Journal of Women's Studies. 9 (4): 430–46. doi:10.1177/13505068020090040401.
- "Definition of Gender Mainstreaming". International Labor Organization. Retrieved 14 June 2015.
- "II. The Origins of Gender Mainstreaming in the EU", Academy of European Law online
- "Gender Mainstreaming". UN Women. Retrieved 14 June 2015.
- "Gender at the Heart of ICPD: The UNFPA Strategic Framework on Gender Mainstreaming and Women's Empowerment". United Nations Population Fund. 2011. Retrieved 14 June 2015.
|Wikimedia Commons has media related to Gender equality.|
- United Nations Rule of Law: Gender Equality, on the relationship between gender equality, the rule of law and the United Nations.
- Women and Gender Equality, the United Nations Internet Gateway on Gender Equality and Empowerment of Women.
- Gender Equality, an overview of the United Nations Development Program's work on Gender Equality.
- Gender issue -Significance in Watershed Management Programmes, Watershedpedia.
- GENDERNET International forum of gender experts working in support of Gender equality. Development Co-operation Directorate of the Organisation for Economic Co-operation and Development (OECD).
- OECD's Gender Initiative, an overview page which also links to wikiGENDER, the Gender equality project of the OECD Development Centre.
- The Local A news collection about Gender equality in Sweden.
- Egalitarian Jewish Services A Discussion Paper.
- End The Gender Pay Gap Project based in Palo Alto, CA.
- Gender Equality Incorporated, an organization that develops capacity in addressing gender and inclusion issues, Canada
- Center for Development and Population Activities (CEDPA)
- Equileap, an organisation aiming to accelerate progress towards gender equality in the workplace |
Practice and learn Prealgebra and Algebra topics!
Walk through step-by-step solutions to see where you made your mistake.
See your stats for every problem type (saved across app runs).
Work out problems without needing to take notes.
Get math help wherever you are.
35 practice concepts
– Find the Greatest Common Denominator
– Fraction Addition
– Fraction Division
– Fraction Multiplication
– Fraction Subtraction
– Reduce Fraction to Lowest Terms
– Distributed Linear
– One Step Linear Equations Using Integers
– Two Step Linear Equations Using Integers
– Multiplying Monomials by Polynomials
– Naming Polynomials by Degree
– Naming Polynomials by Degree and Terms
– Naming Polynomials by Number of Terms
– Single Degree Simplifiable Polynomial
– Factorable Quadratic in Factored Form
– Factorable Quadratic in Standard Form
– Convert from Decimal to Scientific Notation
– Convert from Scientific Notation to Decimal
– English to Variable Expression
– Evaluating Single Variable Linear Expressions
– Finding Prime Numbers
– Prime or Composite?
– Classifying Rational Numbers
– Convert Expanded Form To Exponent
– Convert Exponent to Expanded Form
– Convert Exponent to Words
– Exponentiating Exponents
– Write Base and Exponent
– Expressing Fraction in Higher Terms
– Find Value to Make Fractions Equivalent
– Finding the Least Common Multiple
– Find the Reciprocal
– Identifying Numerator and Denominator
– Order of Operations with Integers
– Order of Operations with Whole Numbers
When NASA began 60 years ago, we had questions about the universe humans had been asking since we first looked up into the night sky. In the six decades since, NASA, along with its international partners and thousands of researchers, has expanded our knowledge of the universe by using a full fleet of telescopes and satellites. From the early probes of the 1950s and 1960s to the great telescopes of the 1990s and 21st century, NASA scientists have been exploring the evolution of the universe from the Big Bang to the present.
Pillars of Creation, Eagle Nebula, a cloud of gas and dust created by an exploding star from which new stars and planets are forming. Image Credit: NASA/ESA/The Hubble Heritage Team (STScI/AURA)
The Great Observatories
NASA astronomers use several kinds of telescopes in space and on the ground. Each observes targets like stars, planets, and galaxies, but captures different wavelengths of light using various techniques to add to our understanding of these cosmic phenomena.
Image Credit: NASA
Hubble Space Telescope
Since it was launched in 1990, Hubble has forever changed our idea of what the universe looks like. It does not travel to stars, planets, or galaxies, but takes pictures of them as it whirls around Earth at about 17,000 mph.
Image Credit: NASA
Chandra X-ray Observatory
The Chandra X-ray Observatory allows scientists from around the world to obtain X-ray images of exotic environments to help understand the structure and evolution of the universe. X-rays are produced when matter is heated to millions of degrees. X-ray telescopes can also trace the hot gas from an exploding star or detect X-rays from matter swirling as close as 90 kilometers from the event horizon of a stellar black hole.
Image Credit: NASA/JPL-Caltech
Spitzer Space Telescope
NASA’s Spitzer Space Telescope, designed to detect primarily heat or infrared radiation, launched in 2003. Spitzer's highly sensitive instruments allow scientists to peer into cosmic regions that are hidden from optical telescopes, including dusty stellar nurseries, the centers of galaxies, and newly forming planetary systems. Spitzer's infrared eyes also allow astronomers to see cooler objects in space, like failed stars (brown dwarfs), exoplanets, giant molecular clouds, and organic molecules that may hold the secret to life on other planets.
Image of the infant universe, 13.7 billion years ago, created from WMAP data, showing differences in temperature that became the “seeds” for galaxies. Image Credit: NASA
The Age of the Universe
The Wilkinson Microwave Anisotropy Probe (WMAP) satellite returned data that allowed astronomers to precisely assess the age of the universe to be 13.77 billion years old and to determine that atoms make up only 4.6 percent of the universe, with the remainder being dark matter and dark energy. Using telescopes like Hubble and Spitzer, scientists also now know how fast the universe is expanding.
These minute temperature variations (depicted here as varying shades of blue and purple) are linked to slight density variations in the early universe. These variations are believed to have given rise to the structures that populate the universe today: clusters of galaxies, as well as vast, empty regions. Image Credit: NASA
How the Universe Began and Evolved
The Cosmic Background Explorer (COBE), launched in 1989, studied the radiation still left from the Big Bang to better understand how the universe formed. In 2006, John Mather of NASA and George Smoot of the University of California shared the Nobel Prize for Physics for confirming the Big Bang theory using COBE data.
Dark Matter
NASA telescopes have helped us better understand dark matter, a mysterious, invisible form of matter that is five times the mass of regular matter. The first direct detection of dark matter was made in 2007 through observations of the Bullet Cluster of galaxies by the Chandra X-ray telescope.
Image Credit: X-ray NASA/CXC/University of Colorado/J. Comerford et al.; Optical: NASA/STScI
Black Holes
Although we can’t “see” black holes, scientists have been able to study them by observing how they interact with the environment around them, using telescopes like Swift, Chandra, and Hubble. In 2017, NASA's Swift telescope mapped the death spiral of a star as it was consumed by a black hole. In 2018, astronomers using Chandra discovered evidence for thousands of black holes located near the center of our Milky Way galaxy.
Image Credit: NASA/ESA/G. Dubner (IAFE, CONICET-University of Buenos Aires) et al.; A. Loll et al.; T. Temim et al.; F. Seward et al.; VLA/NRAO/AUI/NSF; Chandra/CXC; Spitzer/JPL-Caltech; XMM-Newton/ESA; and Hubble/STScI
Image of the Crab Nebula, combining data from several telescopes. The Crab Nebula, the result of a bright supernova explosion seen by Chinese and other astronomers in the year 1054, is 6,500 light-years from Earth.
Image Credit: NASA/ESA/A.V. Filippenko (University of California, Berkeley)/P. Challis (Harvard-Smithsonian Center for Astrophysics), et al.
A Bright Supernova
The explosion of a massive star blazes with the light of 200 million Suns in this NASA Hubble Space Telescope image.
Image Credit: NASA/ESA/CXC/SSC/STScI
Spiral Galaxy M101
Spiral Galaxy M101 viewed from three different NASA telescopes and kinds of light: Spitzer (infrared), Hubble (visible light), and Chandra (X-ray).
A galaxy is a huge collection of gas, dust, and billions of stars and their solar systems, held together by gravity. Some are spiral-shaped like our Milky Way Galaxy; others are smooth and oval shaped. NASA telescopes are helping us learn about how galaxies formed and evolved over time.
Image Credit: NASA/ESA/S. Beckwith (STScI)/HUDF Team
Thousands of Galaxies
Hubble Space Telescope picture showing thousands of galaxies. Even the tiny dots are entire galaxies.
Image Credit: NASA/ESA/M. Mutchler (STScI)
NGC 4302 and NGC 4298
Spiral galaxy pair NGC 4302 and NGC 4298. Astronomers used the Hubble to take a portrait of a stunning pair of spiral galaxies. This starry pair offers a glimpse of what our Milky Way galaxy would look like to an outside observer.
Just 30 years ago, scientists didn't know if there were planets orbiting other stars besides our own Sun. Now, scientists believe every star likely has at least one exoplanet. They come in a wide variety of sizes, from gas giants larger than Jupiter to small, rocky planets about as big as Earth or Mars. They can be hot enough to boil metal or locked in deep freeze. They can orbit their stars so tightly that a “year” lasts only a few days; they can even orbit two stars at once. Some exoplanets don't orbit around a star, but wander through the galaxy in permanent darkness. NASA's Kepler spacecraft and newly launched Transiting Exoplanet Survey Satellite are helping us find more distant worlds.
Image Credit: NASA Ames/SETI Institute/JPL-Caltech
Kepler-186f, the first rocky exoplanet to be found within the habitable zone—the region around the host star where the temperature is right for liquid water. This planet is also very close in size to Earth. Even though we may not find out what's going on at the surface of this planet anytime soon, it's a strong reminder of why new technologies are being developed that will enable scientists to get a closer look at distant worlds.
Image Credit: NASA/JPL-Caltech
51 Pegasi b
51 Pegasi b. This giant planet, which is about half the mass of Jupiter and orbits its star every four days, was the first confirmed exoplanet around a sun-like star, a discovery that launched a whole new field of exploration.
Image Credit: NASA/JPL-Caltech (artist concept)
Kepler-16b. This planet was Kepler's first discovery of a planet that orbits two stars—what is known as a circumbinary planet.
Artist's concept of TRAPPIST-1. Image Credit: NASA
Using Spitzer, scientists found the largest number of Earth-sized planets yet discovered in the habitable zone of a single star, TRAPPIST-1. This system of seven rocky worlds, all of them with the potential for water on their surface, is an exciting discovery in the search for life on other worlds. Future study of this unique planetary system could reveal conditions suitable for life.
College Physics: Science and Technology
Motion Equations for Constant Acceleration in One Dimension
We might know that the greater the acceleration of, say, a car moving away from a stop sign, the greater the displacement in a given time. But we have not developed a specific equation that relates acceleration and displacement. In this section, we develop some convenient equations for kinematic relationships, starting from the definitions of displacement, velocity, and acceleration already covered.
Notation: t, x, v, a
First, let us make some simplifications in notation. Taking the initial time to be zero, as if time is measured with a stopwatch, is a great simplification. Since elapsed time is $\Delta t = t_f - t_0$, taking $t_0 = 0$ means that $\Delta t = t_f$, the final time on the stopwatch. When initial time is taken to be zero, we use the subscript 0 to denote initial values of position and velocity. That is, $x_0$ is the initial position and $v_0$ is the initial velocity. We put no subscripts on the final values. That is, $t$ is the final time, $x$ is the final position, and $v$ is the final velocity. This gives a simpler expression for elapsed time: now, $\Delta t = t$. It also simplifies the expression for displacement, which is now $\Delta x = x - x_0$. Also, it simplifies the expression for change in velocity, which is now $\Delta v = v - v_0$. To summarize, using the simplified notation, with the initial time taken to be zero,
$$\Delta t = t, \qquad \Delta x = x - x_0, \qquad \Delta v = v - v_0,$$
where the subscript 0 denotes an initial value and the absence of a subscript denotes a final value in whatever motion is under consideration.
We now make the important assumption that acceleration is constant. This assumption allows us to avoid using calculus to find instantaneous acceleration. Since acceleration is constant, the average and instantaneous accelerations are equal. That is,
$$\bar{a} = a = \text{constant},$$
so we use the symbol $a$ for acceleration at all times. Assuming acceleration to be constant does not seriously limit the situations we can study nor degrade the accuracy of our treatment. For one thing, acceleration is constant in a great number of situations. Furthermore, in many other situations we can accurately describe motion by assuming a constant acceleration equal to the average acceleration for that motion. Finally, in motions where acceleration changes drastically, such as a car accelerating to top speed and then braking to a stop, the motion can be considered in separate parts, each of which has its own constant acceleration.
To get our first two new equations, we start with the definition of average velocity:
$$\bar{v} = \frac{\Delta x}{\Delta t}.$$
Substituting the simplified notation for $\Delta x$ and $\Delta t$ yields
$$\bar{v} = \frac{x - x_0}{t}.$$
Solving for $x$ yields
$$x = x_0 + \bar{v} t \quad (\text{constant } a),$$
where the average velocity is
$$\bar{v} = \frac{v_0 + v}{2} \quad (\text{constant } a).$$
The equation $\bar{v} = \frac{v_0 + v}{2}$ reflects the fact that, when acceleration is constant, $\bar{v}$ is just the simple average of the initial and final velocities. For example, if you steadily increase your velocity (that is, with constant acceleration) from 30 to 60 km/h, then your average velocity during this steady increase is 45 km/h. Using the equation $\bar{v} = \frac{v_0 + v}{2}$ to check this, we see that
$$\bar{v} = \frac{v_0 + v}{2} = \frac{30\ \text{km/h} + 60\ \text{km/h}}{2} = 45\ \text{km/h},$$
which seems logical.
A jogger runs down a straight stretch of road with an average velocity of 4.00 m/s for 2.00 min. What is his final position, taking his initial position to be zero?
Draw a sketch.
The final position is given by the equation
$$x = x_0 + \bar{v} t.$$
To find $x$, we identify the values of $x_0$, $\bar{v}$, and $t$ from the statement of the problem and substitute them into the equation.
1. Identify the knowns. $\bar{v} = 4.00\ \text{m/s}$, $\Delta t = 2.00\ \text{min}$, and $x_0 = 0\ \text{m}$.
2. Enter the known values into the equation:
$$x = x_0 + \bar{v} t = 0\ \text{m} + (4.00\ \text{m/s})(120\ \text{s}) = 480\ \text{m}.$$
Velocity and final displacement are both positive, which means they are in the same direction.
The equation $x = x_0 + \bar{v} t$ gives insight into the relationship between displacement, average velocity, and time. It shows, for example, that displacement is a linear function of average velocity. (By linear function, we mean that displacement depends on $\bar{v}$ rather than on $\bar{v}$ raised to some other power, such as $\bar{v}^2$. When graphed, linear functions look like straight lines with a constant slope.) On a car trip, for example, we will get twice as far in a given time if we average 90 km/h than if we average 45 km/h.
We can derive another useful equation by manipulating the definition of acceleration:
$$a = \frac{\Delta v}{\Delta t}.$$
Substituting the simplified notation for $\Delta v$ and $\Delta t$ gives us
$$a = \frac{v - v_0}{t} \quad (\text{constant } a).$$
Solving for $v$ yields
$$v = v_0 + at \quad (\text{constant } a).$$
An airplane lands with an initial velocity of 70.0 m/s and then decelerates at $1.50\ \text{m/s}^2$ for 40.0 s. What is its final velocity?
Draw a sketch. We draw the acceleration vector in the direction opposite the velocity vector because the plane is decelerating.
1. Identify the knowns. $v_0 = 70.0\ \text{m/s}$, $a = -1.50\ \text{m/s}^2$, $t = 40.0\ \text{s}$.
2. Identify the unknown. In this case, it is final velocity, $v$.
3. Determine which equation to use. We can calculate the final velocity using the equation $v = v_0 + at$.
4. Plug in the known values and solve:
$$v = v_0 + at = 70.0\ \text{m/s} + (-1.50\ \text{m/s}^2)(40.0\ \text{s}) = 10.0\ \text{m/s}.$$
The final velocity is much less than the initial velocity, as desired when slowing down, but still positive. With jet engines, reverse thrust could be maintained long enough to stop the plane and start moving it backward. That would be indicated by a negative final velocity, which is not the case here.
In addition to being useful in problem solving, the equation $v = v_0 + at$ gives us insight into the relationships among velocity, acceleration, and time. From it we can see, for example, that
- final velocity depends on how large the acceleration is and how long it lasts
- if the acceleration is zero, then the final velocity equals the initial velocity ($v = v_0$), as expected (i.e., velocity is constant)
- if $a$ is negative, then the final velocity is less than the initial velocity
(All of these observations fit our intuition, and it is always useful to examine basic equations in light of our intuition and experiences to check that they do indeed describe nature accurately.)
An intercontinental ballistic missile (ICBM) has a larger average acceleration than the Space Shuttle and achieves a greater velocity in the first minute or two of flight (actual ICBM burn times are classified—short-burn-time missiles are more difficult for an enemy to destroy). But the Space Shuttle obtains a greater final velocity, so that it can orbit the earth rather than come directly back down as an ICBM does. The Space Shuttle does this by accelerating for a longer time.
We can combine the equations above to find a third equation that allows us to calculate the final position of an object experiencing constant acceleration. We start with
$$v = v_0 + at.$$
Adding $v_0$ to each side of this equation and dividing by 2 gives
$$\frac{v_0 + v}{2} = v_0 + \frac{1}{2}at.$$
Since $\frac{v_0 + v}{2} = \bar{v}$ for constant acceleration, then
$$\bar{v} = v_0 + \frac{1}{2}at.$$
Now we substitute this expression for $\bar{v}$ into the equation for displacement, $x = x_0 + \bar{v} t$, yielding
$$x = x_0 + v_0 t + \frac{1}{2}at^2 \quad (\text{constant } a).$$
Dragsters can achieve average accelerations of $26.0\ \text{m/s}^2$. Suppose such a dragster accelerates from rest at this rate for 5.56 s. How far does it travel in this time?
Draw a sketch.
We are asked to find displacement, which is $x$ if we take $x_0$ to be zero. (Think about it like the starting line of a race. It can be anywhere, but we call it 0 and measure all other positions relative to it.) We can use the equation $x = x_0 + v_0 t + \frac{1}{2}at^2$ once we identify $v_0$, $a$, and $t$ from the statement of the problem.
1. Identify the knowns. Starting from rest means that $v_0 = 0$, $a$ is given as $26.0\ \text{m/s}^2$, and $t$ is given as 5.56 s.
2. Plug the known values into the equation to solve for the unknown $x$:
$$x = x_0 + v_0 t + \frac{1}{2}at^2.$$
Since the initial position and velocity are both zero, this simplifies to
$$x = \frac{1}{2}at^2.$$
Substituting the identified values of $a$ and $t$ gives
$$x = \frac{1}{2}(26.0\ \text{m/s}^2)(5.56\ \text{s})^2 = 402\ \text{m}.$$
If we convert 402 m to miles, we find that the distance covered is very close to one quarter of a mile, the standard distance for drag racing. So the answer is reasonable. This is an impressive displacement in only 5.56 s, but top-notch dragsters can do a quarter mile in even less time than this.
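To make results like this easy to re-check (or to explore other values), here is a minimal Python sketch of the constant-acceleration equations derived above. The function names are illustrative choices, not part of the text.

```python
import math

def final_velocity(v0, a, t):
    """v = v0 + a*t for constant acceleration."""
    return v0 + a * t

def displacement(x0, v0, a, t):
    """x = x0 + v0*t + (1/2)*a*t**2 for constant acceleration."""
    return x0 + v0 * t + 0.5 * a * t ** 2

def speed_from_displacement(v0, a, dx):
    """Positive root of v**2 = v0**2 + 2*a*dx for constant acceleration."""
    return math.sqrt(v0 ** 2 + 2 * a * dx)

# Dragster example: starts from rest, a = 26.0 m/s^2, t = 5.56 s
print(displacement(x0=0.0, v0=0.0, a=26.0, t=5.56))  # about 402 m
```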
What else can we learn by examining the equation $x = x_0 + v_0 t + \frac{1}{2}at^2$? We see that:
- displacement depends on the square of the elapsed time when acceleration is not zero. In [link], the dragster covers only one fourth of the total distance in the first half of the elapsed time
- if acceleration is zero, then the initial velocity equals average velocity ($v_0 = \bar{v}$) and $x = x_0 + v_0 t + \frac{1}{2}at^2$ becomes $x = x_0 + v_0 t$
A fourth useful equation can be obtained from another algebraic manipulation of previous equations.
If we solve $v = v_0 + at$ for $t$, we get
$$t = \frac{v - v_0}{a}.$$
Substituting this and $\bar{v} = \frac{v_0 + v}{2}$ into $x = x_0 + \bar{v} t$, we get
$$v^2 = v_0^2 + 2a(x - x_0) \quad (\text{constant } a).$$
Calculate the final velocity of the dragster in [link] without using information about time.
Draw a sketch.
The equation $v^2 = v_0^2 + 2a(x - x_0)$ is ideally suited to this task because it relates velocities, acceleration, and displacement, and no time information is required.
1. Identify the known values. We know that $v_0 = 0$, since the dragster starts from rest. Then we note that $x - x_0 = 402\ \text{m}$ (this was the answer in [link]). Finally, the average acceleration was given to be $a = 26.0\ \text{m/s}^2$.
2. Plug the knowns into the equation $v^2 = v_0^2 + 2a(x - x_0)$ and solve for $v^2$:
$$v^2 = 0 + 2(26.0\ \text{m/s}^2)(402\ \text{m}) = 2.09 \times 10^4\ \text{m}^2/\text{s}^2.$$
To get $v$, we take the square root:
$$v = \sqrt{2.09 \times 10^4\ \text{m}^2/\text{s}^2} = 145\ \text{m/s}.$$
145 m/s is about 522 km/h or about 324 mi/h, but even this breakneck speed is short of the record for the quarter mile. Also, note that a square root has two values; we took the positive value to indicate a velocity in the same direction as the acceleration.
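As a quick numerical check of this result, one might compute the same square root directly (a sketch in standard Python, not part of the original text):

```python
import math

a = 26.0    # average acceleration, m/s^2
dx = 402.0  # displacement x - x0, m (from the previous example)
v0 = 0.0    # the dragster starts from rest

v_squared = v0 ** 2 + 2 * a * dx   # v^2 = v0^2 + 2*a*(x - x0)
v = math.sqrt(v_squared)           # keep the positive root
print(v_squared, v)                # ~2.09e4 m^2/s^2, ~145 m/s
```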
An examination of the equation $v^2 = v_0^2 + 2a(x - x_0)$ can produce further insights into the general relationships among physical quantities:
- The final velocity depends on how large the acceleration is and the distance over which it acts
- For a fixed deceleration, a car that is going twice as fast doesn’t simply stop in twice the distance—it takes much further to stop. (This is why we have reduced speed zones near schools.)
Putting Equations Together
In the following examples, we further explore one-dimensional motion, but in situations requiring slightly more algebraic manipulation. The examples also give insight into problem-solving techniques. The box below provides easy reference to the equations needed.
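For easy reference, here are the constant-acceleration equations developed in this section (with the initial time taken to be zero, so that $\Delta t = t$):
$$x = x_0 + \bar{v} t, \qquad \bar{v} = \frac{v_0 + v}{2}, \qquad v = v_0 + at, \qquad x = x_0 + v_0 t + \tfrac{1}{2}at^2, \qquad v^2 = v_0^2 + 2a(x - x_0).$$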
On dry concrete, a car can decelerate at a rate of $7.00\ \text{m/s}^2$, whereas on wet concrete it can decelerate at only $5.00\ \text{m/s}^2$. Find the distances necessary to stop a car moving at 30.0 m/s (about 110 km/h) (a) on dry concrete and (b) on wet concrete. (c) Repeat both calculations, finding the displacement from the point where the driver sees a traffic light turn red, taking into account his reaction time of 0.500 s to get his foot on the brake.
Draw a sketch.
In order to determine which equations are best to use, we need to list all of the known values and identify exactly what we need to solve for. We shall do this explicitly in the next several examples, using tables to set them off.
Solution for (a)
1. Identify the knowns and what we want to solve for. We know that $v_0 = 30.0\ \text{m/s}$; $v = 0$; $a = -7.00\ \text{m/s}^2$ ($a$ is negative because it is in a direction opposite to velocity). We take $x_0$ to be 0. We are looking for displacement $\Delta x$, or $x - x_0$.
2. Identify the equation that will help us solve the problem. The best equation to use is
$$v^2 = v_0^2 + 2a(x - x_0).$$
This equation is best because it includes only one unknown, $x$. We know the values of all the other variables in this equation. (There are other equations that would allow us to solve for $x$, but they require us to know the stopping time, $t$, which we do not know. We could use them, but it would entail additional calculations.)
3. Rearrange the equation to solve for $x$:
$$x = x_0 + \frac{v^2 - v_0^2}{2a}.$$
4. Enter known values:
$$x = 0 + \frac{0 - (30.0\ \text{m/s})^2}{2(-7.00\ \text{m/s}^2)} = 64.3\ \text{m on dry concrete}.$$
Solution for (b)
This part can be solved in exactly the same manner as Part A. The only difference is that the deceleration is $-5.00\ \text{m/s}^2$. The result is
$$x_{\text{wet}} = 90.0\ \text{m on wet concrete}.$$
Solution for (c)
Once the driver reacts, the stopping distance is the same as it is in Parts A and B for dry and wet concrete. So to answer this question, we need to calculate how far the car travels during the reaction time, and then add that to the stopping distance. It is reasonable to assume that the velocity remains constant during the driver’s reaction time.
1. Identify the knowns and what we want to solve for. We know that $\bar{v} = 30.0\ \text{m/s}$; $t_{\text{reaction}} = 0.500\ \text{s}$; $a_{\text{reaction}} = 0$. We take $x_{0\text{-reaction}}$ to be 0. We are looking for $x_{\text{reaction}}$.
2. Identify the best equation to use.
$$x = x_0 + \bar{v} t$$
works well because the only unknown value is $x$, which is what we want to solve for.
3. Plug in the knowns to solve the equation:
$$x = 0 + (30.0\ \text{m/s})(0.500\ \text{s}) = 15.0\ \text{m}.$$
This means the car travels 15.0 m while the driver reacts, making the total displacements in the two cases of dry and wet concrete 15.0 m greater than if he reacted instantly.
4. Add the displacement during the reaction time to the displacement when braking.
- 64.3 m + 15.0 m = 79.3 m when dry
- 90.0 m + 15.0 m = 105 m when wet
The displacements found in this example seem reasonable for stopping a fast-moving car. It should take longer to stop a car on wet rather than dry pavement. It is interesting that reaction time adds significantly to the displacements. But more important is the general approach to solving problems. We identify the knowns and the quantities to be determined and then find an appropriate equation. There is often more than one way to solve a problem. The various parts of this example can in fact be solved by other methods, but the solutions presented above are the shortest.
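The same calculation is easy to script so that the speed, deceleration, or reaction time can be varied. The sketch below is one way to do it in Python, with an illustrative function name rather than anything from the text.

```python
def stopping_distance(v0, decel, reaction_time):
    """Reaction-time travel at constant speed plus braking distance
    from 0 = v0**2 - 2*decel*x (deceleration entered as a positive number)."""
    reaction_distance = v0 * reaction_time      # x = v0 * t before braking starts
    braking_distance = v0 ** 2 / (2.0 * decel)  # solve v^2 = v0^2 - 2*decel*x for v = 0
    return reaction_distance + braking_distance

v0 = 30.0  # m/s
print(stopping_distance(v0, decel=7.00, reaction_time=0.500))  # ~79.3 m (dry)
print(stopping_distance(v0, decel=5.00, reaction_time=0.500))  # ~105 m (wet)
```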
Suppose a car merges into freeway traffic on a 200-m-long ramp. If its initial velocity is 10.0 m/s and it accelerates at $2.00\ \text{m/s}^2$, how long does it take to travel the 200 m up the ramp? (Such information might be useful to a traffic engineer.)
Draw a sketch.
We are asked to solve for the time $t$. As before, we identify the known quantities in order to choose a convenient physical relationship (that is, an equation with one unknown, $t$).
1. Identify the knowns and what we want to solve for. We know that $v_0 = 10.0\ \text{m/s}$; $a = 2.00\ \text{m/s}^2$; and $x = 200\ \text{m}$.
2. We need to solve for $t$. Choose the best equation. $x = x_0 + v_0 t + \frac{1}{2}at^2$ works best because the only unknown in the equation is the variable $t$ for which we need to solve.
3. We will need to rearrange the equation to solve for $t$. In this case, it will be easier to plug in the knowns first:
$$200\ \text{m} = 0\ \text{m} + (10.0\ \text{m/s})t + \frac{1}{2}(2.00\ \text{m/s}^2)t^2.$$
4. Simplify the equation. The units of meters (m) cancel because they are in each term. We can get the units of seconds (s) to cancel by taking $t = t\ \text{s}$, where $t$ is the magnitude of time and s is the unit. Doing so leaves
$$200 = 10t + t^2.$$
5. Use the quadratic formula to solve for $t$.
(a) Rearrange the equation to get 0 on one side of the equation:
$$t^2 + 10t - 200 = 0.$$
This is a quadratic equation of the form
$$at^2 + bt + c = 0,$$
where the constants are $a = 1.00$, $b = 10.0$, and $c = -200$.
(b) Its solutions are given by the quadratic formula:
$$t = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.$$
This yields two solutions for $t$, which are
$$t = 10.0 \quad \text{and} \quad t = -20.0.$$
In this case, then, the time is $t = t$ in seconds, or
$$t = 10.0\ \text{s} \quad \text{and} \quad t = -20.0\ \text{s}.$$
A negative value for time is unreasonable, since it would mean that the event happened 20 s before the motion began. We can discard that solution. Thus,
$$t = 10.0\ \text{s}.$$
Whenever an equation contains an unknown squared, there will be two solutions. In some problems both solutions are meaningful, but in others, such as the above, only one solution is reasonable. The 10.0 s answer seems reasonable for a typical freeway on-ramp.
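The quadratic step is also easy to reproduce numerically. The sketch below solves $\tfrac{1}{2}at^2 + v_0 t - \Delta x = 0$ for the on-ramp example and keeps only the non-negative root; the helper name is an illustrative choice, not from the text.

```python
import math

def time_to_travel(dx, v0, a):
    """Solve (1/2)*a*t**2 + v0*t - dx = 0 for t; return the non-negative root."""
    A, B, C = 0.5 * a, v0, -dx
    disc = B ** 2 - 4 * A * C
    roots = ((-B + math.sqrt(disc)) / (2 * A), (-B - math.sqrt(disc)) / (2 * A))
    return max(r for r in roots if r >= 0)  # discard the unphysical negative time

print(time_to_travel(dx=200.0, v0=10.0, a=2.00))  # 10.0 s
```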
With the basics of kinematics established, we can go on to many other interesting examples and applications. In the process of developing kinematics, we have also glimpsed a general approach to problem solving that produces both correct answers and insights into physical relationships. Problem-Solving Basics discusses problem-solving basics and outlines an approach that will help you succeed in this invaluable task.
We have been using SI units of meters per second squared to describe some examples of acceleration or deceleration of cars, runners, and trains. To achieve a better feel for these numbers, one can measure the braking deceleration of a car doing a slow (and safe) stop. Recall that, for average acceleration, $\bar{a} = \Delta v / \Delta t$. While traveling in a car, slowly apply the brakes as you come up to a stop sign. Have a passenger note the initial speed in miles per hour and the time taken (in seconds) to stop. From this, calculate the deceleration in miles per hour per second. Convert this to meters per second squared and compare with other decelerations mentioned in this chapter. Calculate the distance traveled in braking.
A manned rocket accelerates at a rate of $20\ \text{m/s}^2$ during launch. How long does it take the rocket to reach a velocity of 400 m/s?
To answer this, choose an equation that allows you to solve for time $t$, given only $a$, $v_0$, and $v$.
Rearrange $v = v_0 + at$ to solve for $t$:
$$t = \frac{v - v_0}{a}.$$
- To simplify calculations we take acceleration to be constant, so that $\bar{a} = a = \text{constant}$ at all times.
- We also take initial time to be zero.
- Initial position and velocity are given a subscript 0; final values have no subscript. Thus, $\Delta t = t$, $\Delta x = x - x_0$, and $\Delta v = v - v_0$.
- The following kinematic equations for motion with constant $a$ are useful:
$$x = x_0 + \bar{v} t,$$
$$\bar{v} = \frac{v_0 + v}{2},$$
$$v = v_0 + at,$$
$$x = x_0 + v_0 t + \frac{1}{2}at^2,$$
$$v^2 = v_0^2 + 2a(x - x_0).$$
- In vertical motion, $y$ is substituted for $x$.
Problems & Exercises
An Olympic-class sprinter starts a race with an acceleration of . (a) What is her speed 2.40 s later? (b) Sketch a graph of her position vs. time for this period.
A well-thrown ball is caught in a well-padded mitt. If the deceleration of the ball is $2.10 \times 10^4\ \text{m/s}^2$, and 1.85 ms elapses from the time the ball first touches the mitt until it stops, what was the initial velocity of the ball?
38.9 m/s (about 87 miles per hour)
A bullet in a gun is accelerated from the firing chamber to the end of the barrel at an average rate of for . What is its muzzle velocity (that is, its final velocity)?
(a) A light-rail commuter train accelerates at a rate of . How long does it take to reach its top speed of 80.0 km/h, starting from rest? (b) The same train ordinarily decelerates at a rate of . How long does it take to come to a stop from its top speed? (c) In emergencies the train can decelerate more rapidly, coming to rest from 80.0 km/h in 8.30 s. What is its emergency deceleration in $\text{m/s}^2$?
While entering a freeway, a car accelerates from rest at a rate of for 12.0 s. (a) Draw a sketch of the situation. (b) List the knowns in this problem. (c) How far does the car travel in those 12.0 s? To solve this part, first identify the unknown, and then discuss how you chose the appropriate equation to solve for it. After choosing the equation, show your steps in solving for the unknown, check your units, and discuss whether the answer is reasonable. (d) What is the car’s final velocity? Solve for this unknown in the same manner as in part (c), showing all steps explicitly.
At the end of a race, a runner decelerates from a velocity of 9.00 m/s at a rate of $2.00\ \text{m/s}^2$. (a) How far does she travel in the next 5.00 s? (b) What is her final velocity? (c) Evaluate the result. Does it make sense?
(c) This result does not really make sense. If the runner starts at 9.00 m/s and decelerates at $2.00\ \text{m/s}^2$, then she will have stopped after 4.50 s. If she continues to decelerate, she will be running backwards.
Blood is accelerated from rest to 30.0 cm/s in a distance of 1.80 cm by the left ventricle of the heart. (a) Make a sketch of the situation. (b) List the knowns in this problem. (c) How long does the acceleration take? To solve this part, first identify the unknown, and then discuss how you chose the appropriate equation to solve for it. After choosing the equation, show your steps in solving for the unknown, checking your units. (d) Is the answer reasonable when compared with the time for a heartbeat?
In a slap shot, a hockey player accelerates the puck from a velocity of 8.00 m/s to 40.0 m/s in the same direction. If this shot takes , calculate the distance over which the puck accelerates.
A powerful motorcycle can accelerate from rest to 26.8 m/s (100 km/h) in only 3.90 s. (a) What is its average acceleration? (b) How far does it travel in that time?
Freight trains can produce only relatively small accelerations and decelerations. (a) What is the final velocity of a freight train that accelerates at a rate of $0.0500\ \text{m/s}^2$ for 8.00 min, starting with an initial velocity of 4.00 m/s? (b) If the train can slow down at a rate of $0.550\ \text{m/s}^2$, how long will it take to come to a stop from this velocity? (c) How far will it travel in each case?
(c) 7.68 km to accelerate and 713 m to decelerate
A fireworks shell is accelerated from rest to a velocity of 65.0 m/s over a distance of 0.250 m. (a) How long did the acceleration last? (b) Calculate the acceleration.
A swan on a lake gets airborne by flapping its wings and running on top of the water. (a) If the swan must reach a velocity of 6.00 m/s to take off and it accelerates from rest at an average rate of , how far will it travel before becoming airborne? (b) How long does this take?
A woodpecker’s brain is specially protected from large decelerations by tendon-like attachments inside the skull. While pecking on a tree, the woodpecker’s head comes to a stop from an initial velocity of 0.600 m/s in a distance of only 2.00 mm. (a) Find the acceleration in $\text{m/s}^2$ and in multiples of $g$ ($g = 9.80\ \text{m/s}^2$). (b) Calculate the stopping time. (c) The tendons cradling the brain stretch, making its stopping distance 4.50 mm (greater than the head and, hence, less deceleration of the brain). What is the brain’s deceleration, expressed in multiples of $g$?
An unwary football player collides with a padded goalpost while running at a velocity of 7.50 m/s and comes to a full stop after compressing the padding and his body 0.350 m. (a) What is his deceleration? (b) How long does the collision last?
In World War II, there were several reported cases of airmen who jumped from their flaming airplanes with no parachute to escape certain death. Some fell about 20,000 feet (6000 m), and some of them survived, with few life-threatening injuries. For these lucky pilots, the tree branches and snow drifts on the ground allowed their deceleration to be relatively small. If we assume that a pilot’s speed upon impact was 123 mph (54 m/s), then what was his deceleration? Assume that the trees and snow stopped him over a distance of 3.0 m.
Consider a grey squirrel falling out of a tree to the ground. (a) If we ignore air resistance in this case (only for the sake of this problem), determine a squirrel’s velocity just before hitting the ground, assuming it fell from a height of 3.0 m. (b) If the squirrel stops in a distance of 2.0 cm through bending its limbs, compare its deceleration with that of the airman in the previous problem.
(b) About $1.5 \times 10^3\ \text{m/s}^2$. This is about 3 times the deceleration of the pilots, who were falling from thousands of meters high!
An express train passes through a station. It enters with an initial velocity of 22.0 m/s and decelerates at a rate of as it goes through. The station is 210 m long. (a) How long is the nose of the train in the station? (b) How fast is it going when the nose leaves the station? (c) If the train is 130 m long, when does the end of the train leave the station? (d) What is the velocity of the end of the train as it leaves?
Dragsters can actually reach a top speed of 145 m/s in only 4.45 s—considerably less time than given in [link] and [link]. (a) Calculate the average acceleration for such a dragster. (b) Find the final velocity of this dragster starting from rest and accelerating at the rate found in (a) for 402 m (a quarter mile) without using any information on time. (c) Why is the final velocity greater than that used to find the average acceleration? Hint: Consider whether the assumption of constant acceleration is valid for a dragster. If not, discuss whether the acceleration would be greater at the beginning or end of the run and what effect that would have on the final velocity.
(c) $v = 162\ \text{m/s}$, because the assumption of constant acceleration is not valid for a dragster. A dragster changes gears, and would have a greater acceleration in first gear than second gear than third gear, etc. The acceleration would be greatest at the beginning, so it would not be accelerating at $32.6\ \text{m/s}^2$ during the last few meters, but substantially less, and the final velocity would be less than 162 m/s.
A bicycle racer sprints at the end of a race to clinch a victory. The racer has an initial velocity of 11.5 m/s and accelerates at the rate of for 7.00 s. (a) What is his final velocity? (b) The racer continues at this velocity to the finish line. If he was 300 m from the finish line when he started to accelerate, how much time did he save? (c) One other racer was 5.00 m ahead when the winner started to accelerate, but he was unable to accelerate, and traveled at 11.8 m/s until the finish line. How far ahead of him (in meters and in seconds) did the winner finish?
In 1967, New Zealander Burt Munro set the world record for an Indian motorcycle, on the Bonneville Salt Flats in Utah, of 183.58 mi/h. The one-way course was 5.00 mi long. Acceleration rates are often described by the time it takes to reach 60.0 mi/h from rest. If this time was 4.00 s, and Burt accelerated at this rate until he reached his maximum speed, how long did it take Burt to complete the course?
(a) A world record was set for the men’s 100-m dash in the 2008 Olympic Games in Beijing by Usain Bolt of Jamaica. Bolt “coasted” across the finish line with a time of 9.69 s. If we assume that Bolt accelerated for 3.00 s to reach his maximum speed, and maintained that speed for the rest of the race, calculate his maximum speed and his acceleration. (b) During the same Olympics, Bolt also set the world record in the 200-m dash with a time of 19.30 s. Using the same assumptions as for the 100-m dash, what was his maximum speed for this race?
- College Physics
- Introduction: The Nature of Science and Physics
- Introduction to One-Dimensional Kinematics
- Vectors, Scalars, and Coordinate Systems
- Time, Velocity, and Speed
- Motion Equations for Constant Acceleration in One Dimension
- Problem-Solving Basics for One-Dimensional Kinematics
- Falling Objects
- Graphical Analysis of One-Dimensional Motion
- Two-Dimensional Kinematics
- Dynamics: Force and Newton's Laws of Motion
- Introduction to Dynamics: Newton’s Laws of Motion
- Development of Force Concept
- Newton’s First Law of Motion: Inertia
- Newton’s Second Law of Motion: Concept of a System
- Newton’s Third Law of Motion: Symmetry in Forces
- Normal, Tension, and Other Examples of Forces
- Problem-Solving Strategies
- Further Applications of Newton’s Laws of Motion
- Extended Topic: The Four Basic Forces—An Introduction
- Further Applications of Newton's Laws: Friction, Drag, and Elasticity
- Uniform Circular Motion and Gravitation
- Work, Energy, and Energy Resources
- Linear Momentum and Collisions
- Statics and Torque
- Rotational Motion and Angular Momentum
- Introduction to Rotational Motion and Angular Momentum
- Angular Acceleration
- Kinematics of Rotational Motion
- Dynamics of Rotational Motion: Rotational Inertia
- Rotational Kinetic Energy: Work and Energy Revisited
- Angular Momentum and Its Conservation
- Collisions of Extended Bodies in Two Dimensions
- Gyroscopic Effects: Vector Aspects of Angular Momentum
- Fluid Statics
- Fluid Dynamics and Its Biological and Medical Applications
- Introduction to Fluid Dynamics and Its Biological and Medical Applications
- Flow Rate and Its Relation to Velocity
- Bernoulli’s Equation
- The Most General Applications of Bernoulli’s Equation
- Viscosity and Laminar Flow; Poiseuille’s Law
- The Onset of Turbulence
- Motion of an Object in a Viscous Fluid
- Molecular Transport Phenomena: Diffusion, Osmosis, and Related Processes
- Temperature, Kinetic Theory, and the Gas Laws
- Heat and Heat Transfer Methods
- Introduction to Thermodynamics
- The First Law of Thermodynamics
- The First Law of Thermodynamics and Some Simple Processes
- Introduction to the Second Law of Thermodynamics: Heat Engines and Their Efficiency
- Carnot’s Perfect Heat Engine: The Second Law of Thermodynamics Restated
- Applications of Thermodynamics: Heat Pumps and Refrigerators
- Entropy and the Second Law of Thermodynamics: Disorder and the Unavailability of Energy
- Statistical Interpretation of Entropy and the Second Law of Thermodynamics: The Underlying Explanation
- Oscillatory Motion and Waves
- Introduction to Oscillatory Motion and Waves
- Hooke’s Law: Stress and Strain Revisited
- Period and Frequency in Oscillations
- Simple Harmonic Motion: A Special Periodic Motion
- The Simple Pendulum
- Energy and the Simple Harmonic Oscillator
- Uniform Circular Motion and Simple Harmonic Motion
- Damped Harmonic Motion
- Forced Oscillations and Resonance
- Superposition and Interference
- Energy in Waves: Intensity
- Physics of Hearing
- Electric Charge and Electric Field
- Introduction to Electric Charge and Electric Field
- Static Electricity and Charge: Conservation of Charge
- Conductors and Insulators
- Coulomb’s Law
- Electric Field: Concept of a Field Revisited
- Electric Field Lines: Multiple Charges
- Electric Forces in Biology
- Conductors and Electric Fields in Static Equilibrium
- Applications of Electrostatics
- Electric Potential and Electric Field
- Electric Current, Resistance, and Ohm's Law
- Circuits, Bioelectricity, and DC Instruments
- Introduction to Magnetism
- Ferromagnets and Electromagnets
- Magnetic Fields and Magnetic Field Lines
- Magnetic Field Strength: Force on a Moving Charge in a Magnetic Field
- Force on a Moving Charge in a Magnetic Field: Examples and Applications
- The Hall Effect
- Magnetic Force on a Current-Carrying Conductor
- Torque on a Current Loop: Motors and Meters
- Magnetic Fields Produced by Currents: Ampere’s Law
- Magnetic Force between Two Parallel Conductors
- More Applications of Magnetism
- Electromagnetic Induction, AC Circuits, and Electrical Technologies
- Introduction to Electromagnetic Induction, AC Circuits and Electrical Technologies
- Induced Emf and Magnetic Flux
- Faraday’s Law of Induction: Lenz’s Law
- Motional Emf
- Eddy Currents and Magnetic Damping
- Electric Generators
- Back Emf
- Electrical Safety: Systems and Devices
- RL Circuits
- Reactance, Inductive and Capacitive
- RLC Series AC Circuits
- Electromagnetic Waves
- Geometric Optics
- Vision and Optical Instruments
- Wave Optics
- Introduction to Wave Optics
- The Wave Aspect of Light: Interference
- Huygens's Principle: Diffraction
- Young’s Double Slit Experiment
- Multiple Slit Diffraction
- Single Slit Diffraction
- Limits of Resolution: The Rayleigh Criterion
- Thin Film Interference
- *Extended Topic* Microscopy Enhanced by the Wave Characteristics of Light
- Special Relativity
- Introduction to Quantum Physics
- Atomic Physics
- Introduction to Atomic Physics
- Discovery of the Atom
- Discovery of the Parts of the Atom: Electrons and Nuclei
- Bohr’s Theory of the Hydrogen Atom
- X Rays: Atomic Origins and Applications
- Applications of Atomic Excitations and De-Excitations
- The Wave Nature of Matter Causes Quantization
- Patterns in Spectra Reveal More Quantization
- Quantum Numbers and Rules
- The Pauli Exclusion Principle
- Radioactivity and Nuclear Physics
- Medical Applications of Nuclear Physics
- Particle Physics
- Frontiers of Physics
- Atomic Masses
- Selected Radioactive Isotopes
- Useful Information
- Glossary of Key Symbols and Notation
The following Projects are an assortment of long-term activities that can be completed individually, in groups or as a class. We have provided starting points for research and development; you and the students can work together to create a more detailed plan of action. Consider the following two recommendations. First, because of the amount of work involved in a Project, students should choose one of great interest to them. Second, to encourage excellence and promote student-student learning, students should present their finished projects to the rest of the class, to the school and to the community, if appropriate.
Project 1: Research Questions and Action Projects
Project 1 differs from the others: it is a list of possible research topics organized according to some key ideas and addressed to students.
In assigning a Research Question or Action Project, we ask that you allow students to choose their topic—either one provided or one of their own. You might also:
- Specify length of piece.
- Make clear the purpose and the audience.
- Suggest sources and ideas for information.
- Provide in-class time for compiling information and writing.
- Require students to exchange papers and provide written feedback.
- Provide a breakdown of due-dates for the following stages: choice of topic, outline, rough draft and final draft.
- Permit students to supplement a written report with a skit, a piece of artwork, a piece of music, a dance, a video, or a multimedia presentation.
Provide the students with evaluation criteria that include:
- accuracy of the content based on guiding questions.
- clarity of writing.
- effective organization of main ideas.
- use of detailed examples or citing evidence to support their conclusions.
Project 1: Teacher Activity Notes - Research Questions and Action Projects
Human Respiratory and Circulation Systems
1. How does a fetus receive oxygen and nutrients while in the uterus? What changes occur in the heart and lungs at birth? Why? How does premature birth affect the circulatory and respiratory systems? How is a baby's blood different from that of an adult?
Structure and Function of Blood, Heart, and Vessels
2. How is the human circulatory system different from other animals'? Choose two animals, one vertebrate and one invertebrate. Compare and contrast the circulatory system of each animal with the other and with the human circulatory system. Use the theory of evolution to help explain your findings.
3. How is human blood similar to and different from the blood of other animals? Choose two other animals, one mammalian and one non-mammalian species. Compare and contrast the physical and chemical properties of their blood with those of human blood. Use the theory of evolution to help explain your findings.
4. Scientists did not always know as much about the circulatory system as they do today. Choose a period in the past. Explain the following.
- How did scientists at that time explain the function of the circulatory system, the production of blood, and the function of the heart?
- What kinds of evidence did they use to support their claims?
Note: Limit your study to only one country or culture.
Integration within and between Systems
5. Use your knowledge of the circulatory system, respiratory system, and digestive system to explain how their functions are integrated.
6. Cholesterol level-What is a cholesterol level? How is it determined? What is a healthy level for adults? For adolescents? Why? How can your cholesterol level be raised or lowered?
7. Hypertension-What is hypertension? What causes it? What are its effects on the body? How is it prevented and treated? What can adolescents do to prevent the onset of hypertension later in life?
8. HIV-How is the circulatory system involved in the transmittal and spread of HIV? What are ways adolescents can help prevent the spread of HIV? What will you do to reduce your own risk of getting this virus?
9. Choose one of the following diseases: Leukemia-anemia-sickle-cell anemia. What is the disease? How does it affect the circulatory system and the body? How is this disease treated?
Science, Technology, and Society
11. Besides school, what influences your attitudes and behaviors regarding health? Family, friends, the media, culture? What behaviors and attitudes regarding health have you adopted as a result of this influence? Why? How are your attitudes and behavior similar to and different from what you have learned in school?
12. Heart transplants-Why are heart transplants performed? How does the donor/recipient system work? Who gets a heart transplant? Is the system fair?
13. What is a recent innovation in the world of cardiovascular medicine? Describe where, when, how, and why it was developed.
14. Research the education/training required for these careers: dietitian, nutritionist, cardiologist, cardiovascular surgeon. What do these professionals do? What type(s) of technology do they use? Would you want to become one of them? Why or why not?
15. During the American Civil War, doctors and nurses treated many injured soldiers. What was their knowledge of the structure and function of the circulatory system? What kinds of surgical procedures and medicines did they use to treat gunshot wounds? Refer to the poems of Walt Whitman, who was a nurse during the Civil War.
Project 2: Teacher Activity Notes - Be Heart Smart
Summary Students learn what it means to be “heart smart.” They distinguish between healthy and unhealthy behaviors, consider why it is important to practice healthy behaviors, and devise strategies to improve and maintain good cardiovascular health.
Scale, Yardstick, Stethoscopes, Sphygmomanometers, Charts of height and weight, Dietary tables, and Resource materials on cardiovascular health and fitness
Health, Social Studies, Physical Education
- One week for initial research and sharing of information
- One week for development of action plans
- Several class periods for students to prepare and present their projects
- Research presentation defining what it means to be heart smart
- Written action plans which identify and eliminate unhealthy behaviors
- Written report of completed action plan describing methods and results
1. Divide students into research groups of three to five students. Ask them to define what it means to be “heart smart.” Have them brainstorm a list of important factors related to heart disease and explain how these factors affect the circulatory system. Give them the option of consulting some or all of the following sources-textbooks, reference books, the Internet, CD-ROMs, a health professional, a representative of the American Heart Association, a coach, and/or a physical trainer. Provide students with additional sources of information on cardiovascular health and disease or with time to conduct their own research.
2. Using the information they have collected, have students assess their own health by answering the following questions.
- How healthy am I?
- Am I at risk for heart disease?
Students also may determine some or all of the following information about themselves.
Weight, height, pulse rate, blood pressure, cholesterol level, stress level, amount of fat and cholesterol in diet, amount of daily exercise, unhealthy habits.
Provide equipment to measure weight, blood pressure, and pulse rate. Provide charts to determine healthy ranges for weight versus height and amount of fat and cholesterol in foods. Many of the labs and activities under “Staying Healthy” can be modified for use here. Also arrange for students to have their cholesterol levels checked, if at all possible.
NOTE: You may choose to assign each student an ID number to be used when collecting personal health data. The ID number will allow students to examine and analyze data from the entire class while protecting students' privacy.
3. Ask each student to identify a personal behavior that is considered unhealthy. Individually or in groups, have them devise a plan of action for the next several weeks to eliminate or alter this unhealthy habit. During this period, allot time for students to describe in a daily journal what they did to change their behaviors.
4. After completing their respective action plans, have students reflect on changes in their lives. Do they feel differently? How have their attitudes changed? In their journals or in a written report, have students describe the unhealthy behavior, their plan of action, what they actually did or did not do, and their results so far.
They should be able to answer the following questions.
- Why is this targeted behavior believed to increase the risk of heart disease?
- What physical and/or psychological changes did you note while implementing your plan of action?
- Do you plan to continue being heart smart? Why or why not?
Suggested Follow-up Activities
- As a class, watch a film or video on heart disease-its causes, prevention, and treatment.
- If students are enthusiastic about the project and their results, have them create an educational program or session to be taught to their schoolmates or families. The presentation should answer the following questions: What does it mean to be heart smart? Why is being heart smart important?
Note: Be sensitive to situations at home that may have a negative impact on programs for parents or guardians.
- Arrange for students to visit a hospital, clinic, or research institution that specializes in the treatment and/or prevention of heart disease. Or ask a person from one of these organizations to come to speak to the class.
- At the end of the year, have students review their health habits. What habits have they been able to change? What habits have they been unable to change? Are they healthier and why or why not?
- At the completion of the project, students can follow up with a parent or guardian.
Use the students' products to assess if students can:
- define and describe the major risk factors for heart disease.
- describe how these risk factors are a result of specific behaviors and/or lifestyles.
- distinguish between healthy and unhealthy behaviors.
- present an organized action plan with realistic time lines for changing a personal unhealthy behavior.
- clearly express the methods they used and what they learned.
- organize and present data in a written format to the class in a meaningful way.
Project 3: Teacher Activity Notes - A Cafeteria Case Study
Summary Students conduct a case study of the school cafeteria, assessing how much fat and cholesterol are served on the daily menu and if the cafeteria promotes healthy eating habits. They make recommendations on how the cafeteria can become more health-conscious.
A nutritional table or Diet Analysis Plus® program, Version 3.0. You can obtain the Diet Analysis Plus program from ITP Distribution Center, 7625 Empire Drive, Florence, Kentucky, 41042, ATTN: Order Fulfillment. The telephone number is 1-800-824-5179. When ordering, indicate the following order number: for IBM WIN, ISBN 0534538207, and for Macintosh, ISBN 0534538223.
Interdisciplinary Connection Health
- One class period for research on the fat and cholesterol content of the cafeteria menu
- At least two class periods for students to prepare and present their presentations
- Written assessment of cafeteria food
- Written report of the study's results and recommendations to the cafeteria staff
- Presentations of the assessment and recommendations
1. Before beginning this project, discuss it with the principal and the head of the cafeteria in order to ensure cooperation on their parts, as well as respect and consideration on the part of the class.
2. Ask the cafeteria supervisor for a printed menu of what is served over the course of two weeks. Divide the class into teams of four or five. Have each team study one of the following.
- the kind and frequency of different types of foods served over those two weeks,
- descriptions of how meals were prepared,
- estimates of the amount of fat and cholesterol present in the foods served,
- the opinions of the cafeteria staff, and
- the cafeteria budget.
3. Allow each team an opportunity to present its findings orally to the rest of the class.
4. The class should then write a report to the head of the cafeteria listing any problems with the food served and recommendations on how to make the meals healthier. Facts about heart disease and diet should be used to support any comments and suggestions.
Suggested Follow-up Activities
- A few months later ask students to repeat their analysis of the cafeteria's menu. How does it compare with their initial analysis? Has the cafeteria's menu improved sufficiently from a nutritional point of view? If not, why not? Is their analysis faulty in some way? Does the cafeteria lack the funds to provide healthier food? Are there federal or state laws controlling the cafeteria's choices? Have students' concerns or recommendations fallen on deaf ears?
- Ask students to assess what they learned in doing this project. Have them write about these experiences in a reflective paper.
- Ask the class to synthesize all that the groups learned in the form of an article. Submit this article to the school and/or local newspaper for publication.
- Have a professional nutritionist or nurse working with a cardiologist visit the class and discuss his or her job as it relates to cardiovascular health.
Use the students' products to assess if students can:
- identify the fat and cholesterol content of foods on a cafeteria's daily menu.
- make practical/reasonable recommendations for changing the menu based on cost and availability of specific foods.
- evaluate the impact of their recommendations for a) students, b) cafeteria staff, c) the budget manager, d) outside vendors, e) parents/guardians, and f) school staff.
- clearly express their assessment and recommendations to the class.
- make a convincing presentation using the facts.
- effectively answer questions from the class.
- use visuals to illustrate major points.
Project 4: Teacher Activity Notes - Tasty Tidbits
Summary Students create and share healthy recipes to eat and enjoy.
Non-copyrighted recipes from home, magazines, or cookbooks, nutritional table or Diet Analysis Plus® program.
Interdisciplinary Connections Health, Social Studies, Visual and Performing Arts, Home Economics
Collection of recipes with written dietary assessment of how healthy the recipes are.
- Provide students with a nutritional table or a computer program, such as Diet Analysis Plus® in order to calculate the calories, fat content, cholesterol, protein, carbohydrates, and vitamins for each recipe.
- As a class, create a recipe book of dishes and desserts that are easy to prepare, novel, low in both fat and cholesterol, and taste good. Ask students to bring in and examine cookbooks and/or recipes from home or from magazines. Parents or guardians may need to help gather recipes, so send a letter home with students one week before beginning the project. This is a good opportunity to look at the diversity of the foods that different cultures prepare.
- Divide the class into groups of three to five students. Have each group choose or create three to five “heart smart” recipes. For each recipe, they should include the name of the dish, needed ingredients, how to prepare it, a drawing of the finished product, and reasons that the dish is healthy.
- Have students combine and organize the recipes into a class cookbook. They should include a cover, a table of contents, illustrations, and a rationale for using the recipes.
Suggested Follow-up Activities
- Publish the cookbook for parents, guardians, and the community. Money for this project may be obtained from the school district, or the cookbooks may be sold. CAUTION: Do not distribute the books if they contain copyrighted recipes.
- Ask students to assess what they learned in doing this project. Have them write about their experiences in a reflective paper.
Use the students' products to assess if students can:
- define what it means to have a “heart smart” recipe.
- clearly write the instructions so that the recipe is “easy to prepare.”
- evaluate how nutritious the recipe is based on calories, fat content, cholesterol, protein, carbohydrates, and vitamins.
- organize the recipes into a cookbook that has a professional appearance.
- convince the reader that the recipe is healthy.
Project 5: Teacher Activity Notes - Past vs. Present
If possible, provide students with magazines such as Time, Life, Newsweek, or The Saturday Evening Post from previous decades.
Interdisciplinary Connections Health, Social Studies, Physical Education, Visual and Performing Arts
- Two to three weeks. Allow for occasional group work in class. Students are expected to assign themselves homework in order to complete the project on time.
- Allot time during class for team presentations.
- Written responses to discussion questions
- Written action plan of how teams will promote healthier behavior among their peers
- Team presentations of their promotional efforts and the results of those efforts
2. Each group will use the information collected to discuss and write answers to the following questions:
a. In general, do you think today's adolescents are more knowledgeable about health and more health-conscious than adolescents in the past were? Why or why not?
b. What social, monetary, and/or educational factors are involved?
c. How could today's adolescents be more health-conscious and knowledgeable with respect to the aspect of health you researched?
Help students avoid right/wrong dichotomies about their parents', guardians', or friends' behaviors, that is, the notion that certain practices are “wrong” and others are “right.” Also help students examine past decades with respect: people had different information and lived with different societal expectations and norms in different decades.
3. In their groups, have students consider their responses to Question c above. Ask them to pick ONE thing that they could do to help today's adolescents lead a healthier life. It is important that they consider what they learned, how to convey the information to adolescents, and how to convince them that it is important to lead a healthy life with respect to this aspect of cardiovascular health.
4. Have students devise and write up a plan of action to accomplish their goals.
5. For final presentation have students explain what they did, why they did it, and their results. Encourage them to present their information in creative ways that are both interesting and educational. They may want to use slides, video, posters, or a multimedia presentation.
Suggested Follow-up Activities
- At the end of the year ask students to assess what they learned in this project. Have them write about these experiences in a reflective paper.
- Arrange for students to visit an advertising agency, newspaper publisher, or a TV station to see how information and/or images of health are created and portrayed.
- Ask students to use the information they have collected to change one aspect of their own behavior. Have them keep a weekly journal of their progress. Evaluate this change in behavior after a quarter, after a semester, or at the end of the year.
- Ask the class to synthesize all that the groups have learned in an article. Submit this article to the school and/or local newspaper for publication.
Use the students' products to assess if students can:
- identify the media's influence on adolescent health issues.
- create an action plan for promoting healthy behaviors among their peers.
- use primary sources like magazines, newspapers, CD-ROMs, the Internet, interviews, and surveys to obtain information on adolescent health issues.
- use visual and/or multimedia presentations to promote healthy behaviors.
- clearly explain what they did, why they did it, and their results.
- compare and contrast recent adolescent health issues with those encountered by teens during the decade they selected to research. |
Geometry 9.1 An Introduction to Circles
Circle: the set of all points in a plane at a given distance from a given point. P is the center of the circle; the circle is the set of all points in the plane of the screen 3 away from P.
Radius (plural: radii): a segment that joins the center of the circle to any point on the circle.
Chord: a segment whose endpoints lie on a circle.
Diameter: a chord that passes through the center of the circle. The diameter is twice as long as the radius.
Secant: a line that contains a chord of a circle.
Tangent: a line in the plane of a circle that intersects the circle in exactly one point. The point of intersection is called the point of tangency.
In the figure (circle A with points B, C, D, E, and F), name:
1) The center: A
2) Two diameters: DB and FC
3) A point of tangency: D
4) Four radii: AB, AF, AC, and AD
5) A tangent: ED
6) A secant: FC
7) Six chords: FB, DF, DC, BC, DB, and FC
8) Why is AC not a chord of circle A? A chord is a segment with both endpoints on the circle; AC has only one endpoint on the circle.
9) Why is BD not a chord of circle A? A chord is a segment, not a line.
Sphere: the set of all points in space at a given distance from a given point. Many of the terms used with circles are also used with spheres. For example, sphere X has a center, radii, chords, a diameter, secants, a tangent, and a point of tangency.
Congruent circles/spheres: circles (or spheres) are congruent if they have congruent radii.
Concentric circles: circles that lie in the same plane and have the same center. Concentric spheres: spheres that have the same center.
Inscribed polygon: a polygon is inscribed in a circle if each vertex of the polygon lies on the circle. A circle is circumscribed about a polygon if each vertex of the polygon lies on the circle. The polygon is inscribed in the circle; thus, the circle is circumscribed about the polygon. These two sentences have the same meaning.
In sphere A, draw:
10. a diameter, BC
11. a chord, DE
12. a tangent, CF
13. a secant, DG
14. Draw two concentric circles. Draw a tangent to one of the circles. Is it tangent to the other circle? No.
15. Draw a large circle. Inscribe an isosceles triangle in the circle.
16. Draw a rectangle. Circumscribe a circle about the rectangle.
Find the value of x in the figure (circle with center O and radius 6; figure not reproduced). Answer: x = 6√3.
Find the value of x in each figure (figures not reproduced). O is the center of each circle. Answers: x = 8, x = 5, x = 4√2, x = 5√2.
Homework: pp. 330–340, CE 1–11, WE 1–15.
In 2013 chemical researchers reported progress in continuous-flow chemistry, also known as flow chemistry, a method of carrying out chemical reactions that has begun to revolutionize chemical synthesis in laboratory research and in the pharmaceutical industry. Not only does the method help reduce waste and energy consumption in chemical production, but it also makes some types of reactions safer to run.
Until recently, chemical reactions for research and the production of specialty compounds were largely done in flasks by a method called batch processing. In this method chemists place a set amount of reactants with an appropriate solvent into a vessel, such as a flask, where the materials are allowed to react for a certain amount of time to yield the desired chemical product. The product is then removed from the vessel and purified. To obtain the product in large quantities, the process is either repeated or performed in a very large reaction flask, and obtaining large amounts of the product can be expensive and time-consuming.
In continuous-flow chemistry, in contrast, the chemical reactions that take place rely on a continuous supply of reactants. In the most basic system, the reactants and solvent are fed through separate tubes into one end of a reaction chamber, where they react chemically, and the resulting products flow out the other end through another tube into a collection vessel. The reaction chamber may consist simply of a length of glass or stainless-steel tubing, or it may be a unit called a microreactor, in which the flow of substances is confined to very narrow channels fabricated on a small chip. Chemists can readily adjust the flow rate of the reactants in order to control the amount of each reactant they combine and the reaction time. In some continuous-flow systems, a separate tube introduces a compound to quench, or stop, the reaction in the flow of materials that passes from the reaction chamber.
The increasing use of continuous-flow chemistry stems from its advantages over batch processing. It is easier to control the temperature in a continuous-flow reaction because the area being heated or cooled is very small. With continuous-flow systems it is also easier to control how the reactants mix and simpler to place them under extreme conditions, such as high pressure. In batch processing the end products often need to be purified in order to be isolated in large amounts. Because chemists have more control over the reaction conditions in a continuous-flow setup, they can optimize the reaction to create products, reducing or eliminating the need for a purification step. Another major advantage of continuous-flow chemistry is that it can make chemical synthesis “greener.” For example, it can help cut waste by reducing or even eliminating the requirement for using a solvent to carry out a reaction. When a solvent is required, continuous-flow reactors can often make use of carbon dioxide, which has a low environmental impact compared with other solvents, and continuous-flow reactors do not require a large amount of solvent to be heated at once, as in traditional batch-process reactors. Chemists performing continuous-flow chemistry tend to make greater use of catalysts, which reduces waste because catalysts, unlike other reactants, can promote chemical reactions without being consumed. Microreactors allow researchers to test new catalysts quickly in very small amounts, minimizing the amount of these materials that would otherwise be needed, and small-scale “scouting” reactions—in which a chemist runs experiments to see if a reaction is viable or produces the desired chemical—can be conducted with a relatively small amount of material.
Demonstrating that continuous-flow chemistry can reduce waste in multiple ways, David J. Cole-Hamilton and co-workers at the University of St. Andrews, Scot., in July reported on a solventless pressurized continuous-flow system that they used with a rhodium catalyst to add hydrogen to dibutyl itaconate. The product of the reaction can exist in two versions called enantiomers, which are structurally mirror images of each other. Usually only one of the two enantiomers of a compound is desired, and the chemical separation required to isolate the desired enantiomer is difficult and expensive. However, the continuous-flow system used by the researchers yielded a product that consisted almost entirely (99%) of a single enantiomer and thereby required no purification.
Another study showed how hazardous or noxious substances that are produced in a chemical reaction in a continuous-flow process can be safely utilized in a downstream chemical reaction without being released into the environment. In an article first published in June, Dong-Pyo Kim and co-workers at Pohang (S.Kor.) University of Science and Technology described experiments with chemical reactions that produce isocyanide, an isomer of cyanide that serves as a building block in multiple-bond chemistry. Its smell is so intense and disagreeable, however, that the compound is commonly avoided. The researchers used a continuous-flow system to convert a precursor of isocyanide to an isocyanide end product by means of a self-purification and separation system. The reaction ran efficiently without releasing the noxious odour. This work may have great impact in the areas of drug discovery and natural-product synthesis with isocyanide and other toxic or noxious ingredients.
A report by David Cantillo and C. Oliver Kappe of the University of Graz, Austria, published in October described a technique that allowed a hazardous reaction to be run more safely by means of a catalyst-free continuous-flow system. They used the system to prepare organic nitriles from carboxylic acids, with acetonitrile serving as a solvent. Organic nitriles are a class of compounds widely used as reaction intermediates, but they have been difficult to produce because of the very high temperatures and pressures needed for the reaction to proceed. In addition, the reaction yields have generally been low, and the products have required purification. Using a continuous-flow system, the researchers were readily able to apply very high temperatures and pressures that made the reaction run in much less time than it would have taken otherwise. The researchers tested several different starting materials in the reaction, and for each they obtained reactions with high yields that did not require subsequent purification.
In a paper published in March, Challa S.S.R. Kumar of Louisiana State University and colleagues described a new application for continuous-flow chemistry. Their system contained a chip-based reactor with a winding channel in which they could see the growth of catalytically active gold nanoparticles in real time. Using a combination of X-ray-analysis techniques, the researchers observed the nanoparticles forming within a five-millisecond time frame. This technique can potentially be applied to the study of other nanoparticle and metal-oxide systems, including potential catalysts, to watch how they form and grow. It could also be used to enhance the performance of a type of miniaturized device called a lab on a chip, a microchip-sized device that can perform a variety of laboratory operations quickly with very small sample sizes.
These papers were but a few of the growing number being published on continuous-flow chemistry. The trend signaled a greater recognition and acceptance of the technologies for general chemical synthesis as more laboratories in both academic and commercial settings integrated them into daily use.
Scientists at the National Institute of Standards and Technology, Gaithersburg, Md., in May 2013 announced that they had created a lens that could project in ultraviolet light a three-dimensional image of an object. In October physicists at the Foundation for Fundamental Research on Matter, Amsterdam, published a paper about a material that they had created that could give visible light passing through it a nearly infinite wavelength. That same month engineers at Stanford University stated that they had designed a material that could conceal an object with an “invisibility cloak” in regions of the visible and near-infrared light spectrum. All of these unusual substances were examples of metamaterials.
Metamaterials are artificially structured materials that exhibit extraordinary electromagnetic properties not available or not easily obtainable in nature. Since the early 2000s, metamaterials have emerged as a rapidly growing interdisciplinary area involving physics, engineering, and optics. The properties of metamaterials are tailored by manipulating their internal physical structure. This makes them different from natural materials, whose properties are mainly determined by their chemical constituents and bonds. The primary reason for the intensive interest in metamaterials is their unusual effect on light propagating through them.
Metamaterials consist of periodically or randomly distributed artificial structures that have a size and spacing much smaller than the wavelengths of incoming electromagnetic radiation. Consequently, the microscopic details of these individual structures cannot be resolved by the wave. For example, it is difficult to view the fine features of metamaterials that operate at optical wavelengths with visible light, and shorter-wavelength electromagnetic radiation, such as X-rays, is needed to image and scan them. Essentially, each artificial structure functions in a manner similar to the way in which an atom or a molecule functions in normal materials. However, when subjected to regulated interactions with electromagnetic radiation, the structures give rise to entirely extraordinary properties unavailable in natural materials.
An example of such extraordinary properties can be seen in electric permittivity (ε) and magnetic permeability (μ), two fundamental parameters that characterize the electromagnetic properties of a medium. These two parameters can be modified, respectively, in structures known as metallic wire arrays and split-ring resonators (SRRs), proposed by English physicist John Pendry in the 1990s. By adjusting the spacing and size of the elements in metallic wire arrays, a material’s electric permittivity (a measure of the tendency of the material’s electric charge to distort in the presence of an electric field) can be “tuned” to a desired value (negative, zero, or positive). Metallic SRRs consist of one or two rings or squares with a gap in them that can be used to engineer a material’s magnetic permeability (the tendency of a magnetic field to arise in the material in response to an external magnetic field). When an SRR is placed in a magnetic field that is oscillating at the SRR’s resonant frequency, electric current flows around the ring, inducing a tiny magnetic effect known as the magnetic dipole moment. In this way artificial magnetism can be achieved even if the metal used to construct the SRR is nonmagnetic.
By combining metallic wire arrays and SRRs in such a manner that both ε and μ are negative, materials can be created with a negative refractive index. Refractive index is a measure of the bending of a ray of light when passing from one medium into another (for example, from air into water). In normal refraction with positive-index materials, light entering the second medium continues past the normal (a line perpendicular to the interface between the two media), but it is bent either toward or away from the normal, depending on its angle of incidence (the angle at which it propagates in the first medium with respect to the normal) as well as on the difference in refractive index between the two media. However, when light passes from a positive-index medium to a negative-index medium, the light is refracted on the same side of the normal as the incident light. In other words, light is bent “negatively” at the interface between the two media; that is, negative refraction takes place.
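As a concrete illustration of the sign flip described above (this example is ours, not the article's), Snell's law, n₁ sin θ₁ = n₂ sin θ₂, can simply be evaluated with a negative index for the second medium; the refracted ray then emerges on the same side of the normal as the incident ray. A minimal Python sketch with illustrative index values:

```python
import numpy as np

def refraction_angle(n1, n2, incidence_deg):
    """Refraction angle (degrees) from Snell's law n1*sin(t1) = n2*sin(t2).

    A negative result means the ray emerges on the same side of the normal
    as the incident ray, i.e. negative refraction."""
    t1 = np.radians(incidence_deg)
    s2 = n1 * np.sin(t1) / n2
    if abs(s2) > 1:
        return None  # total internal reflection; no refracted ray
    return np.degrees(np.arcsin(s2))

print(refraction_angle(1.0, 1.5, 30.0))   # ordinary glass: about +19.5 deg, past the normal
print(refraction_angle(1.0, -1.5, 30.0))  # negative-index medium: about -19.5 deg, same side as the incident ray
```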
Negative-index materials do not exist in nature, but according to theoretical studies conducted by Russian physicist Victor G. Veselago in the late 1960s, they were anticipated to exhibit many exotic phenomena, including negative refraction. In 2001 negative refraction was first experimentally demonstrated by American physicist Robert Shelby and his colleagues at microwave wavelengths, and the phenomenon was subsequently extended to optical wavelengths.
In addition to electric permittivity, magnetic permeability, and refractive index, engineers can manipulate the anisotropy, chirality, and nonlinearity of a metamaterial. Anisotropic metamaterials are organized so that their properties vary with direction. Some composites of metals and dielectrics exhibit extremely large anisotropy, which allows for negative refraction and new imaging systems, such as superlenses (see below). Chiral metamaterials have a handedness; that is, they cannot be superimposed onto their mirror image. Such metamaterials have an effective chirality parameter κ that is nonzero. A sufficiently large κ can lead to a negative refractive index for one direction of circularly polarized light, even when ε and μ are not simultaneously negative. Nonlinear metamaterials have properties that depend on the intensity of the incoming wave. Such metamaterials can lead to novel tunable materials or produce unusual conditions, such as doubling the frequency of the incoming wave.
The unprecedented material properties provided by metamaterials allow for novel control of the propagation of light, which has led to the rapid growth of a new field known as transformation optics. In transformation optics a metamaterial with varying values of permittivity and permeability is constructed such that light takes a specific desired path. One of the most remarkable designs in transformation optics is the invisibility cloak. Light smoothly wraps around the cloak without introducing any scattered light and thus creates a virtual empty space inside the cloak where an object becomes invisible. Such a cloak was first demonstrated at microwave frequencies by American engineer David Schurig and colleagues in 2006.
Owing to negative refraction, a flat slab of negative-index material can function as a lens to bring light radiating from a point source to a perfect focus. This metamaterial is called a superlens, because by amplifying the decaying evanescent waves that carry the fine features of an object, its imaging resolution does not suffer from the diffraction limit of conventional optical microscopes. In 2004 electrical engineers American Anthony Grbic and Cypriot Canadian George Eleftheriades built a superlens that functioned at microwave wavelengths, and in 2005 American Xiang Zhang and colleagues experimentally demonstrated a superlens at optical wavelengths with a resolution three times better than the traditional diffraction limit.
The concepts of metamaterials and transformation optics have been applied not only to the manipulation of electromagnetic waves but also to acoustic, mechanic, thermal, and even quantum mechanical systems. Such applications have included the creation of a negative effective mass density and negative effective modulus, an acoustic “hyperlens” with resolution greater than the diffraction limit of sound waves, and an invisibility cloak for thermal flows.
Astronomical events, other than those originating from the Sun, have often been remote, distant occurrences, but one such event, on Feb. 15, 2013, had a direct and immediate impact on Earth. At 9:20 am local time, a small near-Earth asteroid with a mass of 12,000 tons and moving relative to Earth at about 18.6 km per second (roughly 41,000 mph) entered the atmosphere above the city of Chelyabinsk, Russia. It then exploded and fragmented. The energy was 20 to 30 times stronger than that released in the Hiroshima atomic bomb blast. The 2013 asteroid was the largest object to strike Earth since an even larger asteroid or comet hit the Tunguska region of Siberia in 1908. (See Special Report.)
For information on Eclipses, Equinoxes, and Solstices, and Earth Perihelion and Aphelion in 2014, see below.
Following the Viking spacecraft landings on Mars in 1976, scientists began to report that a small number of meteorites found on Earth had a Martian origin. This idea was originally suggested by the similarity in the isotopic composition of some gases trapped in these meteorites and that of the Martian atmosphere as measured by Viking. Of the 50,000 meteorites found to date on Earth, not even 100 were thought to be of Martian origin. In October 2013 a team of scientists reported that recent measurements of the isotopic composition of argon in the Martian atmosphere made by the NASA Mars Exploration Rover mission provided the most definitive evidence to date that these meteorites were indeed of Martian origin. Also in 2013, NASA reported that one of these meteorites, named NWA 7034, which had been found in the Sahara in 2011, had 10 times the water content of most other Martian meteorites and was some 2.1 billion years old. Together, these recent results helped clarify the past history of the Martian atmosphere and of the water content on Mars when it was warmer, wetter, and thus possibly more conducive to the presence of life.
The Cassini spacecraft was launched in 1997 and arrived at the giant gas planet Saturn in 2004. In the intervening years, it had made many remarkable discoveries about the ringed planet and its moons. On July 19 the imaging system of the spacecraft was pointed in the direction of Earth. It then took a portrait of Earth and the Moon, both just visible beneath Saturn’s rings. Even more scientifically intriguing images were taken from above Saturn. A composite of these images showed the full ring system, cloud bands above the planetary surface, and the “polar hexagon,” an unusual six-sided jet stream surrounding Saturn’s north pole. Such an image could never be taken from Earth-based telescopes, or even from the Hubble Space Telescope, since Saturn presents an edge-on view of itself only for observers moving in the orbital plane of the solar system.
Stars and Extrasolar Planets
The most successful extrasolar planet (exoplanet) hunting campaign ever ended in 2013. NASA’s Kepler space telescope photographed more than 150,000 stars every 30 minutes for four years. In May one of Kepler’s four reaction wheels, which were responsible for pointing the telescope, failed. Another wheel had previously failed in 2012, and the telescope required at least three working wheels for its mission. Attempts to restart the wheel failed, and in August NASA announced that the mission had ended. The Kepler team reported more than 3,500 planet candidates to date. Of these, 167 had been confirmed by follow-up studies using ground-based telescopes. Further analysis of the Kepler observations was expected to lead to the discovery of additional extrasolar planets. In all, more than 1,000 extrasolar planets residing in more than 800 stellar systems had been discovered to date.
Of the Kepler exoplanet discoveries made in 2013, several were particularly notable. The star Kepler-37 appeared to harbour the smallest exoplanet discovered to date. It was about the size of the Moon and was very likely a rocky planet with no atmosphere or water at all. It was also the smallest exoplanet found that orbits a Sunlike star. Another exoplanet, Kepler-78b, had a mass of about 1.8 times the mass of Earth. It orbits its star with a period of only 8.5 hours, so its surface temperature is about 2,000 °C (3,600 °F). Because its size (about 20% larger than Earth) was also known, it was possible to calculate its density and its probable composition. Kepler-78b was thought to consist of liquid rock or ironlike molten material. Its very presence so close to its central star presented a puzzle for theories of planet formation. Yet another star, Kepler-62, had five planets in orbit about it. The exoplanet designated Kepler-62f had a diameter about 1.4 times that of Earth and an orbital period of 267 days. It resided in the so-called “Goldilocks” habitable zone for life where surface water could exist in liquid form.
By analyzing the statistics of exoplanet discoveries made by the Kepler telescope and by the W.M. Keck Observatory, a team of astronomers from the University of California, Berkeley, and the University of Hawaii at Manoa, Honolulu, concluded that of the 100 billion stars in the Milky Way Galaxy, 22% of the Sunlike ones have Earthlike planets residing in their habitable zones. This suggested that there might be about 10 billion such planets in the galaxy and that there was a reasonable chance that the nearest star with an exoplanet that could potentially harbour life could be as close as 12 light-years.
Nearby stars should be good places to hunt for extrasolar planets. The nearest star to the Sun is Proxima Centauri. It lies at a distance of some 4.24 light-years and is part of a triple star system with Alpha Centauri A and B. Proxima Centauri, discovered in 1915, was about 100 times dimmer than could be seen with the naked eye. The next nearest star, discovered a year later, was Barnard's star at a distance of six light-years. In 2013, after nearly a century with no other very close stars discovered, astronomer Kevin Luhman of Pennsylvania State University, using NASA's Wide-Field Infrared Survey Explorer (WISE) satellite, reported the discovery of the third nearest system. It had escaped detection earlier because it consists of a pair of brown dwarfs, which are much cooler than the Sun and radiate primarily at infrared wavelengths. The system was also located close to the plane of the Milky Way, which previous surveys for brown dwarfs had avoided because of the plane's crowded stellar fields. The pair, called WISE 1049-5319 (or Luhman 16), lies at a distance from Earth of about 6.6 light-years.
Galaxies and Cosmology
Gamma-ray bursts were the most energetic explosive events detected in the universe. They were thought to be associated with the collapse and subsequent explosion of stars 10 times more massive than the Sun. Though these events were also accompanied by the emission of optical light and X-rays, they were first detected more than 30 years earlier by military satellites looking for gamma-ray flashes from secret nuclear tests. On April 27 the Fermi Gamma-Ray Space Telescope detected the highest-energy gamma-rays ever seen from such an event (designated GRB 130427A), extending up to 94 billion electron volts. To put the energy of this radiation in perspective, the gamma-ray photons detected from the event had about 100 times more energy than the rest mass energy of a proton. In visible light this gamma-ray burst was bright enough to be seen by amateur astronomers, even though it originated in a galaxy 3.6 billion light-years away.
The large-scale structure of the universe was mapped out by means of multiple techniques. Some involved observations of individual galaxies, whereas others involved the study of microwave background radiation from the earliest era of the universe even before galaxies were formed. In 2013 astronomers reported new or refined studies using each of these methods. Using the new infrared MOSFIRE spectrograph on the Keck I telescope in Hawaii, a team of astronomers detected and analyzed optical emission from a galaxy named z8_GND_5296. It had the highest redshift z = 7.51 confirmed to date, placing it at a distance from Earth of about 13.1 billion light years. This observation showed that galaxies began forming quite early, only about 700 million years after the big bang.
In March 2013 the European Space Agency's Planck satellite team announced the results of the mission's first 15 and a half months of mapping the cosmic microwave background radiation left over from the big bang. A variety of earlier measurements made with balloons, rockets, satellites, and even ground-based equipment had already given a good picture of the radiation that remained from the original hot expanding fireball. The mission of Planck was to map this radiation in exquisite detail to reveal the tiny fluctuations in intensity of the otherwise nearly uniform radiation across the sky. With the ability to measure deviations of a part in a million, Planck verified the earlier results, but with much higher precision. Taken together with earlier results, those from the Planck mission led to the conclusion that the universe is 13.798 billion years old (with an uncertainty of ±0.037 billion years) and that it is made up of 4.9% ordinary matter, 26.8% dark matter, and 68.3% dark energy.
Neutrinos are subatomic particles with no electric charge and a very small mass. Their interactions with matter are very weak. Every second more than 10²⁹ neutrinos from the Sun arrive at Earth's surface, and nearly all of them pass completely through the planet without any interactions. However, neutrino "observatories" have been built in which large quantities of liquid are placed deep underground (to shield them from other particles), and detectors then record the rare interactions of neutrinos (usually from the Sun) with the liquid. A different type of neutrino observatory is IceCube, which consists of more than 5,000 detectors placed 1.5 km (1 mi) below the ice in Antarctica. In December scientists announced that over the course of two years, IceCube had detected 28 very high-energy neutrinos that were from outside the solar system and likely from the same as-yet-undetermined objects that produce high-energy cosmic rays.
Physical characteristics that are quantized—such as energy, charge, and angular momentum—are of such importance that names and symbols are given to them. The values of quantized entities are expressed in terms of quantum numbers, and the rules governing them are of the utmost importance in determining what nature is and does. This section covers some of the more important quantum numbers and rules—all of which apply in chemistry, material science, and far beyond the realm of atomic physics, where they were first discovered. Once again, we see how physics makes discoveries which enable other fields to grow.
The energy states of bound systems are quantized, because the particle wavelength can fit into the bounds of the system in only certain ways. This was elaborated for the hydrogen atom, for which the allowed energies are expressed as E_n = −13.6 eV / n², where n = 1, 2, 3, …. We define n to be the principal quantum number that labels the basic states of a system. The lowest-energy state has n = 1, the first excited state has n = 2, and so on. Thus the allowed values for the principal quantum number are

n = 1, 2, 3, ….

This is more than just a numbering scheme, since the energy of the system, such as the hydrogen atom, can be expressed as some function of n, as can other characteristics (such as the orbital radii of the hydrogen atom).
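Because the allowed energies are just a simple function of n, they are easy to tabulate. The short Python sketch below is purely illustrative (it is not part of the original text) and prints the first few allowed energies of hydrogen:

```python
def hydrogen_energy_eV(n):
    """Allowed energy (in eV) of the hydrogen atom for principal quantum number n."""
    if n < 1 or n != int(n):
        raise ValueError("n must be a positive integer: n = 1, 2, 3, ...")
    return -13.6 / n**2

for n in range(1, 5):
    print(n, round(hydrogen_energy_eV(n), 2))   # -13.6, -3.4, -1.51, -0.85 eV
```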
The fact that the magnitude of angular momentum is quantized was first recognized by Bohr in relation to the hydrogen atom; it is now known to be true in general. With the development of quantum mechanics, it was found that the magnitude of angular momentum can have only the values

L = √(l(l + 1)) ℏ    (l = 0, 1, 2, …, n − 1),

where ℏ is the reduced Planck constant h/2π and l is defined to be the angular momentum quantum number. The rule for l in atoms is given in the parentheses. Given n, the value of l can be any integer from zero up to n − 1. For example, if n = 4, then l can be 0, 1, 2, or 3.
Note that for n = 1, l can only be zero. This means that the ground-state angular momentum for hydrogen is actually zero, not ℏ as Bohr proposed. The picture of circular orbits is not valid, because there would be angular momentum for any circular orbit. A more valid picture is the cloud of probability shown for the ground state of hydrogen in Figure 30.48. The electron actually spends time in and near the nucleus. The reason the electron does not remain in the nucleus is related to Heisenberg's uncertainty principle: the electron's energy would have to be much too large to be confined to the small space of the nucleus. Now the first excited state of hydrogen has n = 2, so that l can be either 0 or 1, according to the rule in L = √(l(l + 1)) ℏ. Similarly, for n = 3, l can be 0, 1, or 2. It is often most convenient to state the value of l, a simple integer, rather than calculating the value of L from l. For example, for l = 1, we see that

L = √(1(1 + 1)) ℏ = √2 ℏ ≈ 1.49 × 10⁻³⁴ J·s.

It is much simpler to state l = 1.
As recognized in the Zeeman effect, the direction of angular momentum is quantized. We now know this is true in all circumstances. It is found that the component of angular momentum along one direction in space, usually called the z-axis, can have only certain values of L_z. The direction in space must be related to something physical, such as the direction of the magnetic field at that location. This is an aspect of relativity. Direction has no meaning if there is nothing that varies with direction, as does magnetic force. The allowed values of L_z are

L_z = m_l ℏ    (m_l = −l, −l + 1, …, −1, 0, 1, …, l − 1, l),

where L_z is the z-component of the angular momentum and m_l is the angular momentum projection quantum number. The rule in parentheses for the values of m_l is that it can range from −l to l in steps of one. For example, if l = 2, then m_l can have the five values −2, −1, 0, 1, and 2. Each m_l corresponds to a different energy in the presence of a magnetic field, so that they are related to the splitting of spectral lines into discrete parts, as discussed in the preceding section. If the z-component of angular momentum can have only certain values, then the angular momentum can have only certain directions, as illustrated in Figure 30.54.
Calculate the angles that the angular momentum vector L can make with the z-axis for l = 1, as illustrated in Figure 30.54.
Figure 30.54 represents the vectors L and L_z as usual, with arrows proportional to their magnitudes and pointing in the correct directions. L and L_z form a right triangle, with L being the hypotenuse and L_z the adjacent side. This means that the ratio of L_z to L is the cosine of the angle of interest. We can find L and L_z using L = √(l(l + 1)) ℏ and L_z = m_l ℏ.
We are given l = 1, so that m_l can be +1, 0, or −1. Thus L has the value given by L = √(l(l + 1)) ℏ = √2 ℏ.
L_z can have three values, given by L_z = m_l ℏ: namely ℏ for m_l = +1, 0 for m_l = 0, and −ℏ for m_l = −1.
As can be seen in Figure 30.54, cos θ = L_z/L, and so for m_l = +1, we have

cos θ₁ = L_z/L = ℏ/(√2 ℏ) = 1/√2 = 0.707, so that θ₁ = 45.0°.
Similarly, for m_l = 0, we find cos θ₂ = 0; thus,

θ₂ = 90.0°.
And for m_l = −1,

cos θ₃ = L_z/L = −ℏ/(√2 ℏ) = −0.707, so that θ₃ = 135.0°.
The angles are consistent with the figure. Only the angle relative to the z-axis is quantized. L can point in any direction as long as it makes the proper angle with the z-axis. Thus the angular momentum vectors lie on cones as illustrated. This behavior is not observed on the large scale. To see how the correspondence principle holds here, consider that the smallest angle (θ₁ in the example) is for the maximum value of m_l, namely m_l = l. For that smallest angle,

cos θ = L_z/L = l/√(l(l + 1)),
which approaches 1 as l becomes very large. If cos θ = 1, then θ = 0°. Furthermore, for large l, there are many values of m_l, so that all angles become possible as l gets very large.
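The worked example generalizes directly: for any l, the allowed cone angles are θ = arccos(m_l/√(l(l + 1))). The sketch below is ours, for illustration only; it reproduces the 45°, 90°, and 135° result for l = 1 and shows the smallest angle shrinking toward 0° for large l, as the correspondence-principle argument requires.

```python
import math

def allowed_angles_deg(l):
    """Angles (in degrees) between L and the z-axis for the allowed m_l = +l, ..., -l."""
    L = math.sqrt(l * (l + 1))          # magnitude of L in units of hbar
    return [math.degrees(math.acos(m / L)) for m in range(l, -l - 1, -1)]

print(allowed_angles_deg(1))       # [45.0, 90.0, 135.0]
print(allowed_angles_deg(100)[0])  # about 5.7 degrees; the cone closes up as l grows
```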
Intrinsic Spin Angular Momentum Is Quantized in Magnitude and Direction
There are two more quantum numbers of immediate concern. Both were first discovered for electrons in conjunction with fine structure in atomic spectra. It is now well established that electrons and other fundamental particles have intrinsic spin, roughly analogous to a planet spinning on its axis. This spin is a fundamental characteristic of particles, and only one magnitude of intrinsic spin is allowed for a given type of particle. Intrinsic angular momentum is quantized independently of orbital angular momentum. Additionally, the direction of the spin is also quantized. It has been found that the magnitude of the intrinsic (internal) spin angular momentum, S, of an electron is given by

S = √(s(s + 1)) ℏ    (s = 1/2 for electrons),

where s is defined to be the spin quantum number. This is very similar to the quantization of L given in L = √(l(l + 1)) ℏ, except that the only value allowed for s for electrons is 1/2.
The direction of intrinsic spin is quantized, just as is the direction of orbital angular momentum. The direction of spin angular momentum along one direction in space, again called the z-axis, can have only the values

S_z = m_s ℏ    (m_s = −1/2, +1/2)

for electrons. S_z is the z-component of spin angular momentum and m_s is the spin projection quantum number. For electrons, s can only be 1/2, and m_s can be either +1/2 or −1/2. Spin projection m_s = +1/2 is referred to as spin up, whereas m_s = −1/2 is called spin down. These are illustrated in Figure 30.53.
In later chapters, we will see that intrinsic spin is a characteristic of all subatomic particles. For some particles s is half-integral, whereas for others s is integral; there are crucial differences between half-integral spin particles and integral spin particles. Protons and neutrons, like electrons, have s = 1/2, whereas photons have s = 1, and other particles called pions have s = 0, and so on.
To summarize, the state of a system, such as the precise nature of an electron in an atom, is determined by its particular quantum numbers. These are expressed in the form (n, l, m_l, m_s); see Table 30.1. For electrons in atoms, the principal quantum number can have the values n = 1, 2, 3, …. Once n is known, the values of the angular momentum quantum number are limited to l = 0, 1, 2, …, n − 1. For a given value of l, the angular momentum projection quantum number can have only the values m_l = −l, −l + 1, …, 0, …, l − 1, l. Electron spin is independent of n, l, and m_l, always having s = 1/2. The spin projection quantum number can have two values, m_s = +1/2 or −1/2.
Table 30.1 Atomic quantum numbers

| Name | Symbol | Allowed values |
|---|---|---|
| Principal quantum number | n | 1, 2, 3, … |
| Angular momentum | l | 0, 1, 2, …, n − 1 |
| Angular momentum projection | m_l | −l, −l + 1, …, 0, …, l − 1, l |
| Spin¹ | s | 1/2 (electrons) |
| Spin projection | m_s | −1/2, +1/2 |
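A compact way to see these rules working together is to enumerate the allowed combinations. The sketch below is illustrative (not from the text); it lists every (n, l, m_l, m_s) state for a given n and confirms the familiar count of 2n² states per shell.

```python
def electron_states(n):
    """All allowed (n, l, m_l, m_s) combinations for principal quantum number n."""
    states = []
    for l in range(n):                      # l = 0, 1, ..., n - 1
        for m_l in range(-l, l + 1):        # m_l = -l, ..., 0, ..., +l
            for m_s in (-0.5, +0.5):        # spin projection; s = 1/2 always
                states.append((n, l, m_l, m_s))
    return states

for n in (1, 2, 3):
    print(n, len(electron_states(n)))       # 2, 8, 18 -> 2 * n**2 states
```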
Figure 30.55 shows several hydrogen states corresponding to different sets of quantum numbers. Note that these clouds of probability are the locations of electrons as determined by making repeated measurements—each measurement finds the electron in a definite location, with a greater chance of finding the electron in some places rather than others. With repeated measurements, the pattern of probability shown in the figure emerges. The clouds of probability do not look like nor do they correspond to classical orbits. The uncertainty principle actually prevents us and nature from knowing how the electron gets from one place to another, and so an orbit really does not exist as such. Nature on a small scale is again much different from that on the large scale.
We will see that the quantum numbers discussed in this section are valid for a broad range of particles and other systems, such as nuclei. Some quantum numbers, such as intrinsic spin, are related to fundamental classifications of subatomic particles, and they obey laws that will give us further insight into the substructure of matter and its interactions.
The classic Stern-Gerlach Experiment shows that atoms have a property called spin. Spin is a kind of intrinsic angular momentum, which has no classical counterpart. When the z-component of the spin is measured, one always gets one of two values: spin up or spin down.
¹ The spin quantum number s is usually not stated, since it is always 1/2 for electrons.
THE PYTHAGOREAN DISTANCE FORMULA
BASIC TO TRIGONOMETRY and calculus is the theorem that relates the squares drawn on the sides of a right-angled triangle. Credit for proving the theorem goes to the Greek philosopher Pythagoras, who lived in the 6th century B.C.
Here is the statement of the theorem:
In a right triangle the square drawn on the side opposite the right angle is equal to the squares drawn on the sides that make the right angle.
That means that if ABC is a right triangle with the right angle at A, then the square drawn on BC opposite the right angle, is equal to the two squares together on CA, AB.
In other words, if it takes one can of paint to paint the square on BC, then it will also take exactly one can to paint the other two squares.
The side opposite the right angle is called the hypotenuse ("hy-POT'n-yoos"), which literally means stretching under.
Algebraically, if the hypotenuse is c, and the sides are a, b:
a² + b² = c².
For a proof, see below.
Problem 1. State the Pythagorean theorem in words.
In a right triangle the square on the side opposite the right angle will equal the squares on the sides that make the right angle.
Problem 2. Calculate the length of the hypotenuse c when the sides are as follows.
a) a = 5 cm, b = 12 cm. Then c = √(5² + 12²) = √(25 + 144) = √169 = 13 cm.
b) a = 3 cm, b = 6 cm. Then c = √(3² + 6²) = √(9 + 36) = √45 = 3√5 cm.
Since 9 is a square number, and a common factor of 9 and 36, then we may anticipate simplifying the radical by writing 9 + 36 = 9(1 + 4) = 9· 5.
We could, of course, have written 9 + 36 = 45 = 9· 5. But that first wipes out the square number 9. We then have to bring it back.
The distance d of a point (x, y) from the origin
According to the Pythagorean theorem, and the meaning of the rectangular coördinates (x, y),
d² = x² + y².
"The distance of a point from the origin
Example 1. How far from the origin is the point (4, −5)? d = √(4² + (−5)²) = √(16 + 25) = √41.
Problem 3. How far from the origin is the point (−5, −12)?
The distance between any two points
How far is it from (4, 3) to (15, 8)?
Consider the distance d as the hypotenuse of a right triangle. Then according to Lesson 31, Problem 5, the coördinates at the right angle are (15, 3).
Therefore, the horizontal leg of that triangle is simply the distance from 4 to 15: 15 − 4 = 11.
The vertical leg is the distance from 3 to 8: 8 − 3 = 5. Therefore, d² = 11² + 5² = 121 + 25 = 146, so that d = √146.
To find a formula, let us use subscripts and label the two points as
(x₁, y₁) ("x-sub-1, y-sub-1") and (x₂, y₂) ("x-sub-2, y-sub-2").
The subscript 1 labels the coördinates of the first point; the subscript 2 labels the coördinates of the second. We write the absolute value because distance is never negative.
Here then is the Pythagorean distance formula between any two points:

d = √((x₂ − x₁)² + (y₂ − y₁)²)
It is conventional to denote the difference of x-coördinates by the symbol Δx ("delta-x"):
Δx = x₂ − x₁
Δy = y₂ − y₁
Example 2. Calculate the distance between the points (1, 3) and (4, 8). Here Δx = 4 − 1 = 3 and Δy = 8 − 3 = 5, so that d = √(3² + 5²) = √(9 + 25) = √34.
Note: It does not matter which point we call the first and which the second. Alternatively, Δx = 1 − 4 = −3 and Δy = 3 − 8 = −5. But (−3)² = 9, and (−5)² = 25. The distance between the two points is the same.
Example 3. Calculate the distance between the points (−8, −4) and (1, 2). Here Δx = 1 − (−8) = 9 and Δy = 2 − (−4) = 6, so that d = √(9² + 6²) = √(81 + 36) = √117 = √(9 · 13) = 3√13.
Problem 4. Calculate the distance between (2, 5) and (8, 1)
Problem 5. Calculate the distance between (−11, −6) and (−16, −1)
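For readers who want to check such calculations mechanically, here is a minimal sketch of the distance formula in Python; the function name and test points are ours, chosen to match Examples 2 and 3.

```python
import math

def distance(p1, p2):
    """Pythagorean distance between points p1 = (x1, y1) and p2 = (x2, y2)."""
    dx = p2[0] - p1[0]   # delta-x
    dy = p2[1] - p1[1]   # delta-y
    return math.sqrt(dx**2 + dy**2)

print(distance((1, 3), (4, 8)))     # sqrt(34), about 5.83  (Example 2)
print(distance((-8, -4), (1, 2)))   # 3*sqrt(13), about 10.82  (Example 3)
```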
A proof of the Pythagorean theorem
Let a right triangle have sides a, b, and hypotenuse c. And let us arrange four of those triangles to form a square whose side is a + b. (Fig. 1)
Now, the area of that square is equal to the sum of the four triangles, plus the interior square whose side is c.
Two of those triangles taken together, however, are equal to a rectangle whose sides are a, b. The area of such a rectangle is a times b: ab. Therefore the four triangles together are equal to two such rectangles. Their area is 2ab.
As for the square whose side is c, its area is simply c2. Therefore, the area of the entire square is
c² + 2ab . . . (1)
At the same time, an equal square with side a + b (Fig. 2) is made up of a square whose side is a, a square whose side is b, and two rectangles with sides a, b. Its area is therefore
a² + b² + 2ab.
But this is equal to the square formed by the triangles, line (1):
a² + b² + 2ab = c² + 2ab.
Therefore, on subtracting the two rectangles 2ab from each square, we are left with
a² + b² = c².
That is the Pythagorean theorem.
Space junk is classified by NASA according to whether it is of natural or artificial origin, with the latter defined as ‘any man-made object in orbit around the Earth which no longer serves a useful function’. It is the accumulation of this category of space junk which poses a particularly catastrophic threat to humankind’s future in space exploration, due to increased risk of collision with and damage to functioning satellites. It could also have detrimental effects on Earth’s environment.
Artificial, or orbital, space junk consists of objects ranging from paint flecks from functioning space stations to those as large as decades-old, inoperative spacecraft. As of February 2020, the European Space Agency (ESA) reports that approximately 22,300 pieces of debris are tracked on a regular basis by Space Surveillance Networks. Statistically, however, the numbers are likely to be much higher. The count of artificial objects in orbit around the Earth that are greater than 10cm in length is likely to be approximately 34 000, with approximately 900 000 objects between 1cm and 10cm. For those objects between 1mm and 1cm, the count is some 128 million. Consequently, the sheer number of these objects currently in orbit, and their potential to slam into other objects at speeds of up to 5 miles per second, means that the risk of causing serious damage to functioning spacecraft is significant. In 2006, the International Space Station's (ISS) fused-silica and borosilicate-glass fortified window suffered a 7mm chip due to an impact from a piece of space debris no larger than a few thousandths of a millimetre across. It is easy to see the threat posed by much larger objects.
A single collision can generate thousands of particles of space junk. In 2009, the inactive Russian satellite Cosmos 2251 collided with the active American communication satellite Iridium 33 approximately 804 km above Siberia, resulting in approximately 2 000 pieces of debris at least 10cm in diameter, and thousands more smaller pieces, entering Earth orbit. It is estimated that over 50% of the debris from Iridium 33 will remain in orbit for at least a century, and that of Cosmos 2251 for at least 20 to 30 years.
Most space junk is located in what is known as low Earth orbit – the zone within approximately 2 011 km of the planet's surface, in which many satellites, such as the ISS and NASA's Earth Observing Fleet System, operate. Allowing space junk to accumulate, and thereby increase the risk of further collisions similar to that between Iridium 33 and Cosmos 2251, poses a great risk to future space exploration.
The more than 4,700 launches that have been conducted across the globe since Sputnik 1 in 1957 have resulted in a steep upward trend in material mass in Earth orbit, which has exceeded 700 metric tons and shows no signs of relenting. According to computer simulations focusing on the next 200 years, over this time debris larger than approximately 20 cm across will multiply 1.5 times. Debris between 10 cm and 20 cm is set to multiply 3.2 times, and debris smaller than 10 cm will increase by a factor of 13 to 20. The risk this poses to satellites such as the ISS, which as of 2016 has had to perform 25 debris collision avoidance manoeuvres since 1999, is considerable.
The problem is not confined to the risk posed to space exploration. A proportion of the space junk in low Earth orbit will gradually lose altitude and burn up in Earth's atmosphere; larger debris, however, can occasionally reach Earth's surface and have detrimental effects on the environment. For example, debris from Russian Proton rockets, launched from the Baikonur cosmodrome in Kazakhstan, litters the Altai region of eastern Siberia. This includes debris from old fuel tanks containing highly toxic fuel residue, unsymmetrical dimethylhydrazine (UDMH), a carcinogen which is harmful to plants and animals. While efforts are made to contain fallout from launches within a specified area, this is extremely difficult to achieve completely.
Anatoly Kuzin, deputy director of Khrunichev State Research and Production Space Centre, which manufactures Proton rockets, maintains that thorough testing shows no correlation between reported illnesses in affected areas and the rocket launches. Testimonies from locals, however, refer to a disproportionate number of cancer cases in the area which many believe is related to the UDMH in the fuel tank debris; in 2007, 27 people were hospitalised in the Ust-Kansky District of Altai with cancer-related complications, many of them citing the rocket fuel as the suspected cause.
Efforts to tackle the problem started in the 1990s, with NASA's orbital debris mitigation policy and guidelines. The U.S. National Space Policies of 2006 and 2010 emphasise the necessity of implementing the U.S. Government Orbital Debris Mitigation Standard Practices, which prioritise debris-release control, the selection of safe flight profiles and operational configurations, and the secure disposal of space apparatus after a mission. 2002 saw the first internationally recognised consensus standard on orbital debris mitigation guidelines, put in place by the Inter-Agency Space Debris Coordination Committee.
However, head of the ESA’s space-debris office in Germany, Holger Krag, estimates that only half of all space emissions abide by these guidelines. The introduction of mega-constellations – mass groupings of artificial satellites – into low Earth orbit, Krag warns, will bring the need to remove failed satellites from space, on which most companies will not want to spend money. NASA warns against the accumulation of mega-constellations and miniature satellites such as CubeSats, which will do nothing to alleviate the growing problem.
In May 2020, economists at the University of Colorado Boulder proposed attaching an annual fee, rising 14% per year, to each satellite put into orbit in the hope that the fee might discourage the unnecessary accumulation of space junk. Other measures proposed over the years have included removal of large pieces of debris with instruments such as harpoons and lasers, the development of self-removing satellites, and the coating of satellites in polymeric foam, to allow them to descend into the Earth’s atmosphere and burn up. As yet, however, there is no universally recognised solution to the problem. More spaceflight companies must adhere to the guidelines set out by the Inter-Agency Space Debris Coordination Committee, and it is vital that the movement to reduce future accumulation of space junk becomes a more cohesive, vigorous effort. |
Three Axes of Flight
All manoeuvring flight takes place around one or more of the three axes of rotation. They are called the longitudinal, lateral and vertical axes of flight. The common reference point for the three axes is the airplane’s centre of gravity (CG), which is the theoretical point where the entire weight of the airplane is considered to be concentrated. Since all three axes pass through this point, you can say that the airplane always moves about its CG, regardless of which axis is involved. The ailerons, elevator, and rudder create aerodynamic forces which cause the airplane to rotate about the three axes.
Now consider what happens when you apply control pressure to begin a turn. When you deflect the ailerons, they create an immediate rolling movement about the longitudinal axis. Since the ailerons always move in opposite directions, the aerodynamic shape of each wing and its production of lift is affected differently.
One of the first things you will learn during flight is that the rolling movement about the longitudinal axis will continue as long as the ailerons are deflected. To stop the roll, you must relax control pressure and return the ailerons to their original, or neutral, position. This is called neutralizing the controls.
Roll movement about the longitudinal axis is produced by the ailerons.
Since the horizontal stabilizer is an airfoil, the action of the elevator (or stabilator) is quite similar to that of the aileron. Essentially, the chord line and effective camber of the stabilizer are changed by deflection of the elevator.
Pitch movement about the lateral axis is produced by the elevator (stabilator).
Movement of the control wheel fore and aft causes motion about the lateral axis. Typically, this is referred to as an adjustment to pitch, or a change in pitch attitude. For example, when you move the control wheel forward, it causes movement about the lateral axis that decreases the airplane’s pitch attitude. A decrease in pitch attitude decreases the angle of attack. Conversely, an increase in pitch attitude increases the angle of attack.
When you apply pressure on the rudder pedals, the rudder deflects into the airstream. This produces an aerodynamic force that rotates the airplane about its vertical axis. This is referred to as yawing the airplane. The rudder may be displaced either to the left or right of centre, depending on which rudder pedals you depress.
Yaw movement about the vertical axis is produced by the rudder.
FORCES ACTING ON A CLIMBING AIRPLANE
When you transition from level flight into a climb, you must combine the change in pitch attitude with an increase in power. If you attempt to climb just by pulling back on the control wheel to raise the nose of the airplane, momentum will cause a brief increase in altitude, but airspeed will soon decrease.
An airplane climbs because of excess thrust, not excess lift.
The amount of thrust generated by the propeller for cruising flight at a given airspeed is not enough to maintain the same airspeed in a climb. Excess thrust, not excess lift, is necessary for a sustained climb. In fact during a vertical climb, the wings supply no lift, and thrust is the only force opposing weight.
FORCES ACTING ON A DESCENDING AIRPLANE
Let’s continue our discussion by considering the forces of weight, lift, thrust and drag as they affect a descending airplane. If you are using power during a stabilized descent, the four forces are in equilibrium. During the descent, a component of weight acts forward along the flight path. As speed increases, this force is balanced by an increase in parasite drag.
In a descent, a component of weight acts forward along the flight path.
During a power-off glide, the throttle is placed in an idle position so the engine and propeller produce no thrust. In this situation the source of the airplane’s thrust is provided only by the component of weight acting forward along the flight path. In a steady, power-off glide, the forward component of weight is equal to and opposite drag.
CONSTANT AIRSPEED DESCENT
Once you have established a state of equilibrium for a constant airspeed descent, the efficiency of the glide will be affected if you increase drag. For example, if you lower the landing gear, both parasite and total drag increase. To maintain the airspeed you held before the landing gear was extended, you have to lower the nose of the airplane.
You can also increase drag by descending at a speed that creates more drag than necessary. Any speed other than the recommended glide speed creates more drag. If you descend with the speed too high, parasite drag increases; if you descend with the speed too slow, induced drag increases.
GLIDE ANGLE AND GLIDE SPEED
During a descent, the angle between the actual glide path of your airplane and the horizon usually is called the glide angle. Your glide angle increases as drag increases, and decreases as drag decreases. Since a decreased glide angle, or a shallower glide provides the greatest gliding distance, minimum drag normally produces the maximum glide distance.
The way to minimize drag is to fly at an airspeed that results in the most favourable lift-to-drag ratio. This important performance speed is called the best glide speed. In most cases, it is the only speed that will give you the maximum gliding distance. However, with a very strong headwind, you may need a slightly higher glide speed, while a slower speed may be recommended to take advantage of a strong tailwind.
The lift-to-drag ratio (L/D) can be used to measure the gliding efficiency of your airplane. The airspeed resulting in the least drag on your airplane will give the maximum L/D ratio (L/D max), the best glide angle, and the maximum gliding distance.
The higher the value of L/D max, the better the glide ratio.
The glide ratio represents the distance an airplane will travel forward, without power, in relation to altitude loss. For example, a glide ratio of 10:1 means that an airplane will descend one foot for every 10 feet of horizontal distance it travels. Since the throttle is closed in a power-off glide, the pitch attitude must be adjusted to maintain the best glide speed.
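To make the ratio concrete, here is a minimal Python sketch of the same arithmetic; the 5,000-foot altitude and 10:1 glide ratio are illustrative values, not figures for any particular airplane.

```python
# Minimal sketch: gliding distance from altitude and glide ratio.
# The 10:1 ratio and 5,000 ft altitude are illustrative assumptions, not POH data.

def glide_distance_ft(altitude_ft, glide_ratio):
    """Forward distance travelled (ft) for a given altitude loss at a fixed glide ratio."""
    return altitude_ft * glide_ratio

altitude_ft = 5000      # height above the ground (assumed)
glide_ratio = 10        # 10:1 means 10 ft forward for every 1 ft of descent

print(glide_distance_ft(altitude_ft, glide_ratio))         # 50000 ft
print(glide_distance_ft(altitude_ft, glide_ratio) / 6076)  # roughly 8.2 nautical miles
```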
In the event of an engine failure, maintaining the best glide speed becomes even more important. This is especially true for a power failure after becoming airborne. Promptly establishing the correct gliding attitude and airspeed is critical. For a loss of power during flight, using the right speed could make the difference between successfully gliding to a suitable area or landing short of it.
If a power failure occurs after takeoff, immediately establish the proper gliding attitude and airspeed.
EFFECT OF WEIGHT ON THE GLIDE
Variations in weight do not affect the glide angle, provided you use the correct airspeed for each weight. Normally, optimum, or best, glide speeds are given in the pilot’s operating handbook (POH) for typical weight ranges. A fully loaded airplane requires a higher airspeed than the same airplane with a light load. Although the heavier airplane sinks faster and will reach the ground sooner, it will travel the same distance as a lighter airplane as long as you maintain the correct glide speed for the increased weight.
An airplane’s maximum gliding distance is unaffected by weight, but the best glide airspeed increases with weight.
FORCES ACTING ON A TURNING AIRPLANE
From our discussion about the three axes of rotation, you learned that ailerons control roll movement about the longitudinal axis, and the rudder controls yaw movement about the vertical axis. Coordinated turns require you to use both of these flight controls. You use the ailerons to roll into or out of a bank and, at the same time, you use the rudder to control yaw.
The horizontal component of lift causes an airplane to turn
Before your airplane turns, however, it must overcome inertia, or its tendency to continue in a straight line. You create the necessary turning force by banking the airplane so that the direction of lift is inclined. Now, one component of lift still acts vertically to oppose weight, just as it did in straight-and-level flight, while another acts horizontally. To maintain altitude, you will need to increase lift by increasing back pressure and, therefore, the angle of attack until the vertical component of lift equals weight. The horizontal component of lift, called centripetal force, is directed inwards, towards the centre of rotation. It is this centre-seeking force which causes the airplane to turn. Centripetal force is opposed by centrifugal force, which acts outwards from the centre of rotation. When the opposing forces are balanced, the airplane maintains a constant rate of turn without gaining or losing altitude.
When you roll into a turn, the aileron on the inside of the turn is raised, and the aileron on the outside of the turn is lowered. The lowered aileron on the outside increases the angle of attack and produces more lift for that wing. Since induced drag is a by-product of lift, you can see that the outside wing also produces more drag than the inside wing. This causes a yawing tendency towards the outside of the turn, which is called adverse yaw (Figure 1-36).
The coordinated use of aileron and rudder corrects for adverse yaw when you roll into or out of a turn. For a turn to the left, you depress the left rudder pedal slightly as you roll into the left turn. Once you are established in the turn, you relax both aileron and rudder pressures and neutralize the controls. Then, when you want to roll out of the turn, you apply coordinated right aileron and rudder pressure to return to a wings-level attitude.
The basic purpose of the rudder on an airplane is to control yawing.
During your initial flight training, you will learn how to manoeuvre the airplane through coordinated use of the controls. As you enter a turn and increase the angle of bank, you may notice the tendency of the airplane to continue rolling into an even steeper bank, even though you neutralize the ailerons. This overbanking tendency is caused by the additional lift on the outside, or raised, wing. This adds to the lift, and the combined effects tend to roll the airplane beyond the desired bank angle. The overbanking tendency is most pronounced at high angles of bank. To correct for this tendency, you will have to develop a technique of holding just enough opposite aileron, away from the turn, to maintain your desired angle of bank. Overbanking tendency exists, to some degree, in almost all airplanes.
So far in the discussion you have looked at the combination of opposing forces acting on a turning airplane. Now it’s time to examine load factors induced during turning flight. To better understand these forces, picture yourself on a roller coaster. As you enter a banked turn during the ride, the forces you will experience are very similar to the forces which act on a turning airplane. On a roller coaster, the resultant force created by the combination of weight and centrifugal force presses you down into your seat. This pressure is an increased load factor that causes you to feel heavier in the turn than when you are on a flat portion of the track.
The increased weight you feel during a turn in a roller coaster is also experienced in an airplane. In a turning airplane, however, you must compensate for the increase in weight and loss of vertical lift, or you will lose altitude. You can do this by increasing the angle of attack with back pressure on the control wheel. The increase in the angle of attack increases the total lift of the airplane. Keep in mind that when you increase lift, drag also increases. This means you must also increase thrust if you want to maintain your original airspeed and altitude. An airplane in a coordinated, level turn is in a state of equilibrium, where opposing forces are in balance. This is similar to the state of equilibrium that exists during unaccelerated, straight-and-level flight.
During turning manoeuvres, weight and centrifugal force combine into a resultant which is greater than weight alone. Additional loads are imposed on the airplane, and the wings must support the additional load factor. In other words, when you are flying in a curved flight path, the wings must support not only the weight of the airplane and its contents, but they also must support the load imposed by centrifugal force.
The load factor imposed on an airplane will increase as the angle of bank is increased.
Load factor is the ratio of the load supported by the airplane’s wings to the actual weight of the aircraft and its contents. If the wings are supporting twice as much weight as the weight of the airplane and its contents, the load factor is two. You are probably more familiar with the term “G-forces” as a way to describe flight loads caused by aircraft manoeuvring. “Pulling G’s” is common terminology for higher performance airplanes. For example, an acrobatic category airplane may pull three or four G’s during a manoeuvre. An airplane in cruising flight, while not accelerating in any direction, has a load factor of one. The one-G condition means the wings are supporting only the actual weight of the airplane and its contents.
A positive load factor occurs when centrifugal force acts in the same direction as weight. Whenever centrifugal force acts in a direction opposite weight, a negative load is imposed. For example, if you abruptly push the control wheel forward while flying, you would experience a sensation as if your weight suddenly decreased. This is caused by centrifugal force acting upward, which tends to overcome your actual body weight. If the centrifugal force equaled your actual body weight, you would experience a “weightless” sensation of zero G’s. A negative G-loading occurs when the centrifugal force exceeds your body weight. In rare instances, you may experience a rapid change in G-forces. For example, in extremely turbulent air, you might be subjected to positive G’s, then negative G’s and sometimes sideward G-forces. Sideward G-forces are called transverse G-forces.
LOAD FACTOR AND STALL SPEED
Earlier you learned that you can stall an airplane at any airspeed and in any flight attitude. You can easily stall an airplane in a turn at a higher-than-normal speed. As the angle of bank increases in level turns, you must increase the angle of attack to maintain altitude. As you increase the angle of bank, the stall speed increases (Figure 1-38).
Actually, stall speed increases in proportion to the square root of the load factor. If you are flying an airplane with a one-G stalling speed of 55 knots, you can stall it at twice that speed (110 knots) with a load factor of four G’s. Stalls that occur with G-forces on an airplane are called accelerated stalls. An accelerated stall occurs at a speed higher than the normal one-G stall speed. These stalls demonstrate that the critical angle of attack, rather than speed, is the reason for a stall. Stalls also occur at unusually high speeds in severe turbulence, or in low-level wind shear.
Increasing the load factor will cause an airplane to stall at a higher speed.
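As a quick numerical check of the square-root relationship described above, here is a minimal Python sketch; the 55-knot one-G stall speed is simply the example value used in the text, not a figure for any particular airplane.

```python
import math

# Sketch of the relationship above: stall speed grows with the square root of
# the load factor. The 55-knot one-G stall speed is the text's example value.

def accelerated_stall_speed(vs_1g_knots, load_factor_g):
    return vs_1g_knots * math.sqrt(load_factor_g)

print(accelerated_stall_speed(55, 1))  # 55.0 knots (unaccelerated)
print(accelerated_stall_speed(55, 2))  # about 77.8 knots
print(accelerated_stall_speed(55, 4))  # 110.0 knots, as in the text
```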
LIMIT LOAD FACTOR
When the FAA certifies an airplane, one of the criteria they look at is how much stress the airplane can withstand. The limit load factor is the number of G’s an airplane can sustain, on a continuing basis, without causing permanent deformation or structural damage. In other words, the limit load factor is the amount of positive or negative G’s an airframe is capable of supporting.
Most small general aviation airplanes with a gross weight of 12,500 pounds or less, and nine passenger seats or less, are certified in either the normal, utility, or acrobatic categories. A normal category airplane is certified for nonacrobatic manoeuvres. Training manoeuvres and turns not exceeding 60 degrees of bank are permitted in this category. The maximum limit load factor in the normal category is 3.8 positive G’s and 1.52 negative G’s. In other words, the airplane’s wings are designed to withstand 3.8 times the actual weight of the airplane and its contents during manoeuvring flight. By following proper loading techniques and flying within the limits listed in the pilot’s operating handbook, you will avoid excessive loads on the airplane, and possible structural damage.
In addition to those manoeuvres permitted in the normal category, an airplane certified in the utility category may be used for several manoeuvres requiring additional stress on the airframe. A limit of 4.4 positive G’s or 1.76 negative G’s is permitted in the utility category. Some, but not all, utility category airplanes are also approved for spins. An acrobatic category airplane may be flown in any flight attitude as long as its limit load factor does not exceed six positive G’s or three negative G’s.
A key factor for you to remember is that it is possible to exceed design limits for load factor during manoeuvres. For example, if you roll into a steep, level turn of 75 degrees, you will put approximately four G’s on the airplane. This is above the maximum limit of 3.8 G’s for an airplane in the normal category. You also should be aware of the conditions specified for the maximum load limit. If flaps are extended, for instance, the maximum load limit normally is less. The POH for the airplane you are flying is your best source of load limit information.
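The “approximately four G’s at 75 degrees” figure follows from the standard level-turn relationship, load factor = 1 / cos(bank angle), which is not spelled out in the excerpt above. The sketch below applies it to a few illustrative bank angles.

```python
import math

# Sketch: load factor in a constant-altitude, coordinated turn is 1 / cos(bank).
# This is the standard level-turn relationship; the bank angles are illustrative.

def level_turn_load_factor(bank_deg):
    return 1.0 / math.cos(math.radians(bank_deg))

for bank in (30, 45, 60, 75):
    print(bank, round(level_turn_load_factor(bank), 2))
# 30 -> 1.15, 45 -> 1.41, 60 -> 2.0, 75 -> 3.86 (roughly the "four G's" mentioned above)
```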
An important airspeed related to load factors and stall speed is the design manoeuvring speed (Va). This limiting speed normally is not marked on the airspeed indicator, since it may vary with total weight. The POH and/or a placard in the airplane are the best source for determining Va. Although some handbooks may designate only one manoeuvring speed, others may show several. When more than one is specified, you will notice that Va decreases as weight decreases. An aircraft operating at lighter weights is subject to more rapid acceleration from gusts and turbulence than a more heavily loaded one.
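One common rule of thumb, which is an assumption here and not something stated in the manual excerpt, is that Va scales with the square root of the weight ratio. The sketch below illustrates the idea with made-up numbers; the POH figures always take precedence.

```python
import math

# A common approximation (not a substitute for POH figures): manoeuvring speed
# scales with the square root of the weight ratio. All numbers are illustrative.

def va_at_weight(va_max_gross_kts, current_weight_lb, max_gross_weight_lb):
    return va_max_gross_kts * math.sqrt(current_weight_lb / max_gross_weight_lb)

print(round(va_at_weight(105, 2400, 2400), 1))  # 105.0 kts at max gross (assumed)
print(round(va_at_weight(105, 2000, 2400), 1))  # about 95.9 kts when lighter
```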
Any airspeed in excess of Va can overstress the airframe during abrupt manoeuvres or turbulence. The higher the airspeed, the greater the amount of excess load that can be imposed before a stall occurs. Va represents the maximum speed at which you can use full, abrupt control movement without overstressing the airframe. If you are flying at or below this speed, any combination of pilot-induced control movement, or gust loads resulting from turbulence, should not cause an excessive load on the airplane. This is why you should always fly at or below Va during turbulent conditions.
The amount of excess load that can be imposed on the airframe depends on the aircraft’s speed.
The design manoeuvring speed also is the maximum speed at which you can safely stall an airplane. If you stall the airplane above this speed, you will generate excessive G-loads. At or below this speed, the airplane will stall before excessive G-forces build up. By staying at or below Va you will avoid the possibility of overstressing or even damaging the airplane.
No discussion of the aerodynamics of flight would be complete without considering spins. Your awareness of what causes spins, and how you can avoid them, is very important.
A spin is defined as an aggravated stall which results in autorotation. In order for a spin to develop, a stall must first occur. The spin results when one wing stalls before the other and begins to drop. Although both wings are stalled during a spin, they are both producing some lift. However, the outer (or rising) wing produces more lift than the inner (or lowering) wing. This imbalance in lift contributes to the aircraft’s rolling and yawing motion while it is in the spin.
Many airplanes are prohibited from spin manoeuvres. For example, airplanes certified by the FAA in the normal category are prohibited from spins, which is also true of some airplanes in the utility category. Airplanes that are not certified for spins may not be recoverable from fully developed spins.
To enter a spin, an airplane must first be stalled. In a spin, both wings are in a stalled condition.
Although stress loads usually are not severe during a spin, an erratic recovery may impose excessive loads on the airframe that could result in an accelerated stall or structural failure. For example, some airplanes have a placard displayed on the panel which tells you not to enter a spin when passengers are in the rear seats. This is because the passengers move the centre of gravity to an aft position. Recovery from a spin in an airplane with aft loading may be difficult or even impossible.
Specific recovery techniques also vary with different makes and models of airplanes. This is why you should never intentionally spin an airplane without an experienced instructor on board the aircraft. If you enter a spin inadvertently, you should follow the procedure outlined by the manufacturer of your airplane. The following procedure pertains to a general recovery procedure, but it should not be applied arbitrarily, without regard for the manufacturer’s recommendation.
Since an airplane must be in a stalled condition before it will spin, the first thing you should do is to try to recover from the stall before the spin develops. If your reaction is too slow and a spin develops, move the throttle to idle and make sure the flaps are up. Next, apply full rudder deflection opposite to the direction of the turn. As the rotation slows, briskly position the elevator forward of the neutral position to decrease the angle of attack. As the rotation stops, neutralize the rudder and smoothly apply back pressure to recover from the nose-down pitch attitude. During recovery from the dive, make sure you avoid excessive airspeed. This could lead to high G-forces, which could cause an accelerated stall, or even result in structural failure.
Propeller-driven airplanes are subject to several left-turning tendencies caused by a combination of physical and aerodynamic forces – torque, gyroscopic precession, asymmetrical thrust, and spiralling slipstream.
You will need to compensate for these forces, especially when you are flying in high-power, low-airspeed flight conditions following takeoff or during the initial climb. If you know what is happening to the airplane, you will have a better idea of how to correct for these tendencies.
In airplanes with a single engine, the propeller rotates clockwise when viewed from the pilot’s seat. Torque can be understood most easily by remembering Newton’s third law of motion. The clockwise action of a spinning propeller causes a torque reaction which tends to rotate the airplane counterclockwise about its longitudinal axis.
Torque effect is greatest in a single-engine airplane during a low-airspeed, high power flight condition.
Generally, aircraft have design adjustments which compensate for torque while in cruising flight, but you will have to take corrective action during other phases of flight. Some airplanes have aileron trim tabs which you can use to correct for the effects of torque at various power settings.
The turning propeller of an airplane also exhibits characteristics of a gyroscope – rigidity in space and precession. The characteristic that produces a left-turning tendency is precession. Gyroscopic precession is the resultant reaction of a spinning object when a force is applied to it. The reaction to a force applied to a gyro acts in the direction of rotation, approximately 90 degrees ahead of the point where the force is applied.
When you are flying single-engine at a high angle of attack, the descending blade of the propeller takes a greater “bite” of air than the ascending blade on the other side. The greater bite is caused by a higher angle of attack for the descending blade, compared to the ascending blade. This creates the uneven, or asymmetrical thrust, which is known as the P-factor. P-factor makes an airplane yaw about its vertical axis to the left.
P-factor results from the descending propeller blade on the right producing more thrust than the ascending blade on the left.
You should remember that P-factor is most pronounced when the engine is operating at a high- power setting, and when the airplane is flown at a high angle of attack. In level cruising flight, P-factor is not apparent, since both ascending and descending propeller blades are at nearly the same angle of attack, and are creating approximately the same amount of thrust.
P-factor causes an airplane to yaw to the left when it is at high angles of attack.
As the propeller rotates, it produces a backward flow of air, or slipstream, which wraps around the airplane. This spiralling slipstream causes a change in the airflow around the vertical stabilizer. Due to the direction of the propeller rotation, the resultant slipstream strikes the left side of the vertical fin, pushing the tail to the right and yawing the nose to the left.
Another significant aerodynamic consideration is the phenomenon of ground effect. During takeoffs and landings when you are flying very close to the ground, the earth’s surface interferes with the airflow and actually alters the three-dimensional airflow pattern around the airplane. This causes a reduction in wingtip vortices and a decrease in upwash and downwash.
An airplane is usually in ground effect when it is less than the height of the airplane’s wingspan above the surface.
Wingtip vortices are caused by the air beneath the wing rolling up and around the wingtip. This causes a spiral vortex that trails behind each wingtip whenever lift is being produced. Wingtip vortices are another factor contributing to induced drag. Upwash and downwash refer to the effect an airfoil exerts on the free airstream. Upwash is the deflection of the oncoming airstream upward and over the wing. Downwash is the downward deflection of the airstream as it passes over the wing and past the trailing edge.
Ground effect reduces induced angle of attack and induced drag.
If you remember how angle of attack influences induced drag, it will help you understand ground effect. During flight, the downwash of the airstream causes the relative wind to be inclined downwards in the vicinity of the wing. This is referred to as the average relative wind. The angle between the free airstream relative wind and the average relative wind is the induced angle of attack. In effect, the greater the downward deflection of the airstream, the higher the induced angle of attack and the higher the induced drag. Since ground effect restricts the downward deflection of the airstream, both the induced angle of attack and induced drag decrease. When the wing is at a height equal to its span, the decline in induced drag is only about 1.4 %; when the wing is at a height equal to one-tenth its span, the loss of induced drag is about 48%.
Ground effect allows an airplane to become airborne before it reaches its recommended takeoff speed.
With the reduction of induced angle of attack and induced drag in ground effect, the amount of thrust required to produce lift is reduced. What this means is that your airplane is capable of lifting off at a lower-than-normal speed. Although you might initially think that this is desirable, consider what happens as you climb out of ground effect. The power (thrust) required to sustain flight increases significantly as the normal airflow around the wing returns and induced drag suddenly increases. If you attempt to climb out of ground effect before reaching the speed for a normal climb, you might sink back to the surface.
In ground effect, induced drag decreases, and excess speed in the flare may cause floating.
Ground effect is noticeable in the landing phase of flight, too, just before touchdown. Within one wingspan above the ground, the decrease in induced drag makes your airplane seem to float on the cushion of air beneath it. Because of this, a power reduction is usually required during the flare to help the airplane land. Although all airplanes may experience ground effect, it is more noticeable in low-wing airplanes, simply because their wings are closer to the ground.
Courtesy of: Private Pilot Manual, published by Jeppesen Sanderson Inc., 1991, CO, USA. |
We use math every day without realizing it. From buying stuff to navigating across town, math is everywhere. This would be fine if we only had to work with whole numbers, but fractional numbers fill our world. We have to know how to deal with these decimal numbers in order to live and play. We have to work with decimals to divide up the check at a restaurant or bar. We have to divide decimal numbers to know when we must fill up the gasoline tank before we go on a trip. Dividing decimals by decimals isn’t rocket science. Anyone can do it. You just have to understand what decimal numbers are.
Decimal numbers are a way to save space while writing fractions. Instead of writing the fractions as two numbers arranged vertically with a bar between them, you can write any fraction as a single number followed by a period. These numbers are technically just the top numbers from fractions with a power of 10 on the bottom. Thus, 0.23 is the fraction 23/100. We write a 0 in front of the decimal when there is no whole number associated with it.
This short review of decimal numbers exists to show you the reasoning behind the techniques I am about to show you for dividing decimals by decimals. Since decimals are really fractions, you are really dividing a fraction by a fraction, and the techniques will reflect this. You should also note that we can view division as a fraction itself. When we say “4 divided by 2”, we are describing the fraction 4/2. The beauty of fractions is that we can multiply them by any other fraction that has the value of one to create an equivalent fraction. For instance, we can multiply 4/2 by 10/10 to get 40/20, which we can easily see still evaluates to 2. Dividing decimals by decimals requires us to do the same to create a much easier problem.
Division is the opposite operation to multiplication. It depicts a partitioning of something. We write it by placing two numbers on either side of an operator sign. The first number is called the dividend or numerator, and it is the number being divided. The second number is called the divisor or denominator, and it is the number we are dividing the dividend by. The operator’s form is often set by convenience. We can write the operation as a fraction, as we did in the last section, with either a horizontal bar or a / symbol. We can also write it on a single line using the ÷ or | symbol, though | has a special meaning I will explain later.
When decimals are involved, you must identify which decimal number is the dividend and which is the divisor, as the techniques for handling them differ. You can modify the operation to get rid of the decimal in the divisor, but not in the dividend.
Any time you see a decimal divisor, you can get rid of it just by creating a new problem with the same answer. You can do this by multiplying the dividend and divisor by the same number. Generally, you want to use the power of ten associated with the decimal in fraction form. For instance, when dividing 37.6 by 2.5 you can multiply both numbers by 10, since .5 is 5/10 in fraction form, as I do in the following example.
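The worked example referred to above is not included in this excerpt; the following minimal Python sketch reconstructs the same idea using the 37.6 and 2.5 figures from the paragraph.

```python
# Sketch of the technique described above, using the 37.6 / 2.5 example from the text.
# Multiplying dividend and divisor by the same power of ten does not change the quotient.

dividend, divisor = 37.6, 2.5
scale = 10                           # enough to clear the single decimal place in the divisor

scaled_dividend = dividend * scale   # 376.0
scaled_divisor = divisor * scale     # 25.0

print(scaled_dividend / scaled_divisor)  # 15.04
print(dividend / divisor)                # 15.04 -- same answer as the original problem
```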
As you can see, we have nothing to fear about a decimal divisor. We can change the problem to get rid of it. You just have to know the power of ten associated with its digits and multiply both it and the dividend by that number.
For the decimal dividend, we can only modify it as much as we modify the divisor. Once we remove the decimal from the divisor, we have to live with how the dividend looks from that point on. Fortunately, the decimal point does not play a role in the mathematics. Since we divide numbers by inspecting the dividend’s digits from left to right, you can deal with the decimal point when you come to it. At that point, you just have to add a decimal point to the quotient. The dividend’s decimal point does nothing else. You just carry it over into the quotient and leave it there. Otherwise, you ignore it. You then continue to complete the division as if the decimal point wasn’t there. Once done, the decimal point will be in the right position in the quotient.
That is how easy it is to divide decimals by decimals. You first identify the dividend and divisor. You then multiply both numbers by the power of ten that will get rid of the decimal point in the divisor. You then proceed with the division as if the dividend did not have a decimal point until you come to it. Then, you just add a decimal point to the quotient. You then continue the division as if the decimal point doesn’t exist. The resulting quotient will be the solution to your initial problem with the decimal point in the right position.
The | Division Operator
When dividing decimals by decimals, the above tricks are all you need to know. However, it would be rude if I did not explain the | division operator. Before, I said it has special meaning, but you may never come across it outside of computer programming and math classes. The | symbol denotes integer or whole number division which is the same thing as regular division but you discard the remainder. The key thing to remember is that integer division requires both the dividend and divisor to be whole numbers. Otherwise, it has no solution. In context with our discussion on dividing decimals by decimals, if you see something like 4.6|0.23, you immediately know the solution is DNE (“does not exist”). However, like I said before, you may never have to use this type of division. I just included it as a joke or conversation starter you can use with your friends. |
Millions upon millions of years ago, long before humans walked the planet, dinosaurs roamed the Earth, establishing themselves as some of the best-known reptiles ever to take long-term residence on our planet. This all came to an end when an asteroid with a diameter of 10.2 kilometers slammed into the Earth, eliminating these creatures. Today, NASA has started looking ahead at the many scenarios that could devastate humanity or possibly drive it extinct, and has begun testing ways to deflect the trajectory of asteroids to protect Earth. Because an asteroid capable of eradicating humans is unlikely to arrive for millions of years, and other threats such as technology and climate change are more plausible, the Double Asteroid Redirection Test (DART) initiative was conceived mainly as a defense system against smaller asteroids that could still create calamity for our world yet are incapable of threatening the fate of human civilization. Although every life form on Earth eventually comes to an end, this planetary defense system developed by NASA will help secure a safer future from minor asteroids.
NASA plans to have the spacecraft fly directly into Didymos’ moon Dimorphos, which will hopefully impact the moon enough to change the orbital period of the two astronomical objects and pull them closer together. The expected time of impact is tomorrow at around 7:14 pm Eastern Time.
The entire concept of DART might seem somewhat childish: find a spacecraft and crash it into an asteroid. However, this technique carries some merit. A large enough spacecraft or satellite slamming into an asteroid has enough power to move the asteroid in the smallest way, which could affect the entire trajectory of an asteroid over time. The key to this technique is finding asteroids ahead of time so the changed trajectories miss Earth.
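A rough momentum-conservation estimate illustrates why even a small nudge matters. All of the numbers below are assumptions chosen to be broadly in line with published DART figures rather than official mission values.

```python
# Back-of-envelope momentum-transfer estimate of the idea described above.
# All numbers are assumptions roughly in line with published DART figures,
# not official mission values; beta (ejecta enhancement) is simply set to 1.

spacecraft_mass_kg = 570          # assumed DART mass at impact
impact_speed_m_s = 6_600          # assumed relative speed (~6.6 km/s)
asteroid_mass_kg = 4.3e9          # assumed mass of Dimorphos
beta = 1.0                        # momentum enhancement from ejecta, ignored here

delta_v = beta * spacecraft_mass_kg * impact_speed_m_s / asteroid_mass_kg
print(f"{delta_v * 1000:.2f} mm/s")   # on the order of a millimetre per second
```

Tiny as that velocity change is, applied years in advance it shifts an asteroid's arrival point by a large margin, which is why early detection is stressed above.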
NASA’s Double Asteroid Redirection Test is the first planetary defense system of its kind. It launched on 24 November 2021 to intercept the Didymos asteroid system (about half a mile in diameter and 7 million miles from Earth), allowing NASA to test and analyze the technique.
The spacecraft itself is relatively small: it is around 40 feet in length when both of its solar arrays are deployed, and it can be launched on a Falcon 9 rocket. DART uses an ion propulsion system called NEXT-C, which produces thrust through the electrostatic acceleration of ions from its xenon propellant. The spacecraft is equipped with a single camera, called DRACO, which helps it navigate because it is too far from Earth to be steered in real time. This highly specialized piece of technology will also help measure the size and shape of the target and capture high-quality images of the surface before the crash. Before impact, DART will also release a CubeSat named LICIACube, developed by the Italian Space Agency, that will observe the aftermath of the impact.
Because of this experiment, NASA can develop further technology to prepare Earth against other deadly asteroids that threaten human civilization. Even though these tests may never make much of a difference given the abundance of other problems on the planet itself, they will help protect humans from experiencing the same fate as the dinosaurs did millions of years ago. |
The following Topics and Sub-Topics are covered in this chapter and are available on MSVgo:
You might have come across an enclosed shape with three pointed tips; it is known as a triangle. A triangle is a polygon with three vertices and three edges. Being an enclosed shape with three sides, a triangle also has three angles, whose sum is always 180 degrees. The triangle is one of the basic shapes in geometry. There are two ways to categorize triangles: by the length of their sides or by their angles. Classifying triangles by side length gives the isosceles triangle, the equilateral triangle, and the scalene triangle.
A scalene triangle is a triangle whose three sides all have different lengths and whose three angles are all different. A right-angled triangle is scalene if its other two angles and sides are not congruent. For this reason, a scalene triangle has no line of symmetry. Additionally, the angle opposite the longest side is the largest angle, while the angle opposite the shortest side is the smallest. Lack of symmetry is the key feature of this sort of triangle. The formula for calculating the area of a scalene triangle is half the product of its base length and height. For more information on this sort of triangle, go through the resource library on MSVgo.
The triangle with two equal sides and two equal angles is known as an isosceles triangle. The name ‘isosceles’ is derived from two Greek words: iso, meaning same, and skelos, meaning leg. In an isosceles triangle, there are two equal base angles and one other angle. The area of an isosceles triangle with equal sides a and base b is calculated with the formula (b/4) × √(4a² − b²), and its altitude to one of the equal sides with the formula (b/2a) × √(4a² − b²). Key examples of isosceles triangles seen in the modern world are the faces of bipyramids and most of the Catalan solids. Since ancient times, the isosceles triangle has been used in architecture and design to structure the pediments and gables of buildings. For more information on this sort of triangle, go through the resource library on MSVgo.
The equilateral triangle’s basic nature is that it has three equal sides and three congruent angles of 60 degrees. If a perpendicular is drawn from a vertex, the opposite side is bisected into equal halves. The orthocenter and centroid of an equilateral triangle are always at the same point. The area of an equilateral triangle is derived with the formula √3a²/4, where a is the side length. Additionally, the median, angle bisector, and altitude drawn from any vertex of an equilateral triangle are the same line. For more information on this sort of triangle, go through the resource library on MSVgo.
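The three area formulas quoted above translate directly into code. The sketch below is a minimal illustration; the side lengths used are arbitrary example values.

```python
import math

# Sketch of the area formulas quoted above for the three side-based types.
# Side lengths below are arbitrary illustrative values.

def scalene_area(base, height):
    return 0.5 * base * height                        # half of base times height

def isosceles_area(leg_a, base_b):
    return (base_b / 4) * math.sqrt(4 * leg_a**2 - base_b**2)

def equilateral_area(side_a):
    return math.sqrt(3) * side_a**2 / 4

print(scalene_area(base=6, height=4))     # 12.0
print(isosceles_area(leg_a=5, base_b=6))  # 12.0 (a 5-5-6 triangle has height 4)
print(equilateral_area(side_a=2))         # about 1.732
```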
A triangle whose three angles are all less than 90 degrees is known as an acute angle triangle. The formula for calculating the area of an acute angle triangle is half the product of its base and height, while its perimeter is the sum of the lengths of all three sides. Equilateral triangles are always acute angle triangles, as all of their angles are 60 degrees. Additionally, in an acute angle triangle, the perpendicular drawn from any vertex to the opposite side always falls inside the triangle. For more information on this sort of triangle, go through the resource library on MSVgo.
The right-angle triangle is a triangle in which one angle is 90 degrees. This sort of triangle is the most used shape in mathematics because of its role in the Pythagoras theorem and in trigonometry. According to the Pythagoras theorem, the hypotenuse of a right-angle triangle is always the square root of the sum of the squares of the base side and the perpendicular side. The right-angle triangle is also central to trigonometry: its acute angles lie in the first quadrant, where the values of sine, cos, and tan are easily derived and are all positive, whereas they change sign in the other three quadrants. A right-angle triangle can be scalene or isosceles but never equilateral, as one of its angles is 90 degrees. For more information on this sort of triangle, go through the resource library on MSVgo.
An obtuse angle triangle is a triangle with one obtuse angle and two acute angles. There can be only one obtuse in a triangle because the sum of all the angles for a triangle is 180, and as per the definition of the obtuse angle, it is higher than 90 degrees. Thus, having two angles greater than 90 degrees will take the sum of the three angles to more than 180 degrees. For more information on this sort of triangle, go through the resource library on MSVgo.
Overall, the triangle is an enclosed geometrical shape with three sides, three angles, and three vertices. Classified by side length, there are three basic types: the scalene triangle, the isosceles triangle, and the equilateral triangle. Classified by angle, there are three more: the acute angle triangle, the right-angle triangle, and the obtuse angle triangle.
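As a small illustration of the two groupings summarised above, the sketch below classifies a triangle by its side lengths and by its largest angle. Using the law of cosines to find that angle is an implementation choice here, not something taken from the text.

```python
import math

# A small sketch that classifies a triangle by its side lengths and by its
# largest angle, following the two groupings summarised above.

def classify(a, b, c):
    sides = sorted((a, b, c))
    if sides[0] + sides[1] <= sides[2]:
        return "not a valid triangle"

    if a == b == c:
        by_sides = "equilateral"
    elif a == b or b == c or a == c:
        by_sides = "isosceles"
    else:
        by_sides = "scalene"

    # The largest angle is opposite the longest side (law of cosines).
    s1, s2, s3 = sides
    largest = math.degrees(math.acos((s1**2 + s2**2 - s3**2) / (2 * s1 * s2)))
    if math.isclose(largest, 90):
        by_angle = "right-angled"
    elif largest > 90:
        by_angle = "obtuse"
    else:
        by_angle = "acute"

    return f"{by_sides}, {by_angle}"

print(classify(3, 4, 5))  # scalene, right-angled
print(classify(5, 5, 6))  # isosceles, acute
print(classify(2, 2, 2))  # equilateral, acute
```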
What are the six types of triangles?
The six types of triangles are the scalene triangle, isosceles triangle, equilateral triangle, acute angle triangle, right-angle triangle, and obtuse angle triangle.
What are the three main types of triangles?
The three main types of triangles are scalene triangle, isosceles triangle, and equilateral triangle.
What are the five properties of a triangle?
The five properties of a triangle are:
What are congruent triangles in geometry?
If two angles and the included side of one triangle are equal to the corresponding angles and side of another triangle, then the two triangles are said to be congruent.
For more fun and interactive lessons on Triangles, visit the MSVgo application. |
For most spacecraft, changes to orbits are caused by the oblateness of the Earth, gravitational attraction from the sun and moon, solar radiation pressure, and air drag. These are called "perturbing forces". They must be counteracted by maneuvers to keep the spacecraft in the desired orbit. For a geostationary spacecraft, correction maneuvers on the order of 40–50 m/s per year are required to counteract the gravitational forces from the sun and moon which move the orbital plane away from the equatorial plane of the Earth.
For sun-synchronous spacecraft, intentional shifting of the orbit plane (called "precession") can be used for the benefit of the mission. For these missions, a near-circular orbit with an altitude of 600–900 km is used. An appropriate inclination (97.8-99.0 degrees) is selected so that the precession of the orbital plane is equal to the rate of movement of the Earth around the sun, about 1 degree per day.
As a result, the spacecraft will pass over points on the Earth that have the same time of day during every orbit. For instance, if the orbit is "square to the sun", the vehicle will always pass over points at which it is 6 a.m. on the north-bound portion, and 6 p.m. on the south-bound portion (or vice versa). This is called a "Dawn-Dusk" orbit. Alternatively, if the sun lies in the orbital plane, the vehicle will always pass over places where it is midday on the north-bound leg, and places where it is midnight on the south-bound leg (or vice versa). These are called "Noon-Midnight" orbits. Such orbits are desirable for many Earth observation missions such as weather, imagery, and mapping.
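The sun-synchronous condition described above can be checked with the standard first-order J2 nodal-precession formula, which is not derived in the text. The 800 km altitude and 98.6-degree inclination below are illustrative values inside the quoted ranges.

```python
import math

# Rough sketch of the sun-synchronous condition described above: the J2 nodal
# precession rate should match the ~0.9856 deg/day apparent motion of the sun.
# Constants are standard Earth values; the 800 km / 98.6 deg orbit is an
# illustrative assumption within the ranges quoted in the text.

MU = 398_600.4418        # km^3/s^2, Earth's gravitational parameter
R_E = 6_378.137          # km, Earth equatorial radius
J2 = 1.08263e-3          # Earth oblateness coefficient

def nodal_precession_deg_per_day(altitude_km, inclination_deg):
    a = R_E + altitude_km                     # semi-major axis, near-circular orbit
    n = math.sqrt(MU / a**3)                  # mean motion, rad/s
    rate = -1.5 * J2 * (R_E / a)**2 * n * math.cos(math.radians(inclination_deg))
    return math.degrees(rate) * 86_400        # rad/s -> deg/day

print(round(nodal_precession_deg_per_day(800, 98.6), 3))  # about 0.985
print(round(360 / 365.25, 4))                              # target: 0.9856 deg/day
```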
The perturbing force caused by the oblateness of the Earth will in general perturb not only the orbital plane but also the eccentricity vector of the orbit. There exists, however, an almost-circular orbit for which there are no secular/long periodic perturbations of the eccentricity vector, only periodic perturbations with period equal to the orbital period. Such an orbit is then perfectly periodic (except for the orbital plane precession) and it is therefore called a "frozen orbit". Such an orbit is often the preferred choice for an Earth observation mission where repeated observations of the same area of the Earth should be made under as constant observation conditions as possible.
Through a study of many lunar orbiting satellites, scientists have discovered that most low lunar orbits (LLO) are unstable. Four frozen lunar orbits have been identified at 27°, 50°, 76°, and 86° inclination. NASA expounded on this in 2006:
Lunar mascons make most low lunar orbits unstable ... As a satellite passes 50 or 60 miles overhead, the mascons pull it forward, back, left, right, or down, the exact direction and magnitude of the tugging depends on the satellite's trajectory. Absent any periodic boosts from onboard rockets to correct the orbit, most satellites released into low lunar orbits (under about 60 miles or 100 km) will eventually crash into the Moon. ... [There are] a number of 'frozen orbits' where a spacecraft can stay in a low lunar orbit indefinitely. They occur at four inclinations: 27°, 50°, 76°, and 86°"—the last one being nearly over the lunar poles. The orbit of the relatively long-lived Apollo 15 subsatellite PFS-1 had an inclination of 28°, which turned out to be close to the inclination of one of the frozen orbits—but poor PFS-2 was cursed with an inclination of only 11°.
which can be expressed in terms of orbital elements thusly:
Making a similar analysis for the J3 term (corresponding to the fact that the Earth is slightly pear-shaped), one gets
which can be expressed in terms of orbital elements as
In the same article the secular perturbation of the components of the eccentricity vector caused by the is shown to be:
The first term is the in-plane perturbation of the eccentricity vector caused by the in-plane component of the perturbing force
The second term is the effect of the new position of the ascending node in the new orbital plane, the orbital plane being perturbed by the out-of-plane force component
Making the analysis for the term one gets for the first term, i.e. for the perturbation of the eccentricity vector from the in-plane force component
For inclinations in the range 97.8–99.0 deg, the value given by (6) is much smaller than the value given by (3) and can be ignored. Similarly the quadratic terms of the eccentricity vector components in (8) can be ignored for almost circular orbits, i.e. (8) can be approximated with
Now the difference equation shows that the eccentricity vector will describe a circle centered at the point ; the polar argument of the eccentricity vector increases with radians between consecutive orbits.
one gets for a polar orbit () with that the centre of the circle is at and the change of polar argument is 0.00400 radians per orbit.
The latter figure means that the eccentricity vector will have described a full circle in 1569 orbits.
Selecting the initial mean eccentricity vector as the mean eccentricity vector will stay constant for successive orbits, i.e. the orbit is frozen because the secular perturbations of the term given by (7) and of the term given by (9) cancel out.
In terms of classical orbital elements, this means that a frozen orbit should have the following (mean!) elements:
The modern theory of frozen orbits is based on the algorithm given in a 1989 article by Mats Rosengren.
For this the analytical expression (7) is used to iteratively update the initial (mean) eccentricity vector to obtain that the (mean) eccentricity vector several orbits later computed by the precise numerical propagation takes precisely the same value. In this way the secular perturbation of the eccentricity vector caused by the term is used to counteract all secular perturbations, not only those (dominating) caused by the term. One such additional secular perturbation that in this way can be compensated for is the one caused by the solar radiation pressure, this perturbation is discussed in the article "Orbital perturbation analysis (spacecraft)".
Applying this algorithm for the case discussed above, i.e. a polar orbit () with ignoring all perturbing forces other than the and the forces for the numerical propagation one gets exactly the same optimal average eccentricity vector as with the "classical theory", i.e. .
When we also include the forces due to the higher zonal terms the optimal value changes to .
Assuming in addition a reasonable solar pressure (a "cross-sectional-area" of 0.05 m2/kg, the direction to the sun in the direction towards the ascending node) the optimal value for the average eccentricity vector becomes which corresponds to :, i.e. the optimal value is not anymore.
The main perturbing force to be counteracted in order to have a frozen orbit is the J3 force, i.e. the gravitational force caused by an imperfect north/south symmetry of the Earth, and the "classical theory" is based on the closed-form expression for this J3 perturbation. With the "modern theory" this explicit closed-form expression is not directly used, but it is certainly still worthwhile to derive it.
The derivation of this expression can be done as follows:
The potential from a zonal term is rotationally symmetric around the polar axis of the Earth, and the corresponding force lies entirely in a longitudinal plane, with one component in the radial direction and one component along the unit vector orthogonal to the radial direction, pointing towards north. These directions are illustrated in Figure 1.
Figure 1: The unit vectors
In the article Geopotential model it is shown that these force components caused by the term are
Figure 2: The unit vector orthogonal to in the direction of motion and the orbital pole . The force component is marked as "F"
Let make up a rectangular coordinate system with origin in the center of the Earth (in the center of the Reference ellipsoid) such that points in the direction north and such that are in the equatorial plane of the Earth with pointing towards the ascending node, i.e. towards the blue point of Figure 2.
The components of the unit vectors
making up the local coordinate system (of which are illustrated in figure 2), and expressing their relation with , are as follows:
where is the polar argument of relative the orthogonal unit vectors and in the orbital plane
where is the angle between the equator plane and (between the green points of figure 2) and from equation (12) of the article Geopotential model one therefore obtains
Secondly the projection of direction north, , on the plane spanned by is
and this projection is
where is the unit vector orthogonal to the radial direction towards north illustrated in figure 1.
Introducing the expression for of (14) in (15) one gets
The fraction is
are the components of the eccentricity vector in the coordinate system.
As all integrals of type
are zero if not both and are even, we see that
It follows that
and are the base vectors of the rectangular coordinate system in the plane of the reference Kepler orbit with in the equatorial plane towards the ascending node and is the polar argument relative this equatorial coordinate system
is the force component (per unit mass) in the direction of the orbit pole |
Start by putting one million (1 000 000) into the display of your calculator. Can you reduce this to 7 using just the 7 key and add, subtract, multiply, divide and equals as many times as you like?
Can you see why 2 by 2 could be 5? Can you predict what 2 by 10 will be?
Using some or all of the operations of addition, subtraction, multiplication and division and using the digits 3, 3, 8 and 8 each once and only once make an expression equal to 24.
This Sudoku puzzle can be solved with the help of small clue-numbers on the border lines between pairs of neighbouring squares of the grid.
This magic square has operations written in it, to make it into a maze. Start wherever you like, go through every cell and go out a total of 15!
Using the digits 1, 2, 3, 4, 5, 6, 7 and 8, multiply two two-digit numbers to give a four-digit number, so that the expression is correct. How many different solutions can you find?
Mr McGregor has a magic potting shed. Overnight, the number of plants in it doubles. He'd like to put the same number of plants in each of three gardens, planting one garden each day. Can he do it?
Using the statements, can you work out how many of each type of rabbit there are in these pens?
A game for 2 or more players with a pack of cards. Practise your skills of addition, subtraction, multiplication and division to hit the target score.
This problem is based on a code using two different prime numbers less than 10. You'll need to multiply them together and shift the alphabet forwards by the result. Can you decipher the code?
Can you design a new shape for the twenty-eight squares and arrange the numbers in a logical way? What patterns do you notice?
Can you find different ways of creating paths using these paving slabs?
You can work out the number someone else is thinking of as follows. Ask a friend to think of any natural number less than 100. Then ask them to tell you the remainders when this number is divided by. . . .
Amy has a box containing domino pieces but she does not think it is a complete set. She has 24 dominoes in her box and there are 125 spots on them altogether. Which of her domino pieces are missing?
In this game, you can add, subtract, multiply or divide the numbers on the dice. Which will you do so that you get to the end of the number line first?
If the answer's 2010, what could the question be?
Use your logical reasoning to work out how many cows and how many sheep there are in each field.
This challenge asks you to investigate the total number of cards that would be sent if four children send one to all three others. How many would be sent if there were five children? Six?
Resources to support understanding of multiplication and division through playing with number.
A game for 2 people. Use your skills of addition, subtraction, multiplication and division to blast the asteroids.
The number of plants in Mr McGregor's magic potting shed increases overnight. He'd like to put the same number of plants in each of his gardens, planting one garden each day. How can he do it?
In a Magic Square all the rows, columns and diagonals add to the 'Magic Constant'. How would you change the magic constant of this square?
Well now, what would happen if we lost all the nines in our number system? Have a go at writing the numbers out in this way and have a look at the multiplications table.
Four Go game for an adult and child. Will you be the first to have four numbers in a row on the number line?
Look at what happens when you take a number, square it and subtract your answer. What kind of number do you get? Can you prove it?
Cherri, Saxon, Mel and Paul are friends. They are all different ages. Can you find out the age of each friend using the information?
Zumf makes spectacles for the residents of the planet Zargon, who have either 3 eyes or 4 eyes. How many lenses will Zumf need to make all the different orders for 9 families?
Can you arrange 5 different digits (from 0 - 9) in the cross in the way described?
There are 44 people coming to a dinner party. There are 15 square tables that seat 4 people. Find a way to seat the 44 people using all 15 tables, with no empty places.
A game for 2 people using a pack of cards. Turn over 2 cards and try to make an odd number or a multiple of 3.
This problem is based on the story of the Pied Piper of Hamelin. Investigate the different numbers of people and rats there could have been if you know how many legs there are altogether!
Ben’s class were cutting up number tracks. First they cut them into twos and added up the numbers on each piece. What patterns could they see?
When I type a sequence of letters my calculator gives the product of all the numbers in the corresponding memories. What numbers should I store so that when I type 'ONE' it returns 1, and when I type. . . .
Find the number which has 8 divisors, such that the product of the divisors is 331776.
The Scot, John Napier, invented these strips about 400 years ago to help calculate multiplication and division. Can you work out how to use Napier's bones to find the answer to these multiplications?
This article for teachers looks at how teachers can use problems from the NRICH site to help them teach division.
There are four equal weights on one side of the scale and an apple on the other side. What can you say that is true about the apple and the weights from the picture?
What is happening at each box in these machines?
Here are the prices for 1st and 2nd class mail within the UK. You have an unlimited number of each of these stamps. Which stamps would you need to post a parcel weighing 825g?
What is the lowest number which always leaves a remainder of 1 when divided by each of the numbers from 2 to 10?
Given the products of adjacent cells, can you complete this Sudoku?
When the number x 1 x x x is multiplied by 417 this gives the answer 9 x x x 0 5 7. Find the missing digits, each of which is represented by an "x" .
Choose any 3 digits and make a 6 digit number by repeating the 3 digits in the same order (e.g. 594594). Explain why whatever digits you choose the number will always be divisible by 7, 11 and 13.
Use your logical-thinking skills to deduce how much Dan's crisps and ice-cream cost altogether.
This number has 903 digits. What is the sum of all 903 digits?
Can you find which shapes you need to put into the grid to make the totals at the end of each row and the bottom of each column?
Skippy and Anna are locked in a room in a large castle. The key to that room, and all the other rooms, is a number. The numbers are locked away in a problem. Can you help them to get out?
How would you count the number of fingers in these pictures?
Can you each work out the number on your card? What do you notice? How could you sort the cards?
If you take a three by three square on a 1-10 addition square and multiply the diagonally opposite numbers together, what is the difference between these products? Why?
In this coordinate geometry worksheet, students plot ordered pairs and identify points on the coordinate plane. This two-page worksheet contains eleven problems.
Investigating Transformations Using Coordinates
Use transformations to coordinate the coordinates! After graphing a transformation, class members examine the patterns in the coordinates. They repeat the process for each type of transformation and end by generalizing their conclusions....
7th - 9th Math CCSS: Designed
Stitching Quilts into Coordinate Geometry
Who knew quilting would be so mathematical? Introduce linear equations and graphing while working with the lines of pre-designed quilts. Use the parts of the design to calculate the slope of the linear segments. The project comes with...
8th - 11th Math CCSS: Adaptable
CONFIDENCE INTERVALS FOR THE POPULATION MEAN (INTRODUCTION)
TRANSCRIPT OF VIDEO:
Confidence intervals are an extremely important method that we use to estimate the mean of a population we're interested in. Let's take a look at what they represent and how to calculate them.
It is often the case in statistics that we want to know the mean of a population. We can't measure every individual in the population, so we have to take a random sample from that population and calculate the mean of that sample to estimate the mean of the population.
The sample mean is an estimate of the population mean, but sampling error makes it inexact. The mean of the sample will almost never be exactly the same as the mean of the population. So what can we say about the likely population mean, based on the sample mean?
We know it is probably close, but we also know there's some chance our sample could be very inaccurate. How do we estimate that inaccuracy and create a range where we think the population mean probably is?
Luckily, there is a mathematical result called the central limit theorem. This theorem states that the distribution of sample means from a population will be approximately normally distributed, no matter what the population looks like.
So if we took a series of samples from the population illustrated at the top and looked at what the values of the means of those samples are, they would form a normal distribution centered around the population mean.
Most of the sample means would be close to the mean of the population, but a few would be further away. The central limit theorem tells us exactly how many would be close, how many would be far, and in what proportions.
The width of this distribution of sample means is based on the variance in the population and the sample size of each sample. The more variance in the population, the wider the distribution of sample means. The more values in each sample, the narrower the distribution of sample means.
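For readers who want to see this for themselves, here is a minimal simulation sketch (not part of the video; it assumes a Python environment with NumPy) showing that the spread of the sample means shrinks like the population standard deviation divided by the square root of the sample size, even for a clearly non-normal population:

```python
# Illustrative simulation of the central limit theorem (assumed NumPy setup).
import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=2.0, size=100_000)  # deliberately skewed, non-normal

for n in (5, 25, 100):
    # 2,000 independent samples of size n; take the mean of each one
    sample_means = rng.choice(population, size=(2_000, n)).mean(axis=1)
    print(f"n={n:3d}  SD of sample means = {sample_means.std(ddof=1):.3f}  "
          f"sigma/sqrt(n) = {population.std(ddof=0) / np.sqrt(n):.3f}")
```

The two printed columns should agree closely, and both shrink as the sample size grows.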
If we look at this distribution of sample means, any individual sample mean comes from a normal distribution, and the most likely result is that it comes from the middle of the distribution, which corresponds to the mean of the population.
Sometimes, the mean of a particular sample will not be close to the population mean.
Only rarely would the mean of any sample be far away from the population mean.
The central limit theorem allows us to calculate how close any particular sample mean is likely to be to the true population mean.
How close the sample mean is to the population mean is influenced by the variance in the population and the sample size.
We can use the central limit theorem to estimate where the population mean probably is by thinking about the normal distribution of our sample means.
The distribution of those sample means is centered around the population mean, and the middle 95% of that distribution is what we would expect to see 95% of the time when we look at our samples.
The logic goes both ways. We can start with the mean of one sample and use our knowledge of the width of the distribution of sample means to create a window around that sample mean that will probably include the population mean.
As an example, if we identify the middle 95% of a normal distribution around our sample mean, there is a 95% chance the population mean is within that 95% region. We call this region a 95% confidence interval because it indicates where we are confident the population mean is.
Note that the width of that confidence interval will be based on the variance in the population and the number of values in the samples. The standard deviation of that distribution of sample means is called the standard error.
An important pair of terms to keep straight is how the standard error relates to the standard deviation. These two values measure different things.
When we think about a distribution of values, like a population of values or a sample of values, we often describe them as normally distributed around a mean with some particular variance. The standard deviation describes the spread of the data values in that population or sample.
Our distribution of sample means is also normally distributed, but its variance is the variance of the population divided by the sample size. The standard deviation of that set of sample means is therefore the standard deviation of the population divided by the square root of the sample size. This value is called the standard error.
The standard error does not describe the spread of the data in the population or sample, it describes the spread of the sample means and likewise the spread of possible population means.
The terms standard deviation and standard error are very similar, but they measure completely different things. The first one measures the spread of data values in a population or sample, the second measures the spread of sample means taken from a population.
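As a quick illustration (the numbers below are made up, not taken from the video), the two quantities can be computed side by side; the only assumption is a Python environment with NumPy:

```python
# Standard deviation (spread of the data) versus standard error (spread of sample means).
import numpy as np

sample = np.array([12.0, 15.5, 14.2, 18.9, 13.3, 16.8, 11.7, 17.6])
sd = sample.std(ddof=1)           # spread of the data values in the sample
se = sd / np.sqrt(len(sample))    # estimated spread of sample means, the standard error
print(f"standard deviation = {sd:.3f}")
print(f"standard error     = {se:.3f}")
```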
So, how do we calculate the confidence interval for the population mean using the data from one of our samples?
First, we take a sample of N values.
Then we calculate the mean, variance, and standard deviation of that sample.
Then we get the middle region of the standard normal distribution corresponding to the degree of confidence desired. If we want to be 95% confident of where the population mean is, we want to know how many standard deviations wide the middle 95% of the standard normal distribution is. If we wanted to be 99% confident of where the population mean is, we would want the middle 99% of the standard normal distribution, in terms of standard deviations.
Standard deviations within our distribution of sample means are really standard errors. Therefore, the width of this region in the standard normal distribution tells us the width of the confidence interval in terms of the number of standard errors above and below the sample mean.
For example, if we take a sample of 16 values from a population, and they have a mean of 15, and we know the population standard deviation is 5, we can calculate our confidence intervals.
Using the properties of a normal distribution the middle 95% region of that normal distribution would be 1.96 standard errors above and below the sample mean.
Plugging these numbers in gives us a result of 15 plus or minus 2.45 which gives us a range for our 95% confidence interval of 12.55 to 17.45.
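The same arithmetic can be checked in a few lines; this is a sketch assuming SciPy is available, not something shown in the video:

```python
# 95% confidence interval with a known population standard deviation (z-based).
from math import sqrt
from scipy.stats import norm

n, xbar, sigma = 16, 15.0, 5.0
z = norm.ppf(0.975)                # about 1.96 for the middle 95%
half_width = z * sigma / sqrt(n)   # about 2.45
print(f"95% CI: {xbar - half_width:.2f} to {xbar + half_width:.2f}")  # about 12.55 to 17.45
```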
Based on this we can say there is a 95% probability that the population mean we took this sample from is between 12.55 and 17.45. We have 95% confidence that the population mean is within that region.
Note that there is a 2.5% chance the population mean is larger than 17.45 and a 2.5% chance the population mean is smaller than 12.55. That would occur when sampling error causes our sample mean to be smaller or larger than the true population mean just due to randomness.
Unfortunately, in the real world we would never have the population variance if we didn't already know the population mean. And if we already knew the population mean, we wouldn't need to be doing statistics at all. We therefore estimate the population variance from the sample variance.
However, samples usually underestimate the actual variance of the population due to sampling error. Therefore, to adjust for this sampling error, we use the t distribution instead of the normal distribution. The t distribution is wider to account for the sampling error and the underestimate of the population variance that would result from it.
As shown in the figure here, the middle 95% of the t distribution we have to use, will result in a slightly larger confidence interval than if we were able to use the normal distribution itself.
As illustrated here, the t distribution is wider to account for the sampling error, but as the sample size increases it narrows to become the normal distribution.
Because the imprecision due to the sampling error depends on the sample size, there is a different t distribution for each sample size, otherwise known as degrees of freedom.
In this figure we can see the normal distribution at the top, where the middle 95% of the distribution is within 1.96 standard errors of the mean.
The bottom figure is the t distribution for 6 degrees of freedom, which is a sample size of 7 values. In this situation the middle 95% of the distribution is within 2.447 standard errors of the mean.
The middle figure shows a larger sample size, 11 degrees of freedom, which is a sample size of 12, and we can see that now the middle 95% of that t distribution is 2.201 standard errors above and below the mean.
As the sample size increases, the t distribution narrows to eventually become the normal distribution for an infinite sample size.
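If SciPy is available, this narrowing can be verified directly; the following is an illustrative sketch rather than part of the lesson:

```python
# Critical values for the middle 95% of the t distribution approach the normal value 1.96.
from scipy.stats import norm, t

for df in (6, 11, 15, 30, 100, 1000):
    print(f"df = {df:4d}: t critical = {t.ppf(0.975, df):.3f}")
print(f"normal:    z critical = {norm.ppf(0.975):.3f}")  # 1.960
```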
In the same way that there are Z tables of areas for the normal distribution, there are t tables that can be used to calculate these areas.
Here we see a comparison of the most common type of Z table and t table. The Z table on the left shows the areas to the left of particular points on the X axis, which represent various Z values.
Z tables usually describe the area to the left which allows them to be versatile for a variety of different types of calculations.
The numbers in the table are areas.
In contrast, t tables usually describe the location on the X axis that corresponds to different areas for each column. The areas specified are usually the ones outside of the middle region.
The numbers in the table are the widths of the t distribution in terms of standard errors.
The t tables do this because they are mainly used for determining confidence intervals, which we do by specifying the area outside of the middle region we're interested in.
Rather than have a separate t table for every different degree of freedom and all the detailed areas, t tables usually have one row per degree of freedom and highlight a small number of areas in the columns.
The most common table looks like the one illustrated, which just shows the area to the right of a particular distance. To get the middle region, you would need to double that area and look at that distance above and below the value of 0.
An example will make this clearer.
Let's use the T distribution to figure out the width of the 95% confidence interval if we take a sample of 16 values and get a mean of 15 and a sample standard deviation of 5.
First, a sample size of 16 corresponds to a degrees of freedom value of 15.
We would then go to our t table and identify the row that corresponds to 15 degrees of freedom.
To have a region in the t distribution with 95% in the middle, that would mean an area of alpha equals 0.025 on each side. It's those alpha values that correspond to the columns in our t table.
We therefore go to the column corresponding to alpha equals 0.025.
We read down that column until we get to the row for degrees of freedom 15 and the value is 2.131.
That tells us that if we have a T distribution with 15 degrees of freedom, in order to have a middle region with 95% of the area, and two and a half percent above and below that middle region, the region would need to go 2.131 standard errors above and below the mean.
This figure illustrates how wide the regions would have to be for a variety of different alpha values, which correspond to different regions in the center of the distribution. And it's the center of the distribution that gives us our confidence interval.
To get our 95% confidence interval we have to go 2.131 standard errors above and below the sample mean.
If we wanted to be more confident and have something like a 99% confidence interval, we would have to go 2.947 standard errors above and below the mean.
If we were content to be less confident and use something like an 80% confidence interval, we would only have to go 1.341 standard errors above and below the mean.
By far the most commonly used degree of confidence for confidence intervals is 95%, but it is possible to calculate these others if you want to.
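Instead of reading these values off a printed t table, they can be computed; the sketch below assumes SciPy is available and simply reproduces the three critical values just mentioned for 15 degrees of freedom:

```python
# Critical t values for 80%, 95% and 99% confidence with df = 15.
from scipy.stats import t

df = 15
for conf in (0.80, 0.95, 0.99):
    alpha = (1 - conf) / 2                       # area in each tail
    print(f"{conf:.0%} confidence: {t.ppf(1 - alpha, df):.3f} standard errors")
# expected output: roughly 1.341, 2.131 and 2.947
```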
Back to our example where we took a sample of 16 values and obtained a sample mean of 15 and a sample standard deviation of 5.
The 95% confidence interval would be the sample mean plus or minus 2.131 standard errors, which is 15 plus or minus 2.66, giving a 95% confidence interval of 12.34 up to 17.66.
Based on our data we would be 95% confident that the true population mean is somewhere between 12.34 and 17.66, with only a 5% probability that it is outside of that interval.
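The whole worked example fits in a short script; again, this is a sketch assuming SciPy, not material from the video itself:

```python
# 95% confidence interval with an estimated standard deviation (t-based), n = 16.
from math import sqrt
from scipy.stats import t

n, xbar, s = 16, 15.0, 5.0
crit = t.ppf(0.975, df=n - 1)       # about 2.131 for df = 15
half_width = crit * s / sqrt(n)     # about 2.66
print(f"95% CI: {xbar - half_width:.2f} to {xbar + half_width:.2f}")  # about 12.34 to 17.66
```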
Let's compare our two different confidence intervals, using the normal distribution and using the t distribution. For both of these we have a sample mean of 15, and the standard deviation value of either the population or the sample is 5.
If we knew the population standard deviation and used the normal distribution to create our confidence interval, it would be 1.96 standard errors above and below the sample mean.
If we have to estimate the population standard deviation from the sample standard deviation and use the t distribution to create our confidence interval, it's 2.131 standard errors above and below the sample mean.
In the real world the first situation is unrealistic, because how could we know the population standard deviation if we didn't know its mean? Therefore the t distribution is what we should always use when calculating confidence intervals.
For example, if we had 25 degrees of freedom, the width of our 95% confidence interval from the t distribution would be 2.06 standard errors versus 1.96 from the normal distribution. This is a difference of about 5%, which means that if we used the normal distribution instead of the t distribution, our confidence interval would be about 5% too narrow.
Keep in mind the purpose of all of this: the purpose of confidence intervals is to estimate the value of the population mean from a sample mean.
If we knew the population mean and standard deviation, we wouldn't need confidence intervals or statistics at all, we would have our answer.
If we have a sample mean and we know the population standard deviation, we could use the normal distribution to calculate our confidence interval, but this is an unrealistic situation.
In the real world, confidence intervals are used when we know the sample mean and sample standard deviation, in which case we use the T distribution to calculate the confidence intervals.
Since confidence intervals represent our confidence in the true value, you can think of them as real-world versions of significant figures. Lots of people learn about significant figures in science classes, where the number of digits you report indicates how sure you are about your answer. Significant figures are really a shortcut, a casual way to represent the uncertainty in an answer we obtain. In the real world of science, it's confidence intervals that are used to indicate uncertainty, not significant figures.
So what are confidence intervals used for?
First, they're used in the way we've been talking about, as a descriptive statistic for our estimate of the population mean.
Second, the concept of confidence intervals is the basis for the t test, which is a test of whether two populations appear to have different means from one another. We can't just compare the means of our samples, because sampling error would cause them to be different even if the populations had the exact same mean.
The way this test works is diagrammed conceptually here.
On the left is a situation where the confidence intervals for our two samples overlap, which is what we would expect to see if the means of the populations they were taken from were equal to each other.
On the right is the situation where the confidence intervals for our two samples don't overlap, which is what we would expect to see if the means of the populations they were taken from were different from each other.
The details of t tests are slightly more complicated than what's shown here, and those details are described in another video on this channel and in this playlist if you're interested, but the fundamental concept of how that very common statistical test works comes from the idea of confidence intervals.
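As a pointer to how this idea is used in practice, here is a minimal sketch of a two-sample t test using SciPy; the data are invented for illustration, and the call shown is the standard scipy.stats.ttest_ind function rather than anything specific to this channel:

```python
# Two-sample t test: do these two samples plausibly come from populations with equal means?
from scipy.stats import ttest_ind

sample_a = [14.1, 15.8, 13.9, 16.2, 15.0, 14.7]
sample_b = [17.9, 18.4, 16.8, 19.1, 18.0, 17.5]
result = ttest_ind(sample_a, sample_b)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")  # a small p suggests different means
```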
Confidence intervals are widely used in the reporting of data from experiments and scientific studies and they provide the foundation for one of the most common statistical tests.
This information is intended for the greater good; please use statistics responsibly.
How the Central Limit Theorem tutorial fits into the typical statistics course: WISE tutorials are modularized to allow instructors to pick or choose modules that best fit their course needs. Each module is a self-contained lesson that does not depend on any of the other modules, although some specific prerequisite information may be required.
The Central Limit Theorem (CLT) Module was designed with the assumption that students have some familiarity with basic elementary statistics, such as mean, standard deviation, variance, the normal curve, and sampling distributions. You may find it helpful for your students to complete the Sampling Distribution Module before the CLT Module. The CLT Module is intended to prepare students to learn about hypothesis testing and confidence intervals.
When to use the CLT tutorial? Instructors often introduce the Central Limit Theorem after they’ve discussed descriptive statistics and the z-probability distribution and before an introduction to formal hypothesis testing procedures. Some instructors may wish to use Activity 2 of this module for review later in the course. This relatively advanced component emphasizes conditions where it may not be appropriate to assume that sampling distributions are close to normal. This critical concept is relevant to students who have already learned the importance of the normality assumption for parametric hypothesis testing. You may consider having students return to this component later in the course, after t-tests and ANOVA have been introduced.
Suggestions for Using the CLT Tutorial
- Class demonstration/Lecture aid
- Lab assignment
- Homework assignment
- Review assignment
There are many ways in which the CLT Module can be inserted into your lesson plan. Your choices may depend on students’ level of computer literacy, computer resources available at your school, and class time restrictions. Here are a few suggestions:
1. Pre-lecture Assignment
Assign the module as homework to introduce the Central Limit Theorem to students. This will allow you to use more class time for in-depth discussions and activities instead of a full lecture.
2. Live Demonstration
As part of either a lecture or guided lab assignment, the SDM applet itself may be used by the instructor to demonstrate visually different aspects of the sampling distribution and the Central Limit Theorem. Some instructors may choose to step through parts or all of the tutorial in a demonstration mode. This demonstration may serve as a stimulus for classroom discussion and/or introduction to an assignment for students. See our step-by-step guide for a live demonstration using the applet.
3. Post-lecture Assignment
After your presentation of the Central Limit Theorem material, the module can be used to demonstrate lecture points and give students practice using the concepts. This applet allows students to gain a perspective on the concepts that complements a lecture or other presentations. The more perspectives students are exposed to in the course of instruction, the more likely they are to understand and retain the material.
For more information, see the Introduction to the tutorial.
- Multiple-choice questions – The main portion of the module is designed to give students feedback without evaluating their performance. The multiple-choice questions provide feedback on both correct and incorrect responses. However, no record is kept of student answers.
- Essay questions – There are follow-up questions after the main part of the module. These questions are multiple-choice and short-answer essays and are designed to examine conceptual understanding of the topic. You may want students to complete this portion of the module and hand in their responses for your evaluation. This will give you an opportunity to evaluate what your students have learned. We have not posted answers to these questions.
WISE modules are designed as self-contained lessons that students can use with little, if any, guidance. If you are concerned that students may not feel comfortable using web pages and applets, you may consider using the module as part of an in-class activity. Most students complete the module in 40 – 50 minutes.
We hope this tutorial is helpful for you and your students, and we welcome your feedback on this tutorial and other aspects of the WISE site. Please send your comments to firstname.lastname@example.org.
Praise refers to positive evaluations made by a person of another's products, performances, or attributes, where the evaluator presumes the validity of the standards on which the evaluation is based. The influence of praise on an individual can depend on many factors, including the context, the meanings the praise may convey, and the characteristics and interpretations of the recipient. Praise is distinct from acknowledgement or feedback, which are more neutral forms of recognition, and encouragement, which is more future oriented. In addition, while praise may share some predictive relationships (both positive and negative) with tangible rewards, praise tends to be less salient and expected, conveys more information about competence, and is typically given more immediately after the desired behavior.
As behavioral reinforcement
The concept of praise as a means of behavioral reinforcement is rooted in B.F. Skinner's model of operant conditioning. Through this lens, praise has been viewed as a means of positive reinforcement, wherein an observed behavior is made more likely to occur by contingently praising said behavior. Hundreds of studies have demonstrated the effectiveness of praise in promoting positive behaviors, notably in the study of teacher and parent use of praise on children in promoting improved behavior and academic performance, but also in the study of work performance. Praise has also been demonstrated to reinforce positive behaviors in non-praised adjacent individuals (such as a classmate of the praise recipient) through vicarious reinforcement. Praise may be more or less effective in changing behavior depending on its form, content and delivery. In order for praise to effect positive behavior change, it must be contingent on the positive behavior (i.e., only administered after the targeted behavior is enacted), must specify the particulars of the behavior that is to be reinforced, and must be delivered sincerely and credibly.
Acknowledging the effect of praise as a positive reinforcement strategy, numerous behavioral and cognitive behavioral interventions have incorporated the use of praise in their protocols. The strategic use of praise is recognized as an evidence-based practice in both classroom management and parenting training interventions, though praise is often subsumed in intervention research into a larger category of positive reinforcement, which includes strategies such as strategic attention and behavioral rewards.
Effects beyond behavior change
Although the majority of early research on the influences of praise focused on behavior implications, more recent investigations have highlighted important implications in other domains. Praise may have cognitive influences on an individual, by attracting attention to the self, or by conveying information about the values and expectations of the praiser to the recipient. Effective praise (i.e., praise that is welcomed or accepted by the recipient) may also have positive emotional effects by generating a positive affective state (e.g., happiness, joy, pride). Praise is also thought to convey that one has surpassed a noteworthy evaluative standard, and the recipient of the praise is therefore likely to experience a sense of pleasure stemming from a positive self-perception. Contrastingly, praise may create negative emotional consequences if it appears disingenuous or manipulative.
Alternative views of the effects of praise on motivation exist. In one camp, praise is thought to decrease intrinsic motivation by increasing the presence of external control. However, praise has also been argued to define standards and expectations, which in turn may motivate an individual to exert effort to meet those standards. Lastly, praise may serve to influence interpersonal relations. For example, strong pressures to reciprocate praise have been found. It is thought that the mutual praise may serve to increase attraction and strengthen the interpersonal relationship, and this process may underlie the use of praise in ingratiation.
Person versus process
Over the past several decades, researchers have distinguished between praise that is directed at a person's general abilities and qualities (e.g., "You're such a good drawer.") and praise that is directed at the process of performance (e.g., "You are working so hard at that drawing."). This distinction between person versus process praise is sometimes referred to as ability versus effort praise, though ability and effort statements can be seen as subcategories of person and process statements, respectively.
Traditionally, person(trait)-oriented praise was thought to instill a child's belief that they have the capacity to succeed, and thus help motivate them to learn. However, social-cognitive theorists have more recently suggested that person-oriented (as opposed to process-oriented) praise may have detrimental impacts on a child's self-perceptions, motivation and learning. For example, praising children for their personal attributes, rather than specifics about their performance, may teach them to make inferences about their global worth, and may thus undermine their intrinsic motivation. In a study of person- versus process-oriented praise, Kamins and Dweck found that children who received person-oriented praise displayed more "helpless" responses following a failure, including self-blame, than those in the process condition. Henderlong and Lepper suggest that person-oriented praise may function like tangible rewards, in that it produces desired outcomes in the short run, but may undermine intrinsic motivation and subsequent perseverance. However, Skipper & Douglas found that although person- versus process-oriented praise (and an objective feedback control group) predicted more negative responses to the first failure, all three groups demonstrated similarly negative responses to the second failure. Thus, the long-term negative consequences of person-oriented praise are still unclear.
Person and process (or performance) praise may also foster different attributional styles, such that person-oriented praise may lead one to attribute success and failure to stable ability, which in turn may foster helplessness reactions in the face of setbacks. Contrastingly, process praise may foster attributions regarding effort or strategy, such that children attribute their success (or failure) to these variables, rather than to a stable trait or ability. This attributional style can foster more adaptive reactions to both success and failure. In support of this notion, Mueller and Dweck experimentally found praise for child intelligence to be more detrimental to 5th graders' achievement motivation than praise for effort. Following a failure, the person-praised students displayed less task persistence and task enjoyment, and worse task performance, than those praised for effort. These findings are in line with personal theories of achievement striving, in which, in the face of failure, performance tends to improve when individuals make attributions to a lack of effort, but worsen when they attribute their failure to a lack of ability.
In the studies mentioned above, person-oriented praise was found to be less beneficial than process-oriented praise, but this is not always found to be the case. Particularly, effort-oriented praise may be detrimental when given during tasks that are exceptionally easy. This may be especially apparent for older children as they see effort and ability to be inversely related and thus an overemphasis on effort may suggest a lack of ability.
Controlling versus informational
Proponents of cognitive evaluation theory (Deci & Ryan ) have focused on two aspects of praise thought to influence a child's self-determination: information and control. Taking this perspective, the informational aspect of praise is thought to promote a perceived internal locus of control (and thus greater self-determination) while the controlling aspects promote a perceived external locus of control and thus extrinsic compliance or defiance. Thus, Deci & Ryan suggest that the effect of praise is moderated by the salience of informational versus controlling aspects of praise.
The theory that informational praise enhances self-determination more than controlling praise has been supported by several empirical studies. In a meta-analysis including five studies distinguishing informational from controlling praise, Deci, Koestner & Ryan found that informational praise related to greater intrinsic motivation (as measured by free-choice behavior and self-reported interest) while controlling praise was associated with less intrinsic motivation. For example, Pittman and colleagues found that adults demonstrated more free-choice engagement with a task after receiving informational (e.g., "Compared to most of my subjects, you're doing really well."), rather than controlling (e.g., "I haven't been able to use most of the data I've gotten so far, but you're doing really well, and if you keep it up I'll be able to use yours."), praise.
Several complexities of informational versus controlling praise have been acknowledged. First, though the differences between informational and controlling praise have been well established, it is difficult to determine whether the net effects of these forms of praise will be positive, negative or neutral compared to a control condition. In addition, it is often difficult to determine the extent to which a given instance of praise is informational, controlling, or both, which may muddy interpretations of results.
Social-comparison versus mastery
Social comparison is a psychological process that is widely prevalent, particularly so in educational settings. In Festinger's social comparison theory, he noted that people engage in social comparison as a means to reduce ambiguity and accurately evaluate their own qualities and abilities. However, controversy exists over whether providing children with social-comparison praise has a beneficial impact on their motivation and performance. Some studies have demonstrated that students who received social-comparison praise (e.g., "you're doing better than most students" or "your performance is amongst the best we've had") demonstrated greater motivation compared to no-praise or other control groups. Sarafino, Russo, Barker, Consentino and Titus found that students who received social-comparison praise voluntarily engaged in the task more than those who received feedback that they performed similarly to others. Though these studies demonstrate the possible positive influence of social-comparison praise, they have been criticized for inadequate control groups. For example, a control group given feedback that they are average may be seen as negative, rather than neutral. In addition, most social-comparison studies do not examine motivation or behavior following a subsequent unsuccessful task.
Beyond methodology, the primary criticism of social-comparison praise is that it teaches children to evaluate themselves on the basis of the performance of others, and may therefore lead to maladaptive coping in situations in which one is outperformed by other individuals. Social-comparison praise has been hypothesized to decrease intrinsic motivation for the praised children because they may then view their behaviors as externally controlled. Contrastingly, it is suggested that praise that focuses on a child's competence (mastery) rather than social comparison may be important for fostering motivation. This area is relatively understudied, though some interesting findings have emerged. In a study of adults, Koestner, Zuckerman, and Olsson found that gender moderated the influence of social-comparison and mastery praise, where women were more intrinsically motivated following mastery praise, while men were more motivated following social-comparison praise. In a study of children, Henderlong Corpus, Ogle & Love-Geiger found that social-comparison praise led to decreased motivation following ambiguous feedback for all children, and also decreased motivation following positive feedback for girls only. Thus, mastery praise may be more conducive than social-comparison praise to fostering intrinsic motivation, particularly for females, though more research is needed to tease apart these relationships.
Factors that affect influence
The function of praise on child performance and motivation may likely vary as a function of age. Few studies have directly examined developmental differences in praise, though some evidence has been found. Henderlong Corpus & Lepper found person praise (as opposed to process praise) to negatively influence motivation for older girls (4th/5th grade), while for preschool-age children, there were no differences in the effects of process, person and product praise, though all three forms of praise were associated with increased motivation as compared to neutral feedback. In a different study, Henderlong found that for older children, process praise enhanced post-failure motivation more so than person praise, and person praise decreased motivation as compared to neutral feedback. Contrastingly, for preschool-age children process praise enhanced post-failure motivation more than person praise, but both were better than neutral feedback. Some posit that younger children do not experience the negative effects of certain types of praise because they do not yet make causal attributions in complex ways, and they are more literal in their interpretations of adult speech.
The function of praise on child behavior and motivation has also been found to vary as a function of child gender. Some researchers have shown that females are more susceptible to the negative effects of certain types of praise (person-oriented praise, praise that limits autonomy). For example, Koestner, Zuckerman & Koestner found that girls were more negatively influenced by praise that diminished perceived autonomy. Henderlong Corpus and Lepper found that process praise was more beneficial to motivation than person praise, but only for girls. Interestingly, this difference was found for older children, but not preschool-aged children.
Others have found young girls to be more negatively influenced by the evaluations of adults more generally. Some have posited that this gender difference is due to girls more often attributing failure to lack of ability rather than a lack of motivation or effort. Gender differences may be attributable to normative socialization practices, in which people generally emphasize dependence and interpersonal relationships for girls, but achievement and independence for boys.
Culture has been referred to as a "blind spot" in the praise literature. Yet, there is reason to believe that cultural differences in the effects of praise exist. Much of the discussion on culture and praise has focused on differences between independent and interdependent cultures. Stated briefly, independent cultures, common in Western societies, generally value and seek to promote individualism and autonomy, while interdependent cultures promote fundamental connectedness and harmony in interpersonal relationships. Looking through this cultural lens, clear differences in the use and impact of praise can be found. In comparison to the United States, praise is rarely used in China and Japan, as praise may be thought to be harmful to a child's character. In interdependent cultures, individuals are generally motivated by self-improvement. This cultural difference has also been found experimentally. Heine, Lehman, Markus & Kitayama found that Canadian students persisted longer after positive than negative performance feedback, while the opposite was true for Japanese students. Some posit that individuals from independent and interdependent cultures largely express different models of praise (independence-supportive and interdependence-supportive praise).
- Kanouse, D. E.; Gumpert, P.; Canavan-Gumpert, D. (1981). "The semantics of praise". New directions in attribution research 3: 97–115.
- Henderlong, Jennifer; Lepper, Mark R. (2002). "The effects of praise on children's intrinsic motivation: A review and synthesis". Psychological Bulletin 128 (5): 774–795. doi:10.1037/0033-2909.128.5.774.
- Carton, John (19 June 1989). "The differential effects of tangible rewards and praise on intrinsic motivation: A comparison of cognitive evaluation theory and operant theory". Behavior Analyst 19 (2): 237–255.
- Kazdin, Alan (1978). History of behavior modification: Experimental foundations of contemporary research. Baltimore: University Park Press.
- Strain, Phillip S.; Lambert, Deborah L.; Kerr, Mary Margaret; Stagg, Vaughan; Lenkner, Donna A. (1983). "Naturalistic assessment of children's compliance to teachers' requests and consequences for compliance". Journal of Applied Behavior Analysis 16 (2): 243–249. doi:10.1901/jaba.1983.16-243.
- Garland, Ann F.; Hawley, Kristin M.; Brookman-Frazee, Lauren; Hurlburt, Michael S. (May 2008). "Identifying Common Elements of Evidence-Based Psychosocial Treatments for Children's Disruptive Behavior Problems". Journal of the American Academy of Child & Adolescent Psychiatry 47 (5): 505–514. doi:10.1097/CHI.0b013e31816765c2.
- Crowell, Charles R.; Anderson, D. Chris; Abel, Dawn M.; Sergio, Joseph P. (1988). "Task clarification, performance feedback, and social praise: Procedures for improving the customer service of bank tellers". Journal of Applied Behavior Analysis 21 (1): 65–71. doi:10.1901/jaba.1988.21-65. PMC 1286094. PMID 16795713.
- Kazdin, Alan E. (1973). "The effect of vicarious reinforcement on attentive behavior in the classroom". Journal of Applied Behavior Analysis 6 (1): 71–78. doi:10.1901/jaba.1973.6-71.
- Brophy, Jere (1981). "On praising effectively". The Elementary School Journal, 81 (5). JSTOR 1001606.
- Simonsen, Brandi; Fairbanks, Sarah; Briesch, Amy; Myers, Diane; Sugai, George (2008). "Evidence-based Practices in Classroom Management: Considerations for Research to Practice". Education and Treatment of Children 31 (1): 351–380. doi:10.1353/etc.0.0007.
- Weisz, John R.; Kazdin, Alan E. (2010). Evidence-based psychotherapies for children and adolescents. Guilford Press.
- Delin, Catherine R.; Baumeister, Roy F. (September 1994). "Praise: More Than Just Social Reinforcement". Journal for the Theory of Social Behaviour 24 (3): 219–241. doi:10.1111/j.1468-5914.1994.tb00254.x.
- Wicklund, R.A. (1975). Objective self-awareness. In Advances in experimental social psychology. New York: Academic Press. pp. 233–275.
- Deci, E. L.; Ryan, R. M. (1985). Intrinsic motivation and self-determination in human behavior. New York: Plenum Press.
- Jones, E.E.; Wortman, C. (1973). Ingratiation: An attributional approach. US: General Learning Press.
- Henderlong Corpus, Jennifer; Lepper, Mark R. (2007). "The Effects of Person Versus Performance Praise on Children’s Motivation: Gender and age as moderating factors". Educational Psychology 27 (4): 487–508. doi:10.1080/01443410601159852.
- Briggs, D. C. (1975). Your child's self-esteem: The key to life. Random House LLC.
- Kamins, Melissa L.; Dweck, Carol S. (1999). "Person versus process praise and criticism: Implications for contingent self-worth and coping". Developmental Psychology 35 (3): 835–847. doi:10.1037/0012-16220.127.116.115.
- Skipper, Yvonne; Douglas, Karen (June 2012). "Is no praise good praise? Effects of positive feedback on children's and university students' responses to subsequent failures". British Journal of Educational Psychology 82 (2): 327–339. doi:10.1111/j.2044-8279.2011.02028.x.
- Mueller, Claudia M.; Dweck, Carol S. (1998). "Praise for intelligence can undermine children's motivation and performance". Journal of Personality and Social Psychology 75 (1): 33–52. doi:10.1037/0022-3518.104.22.168.
- Weiner, B. (1 January 1994). "Integrating Social and Personal Theories of Achievement Striving". Review of Educational Research 64 (4): 557–573. doi:10.3102/00346543064004557.
- Covington, V. (1984). "The self-worth theory of achievement motivation: Findings and implications". The Elementary School Journal: 5–20.
- Deci, E. L.; Ryan, R. M. (1980). "The empirical exploration of intrinsic motivational processes". In L. Berkowitz. Advances in experimental social psychology (13 ed.). New York: Academic Press. pp. 39–80.
- Deci, E. L.; Koestner, R.; Ryan, R. M. (1 January 2001). "Extrinsic Rewards and Intrinsic Motivation in Education: Reconsidered Once Again". Review of Educational Research 71 (1): 1–27. doi:10.3102/00346543071001001.
- Pittman, T. S.; Davey, M. E.; Alafat, K. A.; Wetherill, K. V.; Kramer, N. A. (1 June 1980). "Informational versus Controlling Verbal Rewards". Personality and Social Psychology Bulletin 6 (2): 228–233. doi:10.1177/014616728062007.
- Levine, J. M. (1983). Social Comparison and Education. In J.M. Levine & Wang, M.C. Eds. Teacher and Student Perception: Implications for Learning. Hillsdale: N.J.: Lawrence Erlbaum Associates, Inc. pp. 29–55.
- Festinger, L. (1 May 1954). "A Theory of Social Comparison Processes". Human Relations 7 (2): 117–140. doi:10.1177/001872675400700202.
- Deci, Edward L. (1971). "Effects of externally mediated rewards on intrinsic motivation.". Journal of Personality and Social Psychology 18 (1): 105–115. doi:10.1037/h0030644.
- Pretty, Grace H.; Seligman, Clive (1984). "Affect and the overjustification effect.". Journal of Personality and Social Psychology 46 (6): 1241–1253. doi:10.1037/0022-3522.214.171.1241.
- Blanck, Peter D.; Reis, Harry T.; Jackson, Linda (March 1984). "The effects of verbal reinforcement of intrinsic motivation for sex-linked tasks". Sex Roles 10 (5-6): 369–386. doi:10.1007/BF00287554.
- Sarafino, Edward P.; Russo, Alyce; Barker, Judy; Consentino, Annmarie (September 1982). "The Effect of Rewards on Intrinsic Interest: Developmental Changes in the Underlying Processes". The Journal of Genetic Psychology 141 (1): 29–39. doi:10.1080/00221325.1982.10533454.
- Corpus, Jennifer Henderlong; Ogle, Christin M.; Love-Geiger, Kelly E. (25 August 2006). "The Effects of Social-Comparison Versus Mastery Praise on Children's Intrinsic Motivation". Motivation and Emotion 30 (4): 333–343. doi:10.1007/s11031-006-9039-4.
- Kohn, A. (1 January 1996). "By All Available Means: Cameron and Pierce's Defense of Extrinsic Motivators". Review of Educational Research 66 (1): 1–4. doi:10.3102/00346543066001001.
- Koestner, Richard; Zuckerman, Miron; Olsson, Jennifer (March 1990). "Attributional style, comparison focus of praise, and intrinsic motivation". Journal of Research in Personality 24 (1): 87–100. doi:10.1016/0092-6566(90)90008-T.
- Henderlong, J. (2000). Beneficial and detrimental effects of praise on children's motivation: Performance versus person feedback (Unpublished doctoral dissertation). Stanford University.
- Barker, George P.; Graham, Sandra (1987). "Developmental study of praise and blame as attributional cues". Journal of Educational Psychology 79 (1): 62–66. doi:10.1037/0022-06126.96.36.199.
- Ackerman, Brian P. (1981). "Young children's understanding of a speaker's intentional use of a false utterance". Developmental Psychology 17 (4): 472–480. doi:10.1037/0012-16188.8.131.522.
- Koestner, R.; Zuckerman, M.; Koestner, J. (1 March 1989). "Attributional Focus of Praise and Children's Intrinsic Motivation: The Moderating Role of Gender". Personality and Social Psychology Bulletin 15 (1): 61–72. doi:10.1177/0146167289151006.
- Dweck, Carol S.; Davidson, William; Nelson, Sharon; Enna, Bradley (1978). "Sex differences in learned helplessness: II. The contingencies of evaluative feedback in the classroom and III. An experimental analysis.". Developmental Psychology 14 (3): 268–276. doi:10.1037/0012-16184.108.40.2068.
- Wang, Y. Z.; Wiley, A. R.; Chiu, C.-Y. (1 January 2008). "Independence-supportive praise versus interdependence-promoting praise". International Journal of Behavioral Development 32 (1): 13–20. doi:10.1177/0165025407084047.
- Markus, Hazel R.; Kitayama, Shinobu (1991). "Culture and the self: Implications for cognition, emotion, and motivation". Psychological Review 98 (2): 224–253. doi:10.1037/0033-295X.98.2.224.
- Lewis, C. C. (1995). Educating hearts and minds: Reflections on Japanese preschool and elementary education. Cambridge, England: Cambridge University Press.
- Salili, F. (1 March 1996). "Learning and Motivation: An Asian Perspective". Psychology & Developing Societies 8 (1): 55–81. doi:10.1177/097133369600800104.
- Heine, Steven H.; Lehman, Darrin R.; Markus, Hazel Rose; Kitayama, Shinobu (1999). "Is there a universal need for positive self-regard?". Psychological Review 106 (4): 766–794. doi:10.1037/0033-295X.106.4.766. PMID 10560328.
Apoptosis (from Ancient Greek apóptōsis, "falling off") is a form of programmed cell death that occurs in multicellular organisms. Biochemical events lead to characteristic cell changes (morphology) and death. These changes include blebbing, cell shrinkage, nuclear fragmentation, chromatin condensation, DNA fragmentation, and mRNA decay. The average adult human loses between 50 and 70 billion cells each day due to apoptosis. For an average human child between the ages of 8 and 14, approximately 20-30 billion cells die per day.
In contrast to necrosis, which is a form of traumatic cell death that results from acute cellular injury, apoptosis is a highly regulated and controlled process that confers advantages during an organism's life cycle. For example, the separation of fingers and toes in a developing human embryo occurs because cells between the digits undergo apoptosis. Unlike necrosis, apoptosis produces cell fragments called apoptotic bodies that phagocytes are able to engulf and remove before the contents of the cell can spill out onto surrounding cells and cause damage to them.
Because apoptosis cannot stop once it has begun, it is a highly regulated process. Apoptosis can be initiated through one of two pathways. In the intrinsic pathway the cell kills itself because it senses cell stress, while in the extrinsic pathway the cell kills itself because of signals from other cells. Weak external signals may also activate the intrinsic pathway of apoptosis. Both pathways induce cell death by activating caspases, which are proteases, or enzymes that degrade proteins. The two pathways both activate initiator caspases, which then activate executioner caspases, which then kill the cell by degrading proteins indiscriminately.
In addition to its importance as a biological phenomenon, defective apoptotic processes have been implicated in a wide variety of diseases. Excessive apoptosis causes atrophy, whereas an insufficient amount results in uncontrolled cell proliferation, such as cancer. Some factors like Fas receptors and caspases promote apoptosis, while some members of the Bcl-2 family of proteins inhibit apoptosis.
German scientist Carl Vogt was first to describe the principle of apoptosis in 1842. In 1885, anatomist Walther Flemming delivered a more precise description of the process of programmed cell death. However, it was not until 1965 that the topic was resurrected. While studying tissues using electron microscopy, John Foxton Ross Kerr at the University of Queensland was able to distinguish apoptosis from traumatic cell death. Following the publication of a paper describing the phenomenon, Kerr was invited to join Alastair R. Currie, as well as Andrew Wyllie, who was Currie's graduate student, at University of Aberdeen. In 1972, the trio published a seminal article in the British Journal of Cancer. Kerr had initially used the term programmed cell necrosis, but in the article, the process of natural cell death was called apoptosis. Kerr, Wyllie and Currie credited James Cormack, a professor of Greek language at University of Aberdeen, with suggesting the term apoptosis. Kerr received the Paul Ehrlich and Ludwig Darmstaedter Prize on March 14, 2000, for his description of apoptosis. He shared the prize with Boston biologist H. Robert Horvitz.
For many years, neither "apoptosis" nor "programmed cell death" was a highly cited term. Two discoveries brought cell death from obscurity to a major field of research: identification of components of the cell death control and effector mechanisms, and linkage of abnormalities in cell death to human disease, in particular cancer.
The 2002 Nobel Prize in Medicine was awarded to Sydney Brenner, H. Robert Horvitz and John E. Sulston for their work identifying genes that control apoptosis. The genes were identified by studies in the nematode C. elegans and homologues of these genes function in humans to regulate apoptosis.
In Greek, apoptosis translates to the "falling off" of leaves from a tree. Cormack, professor of Greek language, reintroduced the term for medical use as it had a medical meaning for the Greeks over two thousand years before. Hippocrates used the term to mean "the falling off of the bones". Galen extended its meaning to "the dropping of the scabs". Cormack was no doubt aware of this usage when he suggested the name. Debate continues over the correct pronunciation, with opinion divided between a pronunciation with the second p silent (ap-ə-TOH-sis) and one with the second p pronounced, as in the original Greek. In English, the p of the Greek -pt- consonant cluster is typically silent at the beginning of a word (e.g. pterodactyl, Ptolemy), but articulated when used in combining forms preceded by a vowel, as in helicopter or the orders of insects: diptera, lepidoptera, etc.
In the original Kerr, Wyllie & Currie paper, there is a footnote regarding the pronunciation:
We are most grateful to Professor James Cormack of the Department of Greek, University of Aberdeen, for suggesting this term. The word "apoptosis" is used in Greek to describe the "dropping off" or "falling off" of petals from flowers, or leaves from trees. To show the derivation clearly, we propose that the stress should be on the penultimate syllable, the second half of the word being pronounced like "ptosis" (with the "p" silent), which comes from the same root "to fall", and is already used to describe the drooping of the upper eyelid.
The initiation of apoptosis is tightly regulated by activation mechanisms, because once apoptosis has begun, it inevitably leads to the death of the cell. The two best-understood activation mechanisms are the intrinsic pathway (also called the mitochondrial pathway) and the extrinsic pathway. The intrinsic pathway is activated by intracellular signals generated when cells are stressed and depends on the release of proteins from the intermembrane space of mitochondria. The extrinsic pathway is activated by extracellular ligands binding to cell-surface death receptors, which leads to the formation of the death-inducing signaling complex (DISC).
A cell initiates intracellular apoptotic signaling in response to a stress, which may bring about cell suicide. The binding of nuclear receptors by glucocorticoids, heat, radiation, nutrient deprivation, viral infection, hypoxia, increased intracellular concentration of free fatty acids and increased intracellular calcium concentration, for example, by damage to the membrane, can all trigger the release of intracellular apoptotic signals by a damaged cell. A number of cellular components, such as poly ADP ribose polymerase, may also help regulate apoptosis. Single cell fluctuations have been observed in experimental studies of stress induced apoptosis.
Before the actual process of cell death is precipitated by enzymes, apoptotic signals must cause regulatory proteins to initiate the apoptosis pathway. This step allows those signals to cause cell death, or the process to be stopped, should the cell no longer need to die. Several proteins are involved, but two main methods of regulation have been identified: the targeting of mitochondria functionality, or directly transducing the signal via adaptor proteins to the apoptotic mechanisms. An extrinsic pathway for initiation identified in several toxin studies is an increase in calcium concentration within a cell caused by drug activity, which also can cause apoptosis via a calcium binding protease calpain.
The intrinsic pathway is also known as the mitochondrial pathway. Mitochondria are essential to multicellular life. Without them, a cell ceases to respire aerobically and quickly dies. This fact forms the basis for some apoptotic pathways. Apoptotic proteins that target mitochondria affect them in different ways. They may cause mitochondrial swelling through the formation of membrane pores, or they may increase the permeability of the mitochondrial membrane and cause apoptotic effectors to leak out. These proteins are very closely related to the intrinsic pathway, and tumors arise more frequently through the intrinsic pathway than through the extrinsic pathway because of its sensitivity. There is also a growing body of evidence indicating that nitric oxide is able to induce apoptosis by helping to dissipate the membrane potential of mitochondria and therefore make it more permeable. Nitric oxide has been implicated in initiating and inhibiting apoptosis through its possible action as a signal molecule of subsequent pathways that activate apoptosis.
During apoptosis, cytochrome c is released from mitochondria through the actions of the proteins Bax and Bak. The mechanism of this release is enigmatic, but appears to stem from a multitude of Bax/Bak homo- and hetero-dimers inserted into the outer membrane. Once cytochrome c is released it binds with apoptotic protease activating factor-1 (Apaf-1) and ATP, which then bind to pro-caspase-9 to create a protein complex known as an apoptosome. The apoptosome cleaves the pro-caspase to its active form of caspase-9, which in turn cleaves and activates pro-caspase-3 into the effector caspase-3.
Mitochondria also release proteins known as SMACs (second mitochondria-derived activator of caspases) into the cell's cytosol following the increase in permeability of the mitochondria membranes. SMAC binds to proteins that inhibit apoptosis (IAPs) thereby deactivating them, and preventing the IAPs from arresting the process and therefore allowing apoptosis to proceed. IAP also normally suppresses the activity of a group of cysteine proteases called caspases, which carry out the degradation of the cell. Therefore, the actual degradation enzymes can be seen to be indirectly regulated by mitochondrial permeability.
Two theories of the direct initiation of apoptotic mechanisms in mammals have been suggested: the TNF-induced (tumor necrosis factor) model and the Fas-Fas ligand-mediated model, both involving receptors of the TNF receptor (TNFR) family coupled to extrinsic signals.
TNF-alpha is a cytokine produced mainly by activated macrophages, and is the major extrinsic mediator of apoptosis. Most cells in the human body have two receptors for TNF-alpha: TNFR1 and TNFR2. The binding of TNF-alpha to TNFR1 has been shown to initiate the pathway that leads to caspase activation via the intermediate membrane proteins TNF receptor-associated death domain (TRADD) and Fas-associated death domain protein (FADD). cIAP1/2 can inhibit TNF-alpha signaling by binding to TRAF2. FLIP inhibits the activation of caspase-8. Binding of this receptor can also indirectly lead to the activation of transcription factors involved in cell survival and inflammatory responses. However, signalling through TNFR1 might also induce apoptosis in a caspase-independent manner. The link between TNF-alpha and apoptosis shows why an abnormal production of TNF-alpha plays a fundamental role in several human diseases, especially in autoimmune diseases. The TNF-alpha receptor superfamily also includes death receptors (DRs), such as DR4 and DR5. These receptors bind to the protein TRAIL and mediate apoptosis. Apoptosis is known to be one of the primary mechanisms of targeted cancer therapy. Luminescent iridium complex-peptide hybrids (IPHs) have recently been designed, which mimic TRAIL and bind to death receptors on cancer cells, thereby inducing their apoptosis.
The Fas receptor (first apoptosis signal, also known as Apo-1 or CD95) is a transmembrane protein of the TNF family which binds the Fas ligand (FasL). The interaction between Fas and FasL results in the formation of the death-inducing signaling complex (DISC), which contains FADD, caspase-8, and caspase-10. In some types of cells (type I), processed caspase-8 directly activates other members of the caspase family and triggers the execution of apoptosis of the cell. In other types of cells (type II), the Fas-DISC starts a feedback loop that spirals into increasing release of proapoptotic factors from mitochondria and the amplified activation of caspase-8.
Following TNF-R1 and Fas activation in mammalian cells, a balance between proapoptotic (BAX, BID, BAK, or BAD) and anti-apoptotic (Bcl-xL and Bcl-2) members of the Bcl-2 family is established. This balance is reflected in the proportion of proapoptotic homodimers that form in the outer membrane of the mitochondrion. The proapoptotic homodimers are required to make the mitochondrial membrane permeable for the release of caspase activators such as cytochrome c and SMAC. How proapoptotic proteins are controlled under normal conditions in nonapoptotic cells is incompletely understood, but in general, Bax or Bak are activated by the activation of BH3-only proteins, part of the Bcl-2 family.
Caspases play the central role in the transduction of ER apoptotic signals. Caspases are highly conserved, cysteine-dependent, aspartate-specific proteases. There are two types of caspases: initiator caspases (caspases 2, 8, 9, 10, 11, and 12) and effector caspases (caspases 3, 6, and 7). The activation of initiator caspases requires binding to a specific oligomeric activator protein. Effector caspases are then activated by these active initiator caspases through proteolytic cleavage. The active effector caspases then proteolytically degrade a host of intracellular proteins to carry out the cell death program.
The amphibian frog Xenopus laevis serves as an ideal model system for the study of the mechanisms of apoptosis. In fact, iodine and thyroxine also stimulate the spectacular apoptosis of the cells of the larval gills, tail, and fins during amphibian metamorphosis, and stimulate the remodeling of the nervous system that transforms the aquatic, vegetarian tadpole into the terrestrial, carnivorous frog.
Negative regulation of apoptosis inhibits cell death signaling pathways, helping tumors evade cell death and develop drug resistance. The ratio between anti-apoptotic (Bcl-2) and pro-apoptotic (Bax) proteins determines whether a cell lives or dies. Many families of proteins act as negative regulators, categorized as either antiapoptotic factors, such as IAPs and Bcl-2 proteins, or prosurvival factors, such as cFLIP, BNIP3, FADD, Akt, and NF-κB.
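A toy sketch of the ratio idea just described; the threshold value is purely hypothetical, and real fate decisions involve many more regulators than the two proteins shown here.

```python
def cell_fate(bax, bcl2, threshold=1.0):
    """Illustrative only: bias a 'decision' on the Bax:Bcl-2 ratio.
    The threshold is a made-up value, not a measured one."""
    ratio = bax / bcl2
    return "apoptosis-prone" if ratio > threshold else "survival-prone"

print(cell_fate(bax=2.0, bcl2=1.0))   # apoptosis-prone
print(cell_fate(bax=0.5, bcl2=1.5))   # survival-prone
```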
Many pathways and signals lead to apoptosis, but these converge on a single mechanism that actually causes the death of the cell. After a cell receives a stimulus, it undergoes organized degradation of cellular organelles by activated proteolytic caspases. In addition to the destruction of cellular organelles, mRNA is rapidly and globally degraded by a mechanism that is not yet fully characterized; this mRNA decay is triggered very early in apoptosis.
A cell undergoing apoptosis shows a series of characteristic morphological changes. Early alterations include cell shrinkage and rounding, a denser-looking cytoplasm with tightly packed organelles, and condensation of chromatin against the nuclear envelope (pyknosis).
Apoptosis progresses quickly and its products are quickly removed, making it difficult to detect or visualize on classical histology sections. During karyorrhexis, endonuclease activation leaves short DNA fragments, regularly spaced in size. These give a characteristic "laddered" appearance on an agarose gel after electrophoresis. Tests for DNA laddering differentiate apoptosis from ischemic or toxic cell death.
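A minimal sketch of why the fragments look "laddered": internucleosomal cleavage yields lengths at integer multiples of the nucleosomal repeat, roughly 180-200 bp; the 180 bp unit below is an illustrative round number.

```python
NUCLEOSOMAL_UNIT_BP = 180  # approximate internucleosomal repeat length

def expected_ladder(max_bands=6, unit=NUCLEOSOMAL_UNIT_BP):
    """Expected band sizes (in base pairs) for an apoptotic DNA ladder."""
    return [n * unit for n in range(1, max_bands + 1)]

print(expected_ladder())  # [180, 360, 540, 720, 900, 1080]
```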
Before the apoptotic cell is disposed of, there is a process of disassembly. Three recognized steps in apoptotic cell disassembly are membrane blebbing, the formation of membrane protrusions, and fragmentation into apoptotic bodies.
The removal of dead cells by neighboring phagocytic cells has been termed efferocytosis. Dying cells that undergo the final stages of apoptosis display phagocytotic molecules, such as phosphatidylserine, on their cell surface. Phosphatidylserine is normally found on the inner leaflet surface of the plasma membrane, but is redistributed during apoptosis to the extracellular surface by a protein known as scramblase. These molecules mark the cell for phagocytosis by cells possessing the appropriate receptors, such as macrophages. The removal of dying cells by phagocytes occurs in an orderly manner without eliciting an inflammatory response. During apoptosis cellular RNA and DNA are separated from each other and sorted to different apoptotic bodies; separation of RNA is initiated as nucleolar segregation.
Many knock-outs have been made in the apoptosis pathways to test the function of each of the proteins. Several caspases, in addition to APAF1 and FADD, have been mutated to determine the resulting phenotype. In order to create a tumor necrosis factor (TNF) knockout, an exon containing the nucleotides 3704-5364 was removed from the gene. This exon encodes a portion of the mature TNF domain, as well as the leader sequence, which is a highly conserved region necessary for proper intracellular processing. TNF-/- mice develop normally and have no gross structural or morphological abnormalities. However, upon immunization with SRBC (sheep red blood cells), these mice demonstrated a deficiency in the maturation of an antibody response; they were able to generate normal levels of IgM, but could not develop specific IgG levels. Apaf-1 is the protein that turns on caspase-9 by cleavage to begin the caspase cascade that leads to apoptosis. Since a -/- mutation in the APAF-1 gene is embryonic lethal, a gene-trap strategy was used to generate an APAF-1 -/- mouse; this approach disrupts gene function by creating an intragenic gene fusion. When an APAF-1 gene trap is introduced into cells, many morphological changes occur, such as spina bifida, the persistence of interdigital webs, and open brain. In addition, after embryonic day 12.5, the brain of the embryos showed several structural changes. APAF-1 -/- cells are protected from apoptotic stimuli such as irradiation. A BAX-1 knock-out mouse exhibits normal forebrain formation and decreased programmed cell death in some neuronal populations and in the spinal cord, leading to an increase in motor neurons.
The caspase proteins are integral parts of the apoptosis pathway, so it follows that knock-outs of them have varying, often damaging, results. A caspase-9 knock-out leads to a severe brain malformation. A caspase-8 knock-out leads to cardiac failure and thus embryonic lethality. However, with the use of cre-lox technology, a conditional caspase-8 knock-out has been created that exhibits an increase in peripheral T cells, an impaired T cell response, and a defect in neural tube closure. These mice were found to be resistant to apoptosis mediated by CD95, TNFR, and similar receptors, but not resistant to apoptosis caused by UV irradiation, chemotherapeutic drugs, and other stimuli. Finally, a caspase-3 knock-out was characterized by ectopic cell masses in the brain and abnormal apoptotic features such as membrane blebbing or nuclear fragmentation. A remarkable feature of these KO mice is that they have a very restricted phenotype: Casp3, Casp9, and APAF-1 KO mice have deformations of neural tissue, while FADD and Casp8 KO mice show defective heart development; however, in both types of KO other organs developed normally and some cell types were still sensitive to apoptotic stimuli, suggesting that unknown proapoptotic pathways exist.
In order to distinguish apoptotic from necrotic (necroptotic) cells, one can analyze morphology by label-free live cell imaging, time-lapse microscopy, flow fluorocytometry, and transmission electron microscopy. There are also various biochemical techniques for analysis of cell surface markers (phosphatidylserine exposure versus cell permeability by flow cytometry), cellular markers such as DNA fragmentation (flow cytometry), caspase activation, Bid cleavage, and cytochrome c release (Western blotting). Primary and secondary necrotic cells can be distinguished by analysis of the supernatant for caspases, HMGB1, and release of cytokeratin 18. However, no distinct surface or biochemical markers of necrotic cell death have been identified yet, and only negative markers are available. These include the absence of apoptotic markers (caspase activation, cytochrome c release, and oligonucleosomal DNA fragmentation) and differential kinetics of cell death markers (phosphatidylserine exposure and cell membrane permeabilization). A selection of techniques that can be used to distinguish apoptotic from necroptotic cells can be found in the literature.
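As an illustration of how the phosphatidylserine-versus-permeability readout mentioned above is typically turned into a per-cell call, here is a minimal sketch of quadrant-style classification; the annexin V / propidium iodide pairing is a common convention assumed here, and the gate thresholds are hypothetical, instrument-specific values.

```python
def classify_event(annexin_v, pi, gate_av=1000.0, gate_pi=1000.0):
    """Quadrant logic for an annexin V (phosphatidylserine) / propidium iodide stain.
    Gate values are hypothetical."""
    ps_exposed = annexin_v > gate_av   # phosphatidylserine on the outer leaflet
    permeable = pi > gate_pi           # membrane integrity lost
    if ps_exposed and not permeable:
        return "early apoptotic"
    if ps_exposed and permeable:
        return "late apoptotic / secondary necrotic"
    if permeable:
        return "necrotic"
    return "viable"

print(classify_event(annexin_v=2500, pi=300))   # early apoptotic
print(classify_event(annexin_v=2500, pi=4000))  # late apoptotic / secondary necrotic
```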
The many different types of apoptotic pathways contain a multitude of different biochemical components, many of them not yet understood. Because a pathway is more or less sequential in nature, removing or modifying one component has effects on others. In a living organism, this can have disastrous effects, often in the form of disease or disorder. A discussion of every disease caused by modification of the various apoptotic pathways would be impractical, but the concept underlying each one is the same: the normal functioning of the pathway has been disrupted in such a way as to impair the ability of the cell to undergo normal apoptosis. This results in a cell that lives past its "use-by date" and is able to replicate and pass on any faulty machinery to its progeny, increasing the likelihood of the cell's becoming cancerous or diseased.
A recently described example of this concept in action can be seen in the development of a lung cancer called NCI-H460. The X-linked inhibitor of apoptosis protein (XIAP) is overexpressed in cells of the H460 cell line. XIAP binds to the processed form of caspase-9 and suppresses the activity of the apoptotic activator cytochrome c; its overexpression therefore leads to a decrease in the amount of proapoptotic agonists. As a consequence, the balance of anti-apoptotic and proapoptotic effectors is upset in favour of the former, and the damaged cells continue to replicate despite being directed to die. Defects in the regulation of apoptosis in cancer cells often occur at the level of control of transcription factors. As a particular example, defects in molecules that control the transcription factor NF-κB in cancer change the mode of transcriptional regulation and the response to apoptotic signals, curtailing dependence on the tissue to which the cell belongs. This degree of independence from external survival signals can enable cancer metastasis.
The tumor-suppressor protein p53 accumulates when DNA is damaged due to a chain of biochemical factors. Part of this pathway includes alpha-interferon and beta-interferon, which induce transcription of the p53 gene, resulting in an increase in p53 protein level and enhancement of cancer-cell apoptosis. p53 prevents the cell from replicating by stopping the cell cycle at G1, in interphase, to give the cell time to repair; however, it will induce apoptosis if damage is extensive and repair efforts fail. Any disruption to the regulation of the p53 or interferon genes will result in impaired apoptosis and the possible formation of tumors.
Inhibition of apoptosis can result in a number of cancers, inflammatory diseases, and viral infections. It was originally believed that the associated accumulation of cells was due to an increase in cellular proliferation, but it is now known that it is also due to a decrease in cell death. The most common of these diseases is cancer, a disease of excessive cellular proliferation, which is often characterized by an overexpression of IAP family members. As a result, the malignant cells experience an abnormal response to apoptosis induction: cycle-regulating genes (such as p53, ras, or c-myc) are mutated or inactivated in diseased cells, and further genes (such as bcl-2) also modify their expression in tumors. Some apoptotic factors are vital during mitochondrial respiration, e.g. cytochrome c. Pathological inactivation of apoptosis in cancer cells is correlated with frequent respiratory metabolic shifts toward glycolysis (an observation known as the "Warburg hypothesis").
Apoptosis in HeLa cells is inhibited by proteins produced by the cell; these inhibitory proteins target retinoblastoma tumor-suppressing proteins. These tumor-suppressing proteins regulate the cell cycle, but are rendered inactive when bound to an inhibitory protein. HPV E6 and E7 are inhibitory proteins expressed by the human papillomavirus, HPV being responsible for the formation of the cervical tumor from which HeLa cells are derived. HPV E6 causes p53, which regulates the cell cycle, to become inactive. HPV E7 binds to retinoblastoma tumor-suppressing proteins and limits their ability to control cell division. These two inhibitory proteins are partially responsible for HeLa cells' immortality by preventing apoptosis from occurring. Canine distemper virus (CDV) is able to induce apoptosis despite the presence of these inhibitory proteins. This is an important oncolytic property of CDV: this virus is capable of killing canine lymphoma cells. Oncoproteins E6 and E7 still leave p53 inactive, but they are not able to avoid the activation of caspases induced by the stress of viral infection. These oncolytic properties provide a promising link between CDV and lymphoma apoptosis, which could lead to the development of alternative treatment methods for both canine lymphoma and human non-Hodgkin lymphoma. Defects in the cell cycle are thought to be responsible for the resistance of certain tumor cells to chemotherapy or radiation, so a virus that can induce apoptosis despite defects in the cell cycle is useful for cancer treatment.
The main method of treatment for potential death from signaling-related diseases involves either increasing or decreasing the susceptibility of diseased cells to apoptosis, depending on whether the disease is caused by inhibited or excessive apoptosis. For instance, treatments aim to restore apoptosis to treat diseases with deficient cell death, and to increase the apoptotic threshold to treat diseases involving excessive cell death. To stimulate apoptosis, one can increase the number of death receptor ligands (such as TNF or TRAIL), antagonize the anti-apoptotic Bcl-2 pathway, or introduce SMAC mimetics to inhibit the inhibitors (IAPs). The addition of agents such as Herceptin, Iressa, or Gleevec works to stop cells from cycling and causes apoptosis activation by blocking growth and survival signaling further upstream. Finally, disrupting p53-MDM2 complexes frees p53 and activates the p53 pathway, leading to cell cycle arrest and apoptosis. Many different methods can be used either to stimulate or to inhibit apoptosis at various places along the death signaling pathway.
Apoptosis is a multi-step, multi-pathway cell-death programme that is inherent in every cell of the body. In cancer, the apoptosis cell-division ratio is altered. Cancer treatment by chemotherapy and irradiation kills target cells primarily by inducing apoptosis.
On the other hand, loss of control of cell death (resulting in excess apoptosis) can lead to neurodegenerative diseases, hematologic diseases, and tissue damage. Notably, neurons that rely on mitochondrial respiration undergo apoptosis in neurodegenerative diseases such as Alzheimer's and Parkinson's (an observation known as the "inverse Warburg hypothesis"). Moreover, there is an inverse epidemiological comorbidity between neurodegenerative diseases and cancer. The progression of HIV is directly linked to excess, unregulated apoptosis. In a healthy individual, the number of CD4+ lymphocytes is in balance with the cells generated by the bone marrow; however, in HIV-positive patients, this balance is lost due to an inability of the bone marrow to regenerate CD4+ cells. In the case of HIV, CD4+ lymphocytes die at an accelerated rate through uncontrolled apoptosis when stimulated. At the molecular level, hyperactive apoptosis can be caused by defects in signaling pathways that regulate the Bcl-2 family proteins. Increased expression of apoptotic proteins such as BIM, or their decreased proteolysis, leads to cell death and can cause a number of pathologies, depending on the cells where excessive activity of BIM occurs. Cancer cells can escape apoptosis through mechanisms that suppress BIM expression or through increased proteolysis of BIM.
Treatments aiming to inhibit apoptosis work to block specific caspases. Finally, the Akt protein kinase promotes cell survival through two pathways. Akt phosphorylates and inhibits Bad (a Bcl-2 family member), causing Bad to interact with the 14-3-3 scaffold and release Bcl-xL, thus promoting cell survival. Akt also activates IKKα, which leads to NF-κB activation and cell survival. Active NF-κB induces the expression of anti-apoptotic genes such as Bcl-2, resulting in inhibition of apoptosis. NF-κB has been found to play both an antiapoptotic role and a proapoptotic role, depending on the stimulus and the cell type.
The progression of human immunodeficiency virus infection into AIDS is due primarily to the depletion of CD4+ T-helper lymphocytes in a manner that is too rapid for the body's bone marrow to replenish the cells, leading to a compromised immune system. One of the mechanisms by which T-helper cells are depleted is apoptosis, which results from a series of biochemical pathways.
Cells may also die as a direct consequence of viral infection. HIV-1 expression induces tubular cell G2/M arrest and apoptosis. The progression from HIV to AIDS is not immediate or even necessarily rapid; HIV's cytotoxic activity toward CD4+ lymphocytes is classified as AIDS once a given patient's CD4+ cell count falls below 200 cells per microliter of blood.
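A minimal sketch of that classification criterion; the cells-per-microliter unit is the standard one assumed here.

```python
def aids_by_cd4_criterion(cd4_cells_per_ul):
    """Apply the CD4+ count threshold mentioned above (cells per microliter of blood)."""
    return cd4_cells_per_ul < 200

print(aids_by_cd4_criterion(150))  # True  -> meets the AIDS-defining CD4 criterion
print(aids_by_cd4_criterion(450))  # False -> does not meet it
```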
Researchers from Kumamoto University in Japan have developed a new method to eradicate HIV in viral reservoir cells, named "Lock-in and apoptosis." Using the synthesized compound heptanoylphosphatidyl L-inositol pentakisphosphate (L-Hippo) to bind strongly to the HIV protein PR55Gag, they were able to suppress viral budding. By suppressing viral budding, the researchers were able to trap the HIV virus in the cell and allow the cell to undergo apoptosis (natural cell death). Associate Professor Mikako Fujita has stated that the approach is not yet available to HIV patients because the research team must conduct further research on combining existing drug therapy with this "Lock-in and apoptosis" approach to lead to complete recovery from HIV.
Viral induction of apoptosis occurs when one or several cells of a living organism are infected with a virus, leading to cell death. Cell death in organisms is necessary for the normal development of cells and for cell cycle maturation. It is also important in maintaining the regular functions and activities of cells.
Viruses can trigger apoptosis of infected cells via a range of mechanisms, including receptor binding, activation of protein kinase R (PKR), interaction with p53, and expression of viral proteins coupled to MHC proteins on the surface of the infected cell, which allows recognition by cells of the immune system that then induce apoptosis.
Canine distemper virus (CDV) is known to cause apoptosis in the central nervous system and lymphoid tissue of infected dogs in vivo and in vitro. Apoptosis caused by CDV is typically induced via the extrinsic pathway, which activates caspases that disrupt cellular function and eventually lead to the cell's death. In normal cells, CDV activates caspase-8 first, which works as the initiator protein, followed by the executioner protein caspase-3. However, apoptosis induced by CDV in HeLa cells does not involve the initiator protein caspase-8. HeLa cell apoptosis caused by CDV follows a different mechanism than that in Vero cell lines. This change in the caspase cascade suggests CDV induces apoptosis via the intrinsic pathway, excluding the need for the initiator caspase-8. The executioner protein is instead activated by internal stimuli caused by the viral infection, not by a caspase cascade.
The Oropouche virus (OROV) is found in the family Bunyaviridae. The study of apoptosis brought on by Bunyaviridae was initiated in 1996, when it was observed that the La Crosse virus induced apoptosis in the kidney cells of baby hamsters and in the brains of baby mice.
OROV is transmitted between humans by the biting midge (Culicoides paraensis). It is referred to as a zoonotic arbovirus and causes a febrile illness, characterized by the onset of a sudden fever, known as Oropouche fever.
The Oropouche virus also causes disruption in cultured cells - cells that are cultivated in distinct and specific conditions. An example of this can be seen in HeLa cells, whereby the cells begin to degenerate shortly after they are infected.
With the use of gel electrophoresis, it can be observed that OROV causes DNA fragmentation in HeLa cells. The extent of apoptosis can be assessed by counting, measuring, and analyzing the cells of the sub-G1 cell population. When HeLa cells are infected with OROV, cytochrome c is released from the membrane of the mitochondria into the cytosol of the cells. This type of interaction shows that apoptosis is activated via an intrinsic pathway.
In order for OROV to cause apoptosis, viral uncoating, viral internalization, and replication within the cell are necessary. Apoptosis in some viruses is activated by extracellular stimuli. However, studies have demonstrated that OROV infection causes apoptosis to be activated through intracellular stimuli and involves the mitochondria.
Many viruses encode proteins that can inhibit apoptosis. Several viruses encode viral homologs of Bcl-2. These homologs can inhibit proapoptotic proteins such as BAX and BAK, which are essential for the activation of apoptosis. Examples of viral Bcl-2 proteins include the Epstein-Barr virus BHRF1 protein and the adenovirus E1B 19K protein. Some viruses express caspase inhibitors that block caspase activity; an example is the CrmA protein of cowpox viruses. A number of viruses can also block the effects of TNF and Fas; for example, the M-T2 protein of myxoma viruses can bind TNF, preventing it from binding the TNF receptor and inducing a response. Furthermore, many viruses express p53 inhibitors that can bind p53 and inhibit its transcriptional transactivation activity. As a consequence, p53 cannot induce apoptosis, since it cannot induce the expression of proapoptotic proteins. The adenovirus E1B-55K protein and the hepatitis B virus HBx protein are examples of viral proteins that can perform such a function.
Viruses can remain intact during apoptosis, particularly in the latter stages of infection. They can be exported in the apoptotic bodies that pinch off from the surface of the dying cell, and the fact that they are engulfed by phagocytes prevents the initiation of a host response. This favours the spread of the virus.
Programmed cell death in plants has a number of molecular similarities to that of animal apoptosis, but it also has differences, notable ones being the presence of a cell wall and the lack of an immune system that removes the pieces of the dead cell. Instead of an immune response, the dying cell synthesizes substances to break itself down and places them in a vacuole that ruptures as the cell dies. Whether this whole process resembles animal apoptosis closely enough to warrant using the name apoptosis (as opposed to the more general programmed cell death) is unclear.
The characterization of the caspases allowed the development of caspase inhibitors, which can be used to determine whether a cellular process involves active caspases. Using these inhibitors it was discovered that cells can die while displaying a morphology similar to apoptosis without caspase activation. Later studies linked this phenomenon to the release of AIF (apoptosis-inducing factor) from the mitochondria and its translocation into the nucleus mediated by its NLS (nuclear localization signal). Inside the mitochondria, AIF is anchored to the inner membrane. In order to be released, the protein is cleaved by a calcium-dependent calpain protease.
C.X.C objectives • Students should be able to : • Define polymers • Distinguish between addition and condensation as reactions in the formation of polymers • Name examples of polymers formed by: (i) addition reactions (ii) condensation reactions • Draw diagrams to represent the formulae of monomers • State at least one use of each of the following types of polymers : (i) polyalkene (ii) polyamide (iii) polyester (iv) polysaccharide • Show how the monomers are linked in the structure of a polymer • Demonstrate the differences in properties between a monomer and the polymer it forms.
Polymers • Obj 1. Students should be able to define polymers: • What is a polymer? • Polymers are macromolecules formed by linking together thousands of small molecules called monomers, usually in chains. Polymers are formed by polymerisation. • Some polymers occur naturally whereas others are man-made (synthetic). Synthetic polymers are referred to as plastics and will be discussed further later in this module.
Polymerisation • Obj. 2. Distinguish between addition and condensation as reactions in the formation of polymers. • Polymerisation is the process whereby a polymer is formed from monomers. This can happen in two ways. • Addition polymerisation occurs when unsaturated monomers are linked to form a saturated polymer. • Condensation polymerisation occurs when monomers join with the elimination of a small molecule, e.g. water, from between each unit.
Addition Polymerisation • An addition polymer is constructed of one type of monomer. This is an unsaturated molecule (usually an alkene, containing a C=C double bond). • Addition polymers are referred to as polyalkenes. • The polymer is formed when the double bond breaks and the units join together. • Only one type of product is formed.
Naming Polymers • To name a polymer, the prefix ‘poly’ is placed before the name of the monomer. • For example: polypropene, polystyrene, polyethene.
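A trivial sketch of that naming rule; the function name and monomer list are just for illustration.

```python
def name_addition_polymer(monomer):
    """Apply the naming rule from this slide: put the prefix 'poly' before the monomer name."""
    return "poly" + monomer.lower()

for monomer in ["propene", "styrene", "ethene"]:
    print(name_addition_polymer(monomer))  # polypropene, polystyrene, polyethene
```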
Addition Polymers • Polyalkenes have properties of substances commonly named PLASTICS. Plastics are synthetic polymers.
Condensation Polymerisation • Condensation polymerisation describes a process whereby the polymer is formed when monomer units join together with the elimination of a small molecule, usually BUT NOT ALWAYS water (HCl or NH3 could be eliminated as well). • In order for monomers to form condensation polymers, each monomer must have two active sites (the points at which the monomers join). • Two products are formed as a result of this type of polymerisation. • It is important to note that a condensation polymer can have monomers of one or two types.
Condensation Polymerisation • There are two types of condensation polymers: natural condensation polymers, e.g. protein and starch, and synthetic or man-made condensation polymers, e.g. nylon and terylene.
Types of linkage • Condensation polymers can be divided into groups based on the type of linkage between the monomer units. • Polyamides – amide linkage • Polyesters – ester linkage • Polysaccharides – saccharide linkage
Types of condensation polymers • Polyamides • Polyesters • Polysaccharides
Polyamides • Protein is a natural polyamide. The monomers which make up proteins are amino acids.
Polyamides cont’d: • Two amino acids join together to form a dipeptide. An ‘H’ from the amine (NH2) group of one amino acid and an ‘OH’ from the other amino acid condense to form water, joining the two monomers.
Polyamides cont’d: • Nylon is an example of a synthetic polymer. Nylon is special in that it is formed by two different monomers, a diacid and a diamine.
Polyesters • Polyesters are synthetic fibres, such as terylene, made as imitations of natural materials like wool and cotton. • A polyester structure consists of many monomers joined together by ester bonds. • The monomers in polyester are: a diacid + a dialcohol.
Polysaccharides • Polysaccharides are natural polymers such as starch and cellulose. The monomers are monosaccharides, e.g. fructose or glucose (simple reducing sugars). • Two glucose units join together by the elimination of water to produce a disaccharide (maltose), and many glucose units join together to form the polysaccharide starch. • Starch can be represented as a long chain of glucose units linked through oxygen bridges.
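A small illustrative calculation of the mass bookkeeping in condensation polymerisation: when n monomer units condense into one chain, n - 1 water molecules are eliminated. The 180 g/mol figure is the approximate molar mass of glucose, and the chain length is arbitrary.

```python
WATER_MASS = 18.0  # g/mol, eliminated at each new linkage

def condensation_chain_mass(monomer_mass, n_units):
    """Approximate molar mass of a linear chain built from n monomers by condensation:
    n monomer masses minus (n - 1) eliminated water molecules."""
    return n_units * monomer_mass - (n_units - 1) * WATER_MASS

# Example: 100 glucose units (~180 g/mol each) condensing into a starch-like chain.
print(condensation_chain_mass(monomer_mass=180.0, n_units=100))  # 16218.0 g/mol
```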
Hydrolysis • Hydrolysis of Polymers- • The word ‘hydrolysis’ means to split up by the addition of water. Hydro- water, lysis-to break up/separate. • Polymers which undergo hydrolysis are broken up into their respective monomers when water is added.
Hydrolysis • Carbohydrates and proteins can be hydrolysed in two ways: • 1) In the body, during digestion by enzymes. • 2) In the lab, by boiling with dilute hydrochloric or sulphuric acid.
Properties of Monomers and Polymers • Polymers tend to have totally different physical and chemical properties from their monomers.
Videos • The Polymer Party – • http://www.youtube.com/watch?v=SgWgLioazSo |
As we showed you in a recent slideshow, some NASA scientists envision astronauts making whatever they need out of local materials on Mars or the moon via 3D printing. While technology from organizations like Contour Crafting has made this theoretically possible, Washington State University (WSU) engineers have now actually used moon rocks to print some simple-shaped objects -- on Earth.
Real moon rocks are too rare, so researchers are using an imitation moon rock called lunar regolith simulant. Regolith is a mixture of loose dust, rock, and soil that covers solid bedrock on Earth, as well as other planets, the moon, and some asteroids. The simulant is formulated to approximate the real lunar regolith's chemical and mineral properties. There are several versions. The WSU team used about 10 lb of one version that contains silicon, aluminum, calcium, iron, and magnesium oxides.
Washington State University engineers have 3D-printed some simple-shaped objects using a simulant of lunar regolith, a mixture of loose dust, rock, and soil that covers solid bedrock. Shown here, Apollo 16 astronaut Charlie Duke drives a core sample tube into the lunar regolith. (Source: NASA)
A team that includes Amit Bandyopadhyay and Susmita Bose, professors at the university's School of Mechanical and Materials Engineering, has demonstrated the printing of parts from the raw, artificial moon rock. NASA is working with several organizations, including Contour Crafting, to develop the technology for fabricating simple tools or replacement parts, but Bandyopadhyay's group is the first to demonstrate the ability.
Previously, Bandyopadhyay and Bose had used 3D printing to create bone-like materials for use in orthopedic implants. Their current work uses Laser Engineered Net Shaping (LENS) technology, specifically LENS-750 systems. These are based on laser sintering, the most common additive manufacturing method. The team published its results in an article in the Rapid Prototyping Journal.
According to the article abstract, the team produced dense parts with no macroscopic defects, which they characterized to evaluate how laser processing affected the lunar regolith simulant's microstructure, constituent phases, and chemistry. Characterization was done using X-ray diffraction, differential scanning calorimetry, scanning electron microscope, and X-ray photoelectron spectroscopy.
Although the laser processing did cause marginal changes in the material's composition, after some trial and error, the researchers managed to produce parts that did not crack when they solidified.
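As a rough illustration of the kind of parameter bookkeeping involved in such trial-and-error studies, here is a sketch using a generic energy-per-unit-length figure of merit common in laser-based additive manufacturing; neither the formula's use here nor the numbers come from the WSU paper.

```python
def linear_energy_density(laser_power_w, scan_speed_mm_s):
    """Generic figure of merit for laser deposition: energy delivered per unit track length (J/mm).
    The values swept below are hypothetical, not the WSU team's settings."""
    return laser_power_w / scan_speed_mm_s

for power_w in (300, 400, 500):
    for speed_mm_s in (10, 20):
        e = linear_energy_density(power_w, speed_mm_s)
        print(f"P={power_w} W, v={speed_mm_s} mm/s -> {e:.0f} J/mm")
```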
The team has sent its results to NASA. Other team members include Vamsi Krishna Balla, also with WSU's School of Mechanical and Materials Engineering; Luke B. Roberson, of NASA's Kennedy Space Center; Gregory W. O'Connor, of Amalgam Industries; and Steven Trigwell, of ASRC Aerospace Corp. The research was supported by a $750,000 W.M. Keck Foundation grant.
In the video below, Bandyopadhyay shows the regolith material and explains that the technology can also be used onsite to repair broken parts. The achievement, he says in the video, is a first-generation work that will probably not be ready for commercial use for another 50 years or so.
I suspect it may not take that long, considering how fast this technology area is advancing. NASA is already working on 3D printing rocket engine parts, and other researchers have figured out how to 3D print entire personal electronic devices. The two biggest challenges in printing objects from moon rocks seem to be figuring out the best combination of laser sintering processing and moon rock material, plus, making small printers that will work in a zero-gravity environment.
Funny you should mention that about certain types of plastic helping to shield astronauts from cosmic rays. You're right--it's in an instrument in NASA's Lunar Reconnaissance Orbiter. I just wrote a blog on this discovery that will be appearing soon.
Ann, you are absolutely correct that the main issue these days for astronauts is cosmic ray radiation. These radiations are very harmful and cause severe cellular damage, which can result in cancer at the least and can lead to death as well. I have read somewhere that using plastic in deep space can reduce the problem of cosmic rays. Plastic reduces the radiation from fast-moving charged particles (cosmic rays); anything with a high hydrogen content, along with water, will work well. NASA is working on all of these remedies to find a good solution.
Thanks, Deberah. The cost of the fuel and logistics involved in shipping stuff to astronauts on the space station, the moon, or another planet is considered by many to be one of the main reasons humans haven't gone on longer space voyages or spent time on the moon. Another is figuring out how to protect us from harmful cosmic ray radiation.
Ann, this is a really informative article. It's great that researchers are working on 3D printing with lunar rocks. This would bring down the cargo costs for objects if development happens on the moon. Many years back I heard that astronauts wanted to colonize the moon but that it was very difficult; now I feel that in the near future it will be much easier to develop colonies on the moon.
emneumann, thanks for the comments, and glad you liked the article. Unfortunately, we *have* used up many, perhaps even most, sources of raw native ores. Scrap and reclaimed metals are by no means easily reusable at the same strengths as when originally forged. Aluminum makers claim theirs is, but as usual, that depends on several variables. The dystopic scenarios are not confined to science fiction.
I'd like to point out that the materials upon which our technology is based aren't consumed and made unusable once they have been incorporated into our machines and infrastructure. That is to say, we have not "used up" the iron, aluminum, and other raw materials, and they will be more accessible to a future post-dark-age humanity than they were to our ancestors. They will just be in other places and not in their native ores. They will be in landfills, salvage yards, and in the infrastructure concentrated in urban areas. In fact, many of them will be in a form much more recognizable as useful to people in a dystopian future than they were the first time we dug them out of the ground. Granted, fossil fuels will be much harder to find, but that should be the only resource disadvantage to future peoples trying to build a technological society from scratch.
This reminds me of the folks who think money spent on space exploration disappears into the vacuum of the void with the few insignificant pounds of materials that we actually send into space. That money feeds into the economy and allows many people to feed their families, pay their mortgages, etc. and is in no way a waste or lost forever.
ChasChas, minerals are not to be dismissed--and they are also found on the moon. If a widescale disaster happened here on Earth, as in sci-fi novels and movies, and all cultures got sent back to the stone age, it would be really difficult to re-create current conditions primarily because we've used up most of the Earth's minerals that were available via mining, to forge metals. Those metals are what we used to build machines, including the ones that then built other materials. The history of industrial technology is an interesting and instructive study.
Logical thinking skills worksheets. These help students understand and identify the specific critical-thinking skills they are using. There are two kinds of activities: (1) those that you as the teacher will lead, and (2) student reproducibles for independent work.
Teaching critical thinking is crucial for student success in core subject areas, and it can begin as young as preschool through the introduction of worksheets featuring games and puzzles. Logic puzzle worksheets are great for helping children develop their reasoning skills.
Various types of patterns (color patterns; repeating AB, ABC, and AAB patterns; and growing/decreasing patterns) have been used to stimulate logical thinking. All the little ones need to do is follow the directions and complete each exercise. Kindergarten Logic Puzzles, Riddles, Worksheets, and Printables.
Challenge students with these mind-bending critical thinking puzzles. On the introductory pages for each section of the book, you'll find both kinds of activities. Some of the worksheets for this concept are: 81 Fresh Fun Critical Thinking Activities, The Critical Thinking, Logic Work 1, Deductive Reasoning Exercises for Attention and Executive, The Thinking Toolbox, Critical Thinking Reasoning and Reading Strategies, and Just for Adults Deductions 10.
For each thinking skill in this book there are two kinds of activities. These pattern worksheets for kindergarten and preschool help develop higher-order thinking skills (HOTS) in children.
Children are presented with a number of objects. Assemble a cipher disk and use it to decode facts about animals, explorers, and plants. JumpStart has a fun collection of free printable critical thinking worksheets and free critical thinking activities for kids.
Thinking and Reasoning Skills. Important facts about logical reasoning for 4th graders: fun, stimulating logical reasoning 4th grade questions and answers.
Critical thinking skills are necessary in the 21st century, and these worksheets cover a wide range of logic puzzles and problems: Sudoku, Masyu, and Hidato puzzles, word problems, and brain teasers of all kinds. Work on your logic skills and enhance your critical thinking skills. Homeschooling parents as well as teachers can encourage better logical thinking and deductive reasoning skills in kids by introducing them to these exercises.
Fact and Opinion – Students determine the validity of a body of work. Logic Puzzles Worksheets and Riddles Worksheets. Critical thinking is one of the most important skills these build.
They have to figure out the objects which are related to each other and match the things that go together. Sequencing worksheets and games. Dictionary Practice Worksheets – Practice your dictionary skills.
Sequencing, spatial, and pattern exercises and pictorial activities can build logical thinking. Strengthen your kids' logical thinking skills with these fun, stimulating logical reasoning 4th grade questions and answers. Given that logic is a discipline of thinking, your little ones will enjoy practicing the following logical math exercises. Printable thinking skills worksheets for kids.
Give children a variety of toys and blocks of different shapes, colors, and sizes, and ask them to identify and arrange things in patterns. Below you will find worksheets such as dot-to-dot, word search, find the correct shadow, find the correct pattern, find the differences, and fun mazes.
Check out these Things That Go Together Worksheets to enhance your child's logical reasoning skills. Compare and Contrast – Students examine differences and similarities in a variety of situations. On this kindergarten math worksheet, kids use their logical reasoning and critical thinking skills to solve a fun Sudoku puzzle with a zoo theme.
Brain Teasers – A great way to stimulate thinking. These worksheets will help kids develop their early thinking skills by following some simple clues to figure out an answer. For instance, you can create a simple color pattern like red block, blue block, red block, blue block.
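A tiny sketch that generates the repeating patterns mentioned earlier (AB, AAB, ABC); the color choices are arbitrary.

```python
from itertools import cycle, islice

def repeating_pattern(unit, length):
    """Repeat a pattern unit (e.g. ['red', 'blue'] for an AB pattern) out to a given length."""
    return list(islice(cycle(unit), length))

print(repeating_pattern(["red", "blue"], 6))             # AB pattern
print(repeating_pattern(["red", "red", "blue"], 6))      # AAB pattern
print(repeating_pattern(["red", "yellow", "blue"], 6))   # ABC pattern
```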
The worksheet has pictures and a set of instructions. Step by Step is a worksheet for kindergartners that encourages them to read and follow instructions carefully. Critical thinking and logical reasoning are two of the most important skills that kids need to develop.
Build logical thinking skills with these addition square puzzles. On this kindergarten math worksheet, kids use their logical reasoning and critical thinking skills to solve a fun Sudoku puzzle with a zoo theme. Don't worry, they come complete with answer keys.
Uh oh someone mislabeled these boxes of fruit. Use the activity sheets to develop logical. Check out our collection of fun and educational thinking skills worksheets that are geared towards kindergarten aged children.
Our worksheets use a lot of imagery to keep your kids entertained and excited about doing math.
Something mysterious is going on at the Sun. In defiance of all logic, its atmosphere gets much, much hotter the farther it stretches from the Sun’s blazing surface.
Temperatures in the corona — the tenuous, outermost layer of the solar atmosphere — spike upwards of 2 million degrees Fahrenheit, while just 1,000 miles below, the underlying surface simmers at a balmy 10,000 F. How the Sun manages this feat remains one of the greatest unanswered questions in astrophysics; scientists call it the coronal heating problem. A new, landmark mission, NASA’s Parker Solar Probe — scheduled to launch no earlier than August 11, 2018 — will fly through the corona itself, seeking clues to its behavior and offering the chance for scientists to solve this mystery.
From Earth, as we see it in visible light, the Sun’s appearance — quiet, unchanging — belies the life and drama of our nearest star. Its turbulent surface is rocked by eruptions and intense bursts of radiation, which hurl solar material at incredible speeds to every corner of the solar system. This solar activity can trigger space weather events that have the potential to disrupt radio communications, harm satellites and astronauts, and at their most severe, interfere with power grids.
Above the surface, the corona extends for millions of miles and roils with plasma, gases superheated so much that they separate into an electric flow of ions and free electrons. Eventually, it continues outward as the solar wind, a supersonic stream of plasma permeating the entire solar system. And so it is that humans live well within the extended atmosphere of our Sun. To fully understand the corona and all its secrets is to understand not only the star that powers life on Earth, but also the very space around us.
The coronal heating problem remains one of the greatest unanswered questions in astrophysics. Learn how astronomers first discovered evidence for this mystery during an eclipse in the 1800s, and what scientists today think could explain it. Credit: NASA’s Goddard Space Flight Center/Joy Ng
A 150-year-old mystery
Most of what we know about the corona is deeply rooted in the history of total solar eclipses. Before sophisticated instruments and spacecraft, the only way to study the corona from Earth was during a total eclipse, when the Moon blocks the Sun’s bright face, revealing the surrounding, dimmer corona.
The story of the coronal heating problem begins with a green spectral line observed during an 1869 total eclipse. Because different elements emit light at characteristic wavelengths, scientists can use spectrometers to analyze light from the Sun and identify its composition. But the green line observed in 1869 didn’t correspond to any known elements on Earth. Scientists thought perhaps they’d discovered a new element, and they called it coronium.
Not until 70 years later did a Swedish physicist discover the element responsible for the emission is iron, superheated to the point that it’s ionized 13 times, leaving it with just half the electrons of a normal atom of iron. And therein lies the problem: Scientists calculated that such high levels of ionization would require coronal temperatures around 2 million degrees Fahrenheit — nearly 200 times hotter than the surface.
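A quick sanity check of that factor, converting both quoted temperatures to kelvin so the ratio is physically meaningful:

```python
def fahrenheit_to_kelvin(temp_f):
    return (temp_f - 32) * 5.0 / 9.0 + 273.15

corona_k = fahrenheit_to_kelvin(2_000_000)  # corona temperature quoted above
surface_k = fahrenheit_to_kelvin(10_000)    # surface temperature quoted above
print(f"corona ~{corona_k:,.0f} K, surface ~{surface_k:,.0f} K, ratio ~{corona_k / surface_k:.0f}x")
# -> a ratio of roughly 190, i.e. "nearly 200 times hotter"
```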
For decades, this deceptively simple green line has been the Mona Lisa of solar science, baffling scientists who can’t explain its existence. Since identifying its source, we’ve come to understand the puzzle is even more complex than it first appeared.
“I think of the coronal heating problem as an umbrella that covers a couple of related confusing problems,” said Justin Kasper, a space scientist at the University of Michigan in Ann Arbor. Kasper is also principal investigator for SWEAP, short for the Solar Wind Electrons Alphas and Protons Investigation, an instrument suite aboard Parker Solar Probe. “First, how does the corona get that hot that quickly? But the second part of the problem is that it doesn’t just start, it keeps going. And not only does heating continue, but different elements are heated at different rates.” It’s an intriguing hint at what’s going on with heating in the Sun.
Since discovering the hot corona, scientists and engineers have done a great deal of work to understand its behavior. They’ve developed powerful models and instruments and launched spacecraft that watch the Sun around the clock. But even the most complex models and high-resolution observations can only partially explain coronal heating, and some theories contradict each other. There’s also the problem of studying the corona from afar.
We may live within the Sun’s expansive atmosphere, but the corona and solar plasma in near-Earth space differ dramatically. It takes the slow solar wind around four days to travel 93 million miles and reach Earth or the spacecraft that study it — plenty of time for it to intermix with other particles zipping through space and lose its defining features.
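A quick back-of-the-envelope check of the speed implied by those two figures:

```python
MILES_TO_KM = 1.609344

distance_km = 93_000_000 * MILES_TO_KM  # Sun-Earth distance quoted above
travel_s = 4 * 24 * 3600                # roughly four days, as quoted above
print(f"implied slow solar wind speed: ~{distance_km / travel_s:.0f} km/s")  # ~433 km/s
```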
Studying this homogeneous soup of plasma for clues to coronal heating is like trying to study the geology of a mountain by sifting through sediment in a river delta thousands of miles downstream. By traveling to the corona, Parker Solar Probe will sample just-heated particles, removing the uncertainties of a 93-million-mile journey and sending back to Earth the most pristine measurements of the corona ever recorded.
“All of our work over the years has culminated to this point: We realized we can never fully solve the coronal heating problem until we send a probe to make measurements in the corona itself,” said Nour Raouafi, Parker Solar Probe deputy project scientist and solar physicist at the Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland.
Traveling to the Sun is an idea older than NASA itself, but it’s taken decades to engineer the technology that makes its journey possible. In that time, scientists have determined exactly what kinds of data — and corresponding instruments — they need in order to complete a picture of the corona and answer this ultimate of burning questions.
Explaining the corona’s secrets
Parker Solar Probe will test two chief theories to explain coronal heating. The outer layers of the Sun are constantly boiling and roil with mechanical energy. As massive cells of charged plasma churn through the Sun — much the way distinct bubbles roll up through a pot of boiling water — their fluid motion generates complex magnetic fields that extend far up into the corona. Somehow, the tangled fields channel this ferocious energy into the corona as heat — how they do so is what each theory attempts to explain.
One theory proposes electromagnetic waves are the root of the corona’s extreme heat. Perhaps that boiling motion launches magnetic waves of a certain frequency — called Alfvén waves — from deep within the Sun out into the corona, which send charged particles spinning and heat the atmosphere, a bit like how ocean waves push and accelerate surfers toward the shore.
Another suggests bomb-like explosions, called nanoflares, across the Sun’s surface dump heat into the solar atmosphere. Like their larger counterparts, solar flares, nanoflares are thought to result from an explosive process called magnetic reconnection. Turbulent boiling on the Sun twists and contorts magnetic field lines, building up stress and tension until they explosively snap — like breaking an over-wound rubber band — accelerating and heating particles in their wake.
The two theories aren’t necessarily mutually exclusive. In fact, to complicate matters, many scientists think both may be involved in heating the corona. Sometimes, for example, the magnetic reconnection that sets off a nanoflare could also launch Alfvén waves, which then further heat surrounding plasma.
The other big question is, how often do these processes happen — constantly or in distinct bursts? Answering that requires a level of detail we don’t have from 93 million miles away.
“We’re going close to the heating, and there are times Parker Solar Probe will co-rotate, or orbit the Sun at the same speed the Sun itself rotates,” said Eric Christian, a space scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, and member of the mission’s science team. “That’s an important part of the science. By hovering over the same spot, we’ll see the evolution of heating.”
Uncovering the evidence
Once Parker Solar Probe arrives at the corona, how will it help scientists distinguish whether waves or nanoflares drive heating? While the spacecraft carries four instrument suites for a variety of types of research, two in particular will obtain data useful for solving the coronal heating mystery: the FIELDS experiment and SWEAP.
Surveyor of invisible forces, FIELDS, led by the University of California, Berkeley, directly measures electric and magnetic fields, in order to understand the shocks, waves and magnetic reconnection events that heat the solar wind.
SWEAP — led by the Harvard-Smithsonian Astrophysical Observatory in Cambridge, Massachusetts — is the complementary half of the investigation, gathering data on the hot plasma itself. It counts the most abundant particles in the solar wind — electrons, protons and helium ions — and measures their temperature, how fast they’re moving after they’ve been heated, and in what direction.
Together, the two instrument suites paint a picture of the electromagnetic fields thought to be responsible for heating, as well as the just-heated solar particles swirling through the corona. Key to their success are high-resolution measurements, capable of resolving interactions between waves and particles at mere fractions of a second.
Parker Solar Probe will swoop within 3.9 million miles of the Sun’s surface — and while this distance may seem great, the spacecraft is well-positioned to detect signatures of coronal heating. “Even though magnetic reconnection events take place lower down near the Sun’s surface, the spacecraft will see the plasma right after they occur,” said Goddard solar scientist Nicholeen Viall. “We have a chance to stick our thermometer right in the corona and watch the temperature rise. Compare that to studying plasma that was heated four days ago from Earth, where a lot of the 3D structures and time-sensitive information are washed out.”
This part of the corona is entirely unexplored territory, and scientists expect sights unlike anything they’ve seen before. Some think the plasma there will be wispy and tenuous, like cirrus clouds. Or perhaps it will appear like massive pipe cleaner-like structures radiating from the Sun.
“I’m pretty sure when we get that first round of data back, we’ll see the solar wind at lower altitudes near the Sun is spiky and impulsive,” said Stuart Bale, University of California, Berkeley, astrophysicist and FIELDS principal investigator. “I’d lay my money on the data being much more exciting than what we see near Earth.”
The data is complicated enough — and comes from multiple instruments — that it will take scientists some time to piece together an explanation for coronal heating. And because the Sun’s surface isn’t smooth and varies throughout, Parker Solar Probe needs to make multiple passes over the Sun to tell the whole story. But scientists are confident it has the tools to answer their questions.
The basic idea is that each proposed mechanism for heating has its own distinct signature. If Alfvén waves are the source of the corona’s extreme heat, FIELDS will detect their activity. Since heavier ions are heated at different rates, it appears that different classes of particles interact with those waves in specific ways; SWEAP will characterize their unique interactions.
If nanoflares are responsible, scientists expect to see jets of accelerated particles shooting out in opposite directions — a telltale sign of explosive magnetic reconnection. Where magnetic reconnection occurs, they should also detect hot spots where magnetic fields are rapidly changing and heating the surrounding plasma.
Discoveries lie ahead
There is an eagerness and excitement buzzing among solar scientists: Parker Solar Probe’s mission marks a watershed moment in the history of astrophysics, and they have a real chance of unraveling the mysteries that have confounded their field for nearly 150 years.
By piecing together the inner workings of the corona, scientists will reach a deeper understanding of the dynamics that spark space weather events, shaping conditions in near-Earth space. But the applications of this science extend beyond the solar system too. The Sun opens a window into understanding other stars — especially those that also exhibit Sun-like heating — stars that could potentially foster habitable environments but are too far to ever study. And illuminating the fundamental physics of plasmas could likely teach scientists a great deal about how plasmas behave elsewhere in the universe, like in clusters of galaxies or around black holes.
It’s also entirely possible that we haven’t even conceived of the greatest discoveries to come. It’s hard to predict how solving coronal heating will shift our understanding of the space around us, but fundamental discoveries such as this have the capacity to change science and technology forever. Parker Solar Probe’s journey takes human curiosity to a never-before-seen region of the solar system, where every observation is a potential discovery.
“I’m almost certain we’ll discover new phenomena we don’t know anything about now, and that’s very exciting for us,” Raouafi said. “Parker Solar Probe will make history by helping us understand coronal heating — as well as solar wind acceleration and solar energetic particles — but I think it also has the potential to steer the direction of solar physics’ future.”
If dark matter is the glue holding galaxies together, dark energy is its doppelganger, pushing the universe apart at increasing speeds. Dark energy is thought to make up three-quarters of the universe yet its basic nature remains poorly understood.
In a new approach to cracking the dark energy puzzle, Columbia astronomers are working with computer scientists to wring more information from high-resolution images of about a billion galaxies in our universe. Their project, drawing on statistics, and computer self-learning and face-recognition algorithms, is funded by a two-year, $200,000 grant awarded by the Office of the Provost and administered by the Data Science Institute.
Though invisible, dark energy can be inferred from its effects on the shining stars and galaxies seen from telescopes on Earth and in space. In 1998, astronomers noticed that the distance between supernovas, or exploding stars, was growing at an accelerating rate, and they coined the term dark energy to describe the cause of that acceleration.
After the Big Bang 13.8 billion years ago, the universe grew rapidly before gravity slowed down its expansion. Then, about six billion years ago, dark energy is thought to have mysteriously caused it to pick up speed again. By pinning down the nature of dark energy, astronomers hope to understand the ultimate fate of our universe.
For now, dark energy is studied by monitoring the night sky. The most comprehensive technique involves tracking the subtle distortions of light around distant galaxies caused by gravitational lensing. When light travels to Earth, it bends around clumps of invisible dark matter en route. By measuring the changing shape of this distortion, or shear, at different distances from Earth, astronomers can trace dark matter’s gravitational pull through time.
“Looking at galaxies at different distances from Earth is like traveling through time,” said Columbia astronomer Zoltán Haiman, who is leading the dark energy project. “It allows us to reconstruct the time-evolution of the dark matter clumps. If they grow bigger, rapidly over time, we can infer there is either less dark energy filling the universe, or that dark energy is weaker.”
Using a supercomputer, Haiman, with his graduate students Andrea Petri and Jia Liu, has generated nearly a hundred models for how dark energy produced the universe we see today. Computer scientist Daniel Hsu is now applying statistical analysis and image-matching techniques to those models, each containing thousands of variations, to pin down ratios for the three variables thought to be most important for defining dark energy.
The idea for the project, said Haiman, came from a discussion he had years ago with computer scientist Shree Nayar, who helped develop the technology that lets computers quickly tell male and female faces apart. Haiman wondered if similar techniques could be used to compare pictures of the evolving universe to pick out shear features most predictive of dark energy, much as the mouth and nose are features predictive of male and female human faces.
One challenge for astronomers investigating dark energy is a shortage of observational data. To identify where clumps of dark matter have formed across the universe, shear maps for a large number of galaxies at a wide range of distances from Earth are needed.
To get around this, Haiman and his colleagues recently came up with a technique to get better dark matter estimates from limited data. Testing their method on shear shapes produced by six million galaxies (less than 1 percent of the sky), they showed that such estimates could be markedly improved.
As powerful new telescopes come online, including the Large Synoptic Survey Telescope (LSST) in Chile, an explosion of shear data is expected. Larger surveys will allow astronomers to measure the shapes of up to a billion galaxies, including those in our deep cosmic past. Data science tools like those being developed by Haiman and his colleagues will allow astronomers to extract more information from surveys large and small.
Until the mystery is solved, dark energy could be many things. “It could be a fundamental property of vacuum–the fundamental quantum property of nothing–or a new and exotic elementary particle,” said Haiman.
It might also turn out that dark energy does not exist, and Einstein’s general theory of relativity does not apply on cosmic scales.
“We may just be interpreting the data with the wrong equations,” said Haiman. “We can only solve this riddle with further analysis and measurements.”
Source: Columbia University
Browse Australian Curriculum (version 8.2) content descriptions, elaborations and find matching resources.
Calculate perimeter and area of rectangles using familiar metric units (ACMMG109)
Selected links to a range of interactive and print resources for Measurement topics in K-6 Mathematics.
How do we know what a house will look like before it is built? Discover how house plans work by looking at the design of a house that Hugo's family is going to build. See how a floor plan shows the room layout. See drawings of what the house will look like from different views.
This is a website designed for teachers and students in year 5, and addresses components of the length and area topic. It is particularly relevant for selecting appropriate metric units of measurement for length, perimeter and area, and calculation of the area of rectangles. There are pages for both teachers and students. ...
In this resource students will calculate the perimeter of different shapes, choose the appropriate measuring device, make different shapes from given perimeters
This is a six-page HTML resource about solving problems with perimeter and area. It contains one video and nine questions, five of which are interactive. The resource discusses and explains solving problems with area and perimeter to reinforce students' understanding.
Do you know how to work out the area of a square, a rectangle or a triangle? Learn the simple maths formulas needed from this video. What would be the area of a rectangle with a height of 5cm and a length of 3cm?
This series of three lessons explores the relationship between area and perimeter using the context of bumper cars at an amusement park. Students design a rectangular floor plan with the largest possible area with a given perimeter. They then explore the perimeter of a bumper car ride that has a set floor area and investigate ...
Do you know the formula for working out the area of a square? How about a triangle? Watch this short maths video to learn the formulas for both.
Want to know the trick to making a really big fort? Using cushions to build a fort, explore the concept of finding the largest area for a fixed perimeter. Surprisingly, there is no direct relationship between the perimeter of a rectangle and its area.
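That last point, that a fixed perimeter does not determine the area, is easy to verify with a short script. The sketch below is purely illustrative and is not taken from any of the resources listed here: it fixes a perimeter of 16 cm and prints every whole-number rectangle with that perimeter together with its area.

```python
# For a fixed perimeter, list every whole-number rectangle and its area.
# Perimeter P = 2 * (length + width), so length + width = P / 2.
def rectangles_for_perimeter(perimeter_cm):
    half = perimeter_cm // 2
    for width in range(1, half // 2 + 1):
        length = half - width
        print(f"{length} cm x {width} cm: perimeter {perimeter_cm} cm, area {length * width} sq cm")

rectangles_for_perimeter(16)
# 7 cm x 1 cm: area 7 sq cm
# 6 cm x 2 cm: area 12 sq cm
# 5 cm x 3 cm: area 15 sq cm
# 4 cm x 4 cm: area 16 sq cm
```

A perimeter of 16 cm allows areas anywhere from 7 to 16 square centimetres, which is the same idea explored in the bumper-car and cushion-fort activities above.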
Solve whole number division problems such as 156/6. Use a partitioning tool to help solve randomly generated division problems. Learn strategies to do complex arithmetic in your head. Split a division problem into parts that are easy to work with. This learning object is one in a series of four objects.
Join our host, Ed, as he finds himself in all types of situations where only his knowledge of Maths can help him. From saving the planet from Aliens, to creating a superhero that can stop a strawberry milkshake tidal wave. From searching for buried treasure, to jumping like a daredevil, or planning the greatest circus party ...
This is a teacher resource that contains a program outline, student activities, and links to images and videos relevant to teaching primary students about farm animals that are raised for food and fibre, with a particular focus on livestock needs and farming technologies. The program outline and supporting documents can ...
This lesson plan introduces students to some of Australia's native bee species. Organised in five stages, the lesson plan includes links to videos, scientific and bee-related websites, an information sheet, and downloadable versions of a pictorial slideshow, lesson plan and an assessment rubric. The resource includes suggestions ...
This collection comprises 24 digital curriculum resources, including learning objects from the series 'Shape sorter', 'Face painter', 'Geoboard', 'Photo hunt', 'Viewfinder' and 'Shape maker'. There are three categories: exploring two-dimensional shapes; visualising three-dimensional shapes; and making three-dimensional ...
In this resource students measure objects of different length in centimetres and millimetres, order lengths from shortest to longest, convert between millimetres, centimetres, metres and kilometres.
Did you know that in Australia we use a metric system for measurement? See if you know the units of measurement for length, mass and volume. Find out what system the United States uses. You guessed it - they don't use the metric system! See how a mix up of these units can cause all kinds of mess ups.
This is a website designed for teachers and students in year 5, and addresses components of the enlargement transformations topic in geometry. It is particularly relevant for the concept of enlarging two-dimensional shapes and also contains material on enlarging drawings using grid paper. There are pages for both teachers ...
This is a Geogebra activity used to teach the area of a parallelogram. Suitable for use with an interactive whiteboard (IWB).
Use this open exploration tool to explore patterns in geometry, fractions, area and perimeter by creating your own shapes or filling in shapes. Use the text tool to annotate your representations. Great for work on fractions, symmetry and area. Free when reviewed on 12/5/2015.
This is a unit of work that uses farming to explore the measurement and geometry concepts of grid references, directional language, area and length. It has a teacher directed task that introduces directional language and grid references and two student work tasks. The work tasks involve designing a farm using a grid and ...
Bearing is the expression of direction using the degrees of an angle. The following procedure is used to determine the position of a place by bearing and distance, for example when you are asked to determine the position of B from A:
Obtain the map needed
Consider the points A and B and identify the distance needed
Measure the distance between the two places and convert it into ground distance. For example, if the RF scale is 1:50,000 and the map distance obtained is 9 cm, the actual ground distance will be 9 cm × 50,000 = 450,000 cm = 4.5 km (see the sketch after this procedure).
Give the position by distance, i.e. B is situated 4.5 km from A
To give the position by bearing, follow these steps:
Join the two places by straight line
Draw the four major points of the compass at the place of reference i.e point A
Using a protractor, measure the angle clockwise from north until you reach the line joining A and B
Give the position by bearing, e.g. point B bears 095 degrees from A
Give the overall position, that is, point B is situated 4.5 km from A on a bearing of 095 degrees.
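As a rough illustration of the arithmetic in this procedure, the short sketch below converts a map distance to a ground distance for a representative fraction (RF) scale and assembles the final position statement. The function names are invented for this example; the numbers simply repeat the worked example above (9 cm on a 1:50,000 map, bearing 095 degrees).

```python
def ground_distance_km(map_distance_cm, scale_denominator):
    """Convert map distance to ground distance for an RF scale of 1:scale_denominator."""
    ground_cm = map_distance_cm * scale_denominator   # 9 cm x 50,000 = 450,000 cm
    return ground_cm / 100_000                        # 100,000 cm in one kilometre

def position_statement(point, reference, distance_km, bearing_deg):
    """Combine distance and bearing into the overall position statement."""
    return f"Point {point} is situated {distance_km} km from {reference} on a bearing of {bearing_deg:03d} degrees."

distance = ground_distance_km(9, 50_000)           # 4.5
print(position_statement("B", "A", distance, 95))  # Point B is situated 4.5 km from A on a bearing of 095 degrees.
```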
Statement scale is a type of map scale in which the scale is expressed in the form of a written statement, for example, one centimeter on the map represents ten kilometers on the ground. This can also be expressed in short as 1cm represents 10km or 1cm to 10km.
The following are features of statement scale;
The scale is expressed as a word statement.
The scales bear specific units of measurement. The unit of measurement used in the map is smaller than the actual measurements on the ground, for example, 1cm represents 10km
The word "represents" should be used; do not use "is equivalent to" or "is equal to". For example, do not say one centimeter on the map is equal to ten kilometers on the ground, because that statement is not literally true; instead, say one centimeter on the map represents ten kilometers on the ground.
The map distance always carries the digit 1 while that of the ground may be less than 1, 1, or more than one.
A statement scale is simple to express. However, it may be difficult for users who are not familiar with the unit of measurement used in the scale. If the map is reduced or enlarged, the stated scale will no longer be correct.
Research may be defined as the systematic and objective analysis and recording of controlled observation that may lead to the development of generalizations, principles, or theories, resulting in prediction and possibly ultimate control of an event.
The following are two types of research:
Basic research is the kind of research conducted with the aim of generating and expanding knowledge. It includes the generalization and formulation of principles or theories.
Basic (fundamental) research is sometimes carried out in a laboratory or other sterile environment, sometimes with animals.
This type of research has no immediate or planned applications, but it may later result in further research of an applied nature.
Characteristics of basic research are:
It is conducted in a specific setting, for example in the laboratory.
It takes a long time to conduct as it involves the investigation of a particular problem.
There are three types of plate boundary (or margin): constructive, destructive and passive.
These arise where two plates move away from each other, and new crust is created at the boundary. They are mainly found between oceanic plates, and are consequently underwater features.
Rift valleys may initially develop, but molten rock from the mantle (magma) rises to fill any possible gaps. Constructive margins are often marked by ocean ridges (e.g. the Mid-Atlantic Ridge, the East Pacific Rise).
The rising magma forms submarine volcanoes, which in time may grow above sea level (e.g. Iceland, Tristan da Cunha and Ascension Island on the Mid Atlantic Ridge, and Easter Island on the East Pacific Rise).
Different rates of latitudinal movement along the boundary cause transform faults to develop as the magma cools – these lie perpendicular (at a right angle) to the plate boundary.
Of the annual volume of lava ejected onto the Earth’s surface, 73 per cent is found on mid-ocean ridges, and approximately one-third of the lava ejected onto the Earth’s surface during the past 500 years is found in Iceland.
The Atlantic Ocean formed as the continent of Laurasia split in two, and the Atlantic is continuing to widen by approximately 2–5 cm per year. Very rarely, constructive margins can occur on land, and it is thought that this is happening in East Africa at the Great African Rift Valley System.
Extending for 4,000 km from the Red Sea to Mozambique, its width varies from 10 to 50 km, and at points its sides reach over 600 m in height.
Where the land has dropped sufficiently, the sea has invaded; it has been suggested that the Red Sea is the beginning of a newly forming ocean. Associated volcanoes include Mount Kilimanjaro and Mount Kenya to the east and Ruwenzori to the west.
These occur where two plates move towards each other, and one is forced below the other into the mantle.
The Pacific Ocean is virtually surrounded by destructive plate margins with their associated features, and its perimeter has become known as the Pacific Ring of Fire.
The features present at destructive margins will depend upon what types of plates are converging.
When oceanic crust meets continental crust:
The thinner, denser oceanic crust is forced to dip downwards at an angle and sink into the subduction zone beneath the thicker, lighter and more buoyant continental crust.
A deep-sea trench forms at the plate margin as subduction takes place. These form the deepest areas on the planet.
As the oceanic crust descends, the edge of the continental crust may crumple to form fold mountains, which run in chains parallel to the boundary (e.g. the Andes).
Sediments collecting in the deep-sea trench may also be pushed up to form fold mountains.
As the oceanic crust descends into the hot mantle, the additional heat generated by friction helps the plate to melt, usually at a depth of 400–600 km below the surface.
As it is less dense than the mantle, the newly formed magma will tend to rise to the Earth’s surface, where it may form volcanoes.
However, as the rising magma at destructive margins is very acidic, it may solidify before it reaches the surface and form a batholith at the base of the mountain chain .
As the oceanic plate descends, shallow earthquakes occur where the crust is stretched as it dips beneath the surface. Deeper earthquakes arise where the build-up of friction and pressure is released.
The area in the subduction zone where most earthquakes take place is known as the Benioff zone.
The depth of the deeper earthquakes may also provide an indication as to the angle of subduction, where gentler angles of subduction give rise to shallower earthquakes.
If subduction occurs offshore, island arcs may form (e.g. Japan, the West Indies).
When oceanic crust meets oceanic crust:
Where two oceanic plates collide, either one may be subducted.
Similar features arise as those where an oceanic plate meets a continental plate.
When continental crust meets continental crust (note that this is very rare):
Because continental crust cannot sink, the edges of the two plates and the intervening sediments are crumpled to form very deep-rooted fold mountains.
The zone marking the boundary of the two colliding plates is known as the suture line.
These boundaries mark the site at which the Earth’s crust is at its thickest. For example, the Indo-Australian Plate is moving northeastwards and is crashing into the rigid Eurasian Plate, creating the Himalayas.
Uplift is a continuous process (it is happening right now); however, weathering and erosion of the mountain tops means that the actual height of the mountains is not as great as the rate of uplift would suggest.
Sediments which form part of the Himalayas were once underlying the Tethys Sea, which existed at the time of the Pangean supercontinent.
These occur where two plates slide past each other and crust is neither created nor destroyed.
The boundary between the two plates is characterized by pronounced transform faults, which lie parallel to the plate boundary.
As the plates slide past each other, friction builds up and causes the plates to stick, and release is in the form of earthquakes.
An excellent example of a passive margin is the San Andreas Fault (one of several hundred known faults) in California, which marks a junction between the North American and the Pacific Plates.
Although both plates are moving in a northwesterly direction, the Pacific Plate moves at a faster rate than the North American Plate (6 cm per year, compared with just 1 cm per year), creating the illusion that the plates are moving in opposite directions.
The Earth’s lithosphere (crust and upper mantle) is divided into seven large and several smaller plates.
These plates are constantly moving, and are driven by convection currents in the mantle.
Plate boundaries mark the sites of the world’s major landforms, and they are also areas where mountain-building, volcanoes, and earthquakes can be found.
However, in order to account for such activity at the plate boundaries, several points should be noted
Continental crust is less dense than oceanic crust so it does not sink.
Whereas oceanic crust is continuously being created and destroyed, continental crust is permanent, and hosts the oldest rocks on the planet (the shieldlands).
Continental plates may be composed of both continental and oceanic crust (e.g. Eurasia).
Continental crust may extend further than the margins of the land masses (when continental crust is covered by an ocean, it is known as continental shelf).
It is not possible for plates to overlap, so they may either crumple up to form mountain chains, or one plate must sink below the other.
If two plates are moving apart, new crust is formed in the intervening space, as no ‘gaps’ may occur in the Earth’s crust.
The earth is not expanding, so if newer crust is being created in one area, older crust must be being destroyed elsewhere.
Plate movements are normally slow and continuous. Sudden movements manifest themselves as earthquakes.
Very little structural change takes place in the centre of the plates (the shield lands). Plate margins mark the sites of the most significant landforms, including volcanoes, batholith intrusions, fold mountains, island arcs and deep-sea trenches
Statistics being the scientific and systematic methods dealing with numerical facts is broadly categorized into two types depending on how data is handled.
The two main categories of statistics are descriptive and inferential statistics.
Descriptive statistics deals with the recording, summarizing, analysis and presentation of numerical facts that have actually been collected.
Descriptive statistics is the term given to the analysis of data that helps describe, show or summarize data in a meaningful way such that, for example, patterns might emerge from the data.
Descriptive statistics do not, however, allow us to make conclusions beyond the data we have analyzed or reach conclusions regarding any hypotheses we might have made.
They are simply a way to describe our data. Descriptive statistics are very important because if we simply presented our raw data it would be hard to visualize what the data was showing, especially if there was a lot of it.
Descriptive statistics, therefore, enables us to present the data in a more meaningful way, which allows a simpler interpretation of the data.
For example, if we had the results of 100 pieces of students’ coursework, we may be interested in the overall performance of those students.
We would also be interested in the distribution or spread of the marks. Descriptive statistics allow us to do this.
Typically, two general types of statistics are used to describe data: measures of central tendency and measures of spread.
Inferential statistics makes inferences about a population using data drawn from a sample of that population.
Instead of using the entire population to gather the data, the statistician will collect a sample or samples (for example, from the millions of residents of a country) and make inferences about the entire population using the sample.
With inferential statistics, you are trying to reach conclusions that extend beyond the immediate data alone.
For instance, we use inferential statistics to try to infer from the sample data what the population might think.
Or, we use inferential statistics to make judgments of the probability that an observed difference between groups is a dependable one or one that might have happened by chance in this study.
Thus, we use inferential statistics to make inferences from our data to more general conditions; we use descriptive statistics simply to describe what’s going on in our data.
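The contrast between the two branches can be made concrete with a few lines of code. The sketch below is a hypothetical illustration in Python (the coursework marks are randomly generated, not real data): the descriptive step summarises the sample itself, and the inferential step uses the same sample to estimate a plausible range for the population mean.

```python
import math
import random
import statistics

random.seed(1)
marks = [random.gauss(65, 10) for _ in range(100)]   # hypothetical coursework marks

# Descriptive statistics: summarise the data actually collected.
mean = statistics.mean(marks)            # measure of central tendency
spread = statistics.stdev(marks)         # measure of spread
print(f"sample mean = {mean:.1f}, sample standard deviation = {spread:.1f}")

# Inferential statistics: reach beyond the sample to the wider population.
# A rough 95% confidence interval for the population mean (normal approximation).
margin = 1.96 * spread / math.sqrt(len(marks))
print(f"population mean is likely between {mean - margin:.1f} and {mean + margin:.1f}")
```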
A vertical aerial photograph is one in which the shot is taken from directly above the subject of the image.
Hence, this method of aerial photography is also often referred to as an "overhead aerial photograph".
Oblique photographs (also known as oblique aerial photographs or oblique air photographs) are taken from a high point, which is at an angle neither horizontal (ground level photograph) nor perpendicular (vertical aerial photograph) to the area being photographed.
The following are advantages of vertical aerial photograph over oblique aerial photograph;
Vertical aerial photographs have a uniform scale
Vertical photographs present approximately uniform scale throughout the photo but not oblique photos. It follows that making measurements (e.g., distances and directions) on vertical photographs is easier and more accurate.
It is easy to determine direction in a vertical aerial photograph
Because of a constant scale throughout a vertical photograph, the determination of directions (i.e., bearing or azimuth) can be performed in the same manner as a map. This is not true for an oblique photo because of the distortions.
Vertical aerial photographs are easier to interpret
Because of a constant scale, vertical photographs are easier to interpret than oblique photographs. Furthermore, tall objects (e.g., buildings, trees, hills, etc.) will not mask other objects as much as they would on oblique photos.
Vertical aerial photographs are easier to use
Vertical photographs are simple to use photogrammetrically as a minimum of mathematical correction is required.
Vertical aerial photographs can be used as maps
To some extent and under certain conditions (e.g., flat terrain), a vertical aerial photograph may be used as a map if a coordinate grid system and legend information are added.
Vertical aerial photographs can be used for stereoscopic study
Stereoscopic study is also more effective on vertical than on oblique photographs.
A massive comet – approximately 80 miles across, twice the width of Rhode Island – is heading our way from the edge of the solar system at 22,000 miles per hour. Fortunately, it will never get closer to the Sun than about 1 billion miles, slightly farther out than the orbit of Saturn; that closest approach will come in 2031.
Comets are among the oldest objects in the solar system, icy bodies that were unceremoniously tossed out of the inner solar system in a game of gravitational pinball among the massive outer planets, said David Jewitt, a UCLA professor of planetary science and astronomy who co-authored a new study of the comet in The Astrophysical Journal Letters. The ejected comets took up residence in the Oort cloud, a vast reservoir of far-flung comets encircling the solar system out to billions of miles in deep space, he said.
The spectacular multimillion-mile-long tail of a typical comet, which makes it look like a skyrocket, belies the fact that the source at the center of the fireworks is a solid nucleus of ice mixed with dust — essentially a dirty snowball. This one, the largest known, designated Comet C/2014 UN271, was discovered by Pedro Bernardinelli and Gary Bernstein and may be as large as 85 miles across.
“This comet is the tip of the iceberg for many thousands of comets that are too faint to see in remote parts of the solar system,” Jewitt said. “We always suspected that this comet must be big because it’s so bright at such a great distance. Now we’ve confirmed it.”
This comet has the largest nucleus ever seen by astronomers. Jewitt and his colleagues used NASA’s Hubble Space Telescope to determine its size. The nucleus is about 50 times larger than those found at the heart of most known comets. Its mass is estimated at 500 trillion tons, hundreds of thousands of times the mass of a typical comet found much closer to the Sun.
“We guessed the comet might be pretty big, but we needed the best data to confirm this,” said the study’s lead author, Man-To Hui, who earned his doctorate from UCLA in 2019 and is now based in Taipa, Macau.
So the researchers used Hubble to take five photos of the comet on January 8, 2022, and incorporated radio observations of the comet into their analysis.
Jewitt said the comet is now 2 billion miles from the Sun and will return to its nesting site in the Oort cloud in a few million years.
Comet C/2014 UN271 was serendipitously spotted in 2010, when it was 3 billion miles from the Sun. Since then, it has been actively studied by ground- and space-based telescopes.
The challenge in measuring this comet is distinguishing the solid nucleus from the huge dusty coma — the cloud of dust and gas — that envelops it. The comet is currently too far away for its nucleus to be visually resolved by Hubble; instead, the Hubble images show a bright spike of light at the nucleus’s location. Hui and his colleagues created a computer model of the surrounding coma and adjusted it to fit the Hubble images, then subtracted the glow of the coma to leave behind the nucleus.
Hui and his team then compared the brightness of the nucleus with earlier radio observations from the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile. The new Hubble measurements are consistent with the earlier size estimates from ALMA, but they suggest a darker nucleus surface than previously thought.
“It’s big and it’s blacker than coal,” Jewitt said.
The comet has been falling toward the Sun for well over 1 million years. The Oort cloud is thought to be home to trillions of comets. Jewitt thinks it may extend from a few hundred times the distance between the Sun and the Earth out to at least a quarter of the way to Alpha Centauri, the star system nearest to our Sun.
According to Jewitt, the comets in the Oort cloud were ejected billions of years ago by the gravity of the massive outer planets. Distant comets travel back toward the Sun and the planets only if their orbits are disturbed by the gravitational tug of a passing star, he said.
First hypothesized by the Dutch astronomer Jan Oort in 1950, the Oort cloud remains a theory because the comets that make it up are too faint and distant to be observed directly. This means the solar system’s largest structure is all but invisible, Jewitt said.
Comet C/2014 UN271 has the largest comet nucleus ever observed
Man-To Hui et al., "Hubble Space Telescope Detection of the Nucleus of Comet C/2014 UN271 (Bernardinelli–Bernstein)", The Astrophysical Journal Letters (2022). DOI: 10.3847/2041-8213/ac626a
Provided by the University of California, Los Angeles
Voltage plays a crucial role in the functioning and performance of computer hardware, particularly when it comes to Random Access Memory (RAM) power requirements. RAM is an essential component of any computer system as it stores data that can be accessed quickly by the CPU. The power supply of RAM directly affects its speed, stability, and overall efficiency. For instance, imagine a scenario where a user upgrades their computer’s RAM without considering its voltage compatibility with the existing hardware. This oversight could potentially lead to unstable system operation or even permanent damage to the components.
Understanding the relationship between voltage and RAM power requirements is vital for ensuring optimal performance and longevity of computer systems. Different types of RAM modules have varying voltage specifications, which must be carefully considered during installation or upgrade processes. Moreover, inadequate power supply can result in frequent crashes, freezing screens, and slow response times. Therefore, it becomes necessary for users to be aware of these power requirements and make informed decisions regarding their hardware configurations to avoid potential issues down the line. In this article, we will delve into the importance of voltage in relation to RAM and explore how different voltages affect various aspects of computer performance.
What is Voltage?
Voltage, also known as electric potential difference, refers to the force that pushes electrical charges through a circuit. It can be compared to water pressure in plumbing systems; just as higher water pressure allows water to flow more forcefully through pipes, higher voltage enables the movement of electric current with greater intensity. To better understand this concept, let us consider an example.
Imagine a computer system where insufficient voltage is supplied to its components, particularly random access memory (RAM). In such a scenario, the RAM modules may not receive enough power to perform optimally or even function at all. This could result in sluggish performance, frequent crashes, or even permanent damage to the hardware. Therefore, it becomes crucial for users and technicians alike to have a clear understanding of voltage requirements when dealing with computer hardware.
- Insufficient voltage supply can lead to reduced efficiency and increased chances of component failure.
- Excessive voltage levels can cause overheating and other detrimental effects on sensitive electronic components.
- Properly regulated voltages ensure stable operation and longevity of computer systems.
- Adhering to manufacturer-recommended voltage specifications avoids voiding warranty coverage.
Additionally, we provide a comparative table showcasing how different RAM module types require specific voltage ranges:
| RAM Module Type | Recommended Voltage Range |
| --- | --- |
| DDR3 | 1.35V – 1.65V |
| DDR4 | 1.2V – 1.4V |
Understanding these voltage ranges is essential for selecting compatible RAM modules and ensuring they are appropriately powered within their specified limits.
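As a simple illustration of how these ranges might be used, the sketch below encodes the DDR3 and DDR4 figures from the table and checks whether a given operating voltage falls inside the recommended window. The dictionary and function are hypothetical helpers written for this article, not part of any vendor tool.

```python
# Recommended operating ranges from the table above, in volts.
RECOMMENDED_VOLTAGE = {
    "DDR3": (1.35, 1.65),
    "DDR4": (1.20, 1.40),
}

def voltage_within_range(module_type, voltage):
    """Return True if the voltage sits inside the recommended range for the module type."""
    low, high = RECOMMENDED_VOLTAGE[module_type]
    return low <= voltage <= high

print(voltage_within_range("DDR4", 1.35))  # True  - inside 1.2 V to 1.4 V
print(voltage_within_range("DDR4", 1.50))  # False - above the recommended DDR4 range
```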
Transitioning into the subsequent section about “Importance of Proper Voltage in Computer Hardware,” it becomes evident that voltage plays a critical role in maintaining the stability, performance, and longevity of computer systems. By recognizing the significance of voltage management, users can make informed decisions when selecting and configuring hardware components to optimize their computing experience.
Importance of Proper Voltage in Computer Hardware
Transitioning from our previous discussion on voltage, let us now delve into the importance of proper voltage in computer hardware. To illustrate this point, consider a hypothetical scenario where a user attempts to power their computer with an incorrect voltage input. As the incorrect voltage flows through the system, it can lead to various issues such as overheating, system instability, or even permanent damage to components.
To ensure the smooth functioning of computer hardware, including Random Access Memory (RAM), it is essential to understand and meet the specific power requirements. The RAM modules in a computer rely on stable and appropriate voltages for optimal performance. Failure to provide the correct voltage levels can result in data corruption, reduced lifespan of the RAM modules, or complete failure of the memory subsystem.
Understanding RAM power requirements involves considering several crucial factors:
- Voltage rating: Each RAM module has a specified operating voltage range provided by its manufacturer. Operating outside this range may cause functional problems or irreversible damage.
- Overclocking considerations: Overclocking RAM often requires increased voltage settings beyond standard specifications. However, exceeding safe limits can lead to excessive heat generation or electrical failures.
- Power supply compatibility: It is important to ensure that the chosen power supply unit provides adequate and stable voltages for all components, including RAM modules.
- Compatibility with motherboard: Different motherboards have varying support for different types of RAM modules. Checking compatibility ensures that both the motherboard and RAM are designed to work together seamlessly.
Consider Table 1 below as an example showcasing typical DDR4 RAM modules along with their respective recommended voltage ranges:
Table 1: DDR4 RAM Module Examples
| Model | Recommended Voltage Range |
| --- | --- |
| Corsair Vengeance LPX | 1.35V – 1.5V |
| G.Skill Trident Z RGB | 1.2V – 1.4V |
| Crucial Ballistix MAX | 1.35V – 1.45V |
| Kingston HyperX Fury | 1.2V – 1.35V |
In summary, proper voltage management plays a crucial role in maintaining the stability and longevity of computer hardware, particularly RAM modules. Meeting the specific power requirements outlined by manufacturers ensures optimal performance and prevents potential damage or failures caused by incorrect voltages. Now that we have discussed the importance of voltage in computer hardware, let us further explore the intricate details of understanding RAM power requirements.
Transitioning to subsequent section: Understanding RAM Power Requirements, it is vital to grasp how different factors influence the power needs of this essential component without compromising its functionality or overall system integrity.
Understanding RAM Power Requirements
Having established the importance of proper voltage in computer hardware, let us now delve into understanding RAM power requirements.
RAM (Random Access Memory) is a critical component of any computer system as it allows for quick access to data that is actively being used by programs. To ensure optimal performance and reliability, it is essential to provide sufficient power to the RAM modules. Failure to do so may lead to various issues such as system instability, data corruption, or even complete failure.
Understanding the power requirements of RAM involves considering several key factors. Firstly, the type and capacity of the RAM module play a significant role in determining its power needs. Different generations of RAM, such as DDR3 or DDR4, have varying voltage specifications which must be adhered to for reliable operation. Moreover, higher-capacity modules generally require more power compared to their lower-capacity counterparts due to additional circuitry needed for addressing larger amounts of memory.
To illustrate this point further, consider a hypothetical scenario where two identical computers are running resource-intensive software applications. The only difference lies in their RAM configurations; one uses 8GB modules while the other utilizes 16GB modules. During intense workloads, it can be observed that the latter computer consumes slightly more power due to its higher-capacity RAM requiring increased electrical current flow.
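The power difference can be approximated with the basic relationship power = voltage × current. The current figures below are purely hypothetical placeholders (actual draw varies by module and workload); the sketch only illustrates why a higher-capacity module drawing more current at the same voltage consumes more power.

```python
def module_power_watts(voltage_v, current_a):
    """Approximate power draw of a RAM module: P = V * I."""
    return voltage_v * current_a

# Hypothetical currents under load, for illustration only.
smaller_module = module_power_watts(1.2, 1.5)   # e.g. an 8 GB DDR4 module
larger_module = module_power_watts(1.2, 2.5)    # e.g. a 16 GB DDR4 module
print(f"8 GB module:  about {smaller_module:.1f} W")
print(f"16 GB module: about {larger_module:.1f} W")
```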
To better comprehend how different aspects affect RAM power consumption, we can refer to the following bullet points:
- Operating frequency: Higher frequencies often result in increased power consumption.
- Overclocking: When pushing RAM beyond manufacturer-specified limits, additional power might be required.
- Temperature: Elevated temperatures can influence resistance within electronic components and subsequently impact power usage.
- Voltage regulation efficiency: Inefficient voltage regulators may introduce unnecessary overheads and compromise overall system stability.
| Factor | Impact on Power Consumption |
| --- | --- |
| Voltage Regulation Efficiency | Decreased |
By considering these factors, system builders and enthusiasts can make informed decisions regarding RAM selection to strike a balance between performance requirements and power consumption. In the subsequent section, we will explore additional factors that affect RAM power consumption.
With an understanding of RAM power requirements established, let us now delve into the various factors that influence its power consumption.
Factors Affecting RAM Power Consumption
Understanding the power requirements of RAM is crucial for ensuring optimal performance and longevity of computer hardware. In this section, we will explore factors that affect RAM power consumption and how voltage plays a significant role in determining these requirements.
To illustrate the importance of considering RAM power requirements, let’s consider a hypothetical scenario involving two identical computers with different RAM modules. Computer A has a low-voltage DDR3 RAM module installed, while Computer B uses a high-voltage DDR4 RAM module. Despite having similar specifications in other areas, Computer A consumes less power due to its lower voltage requirement compared to Computer B. This example highlights the impact of voltage on overall system power consumption.
Several factors influence the power consumption of RAM modules:
- Operating Frequency: Higher operating frequencies tend to result in increased power consumption as more data is processed within a given time frame.
- Memory Capacity: Larger memory capacities generally require more power since additional circuits are needed to address and store larger amounts of data.
- Voltage Rating: Different generations of RAM (e.g., DDR2, DDR3, DDR4) have varying voltage requirements. Newer generations often operate at lower voltages to improve energy efficiency.
- Active vs Idle State: When actively used or accessed by the processor, RAM consumes more power than when it remains idle.
Consider the following bullet point list highlighting key takeaways from this section:
- Understanding RAM power requirements is essential for optimizing system performance.
- Voltage plays a critical role in determining the amount of power consumed by RAM modules.
- Factors such as operating frequency, memory capacity, voltage rating, and usage state influence RAM power consumption.
- Upgrading to newer generation RAM modules can help reduce overall system power consumption.
In conclusion to this section about understanding RAM power requirements, it is evident that considering voltage specifications is vital for selecting appropriate RAM modules that align with desired energy efficiency goals. The next section will delve into methods for determining the specific voltage requirements of different RAM modules and how to ensure compatibility with computer hardware.
How to Determine the Voltage Requirements for RAM
Factors affecting the power consumption of Random Access Memory (RAM) are crucial in understanding its voltage requirements. By considering these factors, users can ensure efficient usage of their computer hardware while maintaining optimal performance levels.
To illustrate this point, let’s take a hypothetical scenario where an individual upgrades their computer system with additional RAM modules. The newly installed modules require more power due to increased data processing demands. Consequently, if the existing power supply does not meet these new requirements, it may lead to unstable system operation or even potential damage to both the RAM and other components.
Understanding how different factors affect RAM power consumption is essential for determining appropriate voltage requirements. Some key considerations include:
Clock Speed and Frequency:
- Higher clock speeds generally result in increased power consumption.
- Overclocking can further amplify this effect but may also compromise stability.
Memory Type and Generation:
- Different types of RAM modules have varying energy efficiency characteristics.
- For example, DDR4 memory typically operates at lower voltages compared to older generations like DDR3.
Module Capacity:
- Larger capacity modules tend to consume more power than smaller ones.
- Increased storage capabilities necessitate higher electrical currents for reliable functioning.
Usage Patterns:
- The level of workload placed on the RAM affects its overall power demand.
- Heavy multitasking or running resource-intensive applications increases power consumption accordingly.
By taking these factors into account, users can make informed decisions regarding voltage requirements when selecting or upgrading their RAM modules. Referencing manufacturer specifications and consulting professionals ensures compatibility between hardware components and promotes stable system operations.
Moving forward, our next section will provide valuable tips on maintaining optimal voltage levels in computer hardware systems that utilize various types of RAM modules. Understanding how to regulate and monitor voltage levels is vital in safeguarding the longevity and reliability of computer components.
Tips for Maintaining Optimal Voltage Levels in Computer Hardware
Having understood how to determine the voltage requirements for RAM, it is essential to ensure that these voltage levels are maintained at an optimal level. This section will provide valuable tips on maintaining ideal voltage levels in computer hardware, which can enhance system performance and prevent potential damage.
Tips for Maintaining Optimal Voltage Levels in Computer Hardware:
- Regularly Clean Dust Buildup:
- Accumulated dust inside a computer’s casing can hinder proper airflow and cooling.
- Use compressed air or specialized cleaning tools to remove dust particles from fans, heat sinks, and other components.
- Regular cleaning decreases the risk of overheating due to reduced ventilation, contributing to stable voltage regulation.
- Utilize High-Quality Power Supply Units (PSUs):
- Invest in high-quality PSUs with reliable power delivery capabilities.
- Inferior quality PSUs may deliver fluctuating voltages, leading to an unstable power supply and potentially damaging sensitive hardware components such as RAM modules.
- Choose reputable brands known for their stable output voltages and efficiency ratings.
- Implement Surge Protectors or Uninterruptible Power Supplies (UPS):
- Protect your computer against sudden surges or drops in electricity by using surge protectors or UPS devices.
- These devices guard against voltage spikes caused by lightning strikes or electrical faults, preventing possible damage to internal components like RAM chips.
- A UPS also provides backup power during short-term outages, allowing you sufficient time to save data and shut down your system properly.
- Monitor System Temperatures:
- Excessive temperatures can affect overall system stability by increasing resistance within electronic components.
- Use temperature monitoring software to keep tabs on CPU/GPU temperatures as well as motherboard sensor readings.
- Elevated temperatures could indicate poor cooling or inadequate voltage regulation, necessitating immediate action to prevent potential damage.
- Protect your investment: Preventing unstable voltages can extend the lifespan of your computer hardware.
- Enhance performance and reliability: Maintaining optimal voltage levels ensures consistent power supply, preventing system crashes and data loss.
- Save time and money on repairs: By implementing preventative measures against voltage-related issues, you reduce the risk of costly repairs or replacements.
- Gain peace of mind: Knowing that your computer is operating at optimum voltage levels allows you to focus on productivity without worrying about unexpected failures.
| Potential Risks | Benefits of Optimal Voltage |
| --- | --- |
| Component failure | Increased stability |
| Data corruption | Enhanced system performance |
| System crashes | Reduced downtime |
| Overheating | Prolonged hardware lifespan |
By adhering to these tips, users can safeguard their RAM modules and other critical components from potential damage caused by fluctuating voltages. Remember that maintaining a stable power supply contributes significantly to a reliable computing experience, ensuring smooth operations and extended hardware longevity.
Gross value added
In economics, gross value added (GVA) is the measure of the value of goods and services produced in an area, industry or sector of an economy. "Gross value added is the value of output minus the value of intermediate consumption; it is a measure of the contribution to GDP made by an individual producer, industry or sector; gross value added is the source from which the primary incomes of the SNA are generated and is therefore carried forward into the primary distribution of income account."
Relationship to gross domestic product
GVA is a very important measure, because it is used to determine gross domestic product (GDP). GDP is an indicator of the health of a national economy and economic growth. It represents the monetary value of all products and services produced in the country within a defined period of time. "In comparing GVA and GDP, we can say that GVA is a better measure for the economic welfare of the population, because it includes all primary incomes. From the point of view of the society as a whole GDP, despite its disadvantages, is probably a better measure for economic growth and welfare, because it includes also NET INDIRECT TAX (indirect taxes minus subsidies) which are the financial basis for the collective consumption of the society."
The relationship between GVA and GDP is defined as:
- GVA = GDP + Subsidies on products – Taxes on products
As the total aggregates of taxes on products and subsidies on products are only available at whole economy level, Gross value added is used for measuring gross regional domestic product and other measures of the output of entities smaller than a whole economy.
- GDP at factor cost = Gross value added(GVA) at factor cost
- GDP at factor cost = value of the final goods and services produced within the domestic territory of a country during one year by all production units inclusive of depreciation.
- GDP at market price = GDP at factor cost + net indirect taxes(indirect taxes- subsidies)
- GVA at factor cost = value of output (quantity × price) − value of intermediate consumption.
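A small worked example may help fix the identities above. All figures are invented purely for illustration: starting from GVA at factor cost, adding taxes on products and subtracting subsidies on products recovers GDP at market price, and the reverse calculation returns the original GVA.

```python
# Hypothetical national accounts figures, in billions of currency units.
gva_at_factor_cost = 900        # value of output minus intermediate consumption
taxes_on_products = 120
subsidies_on_products = 20

net_indirect_taxes = taxes_on_products - subsidies_on_products                    # 100
gdp_at_market_price = gva_at_factor_cost + net_indirect_taxes                     # 1000
gva_recovered = gdp_at_market_price + subsidies_on_products - taxes_on_products   # back to 900

print(gdp_at_market_price, gva_recovered)   # 1000 900
```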
GVA at different levels
GVA can be used to measure the contribution to GDP made by an individual producer, industry or sector. For instance, to analyze the productivity of the market sector one can use GVA per worker or GVA per hour. The measure preferred by the Organisation for Economic Co-operation and Development (OECD) in its Productivity Database is GVA per hour.
At the company level, GVA refers to the net value created by producing a particular good or service. "In other words, the gross value added number reveals the contribution a particular operation makes toward potentially creating a bottom-line profit. Once the consumption of fixed capital and the effects of depreciation are subtracted, the company knows how much net value a particular operation adds to its bottom line."
Disadvantages of GVA
- Comparison over time is difficult.
Advantages of GVA
- Internationally comparable figure.
- Better market condition projection globally, especially in case of FIIs.
Over-simplistically, GVA is the grand total of all revenues, from final sales and (net) subsidies, which are incomes into businesses. Those incomes are then used to cover expenses (wages & salaries, dividends), savings (profits, depreciation), and (indirect) taxes.
GVA is sector-specific, while GDP is calculated by summing the GVA of all sectors of the economy, with taxes on products added and subsidies on products deducted.
- OECD Glossary of Statistical Terms.
- Kramer, Leslie (March 20, 2020). "What is GDP and Why is It So Important to Economists and Investors?".
- Ivanov; Webster (September 1, 2007). "Measuring the Impact of Tourism on Economic Growth". Tourism Economics. 13 (3): 21–30. doi:10.5367/000000007781497773. S2CID 153597825.
- "Guide to Gross Value Added (GVA)". Office for National Statistics. 2002-11-15. Retrieved 2012-07-08.
- "Productivity measures in the OECD Productivity Database". OECD Compendium of Productivity Indicators. 2019: 122 – via Google scholar.
- Kenton, Will (March 20, 2020). "Gross Value Added – GVA Definition".
A Closer Look at Fats (Grades 6-8)
This lesson describes the role of fats in food and in the body, and how they serve as a source of energy. It provides information on different types of fats that are listed on the Nutrition Facts label – including total fat, saturated fat, and trans fat—and defines trans fat and cholesterol. The lesson also includes dietary guidance for fat consumption. Grades 6-8
Activity 1: Get the Facts about Fats!
- One copy of the Get the Facts About Fats! — Interactive Label Research worksheet, 1 copy for each student
- Internet access
- Printed Fact Sheets or online access to:
Activity 2: Grease Spot Test
- Six test foods from the Suggested Test Foods list. Each team should have one sample of each food.
- Copies of food labels for chosen food samples from the food package itself
- One plastic teaspoon (or craft stick) for each food sample tested
- Squares of unglazed, quarter-inch graph paper: 5” x 5” (at least 3 for each group)
- Cardboard circle template: 2.5” diameter
- One 6-inch ruler
- 1/4 teaspoon measuring spoon
- Grease Spot Test worksheet, 1 copy per group to record data
- Grease Spot Test Student Instructions, 1 copy per group
Note: All worksheets can be downloaded as a fillable PDF.
lipids: fats, oils, and waxes which are produced by living things
saturated fat: a type of fat containing a high proportion of fatty acid molecules without double bonds, considered to be less healthy in the diet than unsaturated fat
unsaturated fat: a type of fat containing a high proportion of fatty acid molecules with at least one double bond; considered to be healthier in the diet than saturated fat
Did You Know?
- Many consumer education and outreach efforts use the term “Fat” in place of “Fatty Acid” for Total Fat, Saturated Fat, Mono- and Polyunsaturated Fat, and Trans Fat. This Guide generally uses the more common term “Fat” for “Fatty Acid” also.
- HDL cholesterol and LDL cholesterol are found only in blood, not in food. They are the forms of cholesterol that move through the body. You can’t “look for” foods high in HDL and low in LDL to optimize your diet, but regular aerobic exercise may increase levels of HDL (“good”) cholesterol in the blood.
- Unsaturated fats and oils should replace saturated fats in the diet, rather than just being added to it.
Background Agricultural Connections
Lipids are a large group of organic compounds that are oily to the touch and insoluble in water. Lipids include fats, oils, and waxes and are a source of stored energy. The terms lipids and fats are often used interchangeably. Fats are also called triglycerides, because they are usually made up of three fatty acids and a glycerol molecule. For this module, we will use the term “fat” to represent all dietary lipids. Oils are usually liquid at room temperature, high in monounsaturated or polyunsaturated fatty acids, and lower in saturated fatty acids than fats that are solid at room temperature.
Understanding Dietary Fat
Dietary fats are found in both plant and animal foods, and they are broken down into fatty acids during digestion. All dietary fats are composed of a mix of saturated, monounsaturated, and polyunsaturated fatty acids, in varied proportions. For example, most of the fatty acids in butter are saturated, but it also contains some monounsaturated and polyunsaturated fatty acids. Fat is also a source of essential fatty acids (linoleic acid and alpha-linolenic acid), which the body cannot synthesize (produce) and therefore must obtain from the diet.
Fat in foods is a major source of energy for the body and aids in the absorption of the fat-soluble vitamins A, D, E, and K. Fats are also important for proper growth and maintenance of good health, since they play a role in the structure and function of cell membranes, the integrity of skin, maintaining healthy blood cells, and fertility. As a food ingredient, fats provide taste, consistency, and stability and help us feel full.
The Daily Value for total fat is 35% of total calories, which is 78 grams/day based on a 2,000-calorie diet; saturated fats should contribute less than 10% of daily calories. All fat has 9 calories per gram, making it a concentrated source of energy, so it should be eaten in moderation. Although most people consume enough fat, many people consume too much saturated fat and not enough unsaturated fat. The Nutrition Facts label is a useful tool for checking how much, and what kind of, fat is in a food.
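To make the arithmetic concrete, here is a minimal Python sketch (not part of the lesson materials) that reproduces these numbers: 78 grams of fat at 9 calories per gram is about 35% of a 2,000-calorie diet, and a serving's %DV for total fat is its fat grams divided by 78 grams.

```python
# A minimal sketch of the Daily Value arithmetic for total fat.
CALORIES_PER_GRAM_FAT = 9
DAILY_CALORIES = 2000          # reference diet used on Nutrition Facts labels
DAILY_VALUE_FAT_GRAMS = 78     # Daily Value for total fat

def percent_calories_from_fat(fat_grams: float, total_calories: float = DAILY_CALORIES) -> float:
    """Share of total calories contributed by `fat_grams` of fat."""
    return 100 * fat_grams * CALORIES_PER_GRAM_FAT / total_calories

def percent_daily_value_fat(fat_grams: float) -> float:
    """%DV for total fat shown on a label for `fat_grams` per serving."""
    return 100 * fat_grams / DAILY_VALUE_FAT_GRAMS

if __name__ == "__main__":
    print(percent_calories_from_fat(78))   # ~35.1% of a 2,000-calorie diet
    print(percent_daily_value_fat(12))     # a serving with 12 g of fat is ~15% DV
```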
About Saturated Fatty Acids
Saturated fats are typically found in animal products. Dietary fats that have more saturated fatty acids tend to be solid at room temperature. They are called “saturated” because all the spaces on the fat molecule that can hold a hydrogen atom do so and are “full” – that is, the molecule is “saturated” with hydrogen atoms.
Saturated fats taste good and reduce hunger, but eating too much of them increases the risk of cardiovascular disease. Saturated fatty acids are found in the greatest amounts in animal fats (including beef, pork, lamb, and poultry with skin), full-fat dairy products (butter, cream, cheese, and ice cream), many sweet desserts (cakes and cookies), fried foods, and some plant-based oils such as coconut oil, palm oil, and palm kernel oil.
About Unsaturated Fatty Acids – Heart Healthy Fats!
Unsaturated fatty acids include monounsaturated and polyunsaturated fatty acids. They are called “unsaturated” because some of the carbon atoms in the fat molecule do not hold a hydrogen atom. They are found in higher proportions in plants and seafood.
Monounsaturated fatty acids have one double bond in the fat molecule, and polyunsaturated fatty acids have more than one double bond. Oils that are high in unsaturated fatty acids are not considered to be a separate food group, but they are important because they can reduce the risk of developing cardiovascular disease when eaten in place of saturated fat.
- Monounsaturated fatty acids (MUFAs) are found in relatively large amounts in olive, canola, safflower, and sunflower oils as well as in avocados, peanut butter, and most nuts. There is no recommended daily intake of MUFAs.
- Polyunsaturated fatty acids (PUFAs) are found in vegetable oils and fatty fish such as salmon, mackerel, and sardines. PUFAs include omega-3 and omega-6 fatty acids, which are the two primary types of essential fatty acids (EFAs). EFAs are nutrients required for normal body functioning, but they cannot be made by the body and must be obtained from food. The body uses this fat to build cell membranes and nerve tissue (including the brain), and to regulate hormones.
Reducing Saturated Fats
Unsaturated fats and oils should replace saturated fats in the diet, rather than just being added to it. This allows the total amount of fat consumed to remain within recommendations without exceeding daily calorie limits. While unsaturated fatty acids are an optional listing on Nutrition Facts labels, they are included in the Total Fat category. One gram of unsaturated fat is healthier than one gram of saturated fat, but both have the same number of calories: 9 calories per gram.
About Trans Fats, A Danger Zone!
Trans fat is an unhealthy fat. Although trans fatty acids are unsaturated, they are structurally similar to saturated fatty acids and therefore behave like them. Trans fat raises LDL (“bad” cholesterol), and an elevated LDL increases the risk of developing cardiovascular disease (see the Cholesterol section below).
The National Academies of Science, Engineering, and Medicine recommends that trans fat consumption be as low as possible without compromising the nutritional adequacy of the diet. As of June 2018, partially hydrogenated oils (PHOs), the major source of artificial trans fat in the food supply, are no longer Generally Recognized as Safe (GRAS). Therefore, PHOs are no longer added to foods. But trans fat will not be completely gone from foods because it occurs naturally in small amounts in some animal products and is present at very low levels in refined vegetable oils. This hidden fat can add up if you eat several servings of products that contain it. Learn more about trans fat at this FDA webpage.
Cholesterol is a waxy, fat-like substance made by all cells of the body. The organs that make the most cholesterol are the liver and intestines. The body uses cholesterol to produce vitamin D and certain hormones (e.g., estrogen and testosterone) and bile (a fluid that aids in fat digestion). Cholesterol in food is referred to as “dietary cholesterol” and is found only in animal products—never in plants. Cholesterol is transported in the blood by particles called “lipoproteins,” which contain both fat and protein. Over time, cholesterol and other substances can build up in the arteries and cause cardiovascular problems. The human body makes all the cholesterol that it needs, so it is not necessary to get cholesterol from food.
HDL & LDL Cholesterol
High Density Lipoprotein (HDL) cholesterol is often referred to as “good” cholesterol. HDL cholesterol travels from the body tissues to the liver, where it is broken down and removed. Higher levels of HDL cholesterol in the blood can help prevent cholesterol buildup in blood vessels, decreasing the risk of developing cardiovascular disease.
Low-density Lipoprotein (LDL) cholesterol is often referred to as “bad” cholesterol. It is the form that moves cholesterol from the liver to the arteries and body tissues. Higher levels of LDL in the blood can lead to a harmful cholesterol buildup in blood vessels, increasing the risk of cardiovascular disease.
Foods such as meats and dairy products that are high in saturated fats may also be sources of dietary cholesterol. This combination can increase the risk of developing cardiovascular disease. The goal for consumption is to get less than 100% of the Daily Value for saturated fat and cholesterol each day: limiting intake of saturated fats will also help to limit intakes of dietary cholesterol.
More About Cholesterol
Saturated fat and trans fat intake affect the level of cholesterol in blood more than consumption of dietary cholesterol does; therefore saturated and trans fats are more important dietary risk factors for coronary heart disease than is dietary cholesterol. It’s more important to limit saturated fat and trans fat in the diet than it is to limit dietary cholesterol.
- Foods that are high in cholesterol also are often high in saturated fat, so by limiting the consumption of saturated fat from animal sources, one can usually also reduce cholesterol intake.
- FDA considers the amount of dietary cholesterol in foods to be important information for consumers to know.
- The Dietary Guidelines for Americans, 2020-2025, address dietary cholesterol with the following statement: “The National Academies recommends that trans fat and dietary cholesterol consumption be as low as possible without compromising the nutritional adequacy of the diet. The USDA Dietary Patterns are limited in trans fats and low in dietary cholesterol. Cholesterol and a small amount of trans fat occur naturally in some animal source foods.”
- Ask these questions to introduce fats and foods that contain fats:
- Do you think that most Americans consume too much fat?
- What is the basis of your opinion?
- What is fat?
- Which of your favorite foods contain fat?
- What do you know about the different kinds of fat?
Explore and Explain
Activity 1: Get the Facts About Fats!
- Watch, Good Fats vs. Bad Fats.
- Watch, What is Fat?
- Give each student one copy of the Get the Facts About Fats! — Interactive Label Research worksheet.
- Provide students access (digital or printed) to the following Fact Sheets to complete their worksheet. Students should use the Fact Sheets to complete the table on their worksheet.
- Discuss responses as a class.
Activity 2: Grease Spot Test
- Introduce this activity with these questions:
- Now that you have learned about fat and the different kinds of fat, let’s see which of your foods really do contain fat. How could we test these foods to determine if they contain fat?
- What observations have you made about “greasy” foods that might help you answer this question? If no one suggests “leave a grease spot,” continue the discussion until someone does.
- Explain that some fats are easy to identify because we can see and feel their properties. (Hold up butter and oil.) Fats at room temperature come in both solid and liquid form depending on the amounts of different types of fatty acids they contain, but when they are hidden in food, they are harder to identify.
- Continue by explaining that we can identify most foods that contain fat by the grease spot that is left on the paper. Today we will test some of your favorite foods to see if they contain fat.
- Ask, "For this to be a fair test for all food samples, what factors do we need to keep the same (control) and which one can we change?" (The kind of food would change but the other factors, such as the amount and size of the sample, should stay the same.)
- Give each student one copy of the Grease Spot Test worksheet.
- Divide students into small groups. Give each group a Grease Spot Test Student Instructions sheet and the lab supplies.
- Summarize the activity and review the worksheet when students have completed the lab activity.
- Chip Dip Challenge
- Read the following scenario: You are having a party. Which one of the chip dips listed below would you choose and why? Include evidence from what you have learned along with reasoning to support your position. Consider both health and taste concerns in your evaluation. Look at the options below and consider the grams (g) of saturated fat per serving. Also check/evaluate other nutrients to make the best (healthiest) choice. Note: Total fat on a Nutrition Facts label may be higher than the amount of saturated fat, since the total also includes unsaturated fats. How can you use the Nutrition Facts label to tell which one of the foods on the chart below would be the best choice for a dip?
- FDA Snack Shack: Students can visit Snack Shack in the virtual world of Whyville to practice what they have learned about fats and other nutrients. After players register (it is free) and create an avatar, they can play two different games that will test their knowledge about making healthy snack choices:
- Label Lingo introduces players to the Nutrition Facts label and its various elements. A series of “challenge” rounds quiz players about each label element. For example, “Choose the food that has the lowest % Daily Value of sodium.” Hints are available when needed.
- Snack Sort builds upon key knowledge from Label Lingo. Players sort and rank colorful cartoon foods in the Snack Shack pantry, using the Nutrition Facts label for reference. They can also play this game with other Whyvillians who join the game.
Have students answer the following questions:
- What are the different types of fat? (Saturated, monounsaturated, polyunsaturated, and trans)
- What food sources are high in saturated fats? Saturated fatty acids (saturated fats) are found in the greatest amounts in coconut and palm kernel oils, in butter and beef fats, and in palm oil. They are also found in other animal fats such as pork and chicken fats, and in fats from some plant foods.
- What are good sources of unsaturated fats? They are found in higher proportions in plants and seafood.
- What dietary fat limits are recommended? The recommended daily amount of fat is 25-35% of total daily calories, with saturated fats contributing less than 10% of daily calories. Try to limit trans fats as much as possible.
Summarize the following key concept:
- Dietary fats are a good source of energy. Although most people consume enough fat overall, many people consume too much saturated fat and not enough unsaturated fat. You can use the Nutrition Facts label to make smart choices about dietary fat consumption.
The Science and Our Food Supply: Using the Nutrition Facts Label to Make Healthy Food Choices (2022 Edition) was brought to you by the Food and Drug Administration Center for Food Safety and Applied Nutrition.
Chapter 8 Key Terms and Concepts
When selecting the U.S. president, there are two ways to count votes:
- The popular vote is the number of votes citizens cast for each of the presidential candidates.
- The electoral vote is the number of votes electoral college members cast for each of the presidential candidates. If a candidate wins a state, the candidate wins that state's electoral college votes. These votes are the ones that determine the winner.
Many congressional races occur during presidential election, and there are two ways to classify most congressional races:
- Most congressional elections are an example of a normal election, with relatively low seat shift and rather stable party ratios, ultimately leading to high reelection rates for both parties' incumbents.
- Nationalized elections occur rarely, and typically bring about a large seat shift and low reelection rates for one party’s incumbents. These are often associated with a broad shift in the national political climate.
Elections allow citizens the opportunity to choose their representatives and to reward or punish incumbent politicians.
- Those currently holding political office are known as incumbents.
- A challenger is a politician running for an office that he or she does not hold at the time of the election.
- Voters often choose between voting for the incumbent and the challenger by evaluating the incumbent’s performance in the past term, which is called retrospective evaluation.
- There are two steps to congressional elections: the primary, in which a political party determines which nominee will run on its behalf, and the general election, in which the voters determine who will hold the office.
- While senators represent the entirety of one state, House members represent specific districts.
- Most House and Senate races are determined using plurality voting, meaning that the person who receives the most votes wins, while others use majority voting, which requires a candidate to receive more than 50 percent of the votes to be declared the winner (see the sketch below).
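As a rough illustration of the difference between the two rules, here is a minimal sketch; the candidate names and vote counts are hypothetical, not from the text.

```python
# A minimal sketch contrasting plurality and majority voting rules.

def plurality_winner(votes: dict[str, int]) -> str:
    """The candidate with the most votes wins, even without a majority."""
    return max(votes, key=votes.get)

def majority_winner(votes: dict[str, int]) -> str | None:
    """A candidate wins only with more than 50% of votes; otherwise a runoff is needed."""
    total = sum(votes.values())
    leader = max(votes, key=votes.get)
    return leader if votes[leader] * 2 > total else None

votes = {"Candidate A": 45_000, "Candidate B": 40_000, "Candidate C": 15_000}  # hypothetical
print(plurality_winner(votes))  # Candidate A (45% is enough under plurality)
print(majority_winner(votes))   # None -> no majority, so a runoff would be held
```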
The rules of voting instruments can influence the results:
- The likelihood of an undervote is influenced by the type of voting instrument.
- Different counties use different forms of ballots: some use keypunch paper ballots, some use mechanical hole punch ballots, while others use touch-screen voting machines, and there are many more.
- Primaries and Caucuses: At the state level, the primary and caucus nominees win delegates, who cast votes in the national convention to determine their party's candidate for the general election.
- The Democratic Party: uses proportional allocation of delegates reflecting each candidate's vote share. In addition to these pledged delegates, Democrats also have superdelegates, who are party leaders and elected officials. Superdelegates are not committed to any candidate and can make their decision based on their own judgment.
- The Republican Party allocates delegates in two ways: proportional allocation and winner-take-all, depending on the state (both rules are sketched below).
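For illustration, here is a minimal sketch of the two allocation rules described above. The candidate names, vote shares, and delegate count are hypothetical, and real party rules add thresholds and remainder-handling details that this ignores.

```python
# A minimal sketch of proportional allocation vs. winner-take-all delegate rules.

def proportional_allocation(vote_share: dict[str, float], delegates: int) -> dict[str, int]:
    """Give each candidate delegates roughly in proportion to their vote share."""
    # Simple rounding; actual party rules add thresholds and remainder handling.
    return {name: round(share * delegates) for name, share in vote_share.items()}

def winner_take_all(vote_share: dict[str, float], delegates: int) -> dict[str, int]:
    """Give every delegate to the candidate with the largest vote share."""
    leader = max(vote_share, key=vote_share.get)
    return {name: (delegates if name == leader else 0) for name in vote_share}

shares = {"Candidate X": 0.52, "Candidate Y": 0.33, "Candidate Z": 0.15}  # hypothetical
print(proportional_allocation(shares, 100))  # {'Candidate X': 52, 'Candidate Y': 33, 'Candidate Z': 15}
print(winner_take_all(shares, 100))          # {'Candidate X': 100, 'Candidate Y': 0, 'Candidate Z': 0}
```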
While success in the early contests is not a sure predictor of receiving the nomination, a poor showing in the early contests is likely to lead to an early exit. Because of the importance of these early contests, many states are frontloading: moving their primaries and caucuses earlier in the year to exert more influence on the outcome.
The National Convention
- Each party has its own national convention, where delegates vote for the party's nominee.
- Vice presidential candidates are officially named and the party platform is voted on.
- The convention is heavily publicized and gives the party an opportunity to increase its visibility.
- Voters actually vote for the candidate's pledged supporters (electors), who then vote for the president.
- Electors: the number of electors a state has equals that state's number of House and Senate Members.
- In all states besides Nebraska and Maine, electors are awarded through a winner-take-all system.
- Winner Take All System: causes candidates to focus on large states (with many electors) and swing states, at the expense of smaller and less competitive states.
- Electoral College: the rules of the electoral college do not require that a candidate receive a majority of the popular vote, only a majority of the electoral college votes (a tally sketch follows this list).
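A small sketch, using made-up states and results, shows how winner-take-all state contests convert carried states into electoral vote totals.

```python
# A minimal sketch of tallying electoral votes under winner-take-all state contests.
# The states, elector counts, and winners below are hypothetical.
state_results = {
    "State 1": {"electors": 38, "winner": "Candidate A"},
    "State 2": {"electors": 29, "winner": "Candidate B"},
    "State 3": {"electors": 10, "winner": "Candidate A"},
}

def electoral_vote_totals(results: dict) -> dict[str, int]:
    """Sum each candidate's electoral votes, giving all of a state's electors to its winner."""
    totals: dict[str, int] = {}
    for state in results.values():
        totals[state["winner"]] = totals.get(state["winner"], 0) + state["electors"]
    return totals

print(electoral_vote_totals(state_results))  # {'Candidate A': 48, 'Candidate B': 29}
```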
Decisions in Running for Electoral Office:
- Following each election, a party's control of a seat is determined to be safe or vulnerable based on a number of calculations. Political parties and candidates make strategic decisions based on these assessments, asking questions such as:
- Is it possible to raise money for the campaign?
- Will the upcoming year be one that favors a particular party?
- Will the incumbent be seeking reelection, or will it be an open seat?
Things Candidates Do to Secure Themselves in Campaigns:
- Most incumbents operate a permanent campaign, continually gathering support by traveling their districts and talking with constituents.
- Some politicians will attempt to increase support by boosting the economy before an election; this is known as the Political Business Cycle.
- "Money Primary" - candidates compete by beginning fundraising well in advance of the election
- "Talent Primary" - candidates work to attract talented people to join in their campaign staff
What Candidates Do During the Campaign?
- Retail Politics: Candidates may contact voters directly
- Wholesale Politics: indirectly contacting voters through mailings and other advertisements.
- GOTV: candidates seek to mobilize their supporters to vote on election day
- Publicize their Campaign Platform (Issue Stances): candidates must balance their stances and their party's stances. Issue stances determine which groups contribute to the campaign and endorse the candidate.
- Common Beliefs: candidates attempt to present themselves to the people as "average Americans" doing everyday things like they do.
- Challenge Their Opponents: debating opponents on policy issues and trading columns in newspaper opinion and editorial pages. They also use negative campaign strategies, such as uncovering damaging information about opponents and running "attack ads".
- Each year, parties, candidates, organized interests, and businesses spend over $1 billion, primarily on advertisements
- Campaign Ads are Generally Positive
- Advocacy Group Ads are Generally Negative
- Campaign Ads arguably depress voter turnout and reinforce negative stereotypes about government.
- Campaign Ads also raise interest in campaigns and highlight differences between candidates, helping voters make informed choices
Federal Elections Commission
- tasked with regulating how much money political campaigns spend and how they spend it.
- Six appointees, but no more than three from each party, to prevent either party from holding a majority.
- The most recent major set of rules, passed in 2002, is the Bipartisan Campaign Reform Act (BCRA), also known as the McCain-Feingold Act.
- Hard Money: money that political action committees give directly to candidates, which is limited under the BCRA.
- Soft Money: money that can be used to support campaign advertising and the mobilization of voters, as long as it does not explicitly support or oppose a candidate.
- Political parties are limited in the amount of hard money they give to candidates but are not limited in their "independent expenditures" to support a candidate.
Campaign finance reform is difficult because it requires balancing the right to free speech with the idea that the rich should not dominate campaigns and decide outcomes.
- Most campaign contributions come from small donations by everyday Americans, not from big business.
- The majority of the money spent in campaigns is allotted to television ads, which can be extremely expensive
- Raising a lot of money does not guarantee outcomes
- Little evidence that campaign contributions alter legislator behavior, or that contributors "buy votes".
- The number of people who turn out is generally around 50 percent of eligible citizens for general elections, and about 30 percent for primaries and caucuses.
- Turnout is lower among younger citizens, nonwhite citizens, and less educated citizens.
- Many people who vote do so because they feel an obligation of citizenship.
- Many people who do not vote are angry with the government and feel that the government's actions will not help them.
Deciding Whom to Vote For:
- Gathering information on all the candidates is costly, so citizens rely on voting cues as shortcuts to a reasonable vote.
- Some use incumbency, partisanship, and personal economic experience as a way to inform vote choice
- Others vote based on the candidate's backgrounds or life experiences
Normal and Nationalized Elections
- Vote decisions for presidential and congressional elections are made independently, particularly in normal elections.
- Split ticket voting usually occurs because voters often focus on the candidate not the party
- In nationalized elections, voters focus more on the party that is in power and vote against most of the candidates of that party.
- The Republican and Democratic parties provide clear and systematic differences on a wide range of issues
- Voters are able to make reasonable votes based on cues and shortcuts
- Elections provide a mechanism for citizens to control how politicians behave and to hold them accountable for their actions.
a local meeting in which party members select a party's nominee for the general election
Begin Chapter 8 Key Terms
a primary election in which only registered members of a political party vote
idea that a popular president can generate additional support for candidates affiliated with his party
vote to select their party's nominee for presidency. They are elected in a series of caucuses and primaries.
states moving their presidential primaries or caucuses to take place earlier in the nomination process to exert influence over the outcome
election in which voters cast ballots for house, senate, and a president/vice-president every 4 years
a congressional election in which the reelection rate for one party's House and Senate incumbents is low.
a normal election in which the incumbent reelection rate is high; influences on House and Senate races are local.
primary in which any registered voter can vote regardless of party affiliation
Paradox of Voting
question of why citizens vote even though their individual vote stands little chance of changing an election outcome.
voting system in which a candidate who wins the most votes in a geographic location wins regardless of getting a majority of votes.
Political Business Cycle
attempts by elected officials to manipulate the economy before elections by growing employment or reducing unemployment.
the practice of deciding how many delegates are allocated to each candidate based on popular votes.
questions are presented in a biased way to influence the respondent.
a vote that is likely to be consistent with the voter's true preference for one candidate over the others.
a citizen's judgement of an officeholder's job performance since the last election
under a majority voting system a second election is held to determine a winner after no majority was found in the primary
a ballot in which a voter selects candidates for more than 1 political party
a ballot in which a voter selects candidates from only 1 political party
Democratic members of Congress and party officials selected by colleagues to be delegates at the party's presidential nominating convention.
highly competitive states in which both major party candidates stand a good chance of winning the state's electoral votes.
Winner Take All
practice of assigning all of a given state's delegates to the candidate who receives the most popular votes. Some Republican state primaries use this system.
What Are Political Parties?
Political parties are organizations that run candidates for political office and coordinate the actions of officials elected under the party banner. American political parties are best described as a collection of nodes, groups of people who belong to, are candidates of, or work for a political party, but do not necessarily work together or hold similar preferences.
The First Party System (1789-1828)
- The first political parties were the Federalists and the Jeffersonian Democratic-Republicans.
- Federalists: favored a strong central govt and a national bank
- Jeffersonian Democratic-Republicans: favored keeping power concentrated in the states
- These differed from the modern party system in that few citizens thought of themselves as party members, and candidates did not campaign as representatives of a political party
The Second Party System (1829-1856)
- Democratic Party: evolved from the Jeffersonian Democratic-Republican Party; most of its politicians became Democrats when the party dissolved.
- The Democratic Party embodied a new approach: it cultivated electoral support as a way of strengthening the party's hold on power in Washington. The party also built organizations at the local and state level to mobilize citizens to support the party's candidates. This became known as the Party Principle.
- Party Principle: the idea that a political party exists as an organization distinct from its elected officials or party leaders.
- Whigs: a party formed by members of the dissolved Jeffersonian Democratic-Republican Party who did not become Democrats
The Third Party System (1857-1892)
- The issue of slavery split the second party system.
- Republican Party: formed from anti-slavery Whigs; the party also attracted anti-slavery Democrats.
- Parties exist only because elites, politicians, party leaders, and activists want them to.
The Fourth Party System (1893-1932)
- While the Civil War settled the issue of slavery, it did not change the identity of the major American Parties.
- The Democratic and Republican parties both still existed. They differed on issues such as the withdrawal of the union army from the southern states and the size/scope of the federal govt.
The Fifth Party System (1933-1968)
- The New Deal Coalition assembled groups who aligned with and supported the Democratic Party in support of New Deal policies, including African Americans, Catholics, Jewish people, union members, and white southerners.
- This change established the basic division between Democrats and Republicans that would persist for the rest of the 20th century.
- Democrats: favored large federal govt that took an active role in managing the economy and regulating individual and corporate behavior.
- Republicans: believed many of the programs should be provided by the state and local govts or kept entirely separate from the government.
The Sixth Party System (1969-Present)
- Changes in political issues and technology fueled the transition to the 6th party system.
- Democrats: came out against the "separate but equal" system of racial discrimination in southern states, and in favor of programs designed to ensure equal opportunity for minority citizens. They also argued to expand the federal govt into health care funding, antipoverty programs, education, and public works.
- Republicans: opposed expanding the role of government into society.
- Both the Republican and Democratic parties became Parties in Service, recruiting, training, and campaigning for their congressional and presidential candidates.
- Realignments: each party system is separated from the next by a realignment, a change in the size or composition of the party coalitions or in the nature of the issues that divide the parties.
- Realignments typically occur within an election cycle or two, but they can also occur gradually over the course of a decade or longer.
Modern Party Organization
- The principal policy-making body in each party organization is the national committee, which is composed of party representatives from each state.
- Parties include Constituency Groups (Democratic) and Teams (Republican): organizations within the party that work to attract the support of particular demographic groups who likely share the party's views.
- Political Action Committees (PACs): interest groups or divisions of interest groups that can raise money to contribute to campaigns or to spend on ads in support of candidates. They are limited in how much money they get from donors and how much they spend on electioneering.
- 527 Organizations: are tax exempt groups formed primarily to influence elections through voter mobilization efforts and issue ads that do not directly endorse or oppose a candidate. However they are NOT always affiliated with the parties or agree with their positions.
- Parties are like Brand Names because they offer a shorthand way of providing information to voters about the party's candidates.
- The national party organization is unable to force state or local parties to share its positions on issues or comply with other requests. State and local parties make their own decisions about state-and local level candidates and issue positions.
- Political Machine: unofficial patronage system within a political party that seeks to gain political power and government contracts, jobs, and other benefits for party leaders, workers, and supporters.
The Party in Government
- Consists of elected officials in national, state, and local offices who directly affect government policy.
- In the House and Senate, the parties have working groups known as Caucuses (Democrats) and Conferences (Republicans). These serve as forums for debate, compromise, and strategizing among the party's elected officials.
- Modern Congress is polarized in both the House and Senate; the two parties hold different views on government policies, and the magnitude of polarization in Congress has increased over the years.
The Party in the Electorate
- The party in the electorate consists of citizens who identify with a particular political party.
- Party ID: is critical to understanding votes and other forms of political participation. Party ID determines the most probable party vote of the voter.
- Activists: those who actively participate in the party organization. Only 5-10 percent of the population are activists.
Party ID was once thought of as a heartfelt attachment, but it is now understood as a Running Tally that the voter updates based on new information and what is seen in politics. Even so, it usually reinforces existing loyalties.
- In the 1970s, about 50 percent of citizens identified as Democrats and 20 percent as Republicans.
- By the 1990s this gap had narrowed, and by 2002 identification had evened out to roughly 50/50.
- Independents: some were unaffiliated with a party because they were in the process of shifting their identification from one party to the other. Others saw independents as evidence of dealignment, a decline in the percentage of citizens who identify with one of the major parties, usually over the course of a decade or longer. Many people who identify as independents actually have some weak attachment to one of the major political parties.
- Party Coalitions: groups who identify with a political party, usually described in demographic terms, such as African American Democrats or evangelical Republicans. The Republican and Democratic party coalitions differ systematically in terms of their policy preferences.
Role of Political Parties in a Democracy
- Recruiting Candidates: the process of recruiting candidates has become systematic, with national party leaders becoming key elements in recruiting and finding candidates. In most states, the # of signatures needed to earn a candidate a spot on the ballot is lower for the major parties than for independent or minor party candidates. Therefore, virtually all candidates for Congress or the presidency run either as a Democrat or a Republican, even if they do not stand for the party's views.
- Nominating Candidates: national parties manage the nomination process for presidential candidates. Voters in primaries and caucuses determine how many of each candidate's supporters become delegates to the party's national nominating convention, where delegates from each state select the party's presidential and vice presidential nominees and approve the party platform.
- Campaign Assistance: along with supplying campaign funds, party organizations give candidates other kinds of assistance ranging from offering campaign advice to conducting polls for them.
- Party Platforms: a set of objectives outlining the party's issue positions and priorities, although candidates are not required to support their party's platform. Party platforms generally reflect the brand name differences giving citizens an easy judgement about candidates.
Cooperation in Government
Conditional Party Govt:
refers to the theory that lawmakers from the same party will cooperate to develop policy proposals. These policies should ideally be attractive to backbenchers, the legislators who do not hold leadership positions.
- Developing Agendas: strategies for legislative action. Leaders in Congress use their power to determine when proposals are considered, which amendments are allowed, and how long debate will proceed to ensure speedy consideration and to prevent the opposing minority party from delaying votes or offering alternatives.
- Coordination: coordination is important in enacting laws. Unless supporters in Congress can amass a two-thirds majority to override a veto, they need the president's support. Similarly, the president needs the support of Congress to enact proposals that he or she favors. Thus, the president routinely meets with congressional leaders from his or her party, and occasionally meets with the entire caucus or conference.
- Accountability: One of the most important roles of political parties in a democracy is giving citizens identifiable groups to reward or punish for government actions, thereby providing a means for voters to focus their desire for accountability.
- During periods of Unified Government, when one party holds a majority of seats in the House and Senate and the president is a member of that party, that party is the Party in Power: it has enough votes to enact policies in Congress.
- During periods of Divided Government, when one party controls Congress but not the presidency, or the House and Senate are controlled by different parties, the president's party is considered the party in power.
- Responsible Parties: legislators from the same party run on the same campaign platform, work together in Washington, and are held collectively accountable; truly responsible parties have never existed in U.S. politics.
Minor Parties
- Not significant players on the political stage
- Very few Americans identify with minor parties, since most exist for only a short period of time
- Many see voting for minor parties being a waste of a vote due to the concept of plurality voting
- People vote for minor parties because those parties relate to their views and positions more than the major parties, which they find incapable of leading the government
- Duverger's Law: states that in a democracy with single-member districts and plurality voting, like the USA, only two parties' candidates will have a realistic chance of winning political office.
- Single Member Districts: comprise an electoral system in which every elected official represents a geographically defined area, such as a state or congressional district, and each area elects one representative.
- Plurality Voting: is a voting system in which the candidate who receives the most votes within a geographic area wins an election, regardless of whether that candidate wins a majority of the votes.
What Kind of Democracy Do American Political Parties Create?
Despite all the efforts parties put forth to select good candidates, the problem remains that the people who make up American political parties are not primarily interested in democracy; they are interested in their own careers, policy goals, and winning political office.
- Recruiting Candidates: One of the most important things the Republican and Democratic parties can do for democracy is to recruit candidates for national political offices who can run effective campaigns and responsibly uphold their elected positions.
- Working Together in Campaigns: Parties can also work to simplify voters’ choices by trying to get candidates to emphasize the same issues or take similar issue positions. The problem is that members of the party organization and the party in government do not always agree on what government should do. Party leaders have very little power over candidates by way of rewards and punishments.
- Working Together in Office: Voters cannot expect that putting one party in power is going to result in specific policy changes. Instead, policy outcomes depend on how (and whether) individual officeholders from the party can resolve their differences.
- Accountability: A party must serve as an accountability mechanism that gives citizens an identifiable group to reward when policies work well and to punish when policies fail.
- Citizens’ Behavior: Citizens are under no obligation to give money or time to the party they identify with or to any of the party’s candidates. They do not have to vote for their party’s candidates, or even to vote at all. When party members refuse to cooperate, political parties may be unable to do the things that help American democracy work well.
Begin Chapter 7 Vocabulary
Legislators who do not hold leadership positions within their party caucus or conference
The use of party names to evoke certain positions or issues
The organization of Democrats in the House and the Senate that meets to discuss and debate the party's positions on various issues in order to reach a consensus and to assign leadership positions
Conditional Party Government
The theory that lawmakers from the same party will cooperate to develop policy proposals
The organization of Republicans in the House and the Senate that meets to discuss and debate the party's positions on various issues in order to reach a consensus and to assign leadership positions
A term describing issues that raise disagreements within a party coalition or between political parties about what government should do
A decline in the % of citizens who identify with one of the major parties; usually occurs over the course of a decade or longer.
A situation in which the House, Senate, and presidency are not all controlled by the same party, such as when the Democrats hold the majority of House and Senate seats and the president is a Republican.
The principle that in a democracy with single-member districts and plurality voting, like the USA, only two parties' candidates will have a realistic chance of winning political office.
a tax-exempt group formed primarily to influence elections through voter mobilization efforts and issue ads that do not directly endorse or oppose a candidate. Unlike political action committees, they are not subject to contribution limits and spending caps.
New Deal Coalition
The assemblage of groups who aligned with and supported the Democratic Party in support of New Deal policies during the 5th party system including African Americans, Catholics, Jews, Union Members, and White Southerners.
groups of people who belong to, are candidates of, or work for a political party but do not necessarily work together or hold similar policy preferences.
meeting every 4 years at which states' delegates select the party's presidential and VP nominees and approve the party platform.
Parties in Service
Role of parties in recruiting, training, and contributing to and campaigning for congressional and presidential candidates - became popular in 6th party system.
Party in Power
Under unified government, the party that controls the House, Senate, and the presidency. Under divided government, the president's party.
Party in the Electorate
the group of citizens who identify with a specific political party
a set of objectives outlining the party's issue positions and priorities - although candidates are not required to support their party's platform
the idea that a political party exists as an organization distinct from its elected officials or party leaders
A voting system in which the candidate who receives the most votes within a geographic area wins the election, regardless of whether that candidate wins a majority (more than half) of the votes.
a term describing the alignment of both parties' members with their own party's issues and priorities, with little crossover support for the other party's goals.
Political Action Committee (PAC)
an interest group or division of an interest group that can raise money to contribute to campaigns or to spend on ads in support of candidates. The amount a PAC can receive from each of its donors and its expenditures on federal campaigning are strictly limited.
an unofficial patronage system within a political party that seeks to gain political power and government contracts, jobs, and other benefits for party's leaders, workers, and supporters.
A ballot vote in which citizens select a party's nominee for the general election
a change in the size or composition of the party coalitions or in the nature of the issues that divide the parties. Typically occurs within an election cycle or two, but can also occur gradually over the course of a decade or longer.
a system in which each political party's candidates campaign on the party platform, work together in office to implement the platform, and are judged by voters based on whether they achieve the platform's objectives.
A frequently updated mental record that a person uses to incorporate new information, like the information that leads a citizen to identify with a particular political party.
Single Member Districts
an electoral system in which every elected official represents a geographically defined area, such as a state or congressional district, and each area elects one representative.
The practice of rewarding party supporters with benefits like federal govt positions.
Interest groups are organizations of people who share common political interests and aim to influence public policy by electioneering and lobbying. Interest groups and political parties share the goal of changing what government does.
Involves persuasion, using reports, protests, informal meetings, or other techniques to convince an elected official or bureaucrat to help enact a law, craft a regulation, or do something else that a group wants.
Difference Between Interest Groups and Political Parties
- Political Parties run Candidates for office, Interest groups do not run candidates
- Political parties hold legal advantages over interest groups when it comes to influencing policy, such as guaranteed positions on electoral ballots.
- Elected members have direct influence while interest groups have indirect influence
- the idea that Americans exercise political power through participation in interest groups rather than as individuals. Interest groups are America's fundamental political actors.
- America is described as an interest group state, a government in which most policy decisions are determined by the influence of interest groups
Regulation of Lobbying
- Annual Reports Must be Filed by Firms and Interest Groups about Activities and Expenses
- Number of lobbyists doubled from 2000 to 2005
- Due to the large size and widespread influence of the federal govt, the number of interest groups has doubled.
an interest group composed of companies in the same business or industry (the same “trade”) that lobbies for policies that will benefit members of the group.
Types of Interest Groups
- Economic groups seek public policies that will provide monetary benefits to their members. Labor organizations fall under this category.
- Citizen groups seek change in spending, regulations, or government programs concerning a wide range of policies (also known as public interest groups). Issues of interest may vary from legislation that defines marriage between a man and a woman to the elimination of estate taxes.
- Single-issue groups form around a narrowly focused goal, seeking change on a single topic, government program, or piece of legislation. For example, the National Right to Life campaign lobbies for regulations on abortion rights.
Historically, economic interest groups outnumbered citizen groups and single-issue groups. While the number of all types of interest groups has increased in recent years, the increase in citizen groups has far outpaced the growth in economic groups. This may be attributed to the increased role of the government in citizens’ everyday lives.
- Centralized groups are interest groups with a headquarters, usually in Washington, DC, as well as members and field offices throughout the country. In general, these groups’ lobbying decisions are made at headquarters by the group leaders. Most well-known organizations like the AARP and the NRA are centralized groups.
- A centralized organization controls all of the group’s resources and can deploy them efficiently, but it can be challenging to find out what members want.
- Confederations are interest groups made up of several independent, local organizations that provide much of the group's funding and hold most of the power.
- A confederation has the advantage of maintaining independent chapters at the state and local levels, so it is easier for the national headquarters to learn what members want. Conflict, however, is more rampant in confederations because when chapters send funds to headquarters, they can specify how the funds must be used.
- The practice of transitioning from government positions to working for interest groups or lobbying firms
- Over 40 percent of representatives leaving the House or Senate join a lobbying firm after their departure.
an interest group that has a large number of dues-paying individuals as members. Not all mass associations give members a say in selecting a group’s leaders or determining its mission.
an interest group whose members are businesses or other organizations rather than individuals.
Interest Group Resources
- People are among the most important resources an interest group can utilize. Group members write letters to elected officials, send e-mails, travel to Washington for demonstrations, and so on.
- Money is important because virtually everything interest groups do can be purchased as services. Well-funded groups can purchase resources they lack.
- Expertise can take many forms. Areas of expertise may include knowing members’ preferences, or having information on policy questions and legislative proposals. This information is an asset group leaders can use to negotiate with elected officials or bureaucrats.
The Logic of Collective Action
- Changes in policy are public goods; everyone who is eligible benefits. Regardless of how many other people join, an individual is better off free riding—refusing to join an organization, and still enjoying the benefits of any success the group might have (a stylized payoff sketch follows this list). But, if everyone acts on this calculation, none will join the group and the organization will be unable to lobby for grants or anything else.
- Prisoner's Dilemma: all participants will be better off if they cooperate or coordinate their behavior, but each individual participant also has an incentive to defect or refuse to cooperate, in hopes of enjoying the benefits of the other participants’ efforts without contributing themselves.
- Collective action problems involving interest groups are usually more difficult to resolve than the prisoner’s dilemma since there are typically more participants, and there is no way for each participant to know whether others are free riding.
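The free-rider logic can be made concrete with a stylized sketch; the cost and benefit numbers below are invented purely for illustration.

```python
# A minimal sketch of the free-rider problem: each member is individually better
# off not contributing, but if everyone free rides the group gets nothing.

COST_OF_JOINING = 4             # hypothetical dues/time/effort for one member
BENEFIT_IF_GROUP_SUCCEEDS = 10  # hypothetical value of the policy change to each member

def payoff(i_join: bool, group_succeeds: bool) -> int:
    """An individual's payoff: the public good is enjoyed whether or not they joined."""
    benefit = BENEFIT_IF_GROUP_SUCCEEDS if group_succeeds else 0
    return benefit - (COST_OF_JOINING if i_join else 0)

# If the group succeeds anyway, free riding beats joining (10 > 6);
# if everyone reasons this way, no one joins and everyone gets 0.
print(payoff(i_join=True,  group_succeeds=True))   # 6
print(payoff(i_join=False, group_succeeds=True))   # 10
print(payoff(i_join=False, group_succeeds=False))  # 0
```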
Solving Collective Action Problems
- Some organizations offer immaterial benefits for participation, such as:
- Solidary benefits include the satisfaction derived from the experience of working with like-minded people, even if the group’s efforts do not achieve the desired impact.
- Purposive benefits include the satisfaction derived from the experience of working toward a desired policy goal, even if the goal is not achieved.
- Coercion is a method of eliminating nonparticipation or free riding by requiring participation. For example, workers in certain industries are required to join their respective union.
- Selective incentives are benefits that can motivate participation in a group effort because they are available only to those who participate, such as member services offered by interest groups.
- Interest group entrepreneurs play a critical role in successful collective action. They are leaders of an interest group who define the group’s mission and its goals and create a plan to achieve them.
Implications of the Logic of Collective Action
Unless people see benefits from participating in an organization, group leaders must worry about finding the right mix of coercion and selective incentives to get people to join. Economic groups are generally easier to form than citizen groups. Since economic groups generally involve a small number of corporations or individuals, the costs of free riding are relatively high; one actor’s efforts significantly boost the probability of success. Thus, economic groups can often form on the strength of their shared policy or monetary goals, without coercion, selective incentives, or solidary benefits. Citizen groups, on the other hand, with many more potential members, typically need to incentivize people to join.
Inside Lobbying Strategies
are tactics used by interest groups within Washington, DC, to achieve their policy goals.
Direct Lobbying Strategy
attempts by interest group staff to influence policy by speaking with elected officials or bureaucrats, is very common. Interest groups try to help like-minded legislators secure policy changes that they both want. Little time is spent trying to convert opposing legislators and bureaucrats.
Other Interest Lobbying Strategies:
- draft legislation and deliver it directly to legislators
- prepare research reports on topics of interest; Congress is more likely to accept a group's legislative proposal if members believe its research claims
- Interest group staff often testify before congressional committees in order to inform members of congress about issues that matter to the group
- groups can sue the govt based on constitutionality or that the govt misinterpreted the provisions of existing law
- collaborate with other interest groups in the short term to achieve a specific outcome.
Outside Lobbying Strategies
Grassroots Lobbying is a strategy that relies on participation by group members, such as in a protest or a letter-writing campaign. This strategy is effective because elected officials hate to act against a large group of citizens who care enough about an issue to express their position.
is often ignored because it says more about a group's ability to make participation accessible than about the number of people who strongly support an issue.
Most interest groups are organized as a 501(c) organization, a tax code classification that makes donations to the group tax-deductible but limits the group’s political activities (the formal limit is 20 percent of the group’s activities or budget)
Separate Political Action Committee (PAC) or 527 organization
which is a tax-exempt group formed primarily to influence elections through voter mobilization efforts and issue ads that do not directly endorse or oppose a candidate. They are not subject to spending caps or contribution limits.
"Taking the late train"
Some interest groups use the strategy of taking the late train by donating money to the winning candidate after the election in hopes of securing a meeting with that person when he or she takes office.
An initiative is a direct vote by citizens on a policy change proposed by fellow citizens or organized groups outside government. Getting a question on the ballot typically requires collecting a set number of signatures from registered voters in support of the proposal. The initiative process favors well-funded groups that can advertise their proposal.
A referendum is a direct vote by citizens on a policy change proposed by a legislature or another government body. While referenda are common in state and local elections, there is no mechanism for a national-level referendum.
How Much Power Do Interest Groups Have?
- Interest groups lobby their friends in government rather than their enemies, and tend to moderate their demands in the face of resistance.
- Some complaints about the power of interest groups come from losers in the political process.
- Many interest groups claim responsibility for policies and election outcomes regardless of whether their lobbying made the difference.
- The sizable amounts that groups spend to lobby Congress can easily overshadow the more important issue of what they got for their money.
What Determines When Groups Succeed?
Two related factors determine the success of lobbying efforts: salience and conflict.
Interest groups are more likely to succeed when their request has low salience, or attracts little public attention. Legislators and bureaucrats do not have to worry about the political consequences of giving a group what it wants if the issue is not well known.
Salience: the level of public attention that an issue attracts.
Two kinds of conflict over lobbying
- Disagreements between interest groups
- Differences between what a particular interest group wants and public opinion
This page uses content from Wikipedia and is licensed under CC BY-SA.
- Synonyms: hard of hearing; anakusis or anacusis is total deafness
- Symbol: the international symbol of deafness and hearing loss
- Types: conductive, sensorineural, mixed
- Causes: genetics, aging, exposure to noise, some infections, birth complications, trauma to the ear, certain medications or toxins
- Prevention: immunization, proper care around pregnancy, avoiding loud noise, avoiding certain medications
- Treatment: hearing aids, sign language, cochlear implants, subtitles
- Frequency: 1.33 billion / 18.5% (2015)
Hearing loss, also known as hearing impairment, is a partial or total inability to hear. A deaf person has little to no hearing. Hearing loss may occur in one or both ears. In children, hearing problems can affect the ability to learn spoken language and in adults it can create difficulties with social interaction and at work. In some people, particularly older people, hearing loss can result in loneliness. Hearing loss can be temporary or permanent.
Hearing loss may be caused by a number of factors, including: genetics, ageing, exposure to noise, some infections, birth complications, trauma to the ear, and certain medications or toxins. A common condition that results in hearing loss is chronic ear infections. Certain infections during pregnancy, such as syphilis and rubella, may also cause hearing loss in the child. Hearing loss is diagnosed when hearing testing finds that a person is unable to hear 25 decibels in at least one ear. Testing for poor hearing is recommended for all newborns. Hearing loss can be categorized as mild (25 to 40 dB), moderate (41 to 55 dB), moderate-severe (56 to 70 dB), severe (71 to 90 dB), or profound (greater than 90 dB). There are three main types of hearing loss: conductive hearing loss, sensorineural hearing loss, and mixed hearing loss.
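The severity categories above can be expressed as a simple lookup. The following minimal sketch (not from the source article) maps a measured hearing threshold in decibels to the corresponding category.

```python
# A minimal sketch mapping a hearing threshold (in dB) to the severity
# categories listed above: mild (25-40), moderate (41-55), moderate-severe
# (56-70), severe (71-90), profound (>90).

def classify_hearing_loss(threshold_db: float) -> str:
    """Return the hearing-loss category for a threshold measured in dB."""
    if threshold_db < 25:
        return "normal hearing"
    if threshold_db <= 40:
        return "mild"
    if threshold_db <= 55:
        return "moderate"
    if threshold_db <= 70:
        return "moderate-severe"
    if threshold_db <= 90:
        return "severe"
    return "profound"

print(classify_hearing_loss(30))   # mild
print(classify_hearing_loss(95))   # profound
```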
About half of hearing loss globally is preventable through public health measures. Such practices include immunization, proper care around pregnancy, avoiding loud noise, and avoiding certain medications. The World Health Organization recommends that young people limit the use of personal audio players to an hour a day in an effort to limit exposure to noise. Early identification and support are particularly important in children. For many, hearing aids, sign language, cochlear implants, and subtitles are useful. Lip reading is another useful skill some develop. Access to hearing aids, however, is limited in many areas of the world.
As of 2013 hearing loss affects about 1.1 billion people to some degree. It causes disability in 5% (360 to 538 million) and moderate to severe disability in 124 million people. Of those with moderate to severe disability 108 million live in low and middle income countries. Of those with hearing loss, it began during childhood for 65 million. Those who use sign language and are members of Deaf culture see themselves as having a difference rather than an illness. Most members of Deaf culture oppose attempts to cure deafness and some within this community view cochlear implants with concern as they have the potential to eliminate their culture. The term hearing impairment is often viewed negatively as it emphasises what people cannot do.
Use of the terms "hearing impaired", "deaf-mute", or "deaf and dumb" to describe deaf and hard of hearing people is discouraged by advocacy organizations as they are offensive to many deaf and hard of hearing people.
Human hearing extends in frequency from 20–20,000 Hz, and in intensity from 0 dB to 120 dB HL or more. 0 dB does not represent absence of sound, but rather the softest sound an average unimpaired human ear can hear; some people can hear down to −5 or even −10 dB. Sound is generally uncomfortably loud above 90 dB and 115 dB represents the threshold of pain. The ear does not hear all frequencies equally well; hearing sensitivity peaks around 3000 Hz. There are many qualities of human hearing besides frequency range and intensity that cannot easily be measured quantitatively. But for many practical purposes, normal hearing is defined by a frequency versus intensity graph, or audiogram, charting sensitivity thresholds of hearing at defined frequencies. Because of the cumulative impact of age and exposure to noise and other acoustic insults, 'typical' hearing may not be normal.
Hearing loss is primarily a sensory deficit, but it may be accompanied by other symptoms, as well as secondary symptoms.
Hearing loss has multiple causes, including ageing, genetics, perinatal problems and acquired causes like noise and disease. For some kinds of hearing loss, the cause is unknown.
There is a progressive loss of ability to hear high frequencies with aging known as presbycusis. For men, this can start as early as 25 and women at 30. Although genetically variable it is a normal concomitant of ageing and is distinct from hearing losses caused by noise exposure, toxins or disease agents. Common conditions that can increase the risk of hearing loss in elderly people are high blood pressure, diabetes or the use of certain medications harmful to the ear. While everyone loses hearing with age, the amount and type of hearing loss is variable.
Noise exposure is the cause of approximately half of all cases of hearing loss, causing some degree of problems in 5% of the population globally. The National Institute for Occupational Safety and Health (NIOSH) recognizes that the majority of hearing loss is not due to age, but due to noise exposure. By correcting for age in assessing hearing, one tends to overestimate the hearing loss due to noise for some and underestimate it for others.
Hearing loss due to noise may be temporary, called a 'temporary threshold shift', a reduced sensitivity to sound over a wide frequency range resulting from exposure to a brief but very loud noise like a gunshot, firecracker, jet engine, jackhammer, etc. or to exposure to loud sound over a few hours such as during a pop concert or nightclub session. Recovery of hearing is usually within 24 hours, but may take up to a week. Both constant exposure to loud sounds (85 dB(A) or above) and one-time exposure to extremely loud sounds (120 dB(A) or above) may cause permanent hearing loss.
Noise-induced hearing loss (NIHL) typically manifests as elevated hearing thresholds (i.e. less sensitivity or muting) between 3000 and 6000 Hz, centred at 4000 Hz. As noise damage progresses, damage spreads to affect lower and higher frequencies. On an audiogram, the resulting configuration has a distinctive notch, called a 'noise' notch. As ageing and other effects contribute to higher frequency loss (6–8 kHz on an audiogram), this notch may be obscured and entirely disappear.
Various governmental, industry and standards organizations set noise standards.
The U.S. Environmental Protection Agency has identified the level of 70 dB(A) (40% louder to twice as loud as normal conversation; typical level of TV, radio, stereo; city street noise) for 24‑hour exposure as the level necessary to protect the public from hearing loss and other disruptive effects from noise, such as sleep disturbance, stress-related problems, learning detriment, etc. Noise levels are typically in the 65 to 75 dB(A) range for those living near airports or freeways and may result in hearing damage if sufficient time is spent outdoors.
Louder sounds cause damage in a shorter period of time. Estimation of a "safe" duration of exposure is possible using an exchange rate of 3 dB. As 3 dB represents a doubling of the intensity of sound, duration of exposure must be cut in half to maintain the same energy dose. For workplace noise regulation, the "safe" daily exposure amount at 85 dB(A), known as an exposure action value, is 8 hours, while the "safe" exposure at 91 dB(A) is only 2 hours. Different standards use exposure action values between 80 dB(A) and 90 dB(A). Note that for some people, sound may be damaging at even lower levels than 85 dB(A). Exposure to other ototoxins (such as pesticides, some medications including chemotherapy agents, solvents, etc.) can increase susceptibility to noise damage, as well as causing damage of its own. This is called a synergistic interaction. Since noise damage is cumulative over long periods of time, persons who are exposed to non-workplace noise, like recreational activities or environmental noise, may have compounding damage from all sources.
Some national and international organizations and agencies use an exchange rate of 4 dB or 5 dB. While these exchange rates may indicate a wider zone of comfort or safety, they can significantly underestimate the damage caused by loud noise. For example, at 100 dB (nightclub music level), a 3 dB exchange rate would limit exposure to 15 minutes; the 5 dB exchange rate allows an hour.
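To make the arithmetic concrete, here is a minimal sketch (in Python, purely illustrative; the function name and defaults are not from any standard) of how an exchange rate converts a sound level into a permissible daily exposure time, using the 85 dB(A), 8-hour reference described above.

```python
def permissible_exposure_hours(level_dba, criterion_dba=85.0,
                               reference_hours=8.0, exchange_rate_db=3.0):
    """Allowed daily exposure time for a given A-weighted sound level.

    Every `exchange_rate_db` decibels above the criterion level halves
    the reference duration (and every step below it doubles it).
    """
    return reference_hours / 2 ** ((level_dba - criterion_dba) / exchange_rate_db)

# Figures from the text: 91 dB(A) at a 3 dB exchange rate allows about 2 hours;
# 100 dB(A) allows about 15 minutes at 3 dB but about 1 hour at a 5 dB rate.
print(permissible_exposure_hours(91))                         # ~2.0 hours
print(permissible_exposure_hours(100) * 60)                   # ~15 minutes
print(permissible_exposure_hours(100, exchange_rate_db=5.0))  # ~1.0 hour
```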
Many people are unaware of the presence of environmental sound at damaging levels, or of the level at which sound becomes harmful. Common sources of damaging noise levels include car stereos, children's toys, motor vehicles, crowds, lawn and maintenance equipment, power tools, gun use, musical instruments, and even hair dryers. Noise damage is cumulative; all sources of damage must be considered to assess risk. If one is exposed to loud sound (including music) at high levels or for extended durations (85 dB(A) or greater), then hearing loss will occur. Sound intensity (sound energy, or propensity to cause damage to the ears) increases dramatically with proximity according to an inverse square law: halving the distance to the sound quadruples the sound intensity.
In the USA, 12.5% of children aged 6–19 years have permanent hearing damage from excessive noise exposure. The World Health Organization estimates that half of those between 12 and 35 are at risk from using personal audio devices that are too loud.
Hearing loss due to noise has been described as primarily a condition of modern society. In preindustrial times, humans had far less exposure to loud sounds. Studies of primitive peoples indicate that much of what has been attributed to age-related hearing loss may be long term cumulative damage from all sources, especially noise. People living in preindustrial societies have considerably less hearing loss than similar populations living in modern society. Among primitive people who have migrated into modern society, hearing loss is proportional to the number of years spent in modern society. Military service in World War II, the Korean War, and the Vietnam War, has likely also caused hearing loss in large numbers of men from those generations, though proving that hearing loss was a direct result of military service is problematic without entry and exit audiograms.
Hearing loss in adolescents may be caused by loud noise from toys, music by headphones, and concerts or events. In 2017, the Centers for Disease Control and Prevention brought their researchers together with experts from the World Health Organization and academia to examine the risk of hearing loss from excessive noise exposure in and outside the workplace in different age groups, as well as actions being taken to reduce the burden of the condition. A summary report was published in 2018.
Hearing loss can be inherited. Around 75–80% of all these cases are inherited by recessive genes, 20–25% are inherited by dominant genes, 1–2% are inherited by X-linked patterns, and fewer than 1% are inherited by mitochondrial inheritance.
When looking at the genetics of deafness, there are 2 different forms, syndromic and nonsyndromic. Syndromic deafness occurs when there are other signs or medical problems aside from deafness in an individual. This accounts for around 30% of deaf individuals who are deaf from a genetic standpoint. Nonsyndromic deafness occurs when there are no other signs or medical problems associated with an individual other than deafness. From a genetic standpoint, this accounts for the other 70% of cases, and represents the majority of hereditary hearing loss. Syndromic cases occur with disorders such as Usher syndrome, Stickler syndrome, Waardenburg syndrome, Alport's syndrome, and neurofibromatosis type 2. These are diseases that have deafness as one of the symptoms or as a common feature associated with it. Many of the genetic mutations giving rise to syndromic deafness have been identified. In nonsyndromic cases, where deafness is the only finding, it is more difficult to identify the genetic mutation although some have been discovered.
Some medications may reversibly affect hearing. These medications are considered ototoxic. This includes loop diuretics such as furosemide and bumetanide, non-steroidal anti-inflammatory drugs (NSAIDs) both over-the-counter (aspirin, ibuprofen, naproxen) as well as prescription (celecoxib, diclofenac, etc.), paracetamol, quinine, and macrolide antibiotics. The link between NSAIDs and hearing loss tends to be greater in women, especially those who take ibuprofen six or more times a week. Others may cause permanent hearing loss. The most important group is the aminoglycosides (main member gentamicin) and platinum based chemotherapeutics such as cisplatin and carboplatin.
On October 18, 2007, the U.S. Food and Drug Administration (FDA) announced that a warning about possible sudden hearing loss would be added to drug labels of PDE5 inhibitors, which are used for erectile dysfunction.
Audiologic monitoring for ototoxicity allows for the (1) early detection of changes to hearing status presumably attributed to a drug/treatment regime so that changes in the drug regimen may be considered, and (2) audiologic intervention when handicapping hearing impairment has occurred.
In addition to medications, hearing loss can also result from specific chemicals in the environment: metals, such as lead; solvents, such as toluene (found in crude oil, gasoline and automobile exhaust, for example); and asphyxiants. Combined with noise, these ototoxic chemicals have an additive effect on a person’s hearing loss.
Hearing loss due to chemicals starts in the high frequency range and is irreversible. It damages the cochlea with lesions and degrades central portions of the auditory system. For some ototoxic chemical exposures, particularly styrene, the risk of hearing loss can be higher than that from noise exposure alone. The effect is greatest when the combined exposure includes impulse noise.
A 2018 informational bulletin by the US Occupational Safety and Health Administration (OSHA) and the National Institute for Occupational Safety and Health (NIOSH) introduces the issue, provides examples of ototoxic chemicals, lists the industries and occupations at risk and provides prevention information.
There can be damage either to the ear, whether the external or middle ear, to the cochlea, or to the brain centers that process the aural information conveyed by the ears. Damage to the middle ear may include fracture and discontinuity of the ossicular chain. Damage to the inner ear (cochlea) may be caused by temporal bone fracture. People who sustain head injury are especially vulnerable to hearing loss or tinnitus, either temporary or permanent.
Sound waves reach the outer ear and are conducted down the ear canal to the eardrum, causing it to vibrate. The vibrations are transferred by the 3 tiny ear bones of the middle ear to the fluid in the inner ear. The fluid moves hair cells (stereocilia), and their movement generates nerve impulses which are then taken to the brain by the cochlear nerve. The auditory nerve takes the impulses to the brainstem, which sends the impulses to the midbrain. Finally, the signal goes to the auditory cortex of the temporal lobe to be interpreted as sound.
Older people may lose their hearing from long exposure to noise, changes in the inner ear, changes in the middle ear, or from changes along the nerves from the ear to the brain.
Identification of a hearing loss is usually conducted by a general practitioner medical doctor, otolaryngologist, certified and licensed audiologist, school or industrial audiometrist, or other audiometric technician. Diagnosis of the cause of a hearing loss is carried out by a specialist physician (audiovestibular physician) or otorhinolaryngologist.
A case history (usually a written form, with questionnaire) can provide valuable information about the context of the hearing loss and indicate what kind of diagnostic procedures to employ.
In case of infection or inflammation, blood or other body fluids may be submitted for laboratory analysis.
Hearing loss is generally measured by playing generated or recorded sounds, and determining whether the person can hear them. Hearing sensitivity varies according to the frequency of sounds. To take this into account, hearing sensitivity can be measured for a range of frequencies and plotted on an audiogram.
Another method for quantifying hearing loss is a speech-in-noise test. As the name implies, a speech-in-noise test gives an indication of how well one can understand speech in a noisy environment. A person with a hearing loss will often be less able to understand speech, especially in noisy conditions. This is especially true for people who have a sensorineural loss – which is by far the most common type of hearing loss. As such, speech-in-noise tests can provide valuable information about a person's hearing ability, and can be used to detect the presence of a sensorineural hearing loss. A recently developed digit-triple speech-in-noise test may be a more efficient screening test.
Otoacoustic emissions test is an objective hearing test that may be administered to toddlers and children too young to cooperate in a conventional hearing test. The test is also useful in older children and adults and is an important measure in diagnosing auditory neuropathy described above.
Auditory brainstem response testing is an electrophysiological test used to test for hearing deficits caused by pathology within the ear, the cochlear nerve and also within the brainstem. This test can be used to identify delay in the conduction of neural impulses due to tumours or inflammation but can also be an objective test of hearing thresholds. Other electrophysiological tests, such as cortical evoked responses, can look at the hearing pathway up to the level of the auditory cortex.
MRI and CT scans can be useful to identify the pathology of many causes of hearing loss. They are only needed in selected cases.
Hearing loss is categorized by type, severity, and configuration. Furthermore, a hearing loss may exist in only one ear (unilateral) or in both ears (bilateral). Hearing loss can be temporary or permanent, sudden or progressive.
The severity of a hearing loss is ranked according to the ranges of nominal thresholds at which a sound must be presented in order to be detected by an individual. It is measured in decibels of hearing loss, or dB HL. The measurement of hearing loss in an individual is conducted over several frequencies, mostly 500 Hz, 1000 Hz, 2000 Hz and 4000 Hz. The hearing loss of the individual is the average of the hearing loss values over the different frequencies. Hearing loss is ranked differently by different organisations, so different systems are in use in different countries.
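As a rough sketch of the averaging just described, the following code computes a four-frequency pure-tone average and maps it onto the severity bands quoted earlier in this article (mild 25–40 dB, moderate 41–55 dB, moderate-severe 56–70 dB, severe 71–90 dB, profound above 90 dB). The cut-offs are illustrative only, since, as noted, different organisations rank hearing loss differently.

```python
def pure_tone_average(thresholds_db_hl):
    """Average hearing threshold (dB HL) over the standard audiometric
    frequencies, typically 500, 1000, 2000 and 4000 Hz."""
    return sum(thresholds_db_hl) / len(thresholds_db_hl)

def severity(pta_db_hl):
    """Map a pure-tone average onto the severity bands used in this article.
    Cut-offs vary between organisations; these values are illustrative."""
    if pta_db_hl < 25:
        return "normal"
    if pta_db_hl <= 40:
        return "mild"
    if pta_db_hl <= 55:
        return "moderate"
    if pta_db_hl <= 70:
        return "moderate-severe"
    if pta_db_hl <= 90:
        return "severe"
    return "profound"

# Hypothetical thresholds at 500, 1000, 2000 and 4000 Hz:
pta = pure_tone_average([30, 35, 45, 60])
print(pta, severity(pta))  # 42.5 moderate
```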
Hearing loss may be ranked as slight, mild, moderate, moderately severe, severe or profound.
The International Bureau for Audiophonology (BIAP) in Belgium publishes its own audiometric classification of hearing impairment.
Hearing loss may affect one or both ears. If both ears are affected, then one ear may be more affected than the other. Thus it is possible, for example, to have normal hearing in one ear and none at all in the other, or to have mild hearing loss in one ear and moderate hearing loss in the other.
For certain legal purposes, such as insurance claims, hearing loss is described in terms of percentages. Given that hearing loss can vary by frequency and that audiograms are plotted with a logarithmic scale, the idea of a percentage of hearing loss is somewhat arbitrary, but where decibels of loss are converted via a legally recognized formula, it is possible to calculate a standardized "percentage of hearing loss", which is suitable for legal purposes only.
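For illustration only, the sketch below applies one commonly cited US convention for such a conversion (a 25 dB "low fence", 1.5% of impairment per decibel above it, capped at 100%, and a 5:1 weighting of the better ear when combining both ears). These constants are an assumption made for the example; the legally recognized formula differs between jurisdictions and should be taken from the applicable law.

```python
def monaural_impairment_percent(pta_db_hl, low_fence=25.0, rate_per_db=1.5):
    """Percentage impairment for one ear from its pure-tone average.
    Constants follow one commonly cited US convention; jurisdictions differ."""
    return max(0.0, min(100.0, (pta_db_hl - low_fence) * rate_per_db))

def binaural_impairment_percent(better_ear_pct, worse_ear_pct):
    """Combine the two ears, weighting the better ear 5:1 (illustrative)."""
    return (5 * better_ear_pct + worse_ear_pct) / 6

better = monaural_impairment_percent(40)   # 22.5%
worse = monaural_impairment_percent(70)    # 67.5%
print(binaural_impairment_percent(better, worse))  # 30.0%
```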
There are three main types of hearing loss: conductive hearing loss, sensorineural hearing loss, and mixed hearing loss, which is a combination of the two. An additional problem which is increasingly recognised is auditory processing disorder, which is not a hearing loss as such but a difficulty perceiving sound.
Conductive hearing loss is present when the sound is not reaching the inner ear, the cochlea. This can be due to external ear canal malformation, dysfunction of the eardrum or malfunction of the bones of the middle ear. Defects of the eardrum, ranging from small perforations to its total absence, result in hearing loss of varying degree. Scar tissue after ear infections may also impair eardrum function, as can a retracted eardrum that has become adherent to the medial part of the middle ear.
Dysfunction of the three small bones of the middle ear – malleus, incus, and stapes – may cause conductive hearing loss. The mobility of the ossicles may be impaired for different reasons, including a bony disorder of the ossicles called otosclerosis; disruption of the ossicular chain due to trauma, infection or ankylosis may also cause hearing loss.
Sensorineural hearing loss is caused by dysfunction of the inner ear (the cochlea) or of the nerve that transmits the impulses from the cochlea to the hearing centre in the brain. The most common reason for sensorineural hearing loss is damage to the hair cells in the cochlea. Depending on the definition, it is estimated that more than 50% of the population over the age of 70 has impaired hearing.
Damage to the brain can lead to a central deafness. The peripheral ear and the auditory nerve may function well but the central connections are damaged by tumour, trauma or other disease and the patient is unable to process speech information.
Mixed hearing loss is a combination of conductive and sensorineural hearing loss. Chronic ear infection (a fairly common diagnosis) can cause a defective ear drum or middle-ear ossicle damages, or both. In addition to the conductive loss, a sensory component may be present.
This is not an actual hearing loss but gives rise to significant difficulties in hearing. One kind of auditory processing disorder is King-Kopetzky syndrome, which is characterized by an inability to filter out background noise in noisy environments despite normal performance on traditional hearing tests. Auditory processing disorders are sometimes linked to language disorders in persons of all ages.
The shape of an audiogram shows the relative configuration of the hearing loss, such as a Carhart notch for otosclerosis, 'noise' notch for noise-induced damage, high frequency rolloff for presbycusis, or a flat audiogram for conductive hearing loss. In conjunction with speech audiometry, it may indicate central auditory processing disorder, or the presence of a schwannoma or other tumor. There are four general configurations of hearing loss:
1. Flat: thresholds essentially equal across test frequencies.
2. Sloping: lower (better) thresholds in low-frequency regions and higher (poorer) thresholds in high-frequency regions.
3. Rising: higher (poorer) thresholds in low-frequency regions and lower (better) thresholds in higher-frequency regions.
4. Trough-shaped ("cookie-bite" or "U" shaped): greatest hearing loss in the mid-frequency range, with lower (better) thresholds in low- and high-frequency regions.
People with unilateral hearing loss or single-sided deafness (SSD) have difficulty hearing sound on their impaired side, localizing sounds, and understanding speech in the presence of background noise.
In quiet conditions, speech discrimination is approximately the same for normal hearing and those with unilateral deafness; however, in noisy environments speech discrimination varies individually and ranges from mild to severe.
One reason for the hearing problems these patients often experience is the head shadow effect. Newborn children with no hearing on one side but one normal ear could still have problems. Speech development may be delayed and difficulty concentrating in school is common. More children with unilateral hearing loss have to repeat classes than their peers. Taking part in social activities can be a problem. Early aiding is therefore of utmost importance.
It is estimated that half of cases of hearing loss are preventable. About 60% of hearing loss in children under the age of 15 can be avoided. A number of preventative strategies are effective including: immunization against rubella to prevent congenital rubella syndrome, immunization against H. influenza and S. pneumoniae to reduce cases of meningitis, and avoiding or protecting against excessive noise exposure. The World Health Organization also recommends immunization against measles, mumps, and meningitis, efforts to prevent premature birth, and avoidance of certain medication as prevention.
Noise exposure is the most significant risk factor for noise-induced hearing loss that can be prevented. Different programs exist for specific populations such as school-age children, adolescents and workers. Education regarding noise exposure increases the use of hearing protectors. The use of antioxidants is being studied for the prevention of noise-induced hearing loss, particularly for scenarios in which noise exposure cannot be reduced, such as during military operations.
Noise is widely recognized as an occupational hazard. In the United States, the National Institute for Occupational Safety and Health (NIOSH) and the Occupational Safety and Health Administration (OSHA) work together to provide standards and enforcement on workplace noise levels. The hierarchy of hazard controls demonstrates the different levels of controls to reduce or eliminate exposure to noise and prevent hearing loss, including engineering controls and personal protective equipment (PPE). Other programs and initiative have been created to prevent hearing loss in the workplace. For example, the Safe-in-Sound Award was created to recognize organizations that can demonstrate results of successful noise control and other interventions. Additionally, the Buy Quiet program was created to encourage employers to purchase quieter machinery and tools. By purchasing less noisy power tools like those found on the NIOSH Power Tools Database and limiting exposure to ototoxic chemicals, great strides can be made in preventing hearing loss.
Companies can also provide personal hearing protector devices tailored to both the worker and type of employment. Some hearing protectors universally block out all noise, and some allow for certain noises to be heard. Workers are more likely to wear hearing protector devices when they are properly fitted.
Interventions to prevent noise-induced hearing loss often have many components. A 2017 Cochrane review found that stricter legislation might reduce noise levels. Providing workers with information on their noise exposure levels was not shown to decrease exposure to noise. Ear protection, if used correctly, can reduce noise to safer levels, but often, providing it is not sufficient to prevent hearing loss. Engineering noise out and other solutions such as proper maintenance of equipment can lead to noise reduction, but further field studies on resulting noise exposures following such interventions are needed. Other possible solutions include improved enforcement of existing legislation and better implementation of well-designed prevention programmes, which have not yet been proven conclusively to be effective. The conclusion of the Cochrane review was that further research could modify what is currently understood about the effectiveness of the evaluated interventions.
While the American College of Physicians indicated that there is not enough evidence to determine the utility of screening in adults over 50 years old who do not have any symptoms, the American Speech-Language-Hearing Association recommends that adults should be screened at least every decade through age 50 and at 3-year intervals thereafter, to minimize the detrimental effects of the untreated condition on quality of life. For the same reason, the US Office of Disease Prevention and Health Promotion included among its Healthy People 2020 objectives: to increase the proportion of persons who have had a hearing examination.
Treatment depends on the specific cause if known as well as the extent, type and configuration of the hearing loss. Most hearing loss, that resulting from age and noise, is progressive and irreversible, and there are currently no approved or recommended treatments; management is by hearing aid. A few specific kinds of hearing loss are amenable to surgical treatment. In other cases, treatment is addressed to underlying pathologies, but any hearing loss incurred may be permanent.
There are a number of devices that can improve hearing in those who are deaf or hard of hearing or allow people with these conditions to manage better in their lives.
Hearing aids are devices that work to improve the hearing and speech comprehension of those with hearing loss. They work by magnifying the sound vibrations in the ear so that one can understand what is being said around them. Hearing aids have been shown to have a large beneficial effect in helping adults with mild to moderate hearing loss take part in everyday situations, and a smaller beneficial effect in improving physical, social, emotional and mental well-being in these people. Some people feel as if they cannot live without one because they say it is the only thing that keeps them engaged with the public. Conversely, there are many people who choose not to wear their hearing aids for a multitude of reasons. Up to 40% of adults with hearing aids for hearing loss fail to use them, or do not use them to their full effect. There are a number of reasons for this, stemming from factors such as: the aid amplifying background noises instead of the sounds they intended to hear; issues with comfort, care, or maintenance of the device; aesthetic factors; financial factors; and personal preference for quietness.
There is little evidence that interventions to encourage the regular use of hearing aids (e.g. improving the information given to people about how to use them) increase daily hours of hearing aid use, and there is currently no agreed set of outcome measures for assessing this type of intervention.
Many deaf and hard of hearing individuals use assistive devices in their daily lives.
A wireless device has two main components: a transmitter and a receiver. The transmitter broadcasts the captured sound, and the receiver detects the broadcast audio and enables the incoming audio stream to be connected to accommodations such as hearing aids or captioning systems.
Three types of wireless systems are commonly used: FM, audio induction loop, and infrared. Each system has advantages and benefits for particular uses. FM systems can be battery operated or plugged into an electrical outlet. FM systems produce an analog audio signal, meaning they have extremely high fidelity. Many FM systems are very small in size, allowing them to be used in mobile situations. The audio induction loop permits the listener with hearing loss to be free of wearing a receiver provided that the listener has a hearing aid or cochlear implant processor with an accessory called a "telecoil". If the listener does not have a telecoil, then he or she must carry a receiver with an earpiece. As with FM systems, the infrared (IR) system also requires a receiver to be worn or carried by the listener. An advantage of IR wireless systems is that people in adjoining rooms cannot listen in on conversations, making it useful for situations where privacy and confidentiality are required. Another way to achieve confidentiality is to use a hardwired amplifier, which contains or is connected to a microphone and transmits no signal beyond the earpiece plugged directly into it.
There is no treatment, surgical or otherwise, for sensorineural hearing loss due to the most common causes (age, noise, and genetic defects). For a few specific conditions, surgical intervention can provide a remedy:
Surgical and implantable hearing aids are an alternative to conventional external hearing aids. If the ear is dry and not infected, an air conduction aid could be tried; if the ear is draining, a direct bone conduction hearing aid is often the best solution. If the conductive part of the hearing loss is more than 30–35 dB, an air conduction device could have problems overcoming this gap. A bone-anchored hearing aid could, in this situation, be a good option. The active bone conduction hearing implant Bonebridge (a product of MED-EL corporation) is also an option. This implant is invisible under the intact skin and therefore minimises the risk of skin irritations.
Cochlear implants improve outcomes in people with hearing loss in either one or both ears. They work by artificial stimulation of the cochlear nerve by providing an electric impulse substitution for the firing of hair cells. They are expensive, and require programming along with extensive training for effectiveness.
Cochlear implants as well as bone conduction implants can help with single sided deafness. Middle ear implants or bone conduction implants can help with conductive hearing loss.
People with cochlear implants are at a higher risk for bacterial meningitis. Thus, meningitis vaccination is recommended. People who have hearing loss, especially those who develop a hearing problem in childhood or old age, may need support and technical adaptations as part of the rehabilitation process. Recent research shows variations in efficacy but some studies show that if implanted at a very young age, some profoundly impaired children can acquire effective hearing and speech, particularly if supported by appropriate rehabilitation.
In a classroom setting, children with hearing loss often benefit from direct instruction and communication. Optimally, children with hearing loss will be mainstreamed in a typical classroom and receive supportive services. One such service is seating the student as close to the teacher as possible, which improves the student's ability to hear the teacher's voice and to more easily read the teacher's lips. When lecturing, teachers can help the student by facing them and by limiting unnecessary noise in the classroom. In particular, the teacher can avoid talking when their back is turned to the classroom, such as while writing on a whiteboard.
Some other approaches for classroom accommodations include pairing deaf or hard of hearing students with hearing students. This allows the deaf or hard of hearing student to ask the hearing student questions about concepts that they have not understood. The use of CART (Communication Access Real Time) systems, where an individual types a captioning of what the teacher is saying, is also beneficial. The student views this captioning on their computer. Automated captioning systems are also becoming a popular option. In an automated system, software, instead of a person, is used to generate the captioning. Unlike CART systems, automated systems generally do not require an Internet connection and thus they can be used anywhere and anytime. Another advantage of automated systems over CART is that they are much lower in cost. However, automated systems are generally designed to only transcribe what the teacher is saying and to not transcribe what other students say. An automated system works best for situations where just the teacher is speaking, whereas a CART system will be preferred for situations where there is a lot of classroom discussion.
For those students who are completely deaf, one of the most common interventions is having the child communicate with others through an interpreter using sign language.
Globally, hearing loss affects about 10% of the population to some degree. It caused moderate to severe disability in 124.2 million people as of 2004 (107.9 million of whom are in low and middle income countries). Of these 65 million acquired the condition during childhood. At birth ~3 per 1000 in developed countries and more than 6 per 1000 in developing countries have hearing problems.
Hearing loss increases with age. In those between 20 and 35 rates of hearing loss are 3% while in those 44 to 55 it is 11% and in those 65 to 85 it is 43%.
A 2017 report by the World Health Organization estimated the costs of unaddressed hearing loss and the cost-effectiveness of interventions, for the health-care sector, for the education sector and as broad societal costs. Globally, the annual cost of unaddressed hearing loss was estimated to be in the range of $750–790 billion international dollars.
Data from the United States in 2011-2012 found that rates of hearing loss had declined among adults aged 20 to 69 years, when compared with the results from an earlier time period (1999-2004). It also found that adult hearing loss is associated with increasing age, sex, race/ethnicity, educational level, and noise exposure.
Nearly one in four adults had audiometric results suggesting noise-induced hearing loss. Almost one in four adults who reported excellent or good hearing had a similar pattern (5.5% on both sides and 18% on one side). Among people who reported exposure to loud noise at work, almost one third had such changes.
Abbé Charles-Michel de l'Épée opened the first school for the deaf in Paris. The American Thomas Gallaudet witnessed a demonstration of deaf teaching skills from Épée's successor Abbé Sicard and two of the school's deaf faculty members, Laurent Clerc and Jean Massieu; accompanied by Clerc, he returned to the United States, where in 1817 they founded the American School for the Deaf in Hartford, Connecticut. American Sign Language (ASL) then started to evolve, drawing primarily on French Sign Language (LSF) along with other outside influences.
Post-lingual deafness is hearing loss that is sustained after the acquisition of language, which can occur due to disease, trauma, or as a side-effect of a medicine. Typically, hearing loss is gradual and often detected by family and friends of affected individuals long before the patients themselves will acknowledge the disability. Post-lingual deafness is far more common than pre-lingual deafness. Those who lose their hearing later in life, such as in late adolescence or adulthood, face their own challenges, living with the adaptations that allow them to live independently.
Prelingual deafness is profound hearing loss that is sustained before the acquisition of language, which can occur due to a congenital condition or through hearing loss before birth or in early infancy. Prelingual deafness impairs a child's ability to acquire a spoken language, but deaf children can acquire spoken language through support from cochlear implants (sometimes combined with hearing aids). Non-signing (hearing) parents of deaf babies (90-95% of cases) usually pursue an oral approach without the support of sign language, as these families lack previous experience with sign language and cannot competently provide it to their children. In some cases (late implantation or insufficient benefit from cochlear implants) this carries a risk of language deprivation, because the child has no sign language to fall back on if spoken language is not acquired successfully. The 5-10% of deaf babies born into signing families have the potential for age-appropriate development of language thanks to early exposure to sign language from sign-competent parents, and so can meet language milestones, albeit in sign language in lieu of spoken language.
There has been considerable controversy within the culturally deaf community over cochlear implants. For the most part, there is little objection to those who lost their hearing later in life, or culturally deaf adults choosing to be fitted with a cochlear implant.
Many in the deaf community strongly object to a deaf child being fitted with a cochlear implant (often on the advice of an audiologist); new parents may not have sufficient information on raising deaf children, and the child may be placed in an oral-only program that emphasizes the ability to speak and listen over other forms of communication such as sign language or total communication. Many Deaf people view cochlear implants and other hearing devices as confusing to one's identity: a Deaf person will never be a hearing person and would therefore be trying to fit into a way of living that is not their own. Other concerns include loss of Deaf culture and identity and limitations on hearing restoration.
Jack Gannon, a professor at Gallaudet University, said this about Deaf culture: "Deaf culture is a set of learned behaviors and perceptions that shape the values and norms of deaf people based on their shared or common experiences." Some doctors believe that being deaf makes a person more social. Bill Vicars, from ASL University, shared his experiences as a deaf person, "[deaf people] tend to congregate around the kitchen table rather than the living room sofa… our good-byes take nearly forever, and our hellos often consist of serious hugs. When two of us meet for the first time we tend to exchange detailed biographies." Deaf culture is not about contemplating what deaf people cannot do and how to fix their problems, an approach known as the "pathological view of the deaf." Instead, deaf people celebrate what they can do. There is a strong sense of unity between deaf people as they share their experiences of suffering through a similar struggle. This celebration creates a unity between even deaf strangers. Bill Vicars expresses the power of this bond when stating, "if given the chance to become hearing most [deaf people] would choose to remain deaf."
The United States-based National Association of the Deaf has a statement on its website regarding cochlear implants. The NAD asserts that the choice to implant is up to the individual (or the parents), yet strongly advocates a fully informed decision in all aspects of a cochlear implant. Much of the negative reaction to cochlear implants stems from the medical viewpoint that deafness is a condition that needs to be "cured", while the Deaf community instead regards deafness as a defining cultural characteristic.
Many other assistive devices are more acceptable to the Deaf community, including but not limited to, hearing aids, closed captioning, email and the Internet, text telephones, and video relay services.
Sign languages convey meaning through manual communication and body language instead of acoustically conveyed sound patterns. This involves the simultaneous combination of hand shapes, orientation and movement of the hands, arms or body, and facial expressions to express a speaker's thoughts. "Sign languages are based on the idea that vision is the most useful tool a deaf person has to communicate and receive information".
Those who are deaf (by either state or federal standards) have access to a free and appropriate public education. If a child qualifies as being deaf or hard of hearing and receives an individualized education plan, the IEP team must consider "the child's language and communication needs. The IEP must include opportunities for direct communication with peers and professionals. It must also include the student's academic level, and finally must include the student's full range of needs."
In part, the Department of Education defines deafness as "… a hearing impairment that is so severe that the child is impaired in processing linguistic information through hearing, with or without amplification …." Hearing impairment is defined as "… an impairment in hearing, whether permanent or fluctuating, that adversely affects a child's educational performance but that is not included under the definition of deafness …."
In a residential school where all the children use the same communication system (whether it is a school using ASL, Total Communication or Oralism), students will be able to interact normally with other students, without having to worry about being criticized. An argument supporting inclusion, on the other hand, is that it exposes the student to people who are not just like them, preparing them for adult life. Through interacting, children with hearing disabilities can expose themselves to other cultures, which in the future may be beneficial when it comes to finding jobs and living on their own in a society where their disability may put them in the minority. These are some reasons why a person may or may not want to put their child in an inclusion classroom.
The communication limitations between people who are deaf and their hearing family members can often cause difficulties in family relationships, and affect the strength of relationships among individual family members. It was found that most people who are deaf have hearing parents, which means that the channel that the child and parents communicate through can be very different, often affecting their relationship in a negative way. If a parent communicates best verbally, and their child communicates best using sign language, this could result in ineffective communication between parents and children. Ineffective communication can potentially lead to fights caused by misunderstanding, less willingness to talk about life events and issues, and an overall weaker relationship. Even if individuals in the family make an effort to learn deaf communication techniques such as sign language, a deaf family member often will feel excluded from casual banter, such as the exchange of daily events and news at the dinner table. It is often difficult for people who are deaf to follow these conversations due to the fast-paced and overlapping nature of these exchanges. This can cause a deaf individual to become frustrated and take part in fewer family conversations. This can potentially result in weaker relationships between the deaf individual and their immediate family members. This communication barrier can have a particularly negative effect on relationships with extended family members as well. Communication between a deaf individual and their extended family members can be very difficult due to the gap in verbal and non-verbal communication. This can cause the individuals to feel frustrated and unwilling to put effort into communicating effectively. The lack of effort put into communicating can result in anger, miscommunication, and unwillingness to build a strong relationship.
People who have hearing loss can often experience many difficulties as a result of communication barriers between them and other hearing individuals in the community. Some major areas that can be impacted by this are involvement in extracurricular activities and social relationships. For young people, extracurricular activities are vehicles for physical, emotional, social, and intellectual development. However, it is often the case that communication barriers between people who are deaf and their hearing peers and coaches/club advisors prevent them from getting involved. These communication barriers make it difficult for someone with a hearing loss to understand directions, take advice, collaborate, and form bonding relationships with other team or club members. As a result, extracurricular activities such as sports teams, clubs, and volunteering are often not as enjoyable and beneficial for individuals who have hearing loss, and they may engage in them less often. A lack of community involvement through extracurricular activities may also limit the individual's social network. In general, it can be difficult for someone who is deaf to develop and maintain friendships with their hearing peers due to the communication gap that they experience. They can often miss the jokes, informal banter, and "messing around" that is associated with the formation of many friendships among young people. Conversations between people who are deaf and their hearing peers can often be limited and short due to their differences in communication methods and lack of knowledge on how to overcome these differences. Deaf individuals can often experience rejection by hearing peers who are not willing to make an effort to find their way around communication difficulties. Patience and motivation to overcome such communication barriers are required of both the deaf or hard of hearing and hearing individuals in order to establish and maintain good friendships.
Many people tend to forget about the difficulties that deaf children encounter, as they view the deaf child differently from a deaf adult. Deaf children grow up being unable to fully communicate with their parents, siblings and other family members. Examples include being unable to tell their family what they have learned, what they did, asking for help, or even simply being unable to interact in daily conversation. Deaf children have to learn sign language and to read lips at a young age; however, they cannot communicate with others using it unless the others are educated in sign language as well. Children who are deaf or hard of hearing are faced with many complications while growing up, for example, some children have to wear hearing aids and others require assistance from sign language (ASL) interpreters. The interpreters help them to communicate with other individuals until they develop the skills they need to efficiently communicate on their own. Although growing up for deaf children may entail more difficulties than it does for other children, there are many support groups that allow deaf children to interact with other children. This is where they develop friendships. There are also classes for young children to learn sign language in an environment that has other children in their same situation and around their same age. These groups and classes can be very beneficial in providing the child with the proper knowledge and the social interactions that they need in order to live the healthy, playful and carefree life that any child deserves.
There are three typical adjustment patterns adopted by adults with hearing loss. The first is to remain withdrawn into oneself. This provides a sense of safety and familiarity, which can be a comforting way to lead one's life. The second is to act "as if" one does not have hearing loss at all. A positive attitude will help people to live a life with no barriers and thus engage in optimal interaction. The third and final pattern is for the person to accept their hearing loss as a part of them without undervaluing themselves. This means understanding that one must live with this disability, but that it is not the only thing that constitutes life's meaning. Furthermore, many feel as if their inability to hear others during conversation is their fault. It is important that these individuals learn to become more assertive and unafraid to ask someone to repeat something or to speak a little louder. Although much fatigue and frustration is produced by one's inability to hear, it is important to learn from personal experiences in order to improve one's communication skills. In essence, these patterns will help adults with hearing loss deal with the communication barriers that are present.
In most instances, people who are deaf find themselves working with hearing colleagues, where they can often be cut off from the communication going on around them. Interpreters can be provided for meetings and workshops, but are seldom provided for everyday work interactions. Communication of important information needed for jobs typically comes in the form of written or verbal summaries, which do not convey subtle meanings such as tone of voice, side conversations during group discussions, and body language. This can result in confusion and misunderstanding for the worker who is deaf, therefore making it harder to do their job effectively. Additionally, deaf workers can be unintentionally left out of professional networks, informal gatherings, and casual conversations among their colleagues. Information about informal rules and organizational culture in the workplace is often communicated through these types of interactions, which puts the worker who is deaf at a professional and personal disadvantage. This can hinder their job performance due to lack of access to information and therefore reduce their opportunity to form relationships with their co-workers. Additionally, these communication barriers can all affect a deaf person's career development. Since being able to effectively communicate with one's co-workers and other people relevant to one's job is essential to managerial positions, people with hearing loss can often be denied such opportunities.
To avoid these situations in the workplace, individuals can take full-time or part-time sign language courses. In this way, they can become better able to communicate with the deaf and hard of hearing. Such courses teach American Sign Language (ASL), as most North Americans use this particular language to communicate. It is a visual language made up of specific gestures (signs), hand shapes, and facial expressions that has its own unique grammatical rules and sentence structures. Completing sign language courses helps ensure that deaf individuals feel part of the workplace and are able to communicate with their co-workers and employer in the same manner as other hearing employees do.
Not only can communication barriers between deaf and hearing people affect family relationships, work, and school, but they can also have a very significant effect on a deaf individual’s physical and mental health care. As a result of poor communication between the health care professional and the deaf or hard of hearing patient, many patients report that they are not properly informed about their disease and prognosis. This lack of or poor communication could also lead to other issues such as misdiagnosis, poor assessments, mistreatment, and even possibly harm to patients. Poor communication in this setting is often the result of health care providers having the misconception that all people who are deaf or hard of hearing have the same type of hearing loss, and require the same type of communication methods. In reality, there are many different types and range of hearing loss, and in order to communicate effectively a health care provider needs to understand that each individual with hearing loss has unique needs. This affects how individuals have been educated to communicate, as some communication methods work better depending on an individual’s severity of hearing loss. For example, assuming every deaf or hard of hearing patient knows American Sign Language would be incorrect because there are different types of sign language, each varying in signs and meanings. A patient could have been educated to use cued speech which is entirely different from ASL. Therefore, in order to communicate effectively, a health care provider needs to understand that each individual has unique needs when communicating.
Although there are specific laws and rules to govern communication between health care professionals and people who are deaf, they are not always followed due to the health care professional's insufficient knowledge of communication techniques. This lack of knowledge can lead them to make assumptions about communicating with someone who is deaf, which can in turn cause them to use an unsuitable form of communication. Laws such as the Americans with Disabilities Act (ADA) in the United States state that all health care providers are required to provide reasonable communication accommodations when caring for patients who are deaf. These accommodations could include qualified sign language interpreters, CDIs, and technology such as Internet interpretation services. A qualified sign language interpreter will enhance communication between a deaf individual and a health care professional by interpreting not only a health professional's verbal communication, but also their non-verbal communication, such as expressions, perceptions, and body language. A Certified Deaf Interpreter (CDI) is a sign language interpreter who is also a member of the Deaf community. They accompany a sign language interpreter and are useful for communication with deaf individuals who also have language or cognitive deficits. A CDI will transform what the health care professional communicates into basic, simple language. This method takes much longer; however, it can also be more effective than other techniques. Internet interpretation services are convenient and less costly, but can potentially pose significant risks. They involve the use of a sign language interpreter over a video device rather than directly in the room. This can often be an inaccurate form of communication because the interpreter may not be licensed, is often unfamiliar with the patient and their signs, and can lack knowledge of medical terminology.
Aside from utilizing interpreters, healthcare professionals can improve their communication with deaf or hard of hearing patients by educating themselves on common misconceptions and proper practices depending on the patient's needs. For example, a common misconception is that exaggerating words and speaking loudly will help the patient understand more clearly. However, many individuals with hearing loss depend on lip-reading to identify words. Exaggerated pronunciation and a raised voice can distort the lips, making it even more difficult to understand. Another common mistake health care professionals make is the use of single words rather than full sentences. Although language should be kept simple and short, keeping context is important because certain homophonous words are difficult to distinguish by lip-reading. Health care professionals can further improve their own communication with their patients by eliminating any background noise and positioning themselves so that their face is clearly visible to the patient and suitably lit. The healthcare professional should know how to use body language and facial expressions to properly communicate different feelings.
A 2005 study achieved successful regrowth of cochlear cells in guinea pigs. However, the regrowth of cochlear hair cells does not imply the restoration of hearing sensitivity, as the sensory cells may or may not make connections with neurons that carry the signals from hair cells to the brain. A 2008 study has shown that gene therapy targeting Atoh1 can cause hair cell growth and attract neuronal processes in embryonic mice. Some hope that a similar treatment will one day ameliorate hearing loss in humans.
Research reported in 2012 achieved growth of cochlear nerve cells in gerbils using stem cells, resulting in hearing improvements. Regrowth of hair cells in deaf adult mice using a drug intervention, resulting in hearing improvement, was reported in 2013. The Hearing Health Foundation in the US has embarked on a project called the Hearing Restoration Project. Action on Hearing Loss in the UK is also aiming to restore hearing.
Researchers reported in 2015 that genetically deaf mice which were treated with TMC1 gene therapy recovered some of their hearing. In 2017, additional studies were performed to treat Usher syndrome and here, a recombinant adeno-associated virus seemed to outperform the older vectors.
Besides research seeking to improve hearing, such as the studies listed above, research on deaf participants has also been carried out in order to understand more about audition. Pijil and Shwarz (2005) studied people who had lost their hearing later in life and therefore used cochlear implants to hear. They found further evidence for rate coding of pitch, a scheme in which the auditory system encodes frequency information in the rate at which neurons fire, especially for lower frequencies, which are encoded by neurons along the basilar membrane firing in synchrony with the stimulus. Their results showed that the subjects could identify different pitches that were proportional to the frequency stimulated by a single electrode, and the detection of lower frequencies in this way provided further evidence for rate coding.
Over 30% of childhood hearing loss is caused by diseases such as measles, mumps, rubella, meningitis and ear infections. These can be prevented through immunization and good hygiene practices. Another 17% of childhood hearing loss results from complications at birth, including prematurity, low birth weight, birth asphyxia and neonatal jaundice. Improved maternal and child health practices would help to prevent these complications. The use of ototoxic medicines in expectant mothers and newborns, which is responsible for 4% of childhood hearing loss, could potentially be avoided.
DNA and RNA are the two types of nucleic acids that act as hereditary material, transferring genetic information from one generation to the next. They store inherited information and can thus be considered repositories of genetic information.
Double-stranded DNA serves as the genetic carrier in virtually all cellular organisms, both prokaryotic and eukaryotic. Single-stranded RNA serves as the genetic material only in certain viruses (RNA viruses). Cells also contain a large proportion of RNA, but there it serves different purposes.
The DNA backbone is built from the 5-carbon sugar deoxyribose, whereas ribose sugar forms the backbone of RNA.
In DNA, guanine pairs with cytosine and adenine pairs with thymine. But in RNA, the thymine molecule is replaced by uracil.
In the following content, we will be studying more about the key differences between DNA and RNA along with a comparison chart, diagrams and a thorough description.
Content: Deoxyribonucleic acid (DNA) Vs Ribonucleic acid (RNA)
|Basis for Comparison|Deoxyribonucleic acid (DNA)|Ribonucleic acid (RNA)|
|---|---|---|
|Meaning|Double-stranded molecule made of long chains of nucleotides.|Single-stranded molecule made of shorter chains of nucleotides.|
|Nitrogenous bases|Adenine (A), Thymine (T), Cytosine (C), Guanine (G).|Adenine (A), Uracil (U), Cytosine (C), Guanine (G).|
|Base pairing|A-T (adenine-thymine), G-C (guanine-cytosine).|A-U (adenine-uracil), G-C (guanine-cytosine).|
|Helix form|Predominantly the B-form double helix, with long chains of nucleotides.|A form; single-stranded, with shorter chains of nucleotides.|
|Sensitivity to ultraviolet (UV) rays|Can be damaged by UV.|More resistant to UV damage.|
|Reactivity|Less reactive, owing to the C-H bonds of deoxyribose.|More reactive, owing to the 2'-OH (hydroxyl) group of ribose.|
|Replication|Self-replicating.|Synthesised from DNA.|
|Stability in alkaline conditions|Stable.|Unstable.|
|Types|No functional types.|Three main types: mRNA, tRNA, rRNA.|
|Function|Stores genetic information for the further development and organisation of cells.|Involved in coding, decoding, gene expression and protein synthesis.|
What is DNA?
DNA stands for deoxyribonucleic acid. It acts as the genetic material in virtually all cellular organisms, prokaryotic and eukaryotic alike. Because of its double-stranded structure it is highly stable. In eukaryotes it is enclosed in the nucleus, where it changes its conformation between loosely packed chromatin threads and condensed chromosomes according to the needs of the cell.
Apart from the nucleus, some other cell organelles, such as mitochondria and plastids, also contain DNA. This extranuclear DNA is inherited independently of the nuclear chromosomes and is therefore often called cytoplasmic or extrachromosomal DNA.
Monomer of DNA
When the sugar molecule combines with a nucleobase, it forms a nucleoside. The four deoxyribonucleosides are deoxyadenosine, deoxyguanosine, deoxycytidine and deoxythymidine.
A phosphate group attached to a nucleoside forms a nucleotide, and these nucleotides act as the monomeric units of the DNA molecule.
Composition of DNA
DNA molecule has three prominent components- Pentose sugar, a phosphate group and nitrogenous bases.
- Pentose Sugar: The DNA has a 5-carbon pentose sugar named deoxyribose sugar. These sugar molecules serve as the backbone of the DNA that supports its structure and orientation.
- Phosphate Group: A phosphate group consists of one phosphorus atom bonded to four oxygen atoms. When this group attaches to a carbon of the sugar, it is known as the phosphate group of the nucleotide.
In the DNA strand, a single phosphate links the 5' carbon of one sugar molecule to the 3' carbon of the next sugar molecule.
- Nitrogenous Bases: They are also referred to as nucleobases as they make up an essential element of the nucleic acids. They are of two major classes- Purines and pyrimidines.
The purines and pyrimidines are both planar, largely hydrophobic heterocyclic molecules. Pyrimidines have a single six-membered heterocyclic ring (related to pyridine), whereas purines consist of a six-membered pyrimidine ring fused to a five-membered imidazole ring.
Adenine and guanine are purines, while cytosine, thymine and uracil are pyrimidines. In the double strand, one purine binds to one pyrimidine through hydrogen bonds.
Structural Characteristics of DNA
- DNA is a double-stranded helical structure. The two strands of the molecule run in an antiparallel fashion to each other i.e. one from 3’ to 5’ and the other from 5’ to 3’.
- The strands are polydeoxynucleotide chains twisted around each other in a right-handed fashion about a common axis.
- Each turn of the helix, referred to as the pitch, spans about 10 base pairs over 34 Å, which means that successive base pairs are spaced about 3.4 Å apart.
- The width (diameter) of the DNA helix is about 20 Å.
- Both the strands have the hydrophilic backbone on the outer side, while the hydrophobic nucleobases are located on the inner side of the strand.
- The antiparallel strands are not identical but are complementary to one another because of base pairing.
- Adenine pairs with thymine through two hydrogen bonds, and guanine pairs with cytosine through three.
- The guanine-cytosine (G-C) pair is therefore stronger than the adenine-thymine (A-T) pair, because of its additional hydrogen bond.
- Double-stranded DNA obeys Chargaff’s rule: the amount of adenine equals the amount of thymine, and the amount of guanine equals the amount of cytosine (see the sketch after this list).
- The N9 nitrogen of a purine base (or the N1 of a pyrimidine) bonds to the 1' carbon of the sugar through an N-glycosidic bond.
- Two consecutive nucleotide units are joined by a 3'-5' phosphodiester bond.
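As a small, hypothetical illustration of the base-pairing and Chargaff points above (not part of the original article), the Python sketch below derives the complementary strand from the A-T and G-C pairing rules and confirms that, in the resulting duplex, adenine and thymine counts match, as do guanine and cytosine counts.

```python
# A minimal sketch illustrating complementary base pairing and Chargaff's rule
# for double-stranded DNA. The example sequence is made up.
from collections import Counter

PAIRING = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the complementary strand implied by the A-T and G-C pairing rules."""
    return "".join(PAIRING[base] for base in strand)

def check_chargaff(strand: str) -> bool:
    """In the duplex formed by `strand` and its complement, #A == #T and #G == #C."""
    counts = Counter(strand) + Counter(complement(strand))
    return counts["A"] == counts["T"] and counts["G"] == counts["C"]

if __name__ == "__main__":
    strand = "ATGCGGCTA"           # hypothetical example sequence
    print(complement(strand))      # TACGCCGAT
    print(check_chargaff(strand))  # True, by construction of the complement
```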
Types of DNA
The structure and orientation of DNA vary with the number of strands, the handedness of the helix, the twist angle, the spacing between successive base pairs, and so on. DNA can thus form single, double, triple or even quadruple helices, while still carrying out similar functions.
Among these, the double-stranded form is the most prominent in higher organisms. Its structure was explained by Watson and Crick, who described DNA as a twisted ladder with the characteristics mentioned above.
Double-stranded DNA occurs in several forms, including A, B, C, D, E and Z; among these, the A, B and Z forms are found most commonly.
Organization of DNA
Depending on the complexity of the organism, DNA can be either linear or circular. Prokaryotes generally possess circular DNA; for instance, bacteria such as Escherichia coli and Vibrio cholerae have circular chromosomes.
Double-stranded DNA is vastly longer than the diameter of the nucleus that contains it. For this reason, it is ultra-compactly arranged inside the nucleus in a supercoiled manner. For example, the DNA in a single human cell is roughly 2 meters long in total, yet it is supercoiled and compacted so effectively that it fits into a nucleus only about 10 um across.
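A back-of-the-envelope sketch (illustrative figures only, not from the article) shows how the roughly 2-metre length follows from the 3.4 Å base-pair spacing quoted earlier, assuming about 6.4 billion base pairs in a diploid human cell, and how extreme the compaction into a roughly 10 micrometre nucleus is.

```python
# Rough check of the "about 2 metres of DNA in a ~10 micrometre nucleus" claim.
# All figures are round, illustrative values.
BASE_PAIRS_DIPLOID = 6.4e9    # ~3.2 billion bp haploid genome x 2 (approximate)
RISE_PER_BP_M = 0.34e-9       # 3.4 Angstrom spacing between base pairs, in metres
NUCLEUS_DIAMETER_M = 10e-6    # ~10 micrometres

contour_length_m = BASE_PAIRS_DIPLOID * RISE_PER_BP_M
ratio = contour_length_m / NUCLEUS_DIAMETER_M

print(f"Total DNA contour length: ~{contour_length_m:.1f} m")  # ~2.2 m
print(f"Length / nucleus diameter: ~{ratio:,.0f}x")            # ~218,000x
```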
What is RNA?
RNA stands for ribonucleic acid. It is a mostly single-stranded and relatively unstable nucleic acid. RNA acts as the genetic material only in certain viruses (RNA viruses), in which it carries the hereditary information.
Cells also contain large amounts of RNA, but not as genetic material. This RNA occurs in the nucleolus and the ribosomes, and is also found free in the cytoplasm, where it facilitates other processes such as protein synthesis.
Composition of RNA
Similar to DNA, RNA is made up of three essential components:
- Pentose sugar: This sugar molecule is one of the major distinctive elements between DNA and RNA. Unlike DNA, RNA comprises ribose sugar instead of deoxyribose.
- Nitrogenous bases: RNA contains the same purines and pyrimidines, except that thymine is replaced by uracil.
- Phosphate group: The phosphate group is the same as in DNA.
Monomer of RNA
Ribonucleotides are the monomeric units of RNA; they are formed by the combination of ribose sugar, a nucleobase and a phosphate group.
Roughly 70 to 12,000 ribonucleotides are joined end to end in a linear chain through 3'-5' phosphodiester bonds.
Structural characteristics of RNA
- RNA is a single-stranded polynucleotide chain that frequently folds back on itself, creating helical loops.
- It consists of ribose sugar instead of deoxyribose in DNA.
- Thymine here is replaced by uracil.
- It does not follow Chargaff’s rule, since there is only a single strand and hence no fixed purine-pyrimidine pairing.
- They are histologically detected with the help of the orcinol colour test, which identifies the presence of ribose molecules.
- The RNA strand is highly susceptible to alkali, which hydrolyses it into 2',3'-cyclic phosphodiesters.
Types of RNA
The RNA is categorised as genetic RNA and non-genetic RNA.
A. Genetic RNA: Genetic RNA carries hereditary information that is transmitted from one generation to the next, as in RNA viruses. This genetic RNA can be single-stranded or double-stranded.
B. Non-genetic RNA: In cellular organisms, RNA supports several processes other than acting as a genetic carrier, and for this reason it is referred to as non-genetic RNA. It is synthesised from the DNA template but is not itself passed from generation to generation.
They are of three main types-
- Messenger RNA
- Messenger RNA (mRNA) is associated with the protein synthesis mechanism.
- In eukaryotes, the template DNA is transcribed into mRNA, which is later translated into protein (see the sketch after this list).
- They are present in the ribosomes and freely scattered in the cytoplasm.
- They are synthesised inside the nucleus as a complementary strand of template DNA.
- Ribosomal RNA
- They form about 80% of the total RNA content of a cell.
- rRNA forms a considerable part of the protein synthesising ribosomal unit.
- They play a crucial role in identifying conserved regions of tRNA and mRNA.
- They mainly aid the catalytic reaction of protein synthesis.
- Transfer RNA
- Also called tRNA.
- The smallest form of RNA, comprising only about 70-90 nucleotides.
- Since their primary function is to transfer amino acids during protein synthesis, they are referred to as transfer RNAs.
- Each of the 20 amino acids has specific tRNAs that bind it; these tRNA molecules deliver the amino acid to the newly forming polypeptide chain.
- tRNA also acts as the adapter that translates the genetic sequence of mRNA into the corresponding protein.
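As a hypothetical illustration of the transcription step mentioned under messenger RNA above (not part of the original article), the sketch below builds an mRNA sequence from a DNA template strand using the complementary pairing rules, with uracil taking the place of thymine.

```python
# Minimal illustrative sketch: transcribing a DNA template strand into mRNA.
# The pairing rules follow the text: T-A, C-G, G-C, and A pairs with U in RNA.
DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_strand: str) -> str:
    """Return the mRNA complementary to a DNA template strand."""
    return "".join(DNA_TO_RNA[base] for base in template_strand)

if __name__ == "__main__":
    template = "TACGGCATT"        # hypothetical template sequence
    print(transcribe(template))   # AUGCCGUAA
```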
Key Differences Between Deoxyribonucleic acid (DNA) and Ribonucleic acid (RNA)
- Deoxyribonucleic acid (DNA) is a double-stranded structure that carries genetic information from one generation to the next in cellular organisms. Ribonucleic acid (RNA), on the other hand, is a single-stranded molecule that acts as the genetic material only in certain viruses.
- The backbone of DNA contains deoxyribose sugar, and the molecule consists of long chains of nucleotides, while RNA contains ribose sugar and consists of shorter chains of nucleotides.
- Guanine (G) pairs with cytosine (C) in both nucleic acids, whereas adenine (A) pairs with thymine (T) in DNA and with uracil (U) in RNA.
- The function of DNA is to store the genetic information and pass it to other cells also, while RNA functions in coding, decoding and protein synthesis.
- The DNA is susceptible to UV radiation, whereas RNA is less susceptible to UV exposure.
From the above discussion, we can say that DNA and RNA are both essential: DNA stores the genetic material that must be transmitted for the body's further development and functioning, while RNA helps in the coding, decoding, regulation and expression of genes.
Understeer and oversteer
Understeer and oversteer are vehicle dynamics terms used to describe the sensitivity of a vehicle to steering. Oversteer is what occurs when a car turns (steers) by more than the amount commanded by the driver. Conversely, understeer is what occurs when a car steers less than the amount commanded by the driver.
Automotive engineers define understeer and oversteer based on changes in steering angle associated with changes in lateral acceleration over a sequence of steady-state circular turning tests. Car and motorsport enthusiasts often use the terminology more generally in magazines and blogs to describe vehicle response to steering in all kinds of maneuvers.
Vehicle dynamics terminology
Standard terminology used to describe understeer and oversteer are defined by the Society of Automotive Engineers (SAE) in document J670 and by the International Organization for Standardization (ISO) in document 8855. By these terms, understeer and oversteer are based on differences in steady-state conditions where the vehicle is following a constant-radius path at a constant speed with a constant steering wheel angle, on a flat and level surface.
Understeer and oversteer are defined by an understeer gradient (K) that is a measure of how the steering needed for a steady turn changes as a function of lateral acceleration. Steering at a steady speed is compared to the steering that would be needed to follow the same circular path at low speed. The low-speed steering for a given radius of turn is called Ackermann steer. The vehicle has a positive understeer gradient if the difference between required steer and the Ackermann steer increases with respect to incremental increases in lateral acceleration. The vehicle has a negative gradient if the difference in steer decreases with respect to incremental increases in lateral acceleration.
Understeer and oversteer are formally defined using the gradient “K”. If K is positive, the vehicle shows understeer; if K is negative, the vehicle shows oversteer; if K is zero, the vehicle is neutral.
Several tests can be used to determine understeer gradient: constant radius (repeat tests at different speeds), constant speed (repeat tests with different steering angles), or constant steer (repeat tests at different speeds). Formal descriptions of these three kinds of testing are provided by ISO. Gillespie goes into some detail on two of the measurement methods.
Results depend on the type of test, so simply giving a deg/g value is not sufficient; it is also necessary to indicate the type of procedure used to measure the gradient.
Vehicles are inherently nonlinear systems, and it is normal for K to vary over the range of testing. It is possible for a vehicle to show understeer in some conditions and oversteer in others. Therefore, it is necessary to specify the speed and lateral acceleration whenever reporting understeer/oversteer characteristics.
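As a rough, hypothetical illustration of the constant-radius procedure described above (simplified to the linear range and not drawn from the cited standards), the understeer gradient K can be estimated as the slope of the difference between measured steer and Ackermann steer plotted against lateral acceleration:

```python
# Illustrative estimate of understeer gradient K (deg/g) from a constant-radius test.
# All data are made up; a real test follows the ISO 4138 procedures cited below
# and uses measured road-wheel steer angles.
import numpy as np

G = 9.81           # m/s^2
WHEELBASE = 2.7    # m (assumed)
RADIUS = 40.0      # m, constant turn radius (assumed)

# Steady-state speeds (m/s) and road-wheel steer angles (deg), hypothetical values.
speeds = np.array([5.0, 10.0, 15.0, 20.0])
steer_deg = np.array([3.95, 4.25, 4.80, 5.55])

ackermann_deg = np.degrees(WHEELBASE / RADIUS)   # low-speed steer for this radius
lat_accel_g = speeds**2 / RADIUS / G             # lateral acceleration in g

# K is the slope of (steer - Ackermann steer) versus lateral acceleration.
K, intercept = np.polyfit(lat_accel_g, steer_deg - ackermann_deg, 1)
print(f"Understeer gradient K = {K:.2f} deg/g (positive: understeer, negative: oversteer)")
```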
Contributions to understeer gradient
Many properties of the vehicle affect the understeer gradient, including tire cornering stiffness, camber thrust, lateral force compliance steer, self aligning torque, lateral weight transfer, and compliance in the steering system. Weight distribution affects the normal force on each tire and therefore its grip. These individual contributions can be identified analytically or by measurement in a Bundorf analysis.
Simple understanding of real-world handling characteristics
While much of this article is focused on the empirical measurement of understeer gradient, this section will be focused on on-road performance.
Understeer can typically be understood as a condition where, while cornering, the front tires begin to slip first. Since the front tires are slipping and the rear tires have grip, the vehicle will turn less than if all tires had grip. Since the amount of turning is less than it would be if all tires had traction, this is known as under-steering.
The opposite is true if the rear tires break traction first. The front tires will continue to accelerate the front of the vehicle laterally, tracing a circle. The rear tires will have a tendency to continue along the tangent of that circle, but cannot because of their attachment to the front of the car, which still has traction. The result is that the rear tires will swing outwards relative to the front of the vehicle. This turns the vehicle towards the inside of the curve. If the steering angle is not changed (i.e. the steering wheel stays in the same position), then the front wheels will trace out a smaller and smaller circle while the rear wheels continue to swing around the front of the car. This is what is happening when a car 'spins out'. A car susceptible to oversteer is sometimes known as 'tail happy', as in the way a dog wags its tail when happy, and a common problem in negative-k vehicles is fishtailing.
A car is called 'neutral' when the front and rear tires will lose traction at the same time. This is desirable because while the vehicle may slide towards the outside of the turn, it maintains the effective steering angle set by the driver. This makes it 'safer' to drive near the limit condition of traction because the outcome of breaking traction is more predictable.
In real-world driving (where both the speed and turn radius may be constantly changing) several extra factors affect the distribution of traction, and therefore the tendency to oversteer or understeer. These can primarily be split up into things that affect weight distribution to the tires and extra frictional loads put on each tire.
The weight distribution of a vehicle at standstill will affect handling. If the center of gravity is moved closer to the front axle, the vehicle tends to understeer due to tire load sensitivity. When the center of gravity is toward the back of the vehicle, the rear axle tends to swing out, which is oversteer. Weight transfer is proportional to the magnitude of acceleration and to the height of the center of gravity, and it shifts load toward the end of the vehicle opposite the direction of acceleration. When braking, weight is transferred to the front and the rear tires have less traction. When accelerating, weight transfers to the rear and front tire traction decreases. In extreme cases, the front tires may completely lift off the ground, meaning no steering input can be transferred to the ground at all.
Tires must transmit the forces of acceleration and braking to the ground in addition to the lateral forces of turning. These force vectors add, and if the resultant exceeds the tire's maximum static frictional force in any direction, the tire will slip. A rear-wheel-drive vehicle with enough power can initiate oversteer at any time by sending enough engine power to the rear wheels that they start spinning; once traction is broken, they are relatively free to swing laterally. Under braking, more work is typically done by the front brakes; if this forward bias is too great, the front tires may lose traction, causing understeer.
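A simplified sketch of the longitudinal load transfer described above, using the common steady-state approximation (transfer = mass x deceleration x CG height / wheelbase) with made-up vehicle values; it ignores suspension motion and other dynamic effects.

```python
# Steady-state longitudinal load transfer under braking (illustrative values only).
MASS_KG = 1500.0      # vehicle mass (assumed)
CG_HEIGHT_M = 0.55    # height of the center of gravity (assumed)
WHEELBASE_M = 2.7     # wheelbase (assumed)

def load_transferred_to_front(decel_ms2: float) -> float:
    """Load (N) shifted from the rear axle to the front axle during braking."""
    return MASS_KG * decel_ms2 * CG_HEIGHT_M / WHEELBASE_M

# Braking at 8 m/s^2 moves roughly 2400 N of load onto the front axle.
print(f"{load_transferred_to_front(8.0):.0f} N")   # ~2444 N
```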
While weight distribution and suspension geometry have the greatest effect on measured understeer gradient in a steady-state test, power distribution, brake bias, and front-rear weight transfer will also affect which wheels lose traction first in many real-world scenarios.
When an understeer vehicle is taken to the grip limit of the tires, where it is no longer possible to increase lateral acceleration, the vehicle will follow a path with a radius larger than intended. Although the vehicle cannot increase lateral acceleration, it is dynamically stable.
When an oversteer vehicle is taken to the grip limit of the tires, it becomes dynamically unstable with a tendency to spinout. Although the vehicle is unstable in open-loop control, a skilled driver can maintain control past the point of instability with countersteering, and/or correct use of the throttle or even brakes; this can be referred to as drifting.
Understeer gradient is one of the main measures for characterizing steady-state cornering behavior. It is involved in other properties such as characteristic speed (the speed for an understeer vehicle where the steer angle needed to negotiate a turn is twice the Ackermann angle), lateral acceleration gain (g's/deg), yaw velocity gain (1/s), and critical speed (the speed where an oversteer vehicle has infinite lateral acceleration gain).
- SAE International Surface Vehicle Recommended Practice, "Vehicle Dynamics Terminology", SAE Standard J670, Rev. 2008-01-24
- International Organization for Standardization, "Road vehicles – Vehicle dynamics and road-holding ability – Vocabulary", ISO Standard 8855, Rev. 2010
- International Organization for Standardization, "Passenger cars – Steady-state circular driving behaviour – Open-loop test methods", ISO Standard 4138
- T. D. Gillespie, "Fundamentals of Vehicle Dynamics", Society of Automotive Engineers, Inc., Warrendale, PA, 1992, pp. 226–230
Conversation Teacher Resources
Find Conversation educational ideas and activities
Conduct a written literary discussion and diminish stress about public writing. Class members, already arranged into literature circles, compose and post responses to novels, signing with initials or class number. The process continues until each member of each literature circle has had a chance to contribute to the conversation several times. The anonymity of the conversation gives every learner the chance to participate and interact with literature in a low-stress environment!
Elevate young scientists' skills with unit conversion using the stair-step method. Detailed instructions and a neat stair-step diagram are on the first page. Four pages of practice problems follow, mostly with real-world applications. All science students need to be able to convert units in the metric system, and this is an outstanding tool for teaching or practice!
An example of unit conversion using drug dosage is the focus of this video. Sal, the all-knowing pre-algebra guide, takes a problem from a nursing class to show the process of unit conversion within an algebraic expression. This helpful video contains examples and hints that are well-paced for all learners.
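For readers who want to see the same kind of dimensional analysis in code, here is a minimal sketch with made-up numbers (not taken from the video): a weight-based dose is converted to a volume of solution by cancelling units step by step.

```python
# Illustrative drug-dosage unit conversion by dimensional analysis (made-up values).
weight_kg = 70.0                  # patient weight
dose_mg_per_kg = 0.5              # prescribed dose per kilogram of body weight
concentration_mg_per_ml = 25.0    # concentration of the available solution

dose_mg = weight_kg * dose_mg_per_kg           # kg x (mg/kg) -> mg
volume_ml = dose_mg / concentration_mg_per_ml  # mg / (mg/mL) -> mL

print(f"Administer {dose_mg:.0f} mg, i.e. {volume_ml:.1f} mL of solution")
# Administer 35 mg, i.e. 1.4 mL of solution
```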
In this writing a conversation worksheet, students use the conversation map to identify the greeting, starting comment or question, body, short explanation, and farewell and then use them to create a written conversation. Students write 21 answers.
In this unit conversions worksheet, students calculate and convert measurements from one unit to another. They complete 109 short answer and problem solving questions involving unit conversions.
In this electrical worksheet, students draw a schematic design and build a circuit board to grasp the understanding of power conversion circuits before answering a series of 30 open-ended questions including analyzing schematics. This worksheet is printable and there are on-line answers to the questions.
Students participate in a conversation about money. In this money conversion lesson students discuss money in a variety of related topics. They respond to a conversation between a bank manager and a poor parent as well as decide what should be done with amounts of money.
Is bigger really better? By the end of this lesson, learners will be able to apply formulas for computing the diameter of tires and wheel assemblies. Begin by showing a slide presentation that will review definitions for radius and diameter, unit conversions, and decimal/percent conversions. A real-world sample problem is worked out step-by-step using the formula for diameter. Two worksheets and their answer keys are provided.
Learners investigate unit conversion. In this unit conversion lesson, students will build models of square and cubic centimeters using grid paper and generate formula tables for converting units of area and volume. The tables will be used to solve real-life problems.
Young scholars explore the concept of measurement as it relates to equivalencies. They complete simple conversions using visual models of measurement units, and record their answers in a two-column table.
Students, in groups, measure and record each other's height. They identify measurement conversion methods and use online resources to convert measured heights to multiple systems.
Conduct a classroom conversation about communication using this resource as a jumping-off point. For this The Learning Network activity, learners read an excerpt from The New York Times opinion piece, "The Flight From Conversation," and respond to a series of questions about conversation in modern times. Their response can be published on paper or directly onto the web page.
For this conversion factors worksheet, students read word problems, convert them to equations, and compute the answer. They determine the distance between two points and find the cost to travel a predetermined distance. Students determine the unit price of items, and compute volume problems. This five-page worksheet contains 15 problems. Examples and detailed notes are provided.
In this SI units and conversion worksheet, students are given five problems to solve. They convert from one SI unit to another.
In this conversions worksheet, students use metric unit conversions to solve fifteen word problems that involve converting from one unit to another.
In this unit conversion and radiation worksheet, students are given a chart with basic unit conversions for calculating radiation dosages. Students use the conversion factors to solve 7 problems.
Fifth graders determine how to convert fractions, decimals, and percents. In this conversion instructional activity, 5th graders use an on-line applet to practice making these conversions. They review how to make the conversions in a whole class instructional activity before accessing the "Fraction Four" applet. They work with a partner to play the web site game.
Do you have some shy kids in your classroom? This worksheet provides them with five tips on starting a conversation. After reading each tip, they are asked to write about a personal experience with someone. This conversation worksheet will be useful for kids both in and outside of school. If you are doing a unit on interviews, this is a great resource to use as a supplement.
For this unit conversion worksheet, learners are given 3 stories about situations where errors in the conversion of units caused dramatic problems in science. For each situation, students practice converting units to solve the errors.
What is a standard measurement? That's what learners explore here as they use an ordinary item to measure other items. They then determine a conversion factor between their chosen measuring device and a standard unit of measure, and complete division or multiplication to make the conversions. Practice sheet and homework are included.
The Inca Empire, also known as Incan Empire and the Inka Empire, and at the time known as the Realm of the Four Parts,[a] was the largest empire in pre-Columbian America. The administrative, political and military center of the empire was in the city of Cusco. The Inca civilization arose from the Peruvian highlands sometime in the early 13th century. The Spanish began the conquest of the Inca Empire in 1532 and its last stronghold was conquered in 1572.
Realm of the Four Parts
- Banner: reconstructions of the banner of the Sapa Inca
- Common languages: Aymara, Puquina, Jaqi family, Muchik and scores of smaller languages
- Government: divine, absolute monarchy (Sapa Inca, e.g. Túpac Inca Yupanqui)
- Historical era: Pre-Columbian
- Key dates: Pachacuti created the Tawantinsuyu; end of the last Inca resistance (1572)
- Area (1527): 2,000,000 km2 (770,000 sq mi)
From 1438 to 1533, the Incas incorporated a large portion of western South America, centered on the Andean Mountains, using conquest and peaceful assimilation, among other methods. At its largest, the empire joined Peru, western Ecuador, western and south central Bolivia, northwest Argentina, a large portion of what is today Chile, and the southwesternmost tip of Colombia into a state comparable to the historical empires of Eurasia. Its official language was Quechua. The Inca Empire was unique in that it lacked many of the features associated with civilization in the Old World. Anthropologist Gordon McEwan wrote that the Incas were able to construct "one of the greatest imperial states in human history" without the use of the wheel, draft animals, knowledge of iron or steel, or even a system of writing. Notable features of the Inca Empire included its monumental architecture, especially stonework, extensive road network reaching all corners of the empire, finely-woven textiles, use of knotted strings (quipu) for record keeping and communication, agricultural innovations and production in a difficult environment, and the organization and management fostered or imposed on its people and their labor.
The Inca Empire functioned largely without money and without markets. Instead, exchange of goods and services was based on reciprocity between individuals and among individuals, groups, and Inca rulers. "Taxes" consisted of a labour obligation of a person to the Empire. The Inca rulers (who theoretically owned all the means of production) reciprocated by granting access to land and goods and providing food and drink in celebratory feasts for their subjects. Many local forms of worship persisted in the empire, most of them concerning local sacred Huacas, but the Inca leadership encouraged the sun worship of Inti – their sun god – and imposed its sovereignty above other cults such as that of Pachamama. The Incas considered their king, the Sapa Inca, to be the "son of the sun."
The Incan economy has been described in contradictory ways by scholars; Darrell E. La Lone, in his work The Inca as a Nonmarket Economy, noted that the Inca economy has been described as "feudal, slave, [and] socialist", and added "here one may choose between socialist paradise or socialist tyranny."
The Inca referred to their empire as Tawantinsuyu, "the four suyu". In Quechua, tawa is four and -ntin is a suffix naming a group, so that a tawantin is a quartet, a group of four things taken together, in this case the four suyu ("regions" or "provinces") whose corners met at the capital. The four suyu were: Chinchaysuyu (north), Antisuyu (east; the Amazon jungle), Qullasuyu (south) and Kuntisuyu (west). The name Tawantinsuyu was, therefore, a descriptive term indicating a union of provinces. The Spanish transliterated the name as Tahuatinsuyo or Tahuatinsuyu.
The term Inka means "ruler" or "lord" in Quechua and was used to refer to the ruling class or the ruling family. The Incas were a very small percentage of the total population of the empire, probably numbering only 15,000 to 40,000, but ruling a population of around 10 million people. The Spanish adopted the term (transliterated as Inca in Spanish) as an ethnic term referring to all subjects of the empire rather than simply the ruling class. As such, the name Imperio inca ("Inca Empire") referred to the nation that they encountered and subsequently conquered.
The Inca Empire was the last chapter of thousands of years of Andean civilizations. The Andean civilization is one of five civilizations in the world deemed by scholars to be "pristine", that is indigenous and not derivative from other civilizations.
The Inca Empire was preceded by two large-scale empires in the Andes: the Tiwanaku (c. 300–1100 AD), based around Lake Titicaca and the Wari or Huari (c. 600–1100 AD) centered near the city of Ayacucho. The Wari occupied the Cuzco area for about 400 years. Thus, many of the characteristics of the Inca Empire derived from earlier multi-ethnic and expansive Andean cultures. To those earlier civilizations may be owed some of the accomplishments cited for the Inca Empire: "thousands of miles of roads and dozens of large administrative centers with elaborate stone construction...terraced mountainsides and filled in valleys," and the production of "vast quantities of goods."
Carl Troll has argued that the development of the Inca state in the central Andes was aided by conditions that allow for the elaboration of the staple food chuño. Chuño, which can be stored for long periods, is made of potato dried at the freezing temperatures that are common at nighttime in the southern Peruvian highlands. Such a link between the Inca state and chuño may be questioned, as other crops such as maize can also be dried with only sunlight. Troll also argued that llamas, the Inca's pack animal, can be found in its largest numbers in this very same region. The maximum extent of the Inca Empire roughly coincided with the distribution of llamas and alpacas, the only large domesticated animals in Pre-Hispanic America. As a third point Troll pointed out irrigation technology as advantageous to the Inca state-building. While Troll theorized environmental influences on the Inca Empire, he opposed environmental determinism, arguing that culture lay at the core of the Inca civilization.
The Inca people were a pastoral tribe in the Cusco area around the 12th century. Peruvian oral history tells an origin story of three caves. The center cave at Tampu T'uqu (Tambo Tocco) was named Qhapaq T'uqu ("principal niche", also spelled Capac Tocco). The other caves were Maras T'uqu (Maras Tocco) and Sutiq T'uqu (Sutic Tocco). Four brothers and four sisters stepped out of the middle cave. They were: Ayar Manco, Ayar Cachi, Ayar Awqa (Ayar Auca) and Ayar Uchu; and Mama Ocllo, Mama Raua, Mama Huaco and Mama Qura (Mama Cora). Out of the side caves came the people who were to be the ancestors of all the Inca clans.
Ayar Manco carried a magic staff made of the finest gold. Where this staff landed, the people would live. They traveled for a long time. On the way, Ayar Cachi boasted about his strength and power. His siblings tricked him into returning to the cave to get a sacred llama. When he went into the cave, they trapped him inside to get rid of him.
Ayar Uchu decided to stay on the top of the cave to look over the Inca people. The minute he proclaimed that, he turned to stone. They built a shrine around the stone and it became a sacred object. Ayar Auca grew tired of all this and decided to travel alone. Only Ayar Manco and his four sisters remained.
Finally, they reached Cusco. The staff sank into the ground. Before they arrived, Mama Ocllo had already borne Ayar Manco a child, Sinchi Roca. The people who were already living in Cusco fought hard to keep their land, but Mama Huaca was a good fighter. When the enemy attacked, she threw her bolas (several stones tied together that spun through the air when thrown) at a soldier (gualla) and killed him instantly. The other people became afraid and ran away.
After that, Ayar Manco became known as Manco Cápac, the founder of the Inca. It is said that he and his sisters built the first Inca homes in the valley with their own hands. When the time came, Manco Cápac turned to stone like his brothers before him. His son, Sinchi Roca, became the second emperor of the Inca.
Kingdom of Cusco
Under the leadership of Manco Cápac, the Inca formed the small city-state Kingdom of Cusco (Quechua Qusqu', Qosqo). In 1438, they began a far-reaching expansion under the command of Sapa Inca (paramount leader) Pachacuti-Cusi Yupanqui, whose name meant "earth-shaker." The name of Pachacuti was given to him after he conquered the Tribe of Chancas (modern Apurímac). During his reign, he and his son Tupac Yupanqui brought much of the modern-day territory of Peru under Inca control.
Reorganization and formation
Pachacuti reorganized the kingdom of Cusco into the Tahuantinsuyu, which consisted of a central government with the Inca at its head and four provincial governments with strong leaders: Chinchasuyu (NW), Antisuyu (NE), Kuntisuyu (SW) and Qullasuyu (SE). Pachacuti is thought to have built Machu Picchu, either as a family home or summer retreat, although it may have been an agricultural station.
Pachacuti sent spies to regions he wanted in his empire and they brought to him reports on political organization, military strength and wealth. He then sent messages to their leaders extolling the benefits of joining his empire, offering them presents of luxury goods such as high quality textiles and promising that they would be materially richer as his subjects.
Most accepted the rule of the Inca as a fait accompli and acquiesced peacefully. Refusal to accept Inca rule resulted in military conquest. Following conquest the local rulers were executed. The ruler's children were brought to Cusco to learn about Inca administration systems, then return to rule their native lands. This allowed the Inca to indoctrinate them into the Inca nobility and, with luck, marry their daughters into families at various corners of the empire.
Expansion and consolidation
Traditionally the son of the Inca ruler led the army. Pachacuti's son Túpac Inca Yupanqui began conquests to the north in 1463 and continued them as Inca ruler after Pachacuti's death in 1471. Túpac Inca's most important conquest was the Kingdom of Chimor, the Inca's only serious rival for the Peruvian coast. Túpac Inca's empire then stretched north into modern-day Ecuador and Colombia.
Túpac Inca's son Huayna Cápac added a small portion of land to the north in modern-day Ecuador. At its height, the Inca Empire included Peru, western and south central Bolivia, southwest Ecuador and a large portion of what is today Chile, north of the Maule River. Traditional historiography claims the advance south halted after the Battle of the Maule where they met determined resistance from the Mapuche. This view is challenged by historian Osvaldo Silva who argues instead that it was the social and political framework of the Mapuche that posed the main difficulty in imposing imperial rule. Silva does accept that the battle of the Maule was a stalemate, but argues the Incas lacked incentives for conquest they had had when fighting more complex societies such as the Chimú Empire. Silva also disputes the date given by traditional historiography for the battle: the late 15th century during the reign of Topa Inca Yupanqui (1471–93). Instead, he places it in 1532 during the Inca Civil War. Nevertheless, Silva agrees on the claim that the bulk of the Incan conquests were made during the late 15th century. At the time of the Incan Civil War an Inca army was, according to Diego de Rosales, subduing a revolt among the Diaguitas of Copiapó and Coquimbo.
The empire's push into the Amazon Basin near the Chinchipe River was stopped by the Shuar in 1527. The empire extended into corners of Argentina and Colombia. However, most of the southern portion of the Inca empire, the portion denominated as Qullasuyu, was located in the Altiplano.
The Inca Empire was an amalgamation of languages, cultures and peoples. The components of the empire were not all uniformly loyal, nor were the local cultures all fully integrated. The Inca empire as a whole had an economy based on exchange and taxation of luxury goods and labour. The following quote describes a method of taxation:
For as is well known to all, not a single village of the highlands or the plains failed to pay the tribute levied on it by those who were in charge of these matters. There were even provinces where, when the natives alleged that they were unable to pay their tribute, the Inca ordered that each inhabitant should be obliged to turn in every four months a large quill full of live lice, which was the Inca's way of teaching and accustoming them to pay tribute.
Inca Civil War and Spanish conquest
Spanish conquistadors led by Francisco Pizarro and his brothers explored south from what is today Panama, reaching Inca territory by 1526. It was clear that they had reached a wealthy land with prospects of great treasure, and after another expedition in 1529 Pizarro traveled to Spain and received royal approval to conquer the region and be its viceroy. This approval was received as detailed in the following quote: "In July 1529 the Queen of Spain signed a charter allowing Pizarro to conquer the Incas. Pizarro was named governor and captain of all conquests in Peru, or New Castile, as the Spanish now called the land."
When the conquistadors returned to Peru in 1532, a war of succession between the sons of Sapa Inca Huayna Capac, Huáscar and Atahualpa, and unrest among newly conquered territories weakened the empire. Perhaps more importantly, smallpox, influenza, typhus and measles had spread from Central America. The first epidemic of European disease in the Inca Empire was probably in the 1520s, killing Huayna Capac, his designated heir, and an unknown, probably large, number of other Incan subjects.
The forces led by Pizarro consisted of 168 men, one cannon, and 27 horses. The conquistadors carried lances, arquebuses, steel armor and long swords. In contrast, the Inca used weapons made of wood, stone, copper and bronze and wore armor made of alpaca fiber, putting them at a significant technological disadvantage; none of their weapons could pierce the Spanish steel armor. In addition, due to the absence of horses in Peru, the Inca had not developed tactics to fight cavalry. However, the Inca were still effective warriors, able to successfully fight the Mapuche, who would later strategically defeat the Spanish as they expanded further south.
The first engagement between the Inca and the Spanish was the Battle of Puná, near present-day Guayaquil, Ecuador, on the Pacific Coast; Pizarro then founded the city of Piura in July 1532. Hernando de Soto was sent inland to explore the interior and returned with an invitation to meet the Inca, Atahualpa, who had defeated his brother in the civil war and was resting at Cajamarca with his army of 80,000 troops, who were at the time armed only with hunting tools (knives and lassos for hunting llamas).
Pizarro and some of his men, most notably a friar named Vincente de Valverde, met with the Inca, who had brought only a small retinue. The Inca offered them ceremonial chicha in a golden cup, which the Spanish rejected. The Spanish interpreter, Friar Vincente, read the "Requerimiento" that demanded that he and his empire accept the rule of King Charles I of Spain and convert to Christianity. Atahualpa dismissed the message and asked them to leave. After this, the Spanish began their attack against the mostly unarmed Inca, captured Atahualpa as hostage, and forced the Inca to collaborate.
Atahualpa offered the Spaniards enough gold to fill the room he was imprisoned in and twice that amount of silver. The Inca fulfilled this ransom, but Pizarro deceived them, refusing to release the Inca afterwards. During Atahualpa's imprisonment Huáscar was assassinated elsewhere. The Spaniards maintained that this was at Atahualpa's orders; this was used as one of the charges against Atahualpa when the Spaniards finally executed him, in August 1533.
Although "defeat" often implies an unwanted loss in battle, many of the diverse ethnic groups ruled by the Inca "welcomed the Spanish invaders as liberators and willingly settled down with them to share rule of Andean farmers and miners." Many regional leaders, called Kurakas, continued to serve the Spanish overlords, called encomenderos, as they had served the Inca overlords. Other than efforts to spread the religion of Christianity, the Spanish benefited from and made little effort to change the society and culture of the former Inca Empire until the rule of Francisco de Toledo as viceroy from 1569 to 1581.
The Spanish installed Atahualpa's brother Manco Inca Yupanqui in power; for some time Manco cooperated with the Spanish while they fought to put down resistance in the north. Meanwhile, an associate of Pizarro, Diego de Almagro, attempted to claim Cusco. Manco tried to use this intra-Spanish feud to his advantage, recapturing Cusco in 1536, but the Spanish retook the city afterwards. Manco Inca then retreated to the mountains of Vilcabamba and established the small Neo-Inca State, where he and his successors ruled for another 36 years, sometimes raiding the Spanish or inciting revolts against them. In 1572 the last Inca stronghold was conquered and the last ruler, Túpac Amaru, Manco's son, was captured and executed. This ended resistance to the Spanish conquest under the political authority of the Inca state.
After the fall of the Inca Empire many aspects of Inca culture were systematically destroyed, including their sophisticated farming system, known as the vertical archipelago model of agriculture. Spanish colonial officials used the Inca mita corvée labor system for colonial aims, sometimes brutally. One member of each family was forced to work in the gold and silver mines, the foremost of which was the titanic silver mine at Potosí. When a family member died, which would usually happen within a year or two, the family was required to send a replacement.
The effects of smallpox on the Inca empire were even more devastating. Beginning in Colombia, smallpox spread rapidly before the Spanish invaders first arrived in the empire. The spread was probably aided by the efficient Inca road system. Smallpox was only the first epidemic. Other diseases, including a probable Typhus outbreak in 1546, influenza and smallpox together in 1558, smallpox again in 1589, diphtheria in 1614, and measles in 1618, all ravaged the Inca people.
The number of people inhabiting Tawantinsuyu at its peak is uncertain, with estimates ranging from 4–37 million. Most population estimates are in the range of 6 to 14 million. In spite of the fact that the Inca kept excellent census records using their quipus, knowledge of how to read them was lost as almost all fell into disuse and disintegrated over time or were destroyed by the Spaniards.
The empire was extremely linguistically diverse. Some of the most important languages were Quechua, Aymara, Puquina and Mochica, respectively mainly spoken in the Central Andes, the Altiplano or (Qullasuyu), the south Peruvian coast (Kuntisuyu), and the area of the north Peruvian coast (Chinchaysuyu) around Chan Chan, today Trujillo. Other languages included Quignam, Jaqaru, Leco, Uru-Chipaya languages, Kunza, Humahuaca, Cacán, Mapudungun, Culle, Chachapoya, Catacao languages, Manta, and Barbacoan languages, as well as numerous Amazonian languages on the frontier regions. The exact linguistic topography of the pre-Columbian and early colonial Andes remains incompletely understood, owing to the extinction of several languages and the loss of historical records.
In order to manage this diversity, the Inca lords promoted the usage of Quechua, especially the variety of what is now Lima as the Qhapaq Runasimi ("great language of the people"), or the official language/lingua franca. Defined by mutual intelligibility, Quechua is actually a family of languages rather than one single language, parallel to the Romance or Slavic languages in Europe. Most communities within the empire, even those resistant to Inca rule, learned to speak a variety of Quechua (forming new regional varieties with distinct phonetics) in order to communicate with the Inca lords and mitma colonists, as well as the wider integrating society, but largely retained their native languages as well. The Incas also had their own ethnic language, referred to as Qhapaq simi ("royal language"), which is thought to have been closely related to or a dialect of Puquina. The split between Qhapaq simi and Qhapaq Runasimi exemplifies the larger split between hatun and hunin (high and low) society in general.
There are several common misconceptions about the history of Quechua, as it is frequently identified as the "Inca language". Quechua did not originate with the Incas, had been a lingua franca in multiple areas before the Inca expansions, was diverse before the rise of the Incas, and it was not the native or original language of the Incas. However, the Incas left an impressive linguistic legacy, in that they introduced Quechua to many areas where it is still widely spoken today, including Ecuador, southern Bolivia, southern Colombia, and parts of the Amazon basin. The Spanish conquerors continued the official usage of Quechua during the early colonial period, and transformed it into a literary language.
The Incas were not known to develop a written form of language; however, they visually recorded narratives through paintings on vases and cups (qirus). These paintings are usually accompanied by geometric patterns known as toqapu, which are also found in textiles. Researchers have speculated that toqapu patterns could have served as a form of written communication (e.g.: heraldry, or glyphs), however this remains unclear. The Incas also kept records by using quipus.
Age and defining gender
The high infant mortality rates that plagued the Inca Empire caused all newborn infants to be given the term 'wawa' when they were born. Most families did not invest very much into their child until they reached the age of two or three years old. Once the child reached the age of three, a "coming of age" ceremony occurred, called the rutuchikuy. For the Incas, this ceremony indicated that the child had entered the stage of "ignorance". During this ceremony, the family would invite all relatives to their house for food and dance, and then each member of the family would receive a lock of hair from the child. After each family member had received a lock, the father would shave the child's head. This stage of life was categorized by a stage of "ignorance, inexperience, and lack of reason, a condition that the child would overcome with time." For Incan society, in order to advance from the stage of ignorance to development the child must learn the roles associated with their gender.
The next important ritual was to celebrate the maturity of a child. Unlike the coming of age ceremony, the celebration of maturity signified the child's sexual potency. This celebration of puberty was called warachikuy for boys and qikuchikuy for girls. The warachikuy ceremony included dancing, fasting, tasks to display strength, and family ceremonies. The boy would also be given new clothes and taught how to act as an unmarried man. The qikuchikuy signified the onset of menstruation, upon which the girl would go into the forest alone and return only once the bleeding had ended. In the forest she would fast, and, once returned, the girl would be given a new name, adult clothing, and advice. This "folly" stage of life was the time young adults were allowed to have sex without being a parent.
Between the ages of 20 and 30, people were considered young adults, "ripe for serious thought and labor." Young adults were able to retain their youthful status by living at home and assisting in their home community. Young adults only reached full maturity and independence once they had married.
At the end of life, the terms for men and women denote loss of sexual vitality and humanity. Specifically, the "decrepitude" stage signifies the loss of mental well-being and further physical decline.
Table 7.1 from R. Alan Covey's article
|Age|Social Value of Life Stage|Female Term|Male Term|
|---|---|---|---|
|3–7|Ignorance (not speaking)|Warma|Warma|
|7–14|Development|Thaski (or P'asña)|Maqt'a|
|14–20|Folly (sexually active)|Sipas (unmarried)|Wayna (unmarried)|
|20+|Maturity (body and mind)|Warmi|Qhari|
In the Incan Empire, the age of marriage differed for men and women: men typically married at the age of 20, while women usually got married about four years earlier at the age of 16. Men who were highly ranked in society could have multiple wives, but those lower in the ranks could only take a single wife. Marriages were typically within classes and resembled a more business-like agreement. Once married, the women were expected to cook, collect food and watch over the children and livestock. Girls and mothers would also work around the house to keep it orderly to please the public inspectors. These duties remained the same even after wives became pregnant and with the added responsibility of praying and making offerings to Kanopa, who was the god of pregnancy. It was typical for marriages to begin on a trial basis with both men and women having a say in the longevity of the marriage. If the man felt that it wouldn't work out or if the woman wanted to return to her parents' home the marriage would end. Once the marriage was final, the only way the two could be divorced was if they did not have a child together. Marriage within the Empire was crucial for survival. A family was considered disadvantaged if there was not a married couple at the center because everyday life centered around the balance of male and female tasks.
According to some historians, such as Terence N. D'Altroy, male and female roles were considered equal in Inca society. These "indigenous cultures saw the two genders as complementary parts of a whole." In other words, there was not a hierarchical structure in the domestic sphere for the Incas. Within the domestic sphere, women came to be known as weavers, although there is significant evidence to suggest that this gender role did not appear until colonizing Spaniards realized women's productive talents in this sphere and used it to their economic advantage. There is evidence to suggest that both men and women contributed equally to the weaving tasks in pre-Hispanic Andean culture. Women's everyday tasks included: spinning, watching the children, weaving cloth, cooking, brewing chicha, preparing fields for cultivation, planting seeds, bearing children, harvesting, weeding, hoeing, herding, and carrying water. Men, on the other hand, "weeded, plowed, participated in combat, helped in the harvest, carried firewood, built houses, herded llama and alpaca, and spun and wove when necessary". This relationship between the genders may have been complementary. Unsurprisingly, onlooking Spaniards believed women were treated like slaves, because women did not work in Spanish society to the same extent, and certainly did not work in fields. Women were sometimes allowed to own land and herds because inheritance was passed down from both the mother's and father's side of the family. Kinship within the Inca society followed a parallel line of descent. In other words, women descended from women and men descended from men. Due to the parallel descent, a woman had access to land and other necessities through her mother.
The Inca believed in reincarnation. After death, the passage to the next world was fraught with difficulties. The spirit of the dead, camaquen, would need to follow a long road, and during the trip the assistance of a black dog that could see in the dark was required. Most Incas imagined the afterworld to be like an earthly paradise with flower-covered fields and snow-capped mountains.
It was important to the Inca that they not die as a result of burning or that the body of the deceased not be incinerated. Burning would cause their vital force to disappear and threaten their passage to the afterworld. The Inca nobility practiced cranial deformation: they wrapped tight cloth straps around the heads of newborns to shape their soft skulls into a more conical form, thus distinguishing the nobility from other social classes.
The Incas made human sacrifices. As many as 4,000 servants, court officials, favorites and concubines were killed upon the death of the Inca Huayna Capac in 1527. The Incas performed child sacrifices around important events, such as the death of the Sapa Inca or during a famine. These sacrifices were known as qhapaq hucha.
The Incas were polytheists who worshipped many gods. These included:
- Viracocha (also Pachacamac) – Created all living things
- Apu Illapu – Rain God, prayed to when rain was needed
- Ayar Cachi – Hot-tempered God, causes earthquakes
- Illapa – Goddess of lightning and thunder (also Yakumama water goddess)
- Inti – sun god and patron deity of the holy city of Cusco (home of the sun)
- Kuychi – Rainbow God, connected with fertility
- Mama Killa – Wife of Inti, called Moon Mother
- Mama Occlo – Wisdom to civilize the people, taught women to weave cloth and build houses
- Manco Cápac – known for his courage and sent to earth to become first king of the Incas. Taught people how to grow plants, make weapons, work together, share resources and worship the Gods
- Pachamama – The Goddess of earth and wife of Viracocha. People give her offerings of coca leaves and beer and pray to her for major agricultural occasions
- Quchamama – Goddess of the sea
- Sachamama – Means Mother Tree, goddess in the shape of a snake with two heads
- Yakumama – Means mother Water. Represented as a snake. When she came to earth she transformed into a great river (also Illapa).
The Inca Empire employed central planning. It traded with outside regions, although it did not operate a substantial internal market economy. While axe-monies were used along the northern coast, presumably by the provincial mindaláe trading class, most households in the empire lived in a traditional economy in which households were required to pay taxes, usually in the form of the mit'a corvée labor, and military obligations, though barter (or trueque) was present in some areas. In return, the state provided security, food in times of hardship through the supply of emergency resources, agricultural projects (e.g. aqueducts and terraces) to increase productivity, and occasional feasts. While mit'a was used by the state to obtain labor, individual villages had a pre-Inca system of communal work known as mink'a. This system survives to the modern day, known as mink'a or faena. The economy rested on the material foundations of the vertical archipelago, a system of ecological complementarity in accessing resources, and the cultural foundation of ayni, or reciprocal exchange.
The Sapa Inca was conceptualized as divine and was effectively head of the state religion. The Willaq Umu (or Chief Priest) was second to the emperor. Local religious traditions continued and in some cases such as the Oracle at Pachacamac on the Peruvian coast, were officially venerated. Following Pachacuti, the Sapa Inca claimed descent from Inti, who placed a high value on imperial blood; by the end of the empire, it was common to incestuously wed brother and sister. He was "son of the sun," and his people the intip churin, or "children of the sun," and both his right to rule and mission to conquer derived from his holy ancestor. The Sapa Inca also presided over ideologically important festivals, notably during the Inti Raymi, or "Sunfest" attended by soldiers, mummified rulers, nobles, clerics and the general population of Cusco beginning on the June solstice and culminating nine days later with the ritual breaking of the earth using a foot plow by the Inca. Moreover, Cusco was considered cosmologically central, loaded as it was with huacas and radiating ceque lines as the geographic center of the Four-Quarters; Inca Garcilaso de la Vega called it "the navel of the universe".
Organization of the empire
The Inca Empire was a federalist system consisting of a central government with the Inca at its head and four-quarters, or suyu: Chinchay Suyu (NW), Anti Suyu (NE), Kunti Suyu (SW) and Qulla Suyu (SE). The four corners of these quarters met at the center, Cusco. These suyu were likely created around 1460 during the reign of Pachacuti before the empire reached its largest territorial extent. At the time the suyu were established they were roughly of equal size and only later changed their proportions as the empire expanded north and south along the Andes.
Cusco was likely not organized as a wamani, or province. Rather, it was probably somewhat akin to a modern federal district, like Washington, DC or Mexico City. The city sat at the center of the four suyu and served as the preeminent center of politics and religion. While Cusco was essentially governed by the Sapa Inca, his relatives and the royal panaqa lineages, each suyu was governed by an Apu, a term of esteem used for men of high status and for venerated mountains. Both Cusco as a district and the four suyu as administrative regions were grouped into upper hanan and lower hurin divisions. As the Inca did not have written records, it is impossible to exhaustively list the constituent wamani. However, colonial records allow us to reconstruct a partial list. There were likely more than 86 wamani, with more than 48 in the highlands and more than 38 on the coast.
The most populous suyu was Chinchaysuyu, which encompassed the former Chimu empire and much of the northern Andes. At its largest extent, it extended through much of modern Ecuador and into modern Colombia.
The largest suyu by area was Qullasuyu, named after the Aymara-speaking Qulla people. It encompassed the Bolivian Altiplano and much of the southern Andes, reaching Argentina and as far south as the Maipo or Maule river in Central Chile. Historian José Bengoa singled out Quillota as likely being the foremost Inca settlement in Chile.
The second smallest suyu, Antisuyu, was northwest of Cusco in the high Andes. Its name is the root of the word "Andes."
Kuntisuyu was the smallest suyu, located along the southern coast of modern Peru, extending into the highlands towards Cusco.
The Inca state had no separate judiciary or codified laws. Customs, expectations and traditional local power holders governed behavior. The state had legal force, such as through tokoyrikoq (lit. "he who sees all"), or inspectors. The highest such inspector, typically a blood relative to the Sapa Inca, acted independently of the conventional hierarchy, providing a point of view for the Sapa Inca free of bureaucratic influence.
The Inca had three moral precepts that governed their behavior:
- Ama sua: Do not steal
- Ama llulla: Do not lie
- Ama quella: Do not be lazy
Colonial sources are not entirely clear or in agreement about Inca government structure, such as exact duties and functions of government positions. But the basic structure can be broadly described. The top was the Sapa Inca. Below that may have been the Willaq Umu, literally the "priest who recounts", the High Priest of the Sun. However, beneath the Sapa Inca also sat the Inkap rantin, who was a confidant and assistant to the Sapa Inca, perhaps similar to a Prime Minister. Starting with Topa Inca Yupanqui, a "Council of the Realm" was composed of 16 nobles: 2 from hanan Cusco; 2 from hurin Cusco; 4 from Chinchaysuyu; 2 from Cuntisuyu; 4 from Collasuyu; and 2 from Antisuyu. This weighting of representation balanced the hanan and hurin divisions of the empire, both within Cusco and within the Quarters (hanan suyukuna and hurin suyukuna).
While provincial bureaucracy and government varied greatly, the basic organization was decimal. Taxpayers – male heads of household of a certain age range – were organized into corvée labor units (often doubling as military units) that formed the state's muscle as part of mit'a service. Each unit of more than 100 taxpayers was headed by a kuraka, while smaller units were headed by a kamayuq, a lower, non-hereditary status. However, while kuraka status was hereditary and a kuraka typically served for life, a kuraka's position in the hierarchy was subject to change based on the privileges of superiors in the hierarchy; a pachaka kuraka could be appointed to the position by a waranqa kuraka. Furthermore, one kuraka in each decimal level could serve as the head of one of the nine groups at a lower level, so that a pachaka kuraka might also be a waranqa kuraka, in effect directly responsible for one unit of 100 taxpayers and less directly responsible for nine other such units.
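To make the powers of ten in this ladder explicit, here is a loose sketch in code (the chunka and hunu levels, and all of the Haskell names, are our own assumptions added purely for illustration; this is not a reconstruction of Inca record keeping):

-- Decimal administrative levels; each step up nominally heads ten units of the level below.
data Level = Chunka | Pachaka | Waranqa | Hunu deriving (Show, Enum, Bounded)

-- Nominal number of taxpayers overseen at each level:
-- Chunka = 10, Pachaka = 100, Waranqa = 1,000, Hunu = 10,000 (assumed labels).
taxpayers :: Level -> Int
taxpayers lvl = 10 ^ (fromEnum lvl + 1)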
Arts and technology
We can assure your majesty that it is so beautiful and has such fine buildings that it would even be remarkable in Spain.
Architecture was the most important of the Incan arts, with textiles reflecting architectural motifs. The most notable example is Machu Picchu, which was constructed by Inca engineers. The prime Inca structures were made of stone blocks that fit together so well that a knife could not be fitted through the stonework. These constructs have survived for centuries, with no use of mortar to sustain them.
This process was first used on a large scale by the Pucara (c. 300 BC–AD 300) peoples to the south of Lake Titicaca and later in the city of Tiwanaku (c. AD 400–1100) in present-day Bolivia. The rocks were sculpted to fit together exactly by repeatedly lowering a rock onto another and carving away any sections on the lower rock where the dust was compressed. The tight fit and the concavity on the lower rocks made them extraordinarily stable, despite the ongoing challenge of earthquakes and volcanic activity.
Measures, calendrics and mathematics
Physical measures used by the Inca were based on human body parts. Units included fingers, the distance from thumb to forefinger, palms, cubits and wingspans. The most basic distance unit was thatkiy or thatki, or one pace. The next largest unit was reported by Cobo to be the topo or tupu, measuring 6,000 thatkiys, or about 7.7 km (4.8 mi); careful study has shown that a range of 4.0 to 6.3 km (2.5 to 3.9 mi) is likely. Next was the wamani, composed of 30 topos (roughly 232 km or 144 mi). To measure area, 25 by 50 wingspans were used, reckoned in topos (roughly 3,280 km2 or 1,270 sq mi). It seems likely that distance was often interpreted as one day's walk; the distance between tambo way-stations varies widely in terms of distance, but far less in terms of time to walk that distance.
Inca calendars were strongly tied to astronomy. Inca astronomers understood equinoxes, solstices and zenith passages, along with the Venus cycle. They could not, however, predict eclipses. The Inca calendar was essentially lunisolar, as two calendars were maintained in parallel, one solar and one lunar. Since twelve lunar months (of roughly 29.5 days each, about 354 days in total) fall about 11 days short of the 365-day solar year, those in charge of the calendar had to adjust it every winter solstice. Each lunar month was marked with festivals and rituals. Apparently, the days of the week were not named and days were not grouped into weeks. Similarly, months were not grouped into seasons. Time during a day was not measured in hours or minutes, but in terms of how far the sun had travelled or how long it had taken to perform a task.
The sophistication of Inca administration, calendrics and engineering required facility with numbers. Numerical information was stored in the knots of quipu strings, allowing for compact storage of large numbers. These numbers were stored in base-10 digits, the same base used by the Quechua language and in administrative and military units. These numbers, stored in quipu, could be calculated on yupanas, grids with squares of positionally varying mathematical values, perhaps functioning as an abacus. Calculation was facilitated by moving piles of tokens, seeds or pebbles between compartments of the yupana. It is likely that Inca mathematics at least allowed division of integers into integers or fractions and multiplication of integers and fractions.
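As a minimal sketch of the base-10 idea (the function name and the reading order are our assumptions; this is not an actual khipu encoding scheme), a number can be split into the decimal digits that the knot clusters on a cord would represent:

-- Split a non-negative number into base-10 digits, most significant first
-- (a zero digit corresponds to an empty position with no knots).
toDigits :: Int -> [Int]
toDigits 0 = [0]
toDigits n = reverse (go n)
  where
    go 0 = []
    go m = (m `mod` 10) : go (m `div` 10)

For example, toDigits 4052 gives [4,0,5,2].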
According to mid-17th-century Jesuit chronicler Bernabé Cobo, the Inca designated officials to perform accounting-related tasks. These officials were called quipo camayos. Study of khipu sample VA 42527 (Museum für Völkerkunde, Berlin) revealed that the numbers arranged in calendrically significant patterns were used for agricultural purposes in the "farm account books" kept by the khipukamayuq (accountant or warehouse keeper) to facilitate the closing of accounting books.
Tunics were created by skilled Incan textile-makers as a piece of warm clothing, but they also symbolized cultural and political status and power. Cumbi was the fine, tapestry-woven woolen cloth required for the creation of tunics; it was produced by specially appointed women and men. Generally, textile-making was practiced by both men and women. As emphasized by certain historians, only with European conquest was it deemed that women would become the primary weavers in society; in Incan society, specialty textiles were produced by men and women equally.
Complex patterns and designs were meant to convey information about order in Andean society as well as the Universe. Tunics could also symbolize one's relationship to ancient rulers or important ancestors. These textiles were frequently designed to represent the physical order of a society, for example, the flow of tribute within an empire. Many tunics have a "checkerboard effect" which is known as the collcapata. According to historians Kenneth Mills, William B. Taylor, and Sandra Lauderdale Graham, the collcapata patterns "seem to have expressed concepts of commonality, and, ultimately, unity of all ranks of people, representing a careful kind of foundation upon which the structure of Inkaic universalism was built." Rulers wore various tunics throughout the year, switching them out for different occasions and feasts.
The symbols present within the tunics suggest the importance of "pictographic expression" within Incan and other Andean societies long before the iconographies of the Spanish Christians.
Ceramics, precious metals and textiles
Ceramics were painted using the polychrome technique portraying numerous motifs including animals, birds, waves, felines (popular in the Chavin culture) and geometric patterns found in the Nazca style of ceramics. In a culture without a written language, ceramics portrayed the basic scenes of everyday life, including the smelting of metals, relationships and scenes of tribal warfare. The most distinctive Inca ceramic objects are the Cusco bottles or "aryballos". Many of these pieces are on display in Lima in the Larco Archaeological Museum and the National Museum of Archaeology, Anthropology and History.
Almost all of the gold and silver work of the Incan empire was melted down by the conquistadors, and shipped back to Spain.
Communication and medicine
The Inca recorded information on assemblages of knotted strings, known as Quipu, although they can no longer be decoded. Originally it was thought that Quipu were used only as mnemonic devices or to record numerical data. Quipus are also believed to record history and literature.
The Inca made many discoveries in medicine. They performed successful skull surgery by cutting holes in the skull to alleviate fluid buildup and inflammation caused by head wounds. Survival rates for these surgeries were 80–90%, compared to about 30% before Inca times.
The Incas revered the coca plant as sacred/magical. Its leaves were used in moderate amounts to lessen hunger and pain during work, but were mostly used for religious and health purposes. The Spaniards took advantage of the effects of chewing coca leaves. The Chasqui, messengers who ran throughout the empire to deliver messages, chewed coca leaves for extra energy. Coca leaves were also used as an anaesthetic during surgeries.
Weapons, armor and warfare
The Inca army was the most powerful in the region at the time, because any ordinary villager or farmer could be recruited as a soldier as part of the mit'a system of mandatory public service. Every able-bodied male Inca of fighting age had to take part in war in some capacity at least once and to prepare for warfare again when needed. By the time the empire reached its largest size, every section of the empire contributed to setting up an army for war.
The Incas had no iron or steel and their weapons were not much more effective than those of their opponents so they often defeated opponents by sheer force of numbers, or else by persuading them to surrender beforehand by offering generous terms. Inca weaponry included "hardwood spears launched using throwers, arrows, javelins, slings, the bolas, clubs, and maces with star-shaped heads made of copper or bronze." Rolling rocks downhill onto the enemy was a common strategy, taking advantage of the hilly terrain. Fighting was sometimes accompanied by drums and trumpets made of wood, shell or bone. Armor included:
- Helmets made of wood, cane, or animal skin, often lined with copper or bronze; some were adorned with feathers
- Round or square shields made from wood or hide
- Cloth tunics padded with cotton and small wooden planks to protect the spine
- Ceremonial metal breastplates of copper, silver, and gold have been found in burial sites; some of these may also have been used in battle.
Roads allowed quick movement (on foot) for the Inca army and shelters called tambo and storage silos called qullqas were built one day's travelling distance from each other, so that an army on campaign could always be fed and rested. This can be seen in names of ruins such as Ollantay Tambo, or My Lord's Storehouse. These were set up so the Inca and his entourage would always have supplies (and possibly shelter) ready as they traveled.
Banner of the Inca
Chronicles and references from the 16th and 17th centuries support the idea of a banner. However, it represented the Inca (emperor), not the empire.
Francisco López de Jerez wrote in 1534:
... todos venían repartidos en sus escuadras con sus banderas y capitanes que los mandan, con tanto concierto como turcos.
(... all of them came distributed into squads, with their flags and captains commanding them, as well-ordered as Turks.)
Chronicler Bernabé Cobo wrote:
The royal standard or banner was a small square flag, ten or twelve spans around, made of cotton or wool cloth, placed on the end of a long staff, stretched and stiff such that it did not wave in the air and on it each king painted his arms and emblems, for each one chose different ones, though the sign of the Incas was the rainbow and two parallel snakes along the width with the tassel as a crown, which each king used to add for a badge or blazon those preferred, like a lion, an eagle and other figures.
(... el guión o estandarte real era una banderilla cuadrada y pequeña, de diez o doce palmos de ruedo, hecha de lienzo de algodón o de lana, iba puesta en el remate de una asta larga, tendida y tiesa, sin que ondease al aire, y en ella pintaba cada rey sus armas y divisas, porque cada uno las escogía diferentes, aunque las generales de los Incas eran el arco celeste y dos culebras tendidas a lo largo paralelas con la borda que le servía de corona, a las cuales solía añadir por divisa y blasón cada rey las que le parecía, como un león, un águila y otras figuras.)
-Bernabé Cobo, Historia del Nuevo Mundo (1653)
Guaman Poma's 1615 book, El primer nueva corónica y buen gobierno, shows numerous line drawings of Inca flags. In his 1847 book A History of the Conquest of Peru, "William H. Prescott ... says that in the Inca army each company had its particular banner and that the imperial standard, high above all, displayed the glittering device of the rainbow, the armorial ensign of the Incas." A 1917 world flags book says the Inca "heir-apparent ... was entitled to display the royal standard of the rainbow in his military campaigns."
In modern times the rainbow flag has been wrongly associated with the Tawantinsuyu and displayed as a symbol of Inca heritage by some groups in Peru and Bolivia. The city of Cusco also flies the Rainbow Flag, but as an official flag of the city. The Peruvian president Alejandro Toledo (2001–2006) flew the Rainbow Flag in Lima's presidential palace. However, according to Peruvian historiography, the Inca Empire never had a flag. Peruvian historian María Rostworowski said, "I bet my life, the Inca never had that flag, it never existed, no chronicler mentioned it". According to the Peruvian newspaper El Comercio, the flag dates only to the first decades of the 20th century, and the Congress of the Republic of Peru has determined the flag to be a fake, citing the conclusion of the National Academy of Peruvian History:
"The official use of the wrongly called 'Tawantinsuyu flag' is a mistake. In the Pre-Hispanic Andean World there did not exist the concept of a flag, it did not belong to their historic context".
National Academy of Peruvian History
Adaptations to altitude
The people of the Andes, including the Incas, were able to adapt to high-altitude living through successful acclimatization, which is characterized by increasing oxygen supply to the blood tissues. For the native living in the Andean highlands, this was achieved through the development of a larger lung capacity, and an increase in red blood cell counts, hemoglobin concentration, and capillary beds.
Compared to other humans, the Andeans had slower heart rates, almost one-third larger lung capacity, about 2 L (4 pints) more blood volume and double the amount of hemoglobin, which transfers oxygen from the lungs to the rest of the body. While the conquistadors may have been taller, the Inca had the advantage of coping with the extraordinary altitude. The Tibetans living in the Himalayas are also adapted to life at high altitude, although their adaptation differs from that of the Andeans.
Incan archeological sites
- Aclla, the "chosen women"
- Amauta, Inca teachers
- Amazonas before the Inca Empire
- Anden, agricultural terrace
- Inca army
- Inca cuisine
- Incan aqueducts
- Felipe Guaman Poma de Ayala
- Paria, Bolivia
- Religion in the Inca Empire
- Tampukancha, Inca religious site
- Society of the Spanish-Americans in the Spanish Colonial Americas
- Turchin, Peter; Adams, Jonathan M.; Hall, Thomas D (December 2006). "East-West Orientation of Historical Empires". Journal of World-Systems Research. 12 (2): 222. ISSN 1076-156X. Retrieved 16 September 2016.
- Rein Taagepera (September 1997). "Expansion and Contraction Patterns of Large Polities: Context for Russia". International Studies Quarterly. 41 (3): 497. doi:10.1111/0020-8833.00053. JSTOR 2600793. Retrieved 7 September 2018.
- McEwan 2008, p. 221.
- Schwartz, Glenn M.; Nichols, John J. (2010). After Collapse: The Regeneration of Complex Societies. University of Arizona Press. ISBN 978-0-8165-2936-0.
- "Quechua, the Language of the Incas". 11 November 2013.
- McEwan, Gordon F. (2006). The Incas: New Perspectives. New York: W.W. Norton & Co. p. 5.
- Morris, Craig and von Hagen, Adrianna (2011), The Incas, London: Thames & Hudson, pp. 48–58
- "The Inca – All Empires".
- "The Inca." The National Foreign Language Center at the University of Maryland. 29 May 2007. Retrieved 10 September 2013.
- La Lone, Darrell E. "The Inca as a Nonmarket Economy: Supply on Command versus Supply and Demand". p. 292. Retrieved 10 August 2017.
- "Inca". American Heritage Dictionary. Houghton Mifflin Company. 2009.
- McEwan 2008, p. 93.
- Upton, Gary and von Hagen, Adriana (2015), Encyclopedia of the Incas, New York: Rowman & Littlefield, p. 2. Some scholars cite 6 or 7 pristine civilizations. ISBN 0804715165.
- McEwan, Gordon F. (2006), The Incas: New Perspectives, New York: W. W. Norton & Company, p. 65
- Spalding, Karen (1984), Huarochirí, Stanford: Stanford University Press, p. 77
- Gade, Daniel (2016). "Urubamba Verticality: Reflections on Crops and Diseases". Spell of the Urubamba: Anthropogeographical Essays on an Andean Valley in Space and Time. p. 86. ISBN 978-3-319-20849-7.
- Hardoy, Jorge Henríque (1973). Pre-Columbian Cities. p. 24. ISBN 978-0-8027-0380-4.
- Gade, Daniel W. (1996). "Carl Troll on Nature and Culture in the Andes (Carl Troll über die Natur und Kultur in den Anden)". Erdkunde. 50 (4): 301–16. doi:10.3112/erdkunde.1996.04.02.
- McEwan 2008, p. 57.
- McEwan 2008, p. 69.
- Demarest, Arthur Andrew; Conrad, Geoffrey W. (1984). Religion and Empire: The Dynamics of Aztec and Inca Expansionism. Cambridge: Cambridge University Press. pp. 57–59. ISBN 0-521-31896-3.
- The three laws of Tawantinsuyu are still referred to in Bolivia these days as the three laws of the Qullasuyu.
- Weatherford, J. McIver (1988). Indian Givers: How the Indians of the Americas Transformed the World. New York: Fawcett Columbine. pp. 60–62. ISBN 0-449-90496-2.
- Silva Galdames, Osvaldo (1983). "¿Detuvo la batalla del Maule la expansión inca hacia el sur de Chile?". Cuadernos de Historia (in Spanish). 3: 7–25. Retrieved 10 January 2019.
- Ernesto Salazar (1977). An Indian federation in lowland Ecuador (PDF). International Work Group for Indigenous Affairs. p. 13. Retrieved 16 February 2013.
- Starn, Orin; Kirk, Carlos Iván; Degregori, Carlos Iván (2009). The Peru Reader: History, Culture, Politics. Duke University Press. ISBN 978-0-8223-8750-3.
- *Juan de Samano (9 October 2009). "Relacion de los primeros descubrimientos de Francisco Pizarro y Diego de Almagro, 1526". bloknot.info (A. Skromnitsky). Retrieved 10 October 2009.
- Somervill, Barbara (2005). Francisco Pizarro: Conqueror of the Incas. Compass Point Books. p. 52. ISBN 978-0-7565-1061-9.
- D'Altroy, Terence N. (2003). The Incas. Malden, Massachusetts: Blackwell Publishing. p. 76. ISBN 9780631176770.
- McEwan 2008, p. 79.
- Raudzens, George, ed. (2003). Technology, Disease, and Colonial Conquest. Boston: Brill Academic. p. xiv.
- Mumford, Jeremy Ravi (2012), Vertical Empire, Duke University Press: Durham, pages 19-30, 56-57. ISBN 9780822353102.
- McEwan 2008, p. 31.
- Sanderson 1992, p. 76.
- Millersville University Silent Killers of the New World Archived 3 November 2006 at the Wayback Machine
- McEwan 2008, pp. 93–96. The 10 million population estimate in the info box is a mid-range estimate of the population.
- Torero Fernández de Córdoba, Alfredo. (1970) "Lingüística e historia de la Sociedad Andina", Anales Científicos de la Universidad Agraria, VIII, 3-4, págs. 249-251. Lima: UNALM.
- "Origins And Diversity of Quechua".
- "Comparing chronicles and Andean visual texts. Issues for analysis" (PDF). Chungara, Revista de Antropología Chilena. 46, Nº 1, 2014: 91–113.
- "Royal Tocapu in Guacan Poma: An Inca Heraldic?". Boletin de Arqueologia PUCP. Nº 8, 2004: 305–23.
- Covey, R. Alan. "Inca Gender Relations: From Household to Empire." In Brettell, Caroline; Sargent, Carolyn F. (eds.), Gender in Cross-Cultural Perspective (7th ed.). Abingdon, Oxon. ISBN 978-0-415-78386-6. OCLC 962171839.
- Incas : lords of gold and glory. Alexandria, VA: Time-Life Books. 1992. ISBN 0-8094-9870-7. OCLC 25371192.
- Gouda, F. (2008). Colonial Encounters, Body Politics, and Flows of Desire. Journal of Women's History, 20(3), 166–80.
- Gerard, K. (1997). Ancient Lives. New Moon, 4(4), 44.
- D'Altroy, Terence N. (2002). The Incas. Malden, MA: Blackwell. ISBN 0-631-17677-2. OCLC 46449340.
- Karen B. Graubart. "Weaving and the Construction of a Gender Division of Labor in Early Colonial Peru." The American Indian Quarterly 24, no. 4 (2000): 537-561.
- Silverblatt, Irene (1987). Moon, Sun, and Witches: Gender Ideologies and Class in Inca and Colonial Peru. Princeton, NJ: Princeton University Press. ISBN 0-691-07726-6. OCLC 14165734.
- Cobo, Bernabé (1979). History of the Inca Empire: An Account of the Indians' Customs and Their Origin, Together with a Treatise on Inca Legends, History, and Social Institutions. Translated by Roland Hamilton. Austin: University of Texas Press. ISBN 0-292-73008-X. OCLC 4933087.
- Malpass, Michael Andrew (1996). Daily Life in the Inca Empire. Westport, CT: Greenwood Press. ISBN 0-313-29390-2. OCLC 33405288.
- Urton, Gary (2009). Signs of the Inka Khipu: Binary Coding in the Andean Knotted-String Records. University of Texas Press. ISBN 978-0-292-77375-2.
- The Incas of Peru
- Burger, Richard L.; Salazar, Lucy C. (2004). Machu Picchu: Unveiling the Mystery of the Incas. Yale University Press. ISBN 978-0-300-09763-4.
- Davies, Nigel (1981). Human sacrifice: in history and today. Morrow. pp. 261–62. ISBN 978-0-688-03755-0.
- Reinhard, Johan (November 1999). "A 6,700 metros niños incas sacrificados quedaron congelados en el tiempo". National Geographic, Spanish version: 36–55.
- Salomon, Frank (1 January 1987). "A North Andean Status Trader Complex under Inka Rule". Ethnohistory. 34 (1): 63–77. doi:10.2307/482266. JSTOR 482266.
- Earls, J. The Character of Inca and Andean Agriculture. pp. 1–29
- Moseley 2001, p. 44.
- Murra, John V.; Rowe, John Howland (1 January 1984). "An Interview with John V. Murra". The Hispanic American Historical Review. 64 (4): 633–53. doi:10.2307/2514748. JSTOR 2514748. S2CID 222285111.
- Maffie, J. (5 March 2013). "Pre-Columbian Philosophies". In Nuccetelli, Susana; Schutte, Ofelia; Bueno, Otávio (eds.). A Companion to Latin American Philosophy. John Wiley & Sons. pp. 137–38. ISBN 978-1-118-61056-5.
- Newitz, Annalee (3 January 2012), The greatest mystery of the Inca Empire was its strange economy, io9, retrieved 4 January 2012
- Willey, Gordon R. (1966). An Introduction to American Archaeology: South America. Englewood Cliffs: Prentice-Hall. pp. 173–75.
- D'Altroy 2014, pp. 86–89, 111, 154–55.
- Moseley 2001, pp. 81–85.
- McEwan 2008, pp. 138–39.
- Rowe in Steward, Ed., p. 262
- Rowe in Steward, ed., pp. 185–92
- D'Altroy 2014, pp. 42–43, 86–89.
- McEwan 2008, pp. 113–14.
- Dillehay, T.; Gordon, A. (1998). "La actividad prehispánica y su influencia en la Araucanía". In Dillehay, Tom; Netherly, Patricia (eds.). La frontera del estado Inca. Editorial Abya Yala. ISBN 978-9978-04-977-8.
- Bengoa 2003, pp. 37–38.
- D'Altroy 2014, p. 87.
- D'Altroy 2014, pp. 87–88.
- D'Altroy 2014, pp. 235–36.
- D'Altroy 2014, p. 99.
- R. T. Zuidema, Hierarchy and Space in Incaic Social Organization. Ethnohistory, Vol. 30, No. 2. (Spring, 1983), p. 97
- Zuidema 1983, p. 48.
- Julien 1982, pp. 121–27.
- D'Altroy 2014, pp. 233–34.
- McEwan 2008, pp. 114–15.
- Julien 1982, p. 123.
- D'Altroy 2014, p. 233.
- D'Altroy 2014, pp. 246–47.
- McEwan 2008, pp. 179–80.
- D'Altroy 2014, pp. 150–54.
- McEwan 2008, pp. 185–87.
- Neuman, William (2 January 2016). "Untangling an Accounting Tool and an Ancient Incan Mystery". The New York Times. Retrieved 2 January 2016.
- McEwan 2008, pp. 183–185.
- "Supplementary Information for: Heggarty 2008". Arch.cam.ac.uk. Archived from the original on 12 March 2013. Retrieved 24 September 2012.
- "Inca mathematics". History.mcs.st-and.ac.uk. Retrieved 24 September 2012.
- McEwan 2008, p. 185.
- Cobo, B. (1983). Obras del P. Bernabé Cobo. Vol. 1. Edited and with a preliminary study by Francisco Mateos. Biblioteca de Autores Españoles, vol. 91. Madrid: Ediciones Atlas.
- Sáez-Rodríguez, A. (2012). "An Ethnomathematics Exercise for Analyzing a Khipu Sample from Pachacamac (Perú)". Revista Latinoamericana de Etnomatemática. 5 (1): 62–88.
- Sáez-Rodríguez, A. (2013). "Knot numbers used as labels for identifying subject matter of a khipu". Revista Latinoamericana de Etnomatemática. 6 (1): 4–19.
- Mills, Kenneth, Taylor, William B., and Graham, Sandra Lauderdale, eds. Colonial Latin America : A Documentary History. Denver: Rowman & Littlefield Publishers, 2002, 14-18.
- Cummins, Thomas B. F.; Anderson, Barbara (23 September 2008). The Getty Murua: Essays on the Making of Martin de Murua's "Historia General del Piru", J. Paul Getty Museum Ms. Ludwig XIII 16. Getty Publications. p. 127. ISBN 978-0-89236-894-5.
- Feltham, Jane (1989). Peruvian textiles. Internet Archive. Aylesbury : Shire. p. 57. ISBN 978-0-7478-0014-9.
- Berrin, Kathleen (1997). The Spirit of Ancient Peru: Treasures from the Museo Arqueológico Rafael Larco Herrera. Thames and Hudson. ISBN 978-0-500-01802-6.
- Minster, Christopher; PhD. "What Happened to the Treasure Hoard of the Inca Emperor?". ThoughtCo. Retrieved 13 February 2019.
- McEwan 2008, p. 183.
- Somervill, Barbara A. (2005). Empire of the Inca. New York: Facts on File, Inc. pp. 101–03. ISBN 0-8160-5560-2.
- "Incan skull surgery". Science News.
- "Cocaine's use: From the Incas to the U.S." Boca Raton News. 4 April 1985. Retrieved 2 February 2014.
- Cartwright, Mark (19 May 2016). "Inca Warfare". World History Encyclopedia.
- Kim MacQuarrie (17 June 2008). The Last Days of the Incas. Simon and Schuster. p. 144. ISBN 978-0-7432-6050-3.
- Geoffrey Parker (29 September 2008). The Cambridge Illustrated History of Warfare: The Triumph of the West. Cambridge University Press. p. 136. ISBN 978-0-521-73806-4.
- Robert Stevenson (1 January 1968). Music in Aztec & Inca Territory. University of California Press. p. 77. ISBN 978-0-520-03169-2.
- Father Bernabe Cobo; Roland Hamilton (1 May 1990). Inca Religion and Customs. University of Texas Press. p. 218. ISBN 978-0-292-73861-4.
- Cottie Arthur Burland (1968). Peru Under the Incas. Putnam. p. 101.
The sling was the most deadly projectile weapon. Spear, long-handled axe and bronze-headed mace were the effective weapons. Protection was afforded by a wooden helmet covered with bronze, long quilted tunic and flexible quilted shield.
- Peter Von Sivers; Charles Desnoyers; George B. Stow (2012). Patterns of World History. Oxford University Press. p. 505. ISBN 978-0-19-533334-3.
- Maestro, Carmen Pérez (1999). "Armas de metal en el Perú prehispánico". Espacio, Tiempo y Forma, Señe I, Prehistoria y Arqueología (in Spanish): 319–346.
- Francisco López de Jerez,Verdadera relación de la conquista del Peru y provincia de Cusco, llamada la Nueva Castilla, 1534.
- Guaman Poma, El primer nueva corónica y buen gobierno, (1615/1616), pp. 256, 286, 344, 346, 400, 434, 1077, this pagination corresponds to the Det Kongelige Bibliotek search engine pagination of the book. Additionally Poma shows both well drafted European flags and coats of arms on pp. 373, 515, 558, 1077. On pp. 83, 167–71 Poma uses a European heraldic graphic convention, a shield, to place certain totems related to Inca leaders.
- Preble, George Henry; Charles Edward Asnis (1917). Origin and History of the American Flag and of the Naval and Yacht-Club Signals... 1. N. L. Brown. p. 85.
- McCandless, Byron (1917). Flags of the world. National Geographic Society. p. 356.
- "¿Bandera gay o del Tahuantinsuyo?". Terra. 19 April 2010.
- "La Bandera del Tahuantisuyo" (PDF) (in Spanish). Retrieved 12 June 2009.
- Frisancho, A. Roberto (2013), "Developmental Functional Adaptation to High Altitude: Review" (PDF), American Journal of Human Biology, 25 (2): 151–68, doi:10.1002/ajhb.22367, hdl:2027.42/96751, PMID 24065360, S2CID 33055072
- Kellog, RH (1968). "Altitude acclimatization, A historical introduction emphasizing the regulation of breathing". Physiologist. 11 (1): 37–57. PMID 4865521 – via https://www.lib.umich.edu/collections/deep-blue-repositories.
- Gibbons, Ann. "Tibetans inherited high-altitude gene from ancient humans". Science.org. Retrieved 18 June 2021.
- Куприенко, Сергей (2013). Источники XVI–XVII веков по истории инков: хроники, документы, письма. Kyiv: Видавець Купрієнко СА. ISBN 978-617-7085-03-3.
- Bengoa, José (2003). Historia de los antiguos mapuches del sur: desde antes de la llegada de los españoles hasta las paces de Quilín : siglos XVI y XVII (in Spanish). BPR Publishers. ISBN 978-956-8303-02-0.
- de la Vega, Garcilaso (2006). The Royal Commentaries of the Incas and General History of Peru, Abridged. Hackett Publishing. pp. 32–. ISBN 978-1-60384-856-5.
- Hemming, John (2003). The Conquest of the Incas. Harvest Press. ISBN 0-15-602826-3.
- MacQuarrie, Kim (2007). The Last Days of the Incas. Simon & Schuster. ISBN 978-0-7432-6049-7.
- Mann, Charles C. (2005). 1491: New Revelations of the Americas Before Columbus. Knopf. pp. 64–105. ISBN 978-0-307-27818-0.
- McEwan, Gordon F. (2008). The Incas: New Perspectives. W.W. Norton, Incorporated. pp. 221–. ISBN 978-0-393-33301-5.
- Morales, Edmundo (1995). The guinea pig: healing, food, and ritual in the Andes. University of Arizona Press.
- Popenoe, Hugh; Steven R. King; Jorge Leon; Luis Sumar Kalinowski; Noel D. Vietmeyer (1989). Lost Crops of the Incas: Little-Known Plants of the Andes with Promise for Worldwide Cultivation. Washington, DC: National Academy Press. ISBN 0-309-04264-X.
- Sanderson, Steven E. (1992). The Politics of Trade in Latin American Development. Stanford University Press. ISBN 978-0-8047-2021-2.
- D'Altroy, Terence N. (2014). The Incas. Wiley. ISBN 978-1-118-61059-6.
- Steward, Julian H., ed. (1946). The Handbook of South American Indians: The Andean Civilizations. no. 143 v. 2 Bulletin / Smithsonian Institution, Bureau of American Ethnology. Biodiversity Heritage Library / Washington, DC: Smithsonian Institution. p. 1935.
- Julien, Catherine J. (1982). Inca Decimal Administration in the Lake Titicaca Region in The Inca and Aztec States: 1400–1800. New York: Academic Press.
- Moseley, Michael Edward (2001). The Incas and Their Ancestors: The Archaeology of Peru. Thames & Hudson. ISBN 978-0-500-28277-9.
|Wikimedia Commons has media related to:|
- "Guaman Poma – El Primer Nueva Corónica Y Buen Gobierno" – A high-quality digital version of the Corónica, scanned from the original manuscript.
- Inca Land by Hiram Bingham (published 1912–1922).
- Inca Artifacts, Peru and Machu Picchu 360-degree movies of inca artifacts and Peruvian landscapes.
- Ancient Civilizations – Inca
- "Ice Treasures of the Inca" National Geographic site.
- "The Sacred Hymns of Pachacutec," poetry of an Inca emperor.
- Incan Religion
- Engineering in the Andes Mountains, lecture on Inca suspension bridges
- A Map and Timeline of Inca Empire events
- Ancient Peruvian art: contributions to the archaeology of the empire of the Incas, a four volume work from 1902 (fully available online as PDF)
Topic 2.3 - Slope
In this slope intercept worksheet, students solve 15 problems in which they find the slope of a linear equation, the slope of a graph, the slope using 2 points, and the equation of a line parallel or perpendicular to a given line.
Solution Sets to Equations with Two Variables
Can an equation have an infinite number of solutions? Allow your class to discover the relationship between the input and output variables in a two-variable equation. Class members explore the concept through tables and graphs and...
9th - 10th Math CCSS: Designed
There is Only One Line Passing Through a Given Point with a Given Slope
Prove that an equation in slope-intercept form names only one line. At the beginning, the teacher leads the class through a proof that there is only one line passing through a given point with a given slope using contradiction. The 19th...
8th Math CCSS: Designed
Saxon Math: Algebra 2 (Section 1)
This first of twelve algebra 2 resources provides a broad review of many algebra 1 concepts through a number of separable lessons and labs. Starting with the real number system and its subsystems, the sections quickly but thoroughly move...
9th - 12th Math CCSS: Adaptable
Lists are a fundamental part of Haskell, and we've used them extensively before getting to this chapter. The novel insight is that the list type is a monad too!
As monads, lists are used to model nondeterministic computations which may return an arbitrary number of results. There is a certain parallel with how Maybe represented computations which could return zero or one value; but with lists, we can return zero, one, or many values (the number of values being reflected in the length of the list).
List instantiated as monad
The return function for lists simply injects a value into a list:
return x = [x]
In other words, return here makes a list containing one element, namely the single argument it took. The type of the list return is return :: a -> [a], or, equivalently, return :: a -> [] a. The latter style of writing it makes it more obvious that we are replacing the generic type constructor in the signature of return (which we had called M in Understanding monads) by the list type constructor [] (which is distinct from, but easy to confuse with, the empty list!).
The binding operator is less trivial. We will begin by considering its type, which for the case of lists should be:
[a] -> (a -> [b]) -> [b]
This is just what we'd expect: it pulls out the value from the list to give to a function that returns a new list.
The actual process here involves first mapping a given function over a given list to get back a list of lists, i.e. type [[b]] (of course, many functions which you might use in mapping do not return lists; but, as shown in the type signature above, monadic binding for lists only works with functions that return lists). To get back to a regular list, we then concatenate the elements of our list of lists to get a final result of type [b]. Thus, we can define the list version of (>>=):
xs >>= f = concat (map f xs)
The bind operator is key to understanding how different monads do their jobs, and its definition indicates the chaining strategy for working with the monad.
For the list monad, non-determinism is present because different functions may return any number of different results when mapped over lists.
It is easy to incorporate the familiar list processing functions in monadic code. Consider this example: rabbits raise an average of six kits in each litter, half of which will be female. Starting with a single mother, we can model the number of female kits in each successive generation (i.e. the number of new kits after the rabbits grow up and have their own litters):
Prelude> let generation = replicate 3
Prelude> ["bunny"] >>= generation
["bunny","bunny","bunny"]
Prelude> ["bunny"] >>= generation >>= generation
["bunny","bunny","bunny","bunny","bunny","bunny","bunny","bunny","bunny"]
In this silly example all elements are equal, but the same overall logic could be used to model radioactive decay, or chemical reactions, or any phenomena that produces a series of elements starting from a single one.
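If we wanted to chain an arbitrary number of such steps, one hedged way to write it (the generations helper below is our own, not from the Prelude) is to fold (>>=) over repeated copies of the step function:

-- Apply a list-monadic step n times by folding (>>=).
generations :: Int -> (a -> [a]) -> [a] -> [a]
generations n step start = foldl (>>=) start (replicate n step)

With step = replicate 3, generations 2 step ["bunny"] produces the same nine bunnies as the session above.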
Board game example
Suppose we are modeling a turn-based board game and want to find all the possible ways the game could progress. We would need a function to calculate the list of options for the next turn, given a current board state:
nextConfigs :: Board -> [Board]
nextConfigs bd = undefined -- details not important
To figure out all the possibilities after two turns, we would again apply our function to each of the elements of our new list of board states. Our function takes a single board state and returns a list of possible new states. Thus, we can use monadic binding to map the function over each element from the list:
nextConfigs bd >>= nextConfigs
In the same fashion, we could bind the result back to the function yet again (ad infinitum) to generate the next turn's possibilities. Depending on the particular game's rules, we may reach board states that have no possible next-turns; in those cases, our function will return the empty list.
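As a quick GHCi illustration of that pruning (the function here is invented just for the example), an element that maps to the empty list contributes nothing to the final result:

Prelude> [1,2,3] >>= \x -> if even x then [x*10] else []
[20]

The branches for 1 and 3 simply disappear, exactly as dead-end board states would.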
On a side note, we could translate several turns into a do block (like we did for the grandparents example in Understanding monads):
threeTurns :: Board -> [Board]
threeTurns bd = do
  bd1 <- nextConfigs bd  -- bd1 refers to a board configuration after 1 turn
  bd2 <- nextConfigs bd1
  nextConfigs bd2
If the above looks too magical, keep in mind that do notation is syntactic sugar for (>>=) operations. To the right of each left-arrow, there is a function with arguments that evaluate to a list; the variable to the left of the arrow stands for the list elements. After a left-arrow assignment line, there can be later lines that call the assigned variable as an argument for a function. This later function will be performed for each of the elements from within the list that came from the left-arrow line's function. This per-element process corresponds to the `map` in the definition of (>>=). A resulting list of lists (one per element of the original list) will then be flattened into a single list by the `concat` in the definition of (>>=).
The list monad works in a way that has an uncanny similarity to list comprehensions. Let's slightly modify the do block we just wrote for threeTurns so that it ends with a return:
threeTurns bd = do
  bd1 <- nextConfigs bd
  bd2 <- nextConfigs bd1
  bd3 <- nextConfigs bd2
  return bd3
This mirrors exactly the following list comprehension:
threeTurns bd = [ bd3 | bd1 <- nextConfigs bd, bd2 <- nextConfigs bd1, bd3 <- nextConfigs bd2 ]
(In a list comprehension, it is perfectly legal to use the elements drawn from one list to define the following ones, like we did here.)
The resemblance is no coincidence: list comprehensions are, behind the scenes, defined in terms of concatMap, where concatMap f xs = concat (map f xs). That's just the list monad binding definition again! To summarize the nature of the list monad: binding for the list monad is a combination of concatenation and mapping, and so the combined function concatMap is effectively the same as >>= for lists (except for the different argument order).
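For instance, with a throwaway duplicating function chosen only for illustration:

Prelude> concatMap (\x -> [x, x]) [1,2,3]
[1,1,2,2,3,3]
Prelude> [1,2,3] >>= \x -> [x, x]
[1,1,2,2,3,3]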
For the correspondence between list monad and list comprehension to be complete, we need a way to reproduce the filtering that list comprehensions can do. We will explain how that can be achieved a little later in the Additive monads chapter.
- As an optional advanced exercise: research how we could do recursive binding to find all possible results for games that have a finite number of possibilities. Furthermore, consider how we might handle the empty list results when they are reached and still retain the list of possible final board states.
IEEE 802.11 is part of the IEEE 802 set of local area network (LAN) technical standards, and specifies the set of media access control (MAC) and physical layer (PHY) protocols for implementing wireless local area network (WLAN) computer communication. The standard and amendments provide the basis for wireless network products using the Wi-Fi brand and are the world's most widely used wireless computer networking standards. IEEE 802.11 is used in most home and office networks to allow laptops, printers, smartphones, and other devices to communicate with each other and access the Internet without connecting wires.
The standards are created and maintained by the Institute of Electrical and Electronics Engineers (IEEE) LAN/MAN Standards Committee (IEEE 802). The base version of the standard was released in 1997 and has had subsequent amendments. While each amendment is officially revoked when it is incorporated in the latest version of the standard, the corporate world tends to market to the revisions because they concisely denote the capabilities of their products. As a result, in the marketplace, each revision tends to become its own standard.
IEEE 802.11 uses various frequencies including, but not limited to, 2.4 GHz, 5 GHz, 6 GHz, and 60 GHz frequency bands. Although IEEE 802.11 specifications list channels that might be used, the radio frequency spectrum availability allowed varies significantly by regulatory domain.
The protocols are typically used in conjunction with IEEE 802.2, and are designed to interwork seamlessly with Ethernet, and are very often used to carry Internet Protocol traffic.
The 802.11 family consists of a series of half-duplex over-the-air modulation techniques that use the same basic protocol. The 802.11 protocol family employs carrier-sense multiple access with collision avoidance whereby equipment listens to a channel for other users (including non 802.11 users) before transmitting each frame (some use the term "packet", which may be ambiguous: "frame" is more technically correct).
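As a toy sketch of the listen-before-transmit idea only (this is not the actual 802.11 backoff algorithm; the Medium type, trySend, and the sample readings are invented for illustration):

data Medium = Idle | Busy deriving (Eq, Show)

-- Sense the channel before each attempt; transmit only when it is idle,
-- otherwise back off and sense again, up to a retry limit.
trySend :: [Medium] -> Int -> Either String String
trySend _ 0 = Left "gave up: retry limit reached"
trySend [] _ = Left "no more carrier-sense readings"
trySend (m:ms) tries
  | m == Idle = Right "frame transmitted"
  | otherwise = trySend ms (tries - 1)

For example, trySend [Busy, Busy, Idle] 5 transmits on the third sensing attempt.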
802.11-1997 was the first wireless networking standard in the family, but 802.11b was the first widely accepted one, followed by 802.11a, 802.11g, 802.11n, and 802.11ac. Other standards in the family (c–f, h, j) are service amendments used to extend the current scope of the existing standard; these amendments may also include corrections to a previous specification.
802.11b and 802.11g use the 2.4-GHz ISM band, operating in the United States under Part 15 of the U.S. Federal Communications Commission Rules and Regulations. 802.11n can also use that 2.4-GHz band. Because of this choice of frequency band, 802.11b/g/n equipment may occasionally suffer interference in the 2.4-GHz band from microwave ovens, cordless telephones, and Bluetooth devices. 802.11b and 802.11g control their interference and susceptibility to interference by using direct-sequence spread spectrum (DSSS) and orthogonal frequency-division multiplexing (OFDM) signaling methods, respectively.
802.11a uses the 5 GHz U-NII band which, for much of the world, offers at least 23 non-overlapping, 20-MHz-wide channels. This is an advantage over the 2.4-GHz, ISM-frequency band, which offers only three non-overlapping, 20-MHz-wide channels where other adjacent channels overlap (see: list of WLAN channels). Better or worse performance with higher or lower frequencies (channels) may be realized, depending on the environment. 802.11n and 802.11ax can use either the 2.4 GHz or 5 GHz band; 802.11ac uses only the 5 GHz band.
The segment of the radio frequency spectrum used by 802.11 varies between countries. In the US, 802.11a and 802.11g devices may be operated without a license, as allowed in Part 15 of the FCC Rules and Regulations. Frequencies used by channels one through six of 802.11b and 802.11g fall within the 2.4 GHz amateur radio band. Licensed amateur radio operators may operate 802.11b/g devices under Part 97 of the FCC Rules and Regulations, allowing increased power output but not commercial content or encryption.
|Generation||IEEE standard||Maximum link rate (Mbit/s)||Adopted||Radio frequency (GHz)|
|Wi‑Fi 6E||802.11ax||600 to 9608||2020||2.4/5/6|
|Wi‑Fi 5||802.11ac||433 to 6933||2014||5|
|Wi‑Fi 4||802.11n||72 to 600||2008||2.4/5|
|(Wi-Fi 3*)||802.11g||6 to 54||2003||2.4|
|(Wi-Fi 2*)||802.11a||6 to 54||1999||5|
|(Wi-Fi 1*)||802.11b||1 to 11||1999||2.4|
|(Wi-Fi 0*)||802.11||1 to 2||1997||2.4|
|*: (Wi-Fi 0, 1, 2, 3, are unbranded common usage.)|
In 2018, the Wi-Fi Alliance began using a consumer-friendly generation numbering scheme for the publicly used 802.11 protocols. Wi-Fi generations 1–6 refer to the 802.11b, 802.11a, 802.11g, 802.11n, 802.11ac, and 802.11ax protocols, in that order.
802.11 technology has its origins in a 1985 ruling by the U.S. Federal Communications Commission that released the ISM band for unlicensed use.
In 1991 NCR Corporation/AT&T (now Nokia Labs and LSI Corporation) invented a precursor to 802.11 in Nieuwegein, the Netherlands. The inventors initially intended to use the technology for cashier systems. The first wireless products were brought to the market under the name WaveLAN with raw data rates of 1 Mbit/s and 2 Mbit/s.
Vic Hayes, who held the chair of IEEE 802.11 for 10 years, and has been called the "father of Wi-Fi", was involved in designing the initial 802.11b and 802.11a standards within the IEEE. He, along with Bell Labs Engineer Bruce Tuch, approached IEEE to create a standard.
In 1999, the Wi-Fi Alliance was formed as a trade association to hold the Wi-Fi trademark under which most products are sold.
The major commercial breakthrough came with Apple's adoption of Wi-Fi for its iBook series of laptops in 1999. It was the first mass consumer product to offer Wi-Fi network connectivity, which Apple branded as AirPort. IBM followed a year later with its ThinkPad 1300 series in 2000.
|Frequency range, or type||PHY||Protocol||Release date||Frequency (GHz)||Bandwidth (MHz)||Stream data rate (Mbit/s)||Allowable MIMO streams||Modulation||Approximate range (indoor)||Approximate range (outdoor)|
|1–6 GHz||DSSS/FHSS||802.11-1997||Jun 1997||2.4||22||1, 2||—||DSSS, FHSS||20 m (66 ft)||100 m (330 ft)|
|HR-DSSS||802.11b||Sep 1999||2.4||22||1, 2, 5.5, 11||—||DSSS||35 m (115 ft)||140 m (460 ft)|
|OFDM||802.11a||Sep 1999||5||5/10/20||6, 9, 12, 18, 24, 36, 48, 54
(for 20 MHz bandwidth,
divide by 2 and 4 for 10 and 5 MHz)
|—||OFDM||35 m (115 ft)||120 m (390 ft)|
|802.11j||Nov 2004||4.9/5.0[D]||?||?|
|802.11p||Jul 2010||5.9||?||1,000 m (3,300 ft)|
|802.11y||Nov 2008||3.7[A]||?||5,000 m (16,000 ft)[A]|
|ERP-OFDM||802.11g||Jun 2003||2.4||38 m (125 ft)||140 m (460 ft)|
|Oct 2009||2.4/5||20||Up to 288.8[B]||4||MIMO-OFDM||70 m (230 ft)||250 m (820 ft)|
|40||Up to 600[B]|
|Dec 2013||5||20||Up to 346.8[B]||8||MIMO-OFDM||35 m (115 ft)||?|
|40||Up to 800[B]|
|80||Up to 1733.2[B]|
|160||Up to 3466.8[B]|
|Feb 2021||2.4/5/6||20||Up to 1147[F]||8||MIMO-OFDM||30 m (98 ft)||120 m (390 ft) [G]|
|40||Up to 2294[F]|
|80||Up to 4804[F]|
|80+80||Up to 9608[F]|
|mmWave||DMG||802.11ad||Dec 2012||60||2,160||Up to 6,757
|—||OFDM, single carrier, low-power single carrier||3.3 m (11 ft)||?|
|802.11aj||Apr 2018||45/60[C]||540/1,080||Up to 15,000
|4||OFDM, single carrier||?||?|
|EDMG||802.11ay||Est. March 2021||60||8000||Up to 20,000 (20 Gbit/s)||4||OFDM, single carrier||10 m (33 ft)||100 m (328 ft)|
|Sub-1 GHz IoT||TVHT||802.11af||Feb 2014||0.054–0.79||6–8||Up to 568.9||4||MIMO-OFDM||?||?|
|S1G||802.11ah||Dec 2016||0.7/0.8/0.9||1–16||Up to 8.67 (@2 MHz)||4||?||?|
|2.4 GHz, 5 GHz||WUR||802.11ba[E]||Oct 2021||2.4/5||4.06||0.0625, 0.25 (62.5 kbit/s, 250 kbit/s)||—||OOK (Multi-carrier OOK)||?||?|
|Light (Li-Fi)||IR||802.11-1997||Jun 1997||?||?||1, 2||—||PPM||?||?|
|?||802.11bb||Est. Jul 2022||60000-790000||?||?||—||?||?||?|
|802.11 Standard rollups|
|802.11-2007||Mar 2007||2.4, 5||Up to 54||DSSS, OFDM|
|802.11-2012||Mar 2012||2.4, 5||Up to 150[B]||DSSS, OFDM|
|802.11-2016||Dec 2016||2.4, 5, 60||Up to 866.7 or 6,757[B]||DSSS, OFDM|
|802.11-2020||Dec 2020||2.4, 5, 60||Up to 866.7 or 6,757[B]||DSSS, OFDM|
Main article: IEEE 802.11 (legacy mode)
The original version of the standard IEEE 802.11 was released in 1997 and clarified in 1999, but is now obsolete. It specified two net bit rates of 1 or 2 megabits per second (Mbit/s), plus forward error correction code. It specified three alternative physical layer technologies: diffuse infrared operating at 1 Mbit/s; frequency-hopping spread spectrum operating at 1 Mbit/s or 2 Mbit/s; and direct-sequence spread spectrum operating at 1 Mbit/s or 2 Mbit/s. The latter two radio technologies used microwave transmission over the Industrial Scientific Medical frequency band at 2.4 GHz. Some earlier WLAN technologies used lower frequencies, such as the U.S. 900 MHz ISM band.
Legacy 802.11 with direct-sequence spread spectrum was rapidly supplanted and popularized by 802.11b.
Main article: IEEE 802.11a-1999
802.11a, published in 1999, uses the same data link layer protocol and frame format as the original standard, but an OFDM based air interface (physical layer) was added.
It operates in the 5 GHz band with a maximum net data rate of 54 Mbit/s, plus error correction code, which yields a realistic net achievable throughput in the mid-20 Mbit/s range. It has seen widespread worldwide implementation, particularly within the corporate workspace.
Since the 2.4 GHz band is heavily used to the point of being crowded, using the relatively unused 5 GHz band gives 802.11a a significant advantage. However, this high carrier frequency also brings a disadvantage: the effective overall range of 802.11a is less than that of 802.11b/g. In theory, 802.11a signals are absorbed more readily by walls and other solid objects in their path due to their smaller wavelength, and, as a result, cannot penetrate as far as those of 802.11b. In practice, 802.11b typically has a higher range at low speeds (802.11b will reduce speed to 5.5 Mbit/s or even 1 Mbit/s at low signal strengths). 802.11a also suffers from interference, but locally there may be fewer signals to interfere with, resulting in less interference and better throughput.
Main article: IEEE 802.11b-1999
The 802.11b standard has a maximum raw data rate of 11 Mbit/s (Megabits per second) and uses the same media access method defined in the original standard. 802.11b products appeared on the market in early 2000, since 802.11b is a direct extension of the modulation technique defined in the original standard. The dramatic increase in throughput of 802.11b (compared to the original standard) along with simultaneous substantial price reductions led to the rapid acceptance of 802.11b as the definitive wireless LAN technology.
Devices using 802.11b experience interference from other products operating in the 2.4 GHz band. Devices operating in the 2.4 GHz range include microwave ovens, Bluetooth devices, baby monitors, cordless telephones, and some amateur radio equipment. As unlicensed intentional radiators in this ISM band, they must not interfere with and must tolerate interference from primary or secondary allocations (users) of this band, such as amateur radio.
Main article: IEEE 802.11g-2003
In June 2003, a third modulation standard was ratified: 802.11g. This works in the 2.4 GHz band (like 802.11b), but uses the same OFDM based transmission scheme as 802.11a. It operates at a maximum physical layer bit rate of 54 Mbit/s exclusive of forward error correction codes, or about 22 Mbit/s average throughput. 802.11g hardware is fully backward compatible with 802.11b hardware, and therefore is encumbered with legacy issues that reduce throughput by ~21% when compared to 802.11a.
The then-proposed 802.11g standard was rapidly adopted in the market starting in January 2003, well before ratification, due to the desire for higher data rates as well as reductions in manufacturing costs. By summer 2003, most dual-band 802.11a/b products became dual-band/tri-mode, supporting a and b/g in a single mobile adapter card or access point. Details of making b and g work well together occupied much of the lingering technical process; in an 802.11g network, however, the activity of an 802.11b participant will reduce the data rate of the overall 802.11g network.
Like 802.11b, 802.11g devices also suffer interference from other products operating in the 2.4 GHz band, for example, wireless keyboards.
In 2003, task group TGma was authorized to "roll up" many of the amendments to the 1999 version of the 802.11 standard. REVma or 802.11ma, as it was called, created a single document that merged 8 amendments (802.11a, b, d, e, g, h, i, j) with the base standard. Upon approval on 8 March 2007, 802.11REVma was renamed to the then-current base standard IEEE 802.11-2007.
Main article: IEEE 802.11n-2009
802.11n is an amendment that improves upon the previous 802.11 standards; its first draft of certification was published in 2006. The 802.11n standard was retroactively labelled as Wi-Fi 4 by the Wi-Fi Alliance. The standard added support for multiple-input multiple-output antennas (MIMO). 802.11n operates on both the 2.4 GHz and the 5 GHz bands. Support for 5 GHz bands is optional. Its net data rate ranges from 54 Mbit/s to 600 Mbit/s. The IEEE has approved the amendment, and it was published in October 2009. Prior to the final ratification, enterprises were already migrating to 802.11n networks based on the Wi-Fi Alliance's certification of products conforming to a 2007 draft of the 802.11n proposal.
In May 2007, task group TGmb was authorized to "roll up" many of the amendments to the 2007 version of the 802.11 standard. REVmb or 802.11mb, as it was called, created a single document that merged ten amendments (802.11k, r, y, n, w, p, z, v, u, s) with the 2007 base standard. In addition much cleanup was done, including a reordering of many of the clauses. Upon publication on 29 March 2012, the new standard was referred to as IEEE 802.11-2012.
Main article: IEEE 802.11ac
IEEE 802.11ac-2013 is an amendment to IEEE 802.11, published in December 2013, that builds on 802.11n. The 802.11ac standard was retroactively labelled as Wi-Fi 5 by the Wi-Fi Alliance. Changes compared to 802.11n include wider channels (80 or 160 MHz versus 40 MHz) in the 5 GHz band, more spatial streams (up to eight versus four), higher-order modulation (up to 256-QAM vs. 64-QAM), and the addition of Multi-user MIMO (MU-MIMO). The Wi-Fi Alliance separated the introduction of ac wireless products into two phases ("waves"), named "Wave 1" and "Wave 2". From mid-2013, the alliance started certifying Wave 1 802.11ac products shipped by manufacturers, based on the IEEE 802.11ac Draft 3.0 (the IEEE standard was not finalized until later that year). In 2016 Wi-Fi Alliance introduced the Wave 2 certification, to provide higher bandwidth and capacity than Wave 1 products. Wave 2 products include additional features like MU-MIMO, 160 MHz channel width support, support for more 5 GHz channels, and four spatial streams (with four antennas; compared to three in Wave 1 and 802.11n, and eight in IEEE's 802.11ax specification).
Main article: IEEE 802.11ad
IEEE 802.11ad is an amendment that defines a new physical layer for 802.11 networks to operate in the 60 GHz millimeter wave spectrum. This frequency band has significantly different propagation characteristics than the 2.4 GHz and 5 GHz bands where Wi-Fi networks operate. Products implementing the 802.11ad standard are being brought to market under the WiGig brand name. The certification program is now being developed by the Wi-Fi Alliance instead of the now defunct Wireless Gigabit Alliance. The peak transmission rate of 802.11ad is 7 Gbit/s.
IEEE 802.11ad is a protocol used for very high data rates (about 8 Gbit/s) and for short range communication (about 1–10 meters).
TP-Link announced the world's first 802.11ad router in January 2016.
The WiGig standard is not too well known, although it was announced in 2009 and added to the IEEE 802.11 family in December 2012.
Main article: IEEE 802.11af
IEEE 802.11af, also referred to as "White-Fi" and "Super Wi-Fi", is an amendment, approved in February 2014, that allows WLAN operation in TV white space spectrum in the VHF and UHF bands between 54 and 790 MHz. It uses cognitive radio technology to transmit on unused TV channels, with the standard taking measures to limit interference for primary users, such as analog TV, digital TV, and wireless microphones. Access points and stations determine their position using a satellite positioning system such as GPS, and use the Internet to query a geolocation database (GDB) provided by a regional regulatory agency to discover what frequency channels are available for use at a given time and position. The physical layer uses OFDM and is based on 802.11ac. The propagation path loss as well as the attenuation by materials such as brick and concrete is lower in the UHF and VHF bands than in the 2.4 GHz and 5 GHz bands, which increases the possible range. The frequency channels are 6 to 8 MHz wide, depending on the regulatory domain. Up to four channels may be bonded in either one or two contiguous blocks. MIMO operation is possible with up to four streams used for either space–time block code (STBC) or multi-user (MU) operation. The achievable data rate per spatial stream is 26.7 Mbit/s for 6 and 7 MHz channels, and 35.6 Mbit/s for 8 MHz channels. With four spatial streams and four bonded channels, the maximum data rate is 426.7 Mbit/s for 6 and 7 MHz channels and 568.9 Mbit/s for 8 MHz channels.
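As a quick sanity check on those figures, the aggregate 802.11af rate is simply the per-stream rate multiplied by the number of spatial streams and bonded channels. The short sketch below uses the rounded per-stream rates quoted above, so the products land slightly above the quoted totals.

```python
# Rough arithmetic check of the 802.11af maxima quoted above:
# aggregate rate = per-stream rate x spatial streams x bonded channels.
# The quoted per-stream figures are rounded, hence the small mismatch with 426.7 and 568.9 Mbit/s.

def aggregate_rate_mbps(per_stream_mbps: float, streams: int = 4, bonded_channels: int = 4) -> float:
    return per_stream_mbps * streams * bonded_channels

print(aggregate_rate_mbps(26.7))   # ~427 Mbit/s for 6 and 7 MHz channels (quoted as 426.7)
print(aggregate_rate_mbps(35.6))   # ~570 Mbit/s for 8 MHz channels (quoted as 568.9)
```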
IEEE 802.11-2016 which was known as IEEE 802.11 REVmc, is a revision based on IEEE 802.11-2012, incorporating 5 amendments (11ae, 11aa, 11ad, 11ac, 11af). In addition, existing MAC and PHY functions have been enhanced and obsolete features were removed or marked for removal. Some clauses and annexes have been renumbered.
Main article: IEEE 802.11ah
IEEE 802.11ah, published in 2017, defines a WLAN system operating at sub-1 GHz license-exempt bands. Due to the favorable propagation characteristics of low-frequency spectra, 802.11ah can provide improved transmission range compared with conventional 802.11 WLANs operating in the 2.4 GHz and 5 GHz bands. 802.11ah can be used for various purposes, including large-scale sensor networks, extended-range hotspots, and outdoor Wi-Fi for cellular traffic offloading, although the available bandwidth is relatively narrow. The protocol is intended to have power consumption competitive with low-power Bluetooth, at a much wider range.
Main article: IEEE 802.11ai
IEEE 802.11ai is an amendment to the 802.11 standard that added new mechanisms for a faster initial link setup time.
IEEE 802.11aj is a derivative of 802.11ad for use in the 45 GHz unlicensed spectrum available in some regions of the world (specifically China); it also provides additional capabilities for use in the 60 GHz band.
Alternatively known as China Millimeter Wave (CMMW).
IEEE 802.11aq is an amendment to the 802.11 standard that enables pre-association discovery of services. It extends some of the mechanisms in 802.11u that enabled device discovery, so that a station can further discover the services running on a device or provided by a network.
IEEE 802.11-2020, which was known as IEEE 802.11 REVmd, is a revision based on IEEE 802.11-2016 incorporating 5 amendments (11ai, 11ah, 11aj, 11ak, 11aq). In addition, existing MAC and PHY functions have been enhanced and obsolete features were removed or marked for removal. Some clauses and annexes have been added.
Main article: IEEE 802.11ax
IEEE 802.11ax is the successor to 802.11ac, marketed as Wi-Fi 6 (2.4 GHz and 5 GHz) and Wi-Fi 6E (6 GHz) by the Wi-Fi Alliance. It is also known as High Efficiency Wi-Fi, for the overall improvements to Wi-Fi 6 clients in dense environments. For an individual client, the maximum improvement in data rate (PHY speed) over the predecessor (802.11ac) is only 39%[a] (for comparison, this improvement was nearly 500%[b] for the predecessors).[c] Yet, even with this comparatively minor 39% figure, the goal was to provide 4 times the throughput-per-area[d] of 802.11ac (hence High Efficiency). The motivation behind this goal was the deployment of WLAN in dense environments such as corporate offices, shopping malls and dense residential apartments. This is achieved by means of a technique called OFDMA, which is essentially multiplexing in the frequency domain (as opposed to spatial multiplexing, as in 802.11ac). This is equivalent to cellular technology applied to Wi-Fi.
The IEEE 802.11ax‑2021 standard was approved on February 9, 2021.
Main article: IEEE 802.11ay
IEEE 802.11ay is a standard under development, also called EDMG: Enhanced Directional MultiGigabit PHY. It is an amendment that defines a new physical layer for 802.11 networks to operate in the 60 GHz millimeter wave spectrum. It is an extension of the existing 11ad, aiming to extend the throughput, range, and use cases. The main use cases include indoor operation and short-range communications, due to atmospheric oxygen absorption and the inability to penetrate walls. The peak transmission rate of 802.11ay is 40 Gbit/s. The main extensions include: channel bonding (2, 3 and 4 channels), MIMO (up to 4 streams) and higher modulation schemes. The expected range is 300–500 m.
IEEE 802.11ba Wake-up Radio (WUR) Operation is an amendment to the IEEE 802.11 standard that enables energy efficient operation for data reception without increasing latency. The target active power consumption to receive a WUR packet is less than 1 milliwatt and supports data rates of 62.5 kbit/s and 250 kbit/s. The WUR PHY uses MC-OOK (multicarrier OOK) to achieve extremely low power consumption.
Main article: IEEE 802.11be
IEEE 802.11be Extremely High Throughput (EHT) is the potential next amendment to the 802.11 IEEE standard, and will likely be designated as Wi-Fi 7. It will build upon 802.11ax, focusing on WLAN indoor and outdoor operation with stationary and pedestrian speeds in the 2.4 GHz, 5 GHz, and 6 GHz frequency bands.
Across all variations of 802.11, maximum achievable throughputs are quoted either from measurements under ideal conditions or as layer-2 data rates. These figures, however, do not apply to typical deployments in which data is transferred between two endpoints, of which at least one is typically connected to a wired infrastructure and the other is connected to that infrastructure via a wireless link.
This means that, typically, data frames pass an 802.11 (WLAN) medium and are being converted to 802.3 (Ethernet) or vice versa. Due to the difference in the frame (header) lengths of these two media, the application's packet size determines the speed of the data transfer. This means applications that use small packets (e.g., VoIP) create dataflows with high-overhead traffic (i.e., a low goodput). Other factors that contribute to the overall application data rate are the speed with which the application transmits the packets (i.e., the data rate) and, of course, the energy with which the wireless signal is received. The latter is determined by distance and by the configured output power of the communicating devices.
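To make the overhead effect concrete, here is a minimal sketch of the relationship between packet size and goodput; the 54 Mbit/s PHY rate and the 90 µs per-frame overhead are illustrative assumptions, not figures taken from the standard.

```python
# Minimal sketch of why small packets give low goodput: per-frame overhead (MAC/PHY headers,
# acknowledgements, interframe spacing) is roughly fixed, so it dominates when payloads are small.
# Both constants below are illustrative assumptions, not values from the standard.

PHY_RATE_MBPS = 54.0          # assumed nominal PHY rate
PER_FRAME_OVERHEAD_US = 90.0  # assumed fixed per-frame overhead (headers, ACK, interframe spacing)

def goodput_mbps(payload_bytes: int) -> float:
    """Approximate goodput for back-to-back frames carrying the given payload size."""
    payload_time_us = payload_bytes * 8 / PHY_RATE_MBPS   # 1 Mbit/s == 1 bit/us
    return payload_bytes * 8 / (payload_time_us + PER_FRAME_OVERHEAD_US)

for size in (64, 200, 1500):  # small VoIP-like packets versus a full-size Ethernet payload
    print(f"{size:5d} bytes -> {goodput_mbps(size):5.1f} Mbit/s goodput")
```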
The same references apply to the attached graphs that show measurements of UDP throughput. Each represents an average (UDP) throughput (please note that the error bars are there but barely visible due to the small variation) of 25 measurements. Each is with a specific packet size (small or large) and with a specific data rate (10 kbit/s – 100 Mbit/s). Markers for traffic profiles of common applications are included as well. These figures assume there are no packet errors, which, if occurring, will lower the transmission rate further.
See also: List of WLAN channels
802.11b, 802.11g, and 802.11n-2.4 utilize the 2.400–2.500 GHz spectrum, one of the ISM bands. 802.11a, 802.11n, and 802.11ac use the more heavily regulated 4.915–5.825 GHz band. These are commonly referred to as the "2.4 GHz and 5 GHz bands" in most sales literature. Each spectrum is sub-divided into channels with a center frequency and bandwidth, analogous to how radio and TV broadcast bands are sub-divided.
The 2.4 GHz band is divided into 14 channels spaced 5 MHz apart, beginning with channel 1, which is centered on 2.412 GHz. The latter channels have additional restrictions or are unavailable for use in some regulatory domains.
The channel numbering of the 5.725–5.875 GHz spectrum is less intuitive due to the differences in regulations between countries. These are discussed in greater detail on the list of WLAN channels.
In addition to specifying the channel center frequency, 802.11 also specifies (in Clause 17) a spectral mask defining the permitted power distribution across each channel. The mask requires the signal to be attenuated a minimum of 20 dB from its peak amplitude at ±11 MHz from the center frequency, the point at which a channel is effectively 22 MHz wide. One consequence is that stations can use only every fourth or fifth channel without overlap.
Availability of channels is regulated by country, constrained in part by how each country allocates radio spectrum to various services. At one extreme, Japan permits the use of all 14 channels for 802.11b, and channels 1–13 for 802.11g/n-2.4. Other countries such as Spain initially allowed only channels 10 and 11, and France allowed only 10, 11, 12, and 13; however, Europe now allows channels 1 through 13. North America and some Central and South American countries allow only channels 1 through 11.
Since the spectral mask defines power output restrictions only up to ±11 MHz from the center frequency (to be attenuated by −50 dBr), it is often assumed that the energy of the channel extends no further than these limits. It is more correct to say that the overlapping signal on any channel should be sufficiently attenuated to interfere only minimally with a transmitter on any other channel, given the separation between channels. Due to the near–far problem, a transmitter can impact (desensitize) a receiver on a "non-overlapping" channel, but only if it is close to the victim receiver (within a meter) or operating above allowed power levels. Conversely, a sufficiently distant transmitter on an overlapping channel can have little to no significant effect.
Confusion often arises over the amount of channel separation required between transmitting devices. 802.11b was based on direct-sequence spread spectrum (DSSS) modulation and utilized a channel bandwidth of 22 MHz, resulting in three "non-overlapping" channels (1, 6, and 11). 802.11g was based on OFDM modulation and utilized a channel bandwidth of 20 MHz. This occasionally leads to the belief that four "non-overlapping" channels (1, 5, 9, and 13) exist under 802.11g. However, this is not the case, as per the clause on channel numbering of operating channels in IEEE Std 802.11 (2012), which states, "In a multiple cell network topology, overlapping and/or adjacent cells using different channels can operate simultaneously without interference if the distance between the center frequencies is at least 25 MHz.", together with Figure 18-13.
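For illustration, the sketch below computes 2.4 GHz channel centre frequencies (channels 1–13 are spaced 5 MHz apart starting at 2.412 GHz) and applies the 25 MHz centre-to-centre separation quoted above as the non-overlap criterion; the threshold is taken directly from the quoted clause, everything else is a simplification.

```python
# Sketch of 2.4 GHz channel spacing: channels 1-13 are centred at 2412 MHz + 5 MHz * (n - 1),
# and the quoted clause treats channels as non-overlapping when their centre frequencies are
# at least 25 MHz apart (channel 14, used only in Japan, sits apart at 2484 MHz and is ignored here).

def centre_mhz(channel: int) -> int:
    return 2412 + 5 * (channel - 1)

def non_overlapping(ch_a: int, ch_b: int, min_separation_mhz: int = 25) -> bool:
    return abs(centre_mhz(ch_a) - centre_mhz(ch_b)) >= min_separation_mhz

print(centre_mhz(1), centre_mhz(6), centre_mhz(11))   # 2412 2437 2462
print(non_overlapping(1, 6))   # True: 25 MHz apart, the classic 1/6/11 plan
print(non_overlapping(1, 5))   # False: only 20 MHz apart, so 1/5/9/13 fails the 25 MHz criterion
```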
The technical overlap of the channels does not, however, mean that overlapping channels should never be used. The amount of inter-channel interference seen in a configuration using channels 1, 5, 9, and 13 (which is permitted in Europe, but not in North America) is barely different from that of a three-channel configuration, but with an entire extra channel available.
However, overlap between channels with more narrow spacing (e.g. 1, 4, 7, 11 in North America) may cause unacceptable degradation of signal quality and throughput, particularly when users transmit near the boundaries of AP cells.
IEEE uses the phrase regdomain to refer to a legal regulatory region. Different countries define different levels of allowable transmitter power, time that a channel can be occupied, and different available channels. Domain codes are specified for the United States, Canada, ETSI (Europe), Spain, France, Japan, and China.
Most Wi-Fi certified devices default to regdomain 0, which means least common denominator settings, i.e., the device will not transmit at a power above the allowable power in any nation, nor will it use frequencies that are not permitted in any nation.
The regdomain setting is often made difficult or impossible to change so that the end-users do not conflict with local regulatory agencies such as the United States' Federal Communications Commission.
The datagrams are called frames. Current 802.11 standards specify frame types for use in the transmission of data as well as management and control of wireless links.
Frames are divided into very specific and standardized sections. Each frame consists of a MAC header, payload, and frame check sequence (FCS). Some frames may not have a payload.
|Field||Frame Control||Duration/ID||Address 1||Address 2||Address 3||Sequence Control||Address 4||QoS Control||HT Control||Frame Body||Frame Check Sequence|
|Length (Bytes)||2||2||6||6||6||0, or 2||6||0, or 2||0, or 4||Variable||4|
The first two bytes of the MAC header form a frame control field specifying the form and function of the frame. The frame control field is subdivided into a number of sub-fields, including the protocol version, the frame type and subtype, and several single-bit control flags.
The next two bytes are reserved for the Duration ID field, indicating how long the frame's transmission will take so other devices know when the channel will be available again. This field can take one of three forms: Duration, Contention-Free Period (CFP), and Association ID (AID).
An 802.11 frame can have up to four address fields. Each field can carry a MAC address. Address 1 is the receiver, Address 2 is the transmitter, and Address 3 is used for filtering purposes by the receiver. Address 4 is only present in data frames transmitted between access points in an Extended Service Set or between intermediate nodes in a mesh network.
The remaining fields of the header are the Sequence Control, QoS Control, and HT Control fields.
The payload or frame body field is variable in size, from 0 to 2304 bytes plus any overhead from security encapsulation, and contains information from higher layers.
The Frame Check Sequence (FCS) is the last four bytes in the standard 802.11 frame. Often referred to as the Cyclic Redundancy Check (CRC), it allows for integrity checks of retrieved frames. As frames are about to be sent, the FCS is calculated and appended. When a station receives a frame, it can calculate the FCS of the frame and compare it to the one received. If they match, it is assumed that the frame was not distorted during transmission.
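The following minimal sketch illustrates the compute-append-verify idea described above, using a generic CRC-32 as a stand-in; the real 802.11 FCS is a 32-bit CRC with specific bit-ordering and complementing rules that are glossed over here.

```python
# Illustrative sketch of the FCS check described above, using a generic CRC-32 as a stand-in.
# (The real 802.11 FCS has specific bit-ordering rules; this only shows the
# "compute on send, recompute and compare on receive" idea.)
import zlib

def append_fcs(header_and_body: bytes) -> bytes:
    fcs = zlib.crc32(header_and_body)
    return header_and_body + fcs.to_bytes(4, "little")

def fcs_ok(frame: bytes) -> bool:
    data, received_fcs = frame[:-4], int.from_bytes(frame[-4:], "little")
    return zlib.crc32(data) == received_fcs

frame = append_fcs(b"\x08\x01" + b"example payload")
print(fcs_ok(frame))                           # True: frame assumed undistorted
print(fcs_ok(frame[:-5] + b"X" + frame[-4:]))  # False: last payload byte corrupted in transit
```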
Management frames are not always authenticated, and allow for the maintenance, or discontinuance, of communication. Common 802.11 management frame subtypes include beacon, probe request and response, authentication, association request and response, disassociation, and deauthentication frames.
The body of a management frame consists of frame-subtype-dependent fixed fields followed by a sequence of information elements (IEs).
An IE commonly consists of a one-byte element ID, a one-byte length field, and a variable-length data field.
Control frames facilitate the exchange of data frames between stations. Common 802.11 control frames include Request to Send (RTS), Clear to Send (CTS), and Acknowledgement (ACK) frames.
Data frames carry packets from web pages, files, etc. within the body. The body begins with an IEEE 802.2 header, with the Destination Service Access Point (DSAP) specifying the protocol, followed by a Subnetwork Access Protocol (SNAP) header if the DSAP is hex AA, with the organizationally unique identifier (OUI) and protocol ID (PID) fields specifying the protocol. If the OUI is all zeroes, the protocol ID field is an EtherType value. Almost all 802.11 data frames use 802.2 and SNAP headers, and most use an OUI of 00:00:00 and an EtherType value.
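A small sketch of that header-parsing logic follows, assuming the standard LLC/SNAP layout (DSAP, SSAP, control byte, then a 3-byte OUI and a 2-byte protocol ID); the function and field names are illustrative, not from any particular library.

```python
# Minimal sketch of parsing the LLC/SNAP header at the start of an 802.11 data-frame body,
# following the description above: DSAP 0xAA indicates a SNAP header, whose OUI and protocol ID
# identify the encapsulated protocol (an EtherType when the OUI is 00:00:00).

def parse_llc_snap(body: bytes):
    dsap, ssap, control = body[0], body[1], body[2]
    if dsap != 0xAA:
        return {"dsap": dsap, "snap": None}     # not SNAP-encapsulated
    oui = body[3:6]
    pid = int.from_bytes(body[6:8], "big")
    return {
        "dsap": dsap,
        "ssap": ssap,
        "oui": oui.hex(":"),
        # With an all-zero OUI, the protocol ID is an EtherType (e.g. 0x0800 = IPv4).
        "ethertype": pid if oui == b"\x00\x00\x00" else None,
        "payload": body[8:],
    }

example_body = bytes([0xAA, 0xAA, 0x03, 0x00, 0x00, 0x00, 0x08, 0x00]) + b"...IP packet..."
print(parse_llc_snap(example_body))
```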
Similar to TCP congestion control on the internet, frame loss is built into the operation of 802.11. To select the correct transmission speed or Modulation and Coding Scheme, a rate control algorithm may test different speeds. The actual packet loss rate of access points varies widely for different link conditions. There are variations in the loss rate experienced on production access points, between 10% and 80%, with 30% being a common average. It is important to be aware that the link layer should recover these lost frames: if the sender does not receive an Acknowledgement (ACK) frame, the frame will be resent.
Within the IEEE 802.11 Working Group, a large number of IEEE Standards Association standards and amendments exist.
802.11F and 802.11T are recommended practices rather than standards and are capitalized as such.
802.11m is used for standard maintenance. 802.11ma was completed for 802.11-2007, 802.11mb for 802.11-2012, 802.11mc for 802.11-2016, and 802.11md for 802.11-2020.
Both the terms "standard" and "amendment" are used when referring to the different variants of IEEE standards.
As far as the IEEE Standards Association is concerned, there is only one current standard; it is denoted by IEEE 802.11 followed by the date published. IEEE 802.11-2020 is the only version currently in publication, superseding previous releases. The standard is updated by means of amendments. Amendments are created by task groups (TG). Both the task group and their finished document are denoted by 802.11 followed by one or two lower case letters, for example, IEEE 802.11a or IEEE 802.11ax. Updating 802.11 is the responsibility of task group m. In order to create a new version, TGm combines the previous version of the standard and all published amendments. TGm also provides clarification and interpretation to industry on published documents. New versions of the IEEE 802.11 were published in 1999, 2007, 2012, 2016, and 2020.
Various terms in 802.11 are used to specify aspects of wireless local-area networking operation and may be unfamiliar to some readers.
For example, Time Unit (usually abbreviated TU) is used to indicate a unit of time equal to 1024 microseconds; the default beacon interval of 100 TU, for instance, corresponds to 102.4 ms. Numerous time constants are defined in terms of TU (rather than the nearly equal millisecond).
Also, the term "Portal" is used to describe an entity that is similar to an 802.1H bridge. A Portal provides access to the WLAN by non-802.11 LAN STAs.
In 2001, a group from the University of California, Berkeley presented a paper describing weaknesses in the 802.11 Wired Equivalent Privacy (WEP) security mechanism defined in the original standard; they were followed by Fluhrer, Mantin, and Shamir's paper titled "Weaknesses in the Key Scheduling Algorithm of RC4". Not long after, Adam Stubblefield and AT&T publicly announced the first verification of the attack. In the attack, they were able to intercept transmissions and gain unauthorized access to wireless networks.
The IEEE set up a dedicated task group to create a replacement security solution, 802.11i (previously, this work was handled as part of a broader 802.11e effort to enhance the MAC layer). The Wi-Fi Alliance announced an interim specification called Wi-Fi Protected Access (WPA) based on a subset of the then-current IEEE 802.11i draft. These started to appear in products in mid-2003. IEEE 802.11i (also known as WPA2) itself was ratified in June 2004, and uses the Advanced Encryption Standard (AES), instead of RC4, which was used in WEP. The modern recommended encryption for the home/consumer space is WPA2 (AES Pre-Shared Key), and for the enterprise space is WPA2 along with a RADIUS authentication server (or another type of authentication server) and a strong authentication method such as EAP-TLS.
In January 2005, the IEEE set up yet another task group "w" to protect management and broadcast frames, which previously were sent unsecured. Its standard was published in 2009.
In December 2011, a security flaw was revealed that affects some wireless routers with a specific implementation of the optional Wi-Fi Protected Setup (WPS) feature. While WPS is not a part of 802.11, the flaw allows an attacker within the range of the wireless router to recover the WPS PIN and, with it, the router's 802.11i password in a few hours.
In late 2014, Apple announced that its iOS 8 mobile operating system would scramble MAC addresses during the pre-association stage to thwart retail footfall tracking made possible by the regular transmission of uniquely identifiable probe requests.
Wi-Fi users may be subjected to a Wi-Fi deauthentication attack to eavesdrop, attack passwords, or force the use of another, usually more expensive access point.
This amendment defines standardized modifications to both the IEEE 802.11 physical layers (PHY) and the IEEE 802.11 medium access control layer (MAC) that enables at least one mode of operation capable of supporting a maximum throughput of at least 20 gigabits per second (measured at the MAC data service access point), while maintaining or improving the power efficiency per station.
If a candidate in an election does not achieve a majority of first preference votes, the winner is determined by the allocation of subsequent second, third and so on preferences.
How Preferential voting works:
There are two systems of preferential voting: Full preferential and optional preferential voting. With full preferential voting, voters are required to indicate their first preference by placing a “1” against a candidate’s name, then make a second preference and so on for the number of candidates on the ballot paper. Optional preferential voting only requires the voter to make a first preference.
If a candidate does not get an absolute majority of first preference votes, then the candidate with the least number of votes is eliminated and those votes are allocated to the other candidates according to the number of second preference votes. If no majority has been achieved, the next candidate with the least number of primary votes is eliminated and those votes are allocated to other candidates according to the second preference or third preference and so on if the second preferences have been exhausted.
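A minimal sketch of that elimination-and-redistribution count, assuming full preferential ballots expressed as ordered lists of candidates, might look like this:

```python
# Minimal sketch of the elimination-and-redistribution count described above, assuming full
# preferential ballots given as ordered lists of candidates (first preference first).
from collections import Counter

def preferential_count(ballots):
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        # Each ballot counts for its highest-ranked candidate who has not yet been eliminated.
        tally = Counter(next(c for c in ballot if c in remaining) for ballot in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):                  # absolute majority reached
            return leader, dict(tally)
        remaining.remove(min(tally, key=tally.get))   # eliminate the lowest-polling candidate

# A leads on first preferences, but C's second preferences elect B after C is eliminated.
ballots = [["A", "B", "C"]] * 45 + [["B", "C", "A"]] * 40 + [["C", "B", "A"]] * 15
print(preferential_count(ballots))   # ('B', {'A': 45, 'B': 55})
```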
Supporters of preferential voting say:
- The winning candidate is the most preferred or least disliked candidate by the entire electorate.
- Voters who support minor parties know that their votes will count towards deciding the winner.
- Parties sharing overlapping philosophies and policies can assist each other to win.
Opponents of preferential voting say:
- Vote counting is complex under current manual procedures.
- The process is costly and time consuming, potentially delaying a result.
- Some people don’t like having to choose more than one candidate.
- Preferential voting makes voting more difficult. Some people do not like having to rank their preference of candidates. They either neglect to do so or make mistakes, leading to higher levels of informal voting.
- Some people do not like being forced to make a preference for candidates they do not support.
- A candidate not supported by most of the electorate could still win.
What do you think? Have your say. Join the conversation below.
Drawing of Electoral Boundaries
Sometimes people find themselves in a new electorate when voting comes around. That’s because electoral boundaries - approximately 100,000 people within an area - are sometimes changed to reflect changes in the movement of people and the demographic makeup of the area. Electoral authorities regularly hold hearings to review boundaries. Political parties are not allowed to participate in the hearings so as to avoid the perception of manipulation of the system in their favour. Some people do not think electoral boundaries are being decided fairly.
What do you think? Have your say. Join the conversation below.
How we vote
Postal voting is designed for people who cannot attend a polling place in their electorate.
How voting works:
Once a person has voted, the ballot paper is placed in a sealed envelope which does not contain any voter identification, and then is placed in another sealed envelope that contains the name and address of the voter. When it is received by the electoral authority, the outside envelope is used to confirm the person has voted. The ballot paper is removed from the inside envelope and placed in a pile for counting. The system is designed so the identity of the voter cannot be linked to the ballot paper, thus ensuring the person's vote is anonymous.
Opponents of postal voting claim that the system is open to abuse because votes can be tampered with and there is nothing stopping the voter’s personal details being copied.
What do you think? Have your say. Join the conversation below.
Early voting is officially known as 'Pre-Poll' voting -- voting before the actual day of the election or poll. When voting early, voters are required by law to give a valid reason for their request to vote before election day.
How early voting works:
For the 2019 federal election, early-voting or pre-poll voting centres opened in each electorate three weeks before election day in metropolitan areas and two weeks before election day in rural areas.
According to AEC figures, 2,980,498 people voted early for the 2016 federal election. In 2019, 4,766,853 people voted early -- a 60% increase in early voting compared with the 2016 election.
Vote Australia recognises that early voting is convenient for those who need it. Should all voters be allowed to vote before all issues have been fully debated?
What do you think? Have your say. Join the conversation below.
Only, repeat ONLY, Borda-style counts guarantee fair counting of preferential voting elections for single or multimember electorates. Historically, Borda counts have been rejected (from the 18th to the 20th century, e.g. "Arrow's Impossibility Theorem") because they will contradict an "Absolute Majority" – but that's only when the voters make it clear that the "absolute" majority is weak AND that preferences prove that there is a MORE-preferred candidate.
E.g., if Candidate A gets 51% of first preferences and 49% of last preferences, but Candidate D (of 4 candidates) gets 0% of 1st preferences and 100% of 2nd preferences (extremely unlikely, but this is a reductio-ad-absurdum proof), then Candidate D is clearly the preferred candidate with an average preference of 2.000, which is much closer to a unanimous 1st preference than A's average of 2.470. Another way of expressing that is that A is only 51.000% of the way to winning unanimously, whereas D is (1-(2.000-1)/(4-1)) = 66.667% of the way towards unanimous 1st preferences (two steps out of the 3 steps from last of 4 up to 2nd out of 4). That's my DCAP count, where DCAP = 100x(1-(PrefAv-1)/(Candidates-1)). The applicable Borda count is Borda = (DCAP/100)x(Candidates-1)xVoters = 153 for A for 100 voters, and 200 for D in the above example.
DCAP easily accepts Partial Preferential Votes and SPLIT Partial Preferential Votes (e.g., if there are 9 candidates and a voter marks 1st, 2nd & 3rd preferences and 8th & 9th preferences, then DCAP "normalises" that vote by filling the 4 empty preference boxes with the average (5.5) of the 4 missing numbers (4, 5, 6 & 7)). Further, DCAP can often correct voters' errors, such as omitting a number or duplicating a number, provided that the voter's intention is logically clear. "Normalising" ensures that every vote carries exactly the same weight as every other vote.
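For concreteness, here is a small sketch of the DCAP calculation as I read the description above (the function and variable names are mine, not from any official DCAP implementation): partial ballots are normalised by giving each unranked candidate the average of the missing rank numbers, and DCAP = 100 × (1 − (APC − 1)/(Candidates − 1)).

```python
# Sketch of the DCAP idea as described above (a reconstruction from the comment, not an
# official DCAP implementation): partial ballots are "normalised" by giving every unranked
# candidate the average of the missing rank numbers, each candidate's Average Preference Count
# (APC) is the mean rank received, and DCAP = 100 * (1 - (APC - 1) / (candidates - 1)).

def normalise(ballot: dict, candidates: list) -> dict:
    """ballot maps candidate -> rank for only the preferences the voter actually filled in."""
    missing = [r for r in range(1, len(candidates) + 1) if r not in ballot.values()]
    fill = sum(missing) / len(missing) if missing else 0.0
    return {c: ballot.get(c, fill) for c in candidates}

def dcap_scores(ballots: list, candidates: list) -> dict:
    full = [normalise(b, candidates) for b in ballots]
    apc = {c: sum(b[c] for b in full) / len(full) for c in candidates}
    return {c: 100 * (1 - (apc[c] - 1) / (len(candidates) - 1)) for c in candidates}

# Split partial preferential ballot from the example: 9 candidates, ranks 1, 2, 3, 8 and 9 given;
# the four unranked candidates each receive 5.5, the average of the missing ranks 4, 5, 6 and 7.
print(normalise({"A": 1, "B": 2, "C": 3, "H": 8, "I": 9}, list("ABCDEFGHI")))
print(dcap_scores([{"A": 1}] * 45 + [{"B": 1, "C": 2, "A": 3}] * 55, ["A", "B", "C"]))
```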
DCAP is the “Candidate” version count for single-member electorates, whereas DPAP is the “Party” version which allows for fair proportional representation on a Party basis in multi-member electorates.
With regards to computer entry, AEC always enters every vote on their computer system and that is what determines the result. Doing an election night manual count would give enough information from 1st and 2nd preferences to predict a PLERS result.
The chapter I am currently writing uses PLERS Simplified for political elections, which simplifies the count process. For Reps elections 1st pref gets a 1.0, 2nd 0.9, 3rd 0.8, etc till the 11th and all subsequent getting zero (0.0). It’s a simple process, you just add up the vote value and rank the candidates.
Senate is a little more complex but follows similar lines.
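A rough sketch of the "PLERS Simplified" scoring as described in the comment above follows (this is a reconstruction from the description, not the actual PLERS software):

```python
# Rough sketch of the "PLERS Simplified" scoring described above (a reconstruction from the
# comment, not the actual PLERS software): a 1st preference scores 1.0, a 2nd 0.9, a 3rd 0.8,
# and so on, with the 11th and all later preferences scoring zero; candidates are ranked by total.
from collections import defaultdict

def plers_simplified(ballots):
    """Each ballot is a list of candidates in preference order (first preference first)."""
    scores = defaultdict(float)
    for ballot in ballots:
        for position, candidate in enumerate(ballot):
            scores[candidate] += max(0.0, 1.0 - 0.1 * position)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(plers_simplified([["A", "C", "B"], ["B", "C", "A"], ["C", "B", "A"]]))
# C scores highest (1.0 + 0.9 + 0.9), then B, then A
```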
I have demonstrated, with clear numerical examples, how the PR system you advocate can get it WRONG, but that those problems can be reduced by adding Preferential Voting and, better still, even totally eliminated by counting the votes via DPAP. DCAP/DPAP totally eliminates the problems of electing Parties or Candidates contrary to voters' collective preferences. Without PV, PR totally ignores everything except first preferences, and that risks unfair results.
DPAP is the ONLY guaranteed-fair vote counting method that allows you, John, to vote only for the party you want, while allowing others to use full or partial or split preferential voting as they see fit, and yet every vote still carries exactly the same value. The Borda Count comes close, but can't handle partial preferential voting. PLERS comes closer, but it does NOT handle partial preferential voting fairly because it does not count partial preferential votes linearly – unless Erik has changed his system to overcome that fault since I pointed it out some months ago in this forum.
DCAP facilitates high security against electoral fraud. It suits transparent block-chain security with a complete audit trail from paper ballot and/or electronic screen voting all the way from the polling booth to the final declaration. Being totally linear, the results from different polling places, as extremely small digital files, are easily and instantly aggregated as highly secure digital files that are very difficult to alter because they can be checked against the individual votes retained in the polling place machines. The fact that voting machine fraud is widespread in some countries simply highlights the need for legislation for machines and procedures to be rigidly controlled. As outlined in links given previously, individual voters can choose to receive anonymous receipts showing their actual vote as scanned and/or as approved on screen, so that they can then check that their vote was actually correctly recorded and counted. Further, the online semi-real-time output summaries from individual polling stations can be made transparent to the media, public, candidates and parties. All this makes voting fraud difficult and auditing easy.
The fact is DCAP/DPAP can fix PR’s serious fault that it can give parties more seats than they deserve – as I have documented. So, your claim that: “(The Israeli system) is a fair system” is clearly not necessarily true. Also, while “Voters would be very reluctant to vote for a party that could form a coalition with a party they despise. (and) Pre-election promises about forming coalitions are usually widely known and adhered to,” there are plenty of Israeli and Kiwi voters, who despair over parties and candidates betraying their trust – plus Aussie voters betrayed where promised policies are ignored etc.
While traditional Preferential Vote counting is a slow error-prone process because candidates are eliminated and preferences distributed (often more than 100 times in a mathematically chaotic process with numerous unpredictable tipping points) DPAP requires no time-wasting sifting through preferences. DPAP never distributes preferences – but all preferences are taken into account fairly. DPAP never “eliminates” any Candidate or Party in the count, all are accurately rated into a hierarchy. Everything is decided in a quick efficient linear manner.
In summary, if you want to guarantee a fair and just level playing field, then Preferential Voting using DPAP's fair counting system is required in a Proportional Representation system, even when voting for a large number of representatives. While voters wanting to ensure a fair and just result need to fill in more preferences, voters have the option to vote as little or as much as they choose, and all votes are counted equally.
Preferential voting for multiple seats in a single electorate using proportional representation (PR), would not achieve much at all. There would hardly be any difference in outcome compared to voting for a single candidate.
I agree that PR is difficult to achieve. It would require a constitutional change in Australia. I do not agree that it is difficult to implement. It would make life easier for the voters, the AEC, and government. And as far as fairness is concerned – it is unbeatable.
I do think your PLERS is a fairer system, compared to existing preferential voting and other alternatives. However, it requires a spreadsheet to process results. The existing system has a method that can be counted manually by stacking and eliminating votes in a process that is easily checked and monitored – by people. It is slow, but traceable.
Your method (PLERS) requires data entry into a spreadsheet – which I guarantee you is a much slower and more error-prone process. Preferably this would be done on 3 spreadsheets simultaneously, so that the results can be compared. If they don't match – start again. If you were to combine PLERS with voting for multiple seats in a single electorate using proportional representation, i.e. all of Australia in a federal election, you would also have to combine spreadsheets from regional counting centres into a country-wide spreadsheet. Another error-prone level of complexity. Mistakes are inevitable and voters would lose confidence. I foresee very long wait times before election results are confirmed.
PLERS would be OK if we got the input for the spreadsheets directly from a voting machine. However voting machines are not allowed in Australia – and I agree with that. It is too easy to manipulate – as is the merging of spreadsheets.
Forgive me for my opinion, but I still dislike any form of preferential voting – even PLERS, because it is simply too complex and achieves nothing in a PR system.
1. This is not my system. This is a system used in many countries, Israel among others, as you have stated. The Israeli system does create a complex of parties with – in our eyes – ‘unstable’ governments. The parties and their policies are a direct reflection of the diverse opinions of the citizens of Israel. It is a fair system. Sometimes this means failures to govern, and a re-election is required. But when it works – it works very well.
2. I retract my ‘THAT IS IMPOSSIBLE’. It is possible, though extremely unlikely. Your example states a party with 30% of the vote may be considered by 51 to 70% of the voters to be the worst possible choice. Well then, the parties these ‘51 to 70%’ voted for can form a coalition. But realistically, the opinions of voters are reflected in the parties they vote for – otherwise why vote for them. No party would form a coalition with a party that has alienated itself from more than 50% of the voters. Voters would be very reluctant to vote for a party that could form a coalition with a party they despise. Pre-election promises about forming coalitions are usually widely known and adhered to. Theoretically it can happen – but in practice it hardly ever does, and when it happens the government does not last long – and a re-election is required – like in Israel.
Preferential voting is simply not required in a proportional representation system when voting for a large group of representatives. We do not have to sift through preferences per regional seat to determine a winner per seat. We just vote for one candidate by ticking his name on a list. If party X has 30% of the vote – they get 30% of the seats – fair and transparent. And simple to count.
You have not addressed that criticism at all.
As I stated below, “Clearly it is ‘flying blind’ to ignore preferences because it risks electing the worst party to power.” It seems to me that ignoring the real possibility that your system can sometimes elect the wrong government is vincible ignorance.
Why do you ignore that possibility without addressing the detail of the argument?
You present very good arguments re one large proportionally elected government, and I present good arguments to improve it, yet you have not supported your claim that “THAT IS IMPOSSIBLE”. I detailed how it is possible. You have made that bold false assertion without backup. Where are your figures to show that you are right?
Note that the Israelis use the same system that you propose. They have just had another coalition government collapse and are headed for yet another election.
Why? We simply don’t know for sure.
But it could be that one or more of the parties in coalition differ widely on policies and many of the supporters of each of the coalition parties regard other coalition parties as having unacceptable policies. However, if preferential voting were used together with the proportional system, such problems should be far less likely giving more stable democracy.
You say: ‘proportional representation is capable of electing a government that the majority of voters do NOT want’. THAT IS IMPOSSIBLE. This is the popular vote! All votes are counted equally, and no votes are discarded. It is transparent and fair – and completely clear who has won the election. Only a party or coalition of parties that have more than 50% of the votes will win the election. The people’s mandate is perfectly clear.
You say my vote will not be discarded. In a federal election, if I vote in my REGIONAL electorate for party B and party A wins, my vote is discarded in my regional electorate – it does not count for the rest of Australia, because of this evil division into regional electorates. There should be just one electorate – ALL OF AUSTRALIA. Why should I be limited to the local fools running for government? I want to vote for someone I know in Melbourne, thousands of kilometers from my regional electorate – because I know this person will do a better job. It is a federal election – for all of Australia. I live in Australia. My preferred candidate lives in Australia – and he just happens to be the very best chance to get my regional issues solved – even though he lives way off in Melbourne.
You are confused if you think first-past-the-post voting is the same as proportional representation.
And preference voting is just silly, far too complex. Give the voters a break. Keep it simple. Just vote for the one person you think will do the best job. One tick and you’re done.
I agree that the system needs changing because the current system is far from ideal. But while your proposal has the merit of a larger electorate, which allows better proportionality, it suffers from an inherent flaw that, like the current system, is quite capable of electing a government that the majority of voters do NOT want.
But before I prove that claim, let me correct a misunderstanding re actual practice:
Your vote is NOT discarded when you only vote for a single candidate and give no other preferences. That is officially discouraged because it gives your vote less value: it fails to take advantage of the ability to specify second and subsequent preferences if your 1st preference candidate or party does not win election. However, such votes ARE actually counted and are followed as far as it is possible to interpret the voter's intention – as also pointed out by Erik Jochimsen a week or so ago.
You write: “I want my vote to be of equal value to any other vote”. However, your approach advocates scrapping preferential voting, which then becomes functionally first-past-the-post voting used to determine proportional representation – and that can significantly reduce the value of your vote, as shown below. The fact is that the system you propose, like Australia's current system, is capable of electing a government that a majority of voters actually oppose. That is not equal value for each vote.
Current systems, like your proposal, are effectively obsessed with winners, obsessed with 1st (or only) preferences and totally ignore voters’ last preferences – preferences which may be just as significant as first preferences. Now sometimes the system you propose, and Australia’s current system, gets it right and comes up with a reasonable compromise – but both systems are inherently UNABLE to guarantee a fair result (as “Arrow’s Impossibility Theorem” claims). Now while Arrow is correct in those cases, Arrow was and is wrong because he failed to recognise that Borda and DCAP, an improved version of Borda, actually guarantee a fair result.
This is easily proved by a simple example: Parties A, B & C contesting 100 seats allocated proportionally, A gets 40%, B=35% and C=25% of the votes. So, A gets 40 seats, B 35, and C 25. Simple. Easy. But is it always fair?
Not necessarily. It can be anywhere between perfectly fair and grossly unfair.
Consider the exact same primary votes a above but with preferential voting. Now consider three extreme possibilities that clearly show why preferential voting is essential for a fair result – and why Borda/DCAP is the only vote-counting system to guarantee a fair count.
Three extreme possibilities, amongst many others, include:
1. All Party B and C voters vote Party A last, while A & B voters give all 2nd preferences to C.
2. All Party C and A voters vote Party B last, while B & C voters give all 2nd preferences to A.
3. All Party A and B voters vote Party C last, while C & A voters give all 2nd preferences to B.
Those three possibilities can ONLY mean that voters have rated the parties with Average Preference Counts (APCs in DCAP’s terminology) of exactly:
1. A=2.20; B= 2.05; C= 1.75 average preference received from voters. C is the closest to 1st.
2. A=1.60; B= 2.30; C= 2.10 average preference received from voters. A is the closest to 1st.
3. A=1.85; B= 1.65; C= 2.50 average preference received from voters. B is the closest to 1st.
Clearly it is ‘flying blind’ to ignore preferences because it risks electing the worst party to power.
Now those average preferences translate mathematically into a DPAP score (DPAP = (voter) Party Acceptance Percentage = 100x(1-(APC-1)/(Parties-1))) which shows that exactly:
1. 40.00%; 47.50% and 62.50% of voters Accept Parties A, B and C, respectively, on average;
2. 70.00%; 35.00% and 45.00% of voters Accept Parties A, B and C, respectively, on average;
3. 57.50%; 67.50% and 25.00% of voters Accept Parties A, B and C, respectively, on average;
Note that in each case here the sum is 150.00%. That is not an error: it is correct because the other side of the story is that, e.g., in case 1, 60%, 52.50% and 37.5% of voters DISapprove of the respective parties – so there are 3 sets of 100%. In the general case, the sum of all DPAPs must be half the number of parties, expressed as a percentage (for 3 parties, 150%).
The Borda count gives exactly the same relative rating in each case, as the DPAP scores above, but Borda expresses results in a manner inscrutable to the public.
When translated into seats, and leaving aside the usual problems of allocating partial seats, comparing those three cases with Case 0 (the de facto 1st-past-the-post method) gives the following results:
0. A 40.00 seats, B 35.00 seats; C 25.00 seats when ignoring preferences;
1. A 26.67 seats, B 31.67 seats; C 41.67 seats in consideration of actual preferences;
2. A 46.67 seats, B 23.33 seats. C 30.00 seats in consideration of actual preferences;
3. A 38.33 seats, B 45.00 seats; C 16.67 seats in consideration of actual preferences.
Those examples clearly show that traditional vote-counting methods can result in electing an unpopular government while rejecting a government that the voters collectively would prefer.
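For readers who want to check the arithmetic, here is a short sketch that reproduces Case 1 above under my reading of the poster's method (APC as the vote-weighted average rank, DPAP = 100 × (1 − (APC − 1)/(Parties − 1)), and seats split pro rata to DPAP); it is an illustration, not an official DPAP implementation.

```python
# Sketch reproducing Case 1 above (a reconstruction of the poster's DPAP arithmetic, not an
# official implementation). Voter blocks are (share of the vote, preference order); APC is the
# vote-weighted average rank a party receives, DPAP = 100 * (1 - (APC - 1) / (parties - 1)),
# and seats are then split pro rata to the DPAP scores.

def dpap_seats(blocks, parties, total_seats=100):
    apc = {p: sum(share * (order.index(p) + 1) for share, order in blocks) for p in parties}
    dpap = {p: 100 * (1 - (apc[p] - 1) / (len(parties) - 1)) for p in parties}
    seats = {p: total_seats * dpap[p] / sum(dpap.values()) for p in parties}
    return apc, dpap, seats

# Case 1: B and C voters put A last; A and B voters give their 2nd preferences to C.
case1 = [(0.40, ["A", "C", "B"]), (0.35, ["B", "C", "A"]), (0.25, ["C", "B", "A"])]
apc, dpap, seats = dpap_seats(case1, ["A", "B", "C"])
print(apc)    # A 2.20, B 2.05, C 1.75
print(dpap)   # A 40.0, B 47.5, C 62.5
print(seats)  # roughly A 26.7, B 31.7, C 41.7
```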
D’Hondt is a method of resolving partial seat allocations that is biased towards big parties but the bias has less effect with more seats in the electorate. In contrast DPAP has no bias – DPAP proportional representation allocation of partial quota is described elsewhere.
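For reference, the D'Hondt method mentioned here is the standard highest-averages rule; a minimal sketch with illustrative numbers:

```python
# Minimal sketch of the D'Hondt highest-averages method: each party's vote total is divided by
# 1, 2, 3, ... and seats go to the highest quotients. Larger parties tend to win the contested
# final seats, which is the bias towards big parties referred to above.

def dhondt(votes: dict, seats: int) -> dict:
    allocation = {party: 0 for party in votes}
    for _ in range(seats):
        # The next seat goes to the party with the highest current quotient votes / (seats won + 1).
        winner = max(votes, key=lambda p: votes[p] / (allocation[p] + 1))
        allocation[winner] += 1
    return allocation

print(dhondt({"A": 40, "B": 35, "C": 25}, seats=10))   # {'A': 4, 'B': 4, 'C': 2}
```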
1. If you can’t trust the candidate or party – to act appropriately with a simple thing like surplus votes – you just don’t vote. Trust is all important. This is how democracy works.
2. The candidate lists are printed in order of preference per party. The party assigns the order of their candidates. The leader of the party (and preferred PM) at the top. There is a column per party. The columns are ordered according to the results of the last election. You cast a vote for only one candidate. Their position in the lists is irrelevant. If you just want to vote for a party, you vote for the top person in the party column.
3. Assigning the last seats for candidates not meeting the quota is done using the d’Hondt system. There is no need to reinvent a new method.
I want the choice to vote – or not. I do not want my vote to be discarded when I do vote. I want my vote to be of equal value to any other vote. And yes, to achieve these simple goals – we will need constitutional changes.
While you are not advocating a first-past-the-post system per se, just as I didn't accuse you of that, the system you propose is a de facto preferential system where the preferences are stolen from voters and arbitrarily surrendered to the candidates or parties.
Nevertheless, your proposal has merit – provided preferential voting and DCAP counting are used. But first, what are the problems with the system you propose:
1. I completely disagree that a candidate with more than a quota has a mandate and duty to award those votes to any candidate they see fit. That’s theft of part of my vote, MY VOTE, not THEIR vote, by claiming an alleged mandate which does NOT exist – how dare that candidate distribute “THEIR” surplus votes to a candidate whose policies I abhor – whether in the same or a different party! It completely ignores that I, like many voters, Voted-1 for that candidate as the best of a bad lot – the best compromise chosen reluctantly despite disagreeing with some policies they support, and/or despite objecting to some candidates in that party.
2. What if I don't agree with the 'next in line' in that party? Do candidates or parties publish their preference distribution before or after the election, and what effect do whims, bribes and favours have? The system is open to 'preference whispering' and corruption regardless of whether surplus distribution is decided before or after the election. Again, it is THEFT of the voter's prerogative rather than asking each voter their actual preferences via a fairly counted preferential voting system.
3. What if a candidate receives surplus distribution but still fails to reach a quota? Do they pass it on to another candidate and who decides that distribution? The only fair way is for the voters to decide preferences not the 2nd or 3rd-hand candidate or party.
So, guaranteeing the fairest possible result, the one least subject to corruption, demands preferential voting COUNTED BY BORDA OR DCAP. And that applies equally to proportional representation and to single-member elections. Note that DCAP is lightning-fast to count because preferences are never distributed; they are fairly allowed for in a count guaranteed to be fair, as described elsewhere.
The example I gave in my last post applies equally to proportional representation, assuming that there is one seat yet to be decided. With 1st-past-the-post, A with 45% is closest to winning, and that could be EITHER the fairest OR the most unfair result. Preferential voting with traditional elimination and distribution often gives a result which is fairer than 1st-past-the-post, but only Borda/DCAP guarantee a fair result – a result which depends on the voters' collective preferences, not on backroom wheeling and dealing, buying and selling surpluses or preferences.
We already have grave injustice. Preferential voting just obfuscates the fact. The only fair and transparent voting system is proportional representation. The mandate of the people is perfectly clear. I am not proposing a first-past-the-post-election at all. That is not how proportional representation works.
If you want to have an example of how it works:
In the case of a federal election – there is only one electorate – the whole country. Voters vote for a single person from a list of candidates per party.
Let’s say we have 15 million [non-compulsory] votes counted. Let’s say there are 150 seats in parliament. If any candidate receives 100,000 votes, they automatically have a seat in parliament. If a candidate receives more than 100,000 votes, they automatically have a seat in parliament and the mandate (and duty) to award the votes above 100,000 to any other candidate in the list of candidates, usually the next in line in their own party. So potentially a candidate with 200,000 votes will have his own seat and can give away a seat to someone who has received no vote at all. The voters have given him this mandate by supplying him with 100,000 extra votes. In this system the popular vote always wins the election, and every vote is exactly equal to any other.
Other advantages: No more shifting of electorate boundaries. No more gerrymandering. No more pork barreling. A single list of candidates for the whole country (instead of multiple randomized lists per regional electorate). A fast and easy election count. Better regional representation because regional issues can be picked up by any of the 150 Members of Parliament – not just some local yokel who only got 50% of the local vote (the other 50% will not feel represented – and like me – will feel their vote has been thrown away – which is exactly what happens in our current system).
But if you don't have preferential voting, you risk grave injustice. E.g. if 45% vote for A, 40% for B and 15% for C, then a 1st-past-the-post election will elect A. But if C voters ALL dislike candidate A, preferential voting elects B with 55% after preferences.
In that case, that’s a better result! But it’s not necessarily the best result.
What if all A and B voters prefer C as second best? That means that all A and B voters (85% of all voters) gave second preferences to C. So 100% of voters think electing C is the best compromise. That's clearly a better result, assuming those preferences. Now while it's unlikely to be as clear-cut as that, the point is that when elections are close, effects such as those described can and do give unfair results.
However, sadly, we don’t count preferential votes fairly: we eliminate C even if their 1st plus 2nd preferences indicate a healthy majority.
Instead we should count preferential votes by the Borda count, or better, by my DCAP method which like Borda, guarantees a fair count always. But DCAP is more flexible in making sure that partial preferential votes have equal value with all other votes.
DCAP does this by fairly interpreting a Vote-1 for A, with no preference expressed for B or C, as being A=1, B=2.5, C=2.5, which adds up to a total of 6.0 just like a full preference vote using 1, 2 and 3. This makes it truly fair regardless of how you vote. DCAP also has the advantage that it allows split partial preferential voting: e.g. 7 candidates A–G where you vote C=1, B=2, and F=7, and DCAP fills in A, D, E and G as equal 4.5. As described elsewhere, this is lightning fast and even Senate election results will be available in hours instead of weeks or months.
This is not a fair voting system.
It was a country wide election. I know exactly which candidate I would like to have had in Parliament, based on merit, but this person does not live in my electorate. This idea of regional representation, with a seat per electorate, is just wrong. I certainly do not feel represented by the candidate who won the election in my electorate. We need a fairer system where every vote is counted and exactly equal to any other, i.e. proportional representation. The popular vote wins – the mandate of the people is clear. None of this silly obfuscation with preferential voting. And so what if we end up with ‘hung parliaments’ – that works quite well in many countries. With compromise and negotiation, long term projects can be realised – instead of build-up, tear-down switches of government.
The problem is that FPTP elections can elect the candidate with the most first-preference votes on less than 50% of the vote, even if that candidate is unpopular with more than 50% of voters. That’s a dangerous risk. Preferential voting avoids those risks.
That’s not just opinion: let’s put some numbers on it.
Consider an election where 100 voters choose one of Candidates A, B and C. If Candidate A gets 40 votes, B 35 and C 25, then A is clearly the ‘front-runner’. However, we simply don’t know whether A is really the voters’ choice unless we consider second preferences.
So let’s eliminate C with only 25 votes and have a run-off between A and B just to make sure. Now it’s possible that all C voters choose B as their second choice. So Candidate B wins with 60 votes, which is a clear majority, and is apparently the rightful winner.
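The run-off just described can be written as a tiny instant-runoff count. A minimal sketch, assuming (as above) that all 25 C voters rank B second; the other second preferences shown are illustrative and do not affect the outcome here:

```python
from collections import Counter

def instant_runoff(ballots):
    """ballots: list of preference lists, most preferred first."""
    remaining = {c for b in ballots for c in b}
    while True:
        # Each ballot counts for its highest-ranked candidate still in the race.
        tally = Counter(next(c for c in b if c in remaining) for b in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):                    # someone has a majority
            return leader
        remaining.discard(min(tally, key=tally.get))    # eliminate the lowest

ballots = ([["A", "C", "B"]] * 40) + ([["B", "C", "A"]] * 35) + ([["C", "B", "A"]] * 25)
print(instant_runoff(ballots))  # B wins 60 to 40 after C is eliminated
```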
Now Preferential Voting effectively does ‘instant run-offs’ in one single election without wasting time and money doing two or more FPTP elections. So while FPTP elections sometimes give fair results, fairness is not guaranteed. Preferential voting is more likely to give a fair result, but sadly not even that can be guaranteed.
Why not? How so?
While Preferential Voting is the best VOTING system, it is not COUNTED in a way that guarantees fairness. The ‘counting’ system can get it badly wrong. In the example above B is NOT necessarily the rightful winner. Why not? How so?
In the example above, all C voters gave their second preferences to B, and so B apparently won ‘after preferences’ with 60 votes. But suppose, all A and all B voters gave their second preferences to C. That can be summarised as:
A: 40 likes, 0 neutral, 60 dislikes; of 100 votes cast, 60 are unhappy with A
B: 35 likes, 25 neutral, 40 dislikes; of 100 votes cast, 40 are unhappy with B
C: 25 likes, 75 neutral, 0 dislikes; of 100 votes cast, 0 are unhappy with C
Totals: 100 first preferences, 100 neutrals, 100 dislikes
So if C is declared the winner, no voters are unhappy with the result compared with 60% unhappy if A was elected and 40% unhappy if B was elected.
Why are the results different? Because FPTP, and the counting method used for preferential voting, totally ignore last preferences and are effectively obsessed with 1st preferences. Yet first and last preferences should be given equal weight to get a better result. Now there are counting methods that are totally fair, but sadly they are not being used and we need Voting Reform to get fair voting results.
So, in the example above the way Preferential Voting votes were ‘counted’ got it wrong and elected B instead of C, while FPTP gave even worse results. The Borda count, invented in the 1700s, always gives a fair count. How? It never eliminates candidates and it always takes into account every preference for every candidate. My DCAP counting method, a variant of Borda, also guarantees a fair counting method that always gets it right. DCAP and DPAP are more flexible than Borda (used mostly in sporting codes) and can easily and fairly handle optional preferential and split optional preferential voting – see https://tinyurl.com/ElectoralReformOz for more detail.
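For comparison, a minimal Borda-style count of the same 100 ballots, written as lowest total rank wins, which is equivalent to the usual points form; the second preferences are those assumed in the example above:

```python
def borda_totals(ballots, candidates):
    """Sum each candidate's ranks over all ballots; the lowest total is best."""
    totals = {c: 0 for c in candidates}
    for ballot in ballots:
        for rank, candidate in enumerate(ballot, start=1):
            totals[candidate] += rank
    return totals

ballots = ([["A", "C", "B"]] * 40) + ([["B", "C", "A"]] * 35) + ([["C", "B", "A"]] * 25)
print(borda_totals(ballots, "ABC"))
# A: 40*1 + 60*3 = 220, B: 35*1 + 25*2 + 40*3 = 205, C: 25*1 + 75*2 = 175
# C has the lowest (best) total, matching the argument that C is the fairest winner.
```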
So whatever the theoretical merits of large proportional representation electorates, I think you are tilting at (Dutch) windmills expecting to get a majority of States to agree to a change in the constitution to suit.
If you are really serious about that then the best way forward, in my opinion, is to clean up the current Coalition/Labor/Greens stranglehold on government. How? There’s lots happening where the basic idea is to encourage all smaller parties to recommend preferencing each other so that the strongest gets elected. This could stop the Greens turning the Coalition & Labor into Tweedledum & Tweedledee who try to do whatever the Greens dictate to stay in power. E.g. see
In essence what they are all trying to do is to get past an electoral system which is strongly biased towards a 2-party-preferred outcome via atrocities like the Vote-1-to-6 Above-The-Line rules, which almost guarantee that small new parties have no hope, so their voters’ votes are likely to either end up with a major party or be totally useless. But if all the small parties encourage their voters to number every square and put the majors and worse parties last, then the strongest of the new small parties could gain control of the Senate and force reforms.
That would then allow a campaign to reform the voting system. And that’s where my DCAP vote counting system would open the door for easier entry of smaller parties under the proportional representation system. OK, too few representatives per electorate for your liking, but at least it’s a start. Then, if we embark on multiple major Snowy-River type schemes to divert the huge volumes of water that flow to waste around Australia and channel them inland to replenish the Great Artesian Basin, and even make Lake Eyre a permanent inland sea, that would green Australia and greatly improve population density as well as greatly protect our sovereignty. Interestingly, we can literally print money to do this because it actually creates common wealth by putting the nation to work productively. So give it 100 years and we may well have enough population to have your preferred one electorate. But I doubt its necessity.
And please do not confuse Holland with the Netherlands (e.g. England is not the UK). Population density is not an issue. We have mass media and almost instantaneous communication across the globe. It makes no difference if you are 3000 km away or in the house next door. Distance does not make regional differences larger. I can assure you the regional differences in the Netherlands are far greater than any in Australia. Equality in voting and proportional representation brings the Netherlands together, whether you are Frisian, Zeeuw, Limburger, Groninger, Tukker, Brabander, Hollander, or some other regional identity speaking their own incomprehensible dialect or completely different language.
A phone app is simply not secure and does not identify the user. In the same way that I regularly log on to my wife’s bank account using her ID and her password and make substantial transactions (with her approval), a phone app cannot prove who is actually casting the vote, and it is not hack-proof. It is far safer to insist on in-person voting with 100 points of photo ID such as a licence or passport. I think penalties for voter fraud and multiple voting should include jail options, especially when it’s orchestrated on a large scale. See also John de Wit’s comment on the “Identifying Voters” tab in the “Issues” tab above.
John de Wit, on this “A Fair Voting System” Tab you say:
“Our laws and the separation of powers are the safeguards for freedom and against injustice – not the electoral system.”
However, our electoral system does have significant flaws that can and do elect candidates against the combined preferences of the voters. That is not “safeguard(ing) for freedom and against injustice”. For example ACT uses 5 electorates each with 5 proportionally elected representatives where voters vote for candidates not parties but parties are, allegedly, represented proportionally. However, in their 2020 election, the preferences prove that in each of 3 of the 5 electorates the last elected MLA would have lost a run-off election against a candidate for that electorate who was eliminated by the flawed counting method. That’s simply unjust and it is highly likely that this applies to the Australian Federal Senate also. We NEED reform. My DCAP vote-counting guarantees a fair counting method.
John, you propose voting for only a single candidate/party: i.e. no preferential voting, only first-past-the-post voting. That means voters must choose a one-dimensional top issue or best fit for the moment with no hope of expressing other nuances. OK, that may work with large single electorates and you can always contact other representatives about different issues, but there is no concept of my electorate, my local area. Presumably it works in Holland with a population density of over 400 people per square kilometre, but Australia’s population density is OVER ONE HUNDRED TIMES lower.
I will answer the easy questions first:
No – I do not want a single house. An upper house and a lower house, i.e. the Senate and House of Representatives must remain as it is now. There is nothing wrong with that – as well as the separation of powers into three branches: legislative, executive, and judicial. These are all proven foundations of a good democracy. So, I really don’t want any radical changes there.
Our laws and the separation of powers are the safeguards for freedom and against injustice – not the electoral system.
Yes – I want a single electorate but just for the House of Representatives – not for the Senate. Senators are popularly elected under the single transferable vote system of proportional representation. So, no need to change that – just maybe more senators.
There is nothing radically new about one electorate using proportional representation. Many countries do this:
and my favourite:
(You did ask for links. But please research this yourself.)
I am quite familiar with the Dutch voting system. There are Pros and Cons to this system but in general the sanctity of proportionality is considered foundational and a great win for democracy. Under the Dutch electoral system, voters can only express a choice for one individual candidate, and these votes are treated as a choice for a particular party. (Easy for voters – just one checkbox to choose!). The parties do not encourage preference voting, and the preference votes have only a small impact on the original list ordering of the parties. The Netherlands has a large, open-party system with very low barriers to entry.
I would propose using a similar system as the Dutch system for the Australian House of Representatives and State Parliaments.
This would end the two-party dominance that you also seem to despise. Smaller parties will arise to defend regional issues – and get their say in parliament – so dominance of city versus regional voters can be averted. And anyone in Australia, wherever they live, who is concerned about issues in Woop Woop can vote for the Woop Woop party. They don’t have to live in Woop Woop. (If you are wondering – I do not live in a big city – I also live in regional Australia – no disrespect intended with Woop Woop – just a nice generic name that kind of fits well with where I live.)
I too have lived in electorates where my MP was always from a party I disliked.
But you haven’t explained what to do considering that your proposed system allows a Party leader to personally win multiple seats. Referring to Google/Wikipedia with no links is no explanation. You also say it’s fair when the city dominates the country because that’s democracy. Freedom is only freedom if my freedom does not curtail your freedom – so allowing the City to dominate the country unhindered is not the sort of democracy I want.
Fairness for small States was effectively the whole point of the Federation agreement and the Senate: large States would not have unhindered domination of small States – the majority should not suppress the minority. That’s why we need electorates smaller than Australia, and smaller than States. So, yes, we have the possibility of a government that does not win the popular vote, and that’s not good. But I disagree with your conclusion that “The idea that our current system represents the regions better doesn’t make sense.” If we equate regions with electorates, it’s clearly much easier to change an MP who ignores issues relevant to their electorate.
How big an electorate do you propose. It sounds like you want a single house with a single electorate for the whole of Australia! Is that so?
If we have, as your 10 million votes suggest, a single huge electorate for Australia, it becomes harder to hold MPs accountable for problems and injustices in our own area or region. Now proportional representation for our current smallish electorates/regions would be good but the number of MPs would increase. However, we could amalgamate regions/electorates, but that makes it harder to hold our MPs accountable because they are more remote.
I think that the 2-dominant-party system is the problem: electoral reform as I’ve suggested makes it easier for smaller parties and so the more small-party MPs there are, the better our governments should be because they will be forced into coalitions that take into account proportionality of party interests.
Note that my proposed electoral reform makes most difference to the Senate which already has proportional representation within States. If it ain’t broke, don’t fix it: well it certainly needs fixing, but not smashing it completely – which is what a single house, single electorate proportional representation system would do. But while electoral reform could happen, changes such as you’ve proposed would, I hope, be unlikely to get past a referendum.
In our current Australian electoral system, in my electoral division, party A always gets more than 50% of the votes. I always vote for party B. My vote is thereby worthless. It gets thrown out. Preferential voting is just obfuscation – a band-aid on an amputation – it doesn’t fix this major problem.
You see problems with fractional or residual seats when a proportional representation system is used. This is nothing new. All these problems are solved using various methods – for example: the highest averages method or the largest remainder method – and all these methods are much simpler than preferential voting! Proportional representation is used in many countries where they recognized – early on – that this was the only fair way to elect a parliament. Just look at how that works instead of just trying to improve a system that has a fundamental fault. If you want more details, it can be found using Google and Wikipedia. I am not trying to re-invent the wheel.
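As an illustration of one such method, here is a minimal sketch of the D'Hondt highest-averages allocation; the party names are generic and the vote shares follow the 40/30/20/10 split used elsewhere in this discussion, scaled down for readability:

```python
def dhondt(votes, seats):
    """votes: dict party -> vote count; returns dict party -> seats won."""
    won = {p: 0 for p in votes}
    for _ in range(seats):
        # Each party's current "average" is its votes divided by (seats won + 1);
        # the next seat goes to the party with the highest average.
        leader = max(votes, key=lambda p: votes[p] / (won[p] + 1))
        won[leader] += 1
    return won

print(dhondt({"A": 40_000, "B": 30_000, "C": 20_000, "D": 10_000}, 10))
# {'A': 4, 'B': 3, 'C': 2, 'D': 1}: seats track vote shares proportionally
```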
You say there will be domination by big states and big cities – but that is already the case – and it is fair. The majority vote rules – this is democracy. But without proportional representation you do have the possibility that the country is governed by a party that does not have the popular vote.
In the case of a parliament, be it federal or state, each elected member represents the whole country or whole state. Whether the parliament was elected via proportional representation or not, it makes no difference. Any member of parliament that disregards regional issues does that at their own peril. The regions are part of the whole. The idea that our current system represents the regions better doesn’t make sense.
To me it seems like a recipe for domination by the big-states and big cities.
Hence I can’t see it ever getting accepted, especially considering the extremely wide range of population densities in various parts of Australia that have different needs from the cities.
But to the details:
In your example, how many vacancies are there being contested by the 200 candidates? But let’s suppose there are 100 vacancies for a small parliament. So 40% of the vote = 40 seats. Now if the leader of Party A is very popular personally and gets 10% of the vote, how can he/she hold 10 seats at once? So, it seems to me that you must have some form of preferential voting so that Party Leader A’s ‘Surplus Votes’ can be distributed to the voters’ second preferences.
Further, consider a close election with 3 front-runners where A=32.74%, B=31.75%, C=30.76% and D=4.75%: how can anyone get a fractional seat? Who gets the 3 seats no one fully earned? Preferences could show that A, B or C could be the actual front-runner. 3 seats could decide who governs. That may be OK for a homogeneous population spread evenly with similar interests and concerns, but it certainly isn’t one size fits all.
I think the result will not be fair: rather it will lead to bigger and bigger governments with more and more power such that absolute power corrupts absolutely. Some areas will be very pleased, and other areas will be very upset.
One big electorate! How far will it go. One World Government? In my opinion, it’s worse than a gerrymander: the biggest population group, the biggest countries, the richest with most access to media etc will dominate.
If there are 10 million voters, 200 candidates, and 4 parties (A, B, C and D) where A gets 40% of the votes, B gets 30%, C gets 20% and D gets 10%, then party A got the most votes – just not enough to win the election, since you would need more than 50% to do this. So party A should form a coalition with one of the other parties. Or B, C and D could. Simple, easy, fair.
In 1901 proportional representation did not make much sense, especially because of distance and all the voters were accustomed to the British model. Today we have far better means of communication and information gathering and proportional representation does make sense.
A long habit of not thinking a thing WRONG, gives it a superficial appearance of being RIGHT (Thomas Paine).
The more simple any thing is, the less liable it is to be disordered, and the easier repaired when disordered (Thomas Paine).
If there are 100 voters and 4 candidates (“A”, “B”, “C” and “D”) where “A” gets 40 votes, “B” gets 30, “C” gets 20 and “D” gets 10 votes, then you say that “A”, with the most votes (40), is the one that “the most people want elected because if they truly wanted someone else to win, they would have voted that first vote for that person”.
But that’s not completely true for two reasons.
Firstly: 40 people is a minority of 100 people, NOT “most people”.
Secondly: what if the other 60 people preferenced “A” as LAST preference. That clearly means that MOST people, by a clear 60% majority, do NOT want “A” to win.
In such a case that is PROOF that “A” should NOT win.
So who should win? It all depends on the 2nd and perhaps 3rd preferences, but in this case where “A” gets 60 votes as LAST preference, there is no way that we could justify electing “A”. That’s why preferential voting is far superior to first-past-the-post voting which can easily give the wrong result.
I believe that the one winning the most votes is who the most people want elected because if they truly wanted someone else to win, they would have voted that first vote for that person.
I understand where you’re coming from, but I find this preference system akin to playing the lottery, where you can put in more than one entry to win your way; if you only have one say, that’s it, straightforward and simple. With first-past-the-post you have just one vote per person, which is fair. Even though I understand your logic, it just does not seem fair to be allowed to vote multiple times in one election. I also disagree with the examples of people being dissatisfied with the results, because if only one vote is allowed, it is black and white who got the most votes. While there may be dissatisfaction with who won, no one can be dissatisfied with how the result came about, because the one with the most votes won, and there was only one vote each. It is hard to compare how a preferential vote would have gone with a first-past-the-post vote, because of the difference between one vote per person and multiple votes per person, in my opinion.
Contents: Using graphs to solve equations; The change-of-sign rule; Solving equations by iteration; Review of the trapezium rule; Simpson's rule; Examination-style questions.
Simpson's rule
When we used the trapezium rule we split the area that we were trying to find into equal strips and then fitted straight lines to the curve. This led to approximations that often had a large percentage error. Simpson's rule also works by dividing the area to be found into equal strips, but instead of fitting straight lines to the curve we fit parabolas. The general form of a parabolic curve is y = ax² + bx + c. If we are given the coordinates of any three non-collinear points we can draw a parabola through them. We can find the equation of this parabola using the three points to give us three equations in the three unknowns, a, b and c.
Defining a parabola using three points
Consider the following parabola passing through the three points P, Q and R with coordinates (−h, y₀), (0, y₁) and (h, y₂). We can use these points to define the area A, divided by the ordinates y₀, y₁ and y₂ into two strips of equal width h. If we take the equation of the parabola to be y = ax² + bx + c then we can write the area A of the two strips as the integral of ax² + bx + c taken from x = −h to x = h.
We can find a and c using the points (−h, y₀), (0, y₁) and (h, y₂) to write three equations:
When y = y₀ and x = −h: y₀ = a(−h)² + b(−h) + c = ah² − bh + c
When y = y₁ and x = 0: y₁ = c
When y = y₂ and x = h: y₂ = ah² + bh + c
Adding the first and last equations together gives y₀ + y₂ = 2ah² + 2c, and since c = y₁, y₀ + y₂ = 2ah² + 2y₁.
So, 2ah² = y₀ + y₂ − 2y₁. It is best not to isolate a because we actually want an expression for 2ah². We can now use this to write the area A in terms of the ordinates y₀, y₁ and y₂:
A = (2/3)ah³ + 2ch = (h/3)(2ah² + 6c) = (h/3)(y₀ + 4y₁ + y₂)
In general, the area under any quadratic function q(x), divided into two equal strips from x = a to x = b, is given by A = (h/3)(y₀ + 4y₁ + y₂), where h = (b − a)/2.
This forms the basis of Simpson's rule, where we divide the area under a curve into an even number of strips and fit a parabola to the curve across every pair of strips. The area of each pair of strips is taken to be approximately (h/3)(y₀ + 4y₁ + y₂). If there are four strips (with 5 ordinates) the area will be approximately (h/3)(y₀ + 4y₁ + 2y₂ + 4y₃ + y₄). Adding another pair of strips would give the area as approximately (h/3)(y₀ + 4y₁ + 2y₂ + 4y₃ + 2y₄ + 4y₅ + y₆).
In general, for n strips the approximate area under the curve y = f(x) between the x-axis and x = a and x = b is given by
A ≈ (h/3)[y₀ + yₙ + 4(y₁ + y₃ + … + yₙ₋₁) + 2(y₂ + y₄ + … + yₙ₋₂)]
where n is an even number and h = (b − a)/n. This can be more easily remembered as:
(h/3)(y first + y last + 4(sum of y odds) + 2(sum of y evens))
This is Simpson's rule. Don't forget that Simpson's rule can only be used with an even number of strips (or an odd number of ordinates).
Let's apply Simpson's rule to approximate the area under the curve y = e^(−2x) between x = 0, x = 2 and the x-axis. Let's use four strips, as we did when we approximated this area with the trapezium rule. The five ordinates y₀ to y₄ define two parabolas, one over each pair of strips. The coordinates of the five points can be found using a table:
x: 0, 0.5, 1.0, 1.5, 2.0
y = e^(−2x): 1.0000, 0.3679, 0.1353, 0.0498, 0.0183
Using Simpson's rule with h = 0.5 gives:
A ≈ (0.5/3)[1.0000 + 4(0.3679 + 0.0498) + 2(0.1353) + 0.0183] ≈ 0.493 (to 3 s.f.)
When we calculated this area using the same number of strips with the trapezium rule we obtained an area of 0.531 (to 3 s.f.). This result had a percentage error of 8.15%. Comparing the area given by Simpson's rule to the actual area of 0.491 (to 3 s.f.) gives us a percentage error of about 0.5%. Therefore, Simpson's rule is much more accurate than the trapezium rule using the same number of strips.
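A short sketch (not part of the original slides) that reproduces the worked example above, comparing Simpson's rule with the trapezium rule for y = e^(−2x) on [0, 2] with four strips:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule with n strips (n must be even)."""
    h = (b - a) / n
    y = [f(a + i * h) for i in range(n + 1)]
    return h / 3 * (y[0] + y[-1] + 4 * sum(y[1:-1:2]) + 2 * sum(y[2:-1:2]))

def trapezium(f, a, b, n):
    """Composite trapezium rule with n strips."""
    h = (b - a) / n
    y = [f(a + i * h) for i in range(n + 1)]
    return h / 2 * (y[0] + y[-1] + 2 * sum(y[1:-1]))

f = lambda x: math.exp(-2 * x)
exact = (1 - math.exp(-4)) / 2        # integral of e^(-2x) from 0 to 2
print(simpson(f, 0, 2, 4))            # about 0.4933
print(trapezium(f, 0, 2, 4))          # about 0.5311
print(exact)                          # about 0.4908
```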
Examination-style question 1
a) Use Simpson's rule with five ordinates to find an approximate value for the given definite integral to 3 decimal places.
b) Use integration by parts to find the exact value of the definite integral given in part a).
c) Give the percentage error of the approximation found in part a) to 2 decimal places.
a) If there are five ordinates there are four strips, so the width of each strip is a quarter of the interval of integration; applying Simpson's rule to the five ordinates gives the approximation.
b) Writing the integrand as a product and integrating by parts gives the exact value.
c) The percentage error is 3.28%.
Sediment is a naturally occurring material that is broken down by processes of weathering and erosion, and is subsequently transported by the action of wind, water, or ice or by the force of gravity acting on the particles. For example, sand and silt can be carried in suspension in river water and on reaching the sea bed deposited by sedimentation; if buried, they may eventually become sandstone and siltstone (sedimentary rocks) through lithification.
Sediments are most often transported by water (fluvial processes), but also wind (aeolian processes) and glaciers. Beach sands and river channel deposits are examples of fluvial transport and deposition, though sediment also often settles out of slow-moving or standing water in lakes and oceans. Desert sand dunes and loess are examples of aeolian transport and deposition. Glacial moraine deposits and till are ice-transported sediments.
Sediment can be classified based on its grain size, grain shape, and composition.
Sediment size is measured on a log base 2 scale, called the "Phi" scale, which classifies particles by size from "colloid" to "boulder".
|φ scale|Size range (metric)|Size range (approx. inches)|Aggregate name|Other names|
|< −8|> 256 mm|> 10.1 in|Boulder| |
|−6 to −8|64–256 mm|2.5–10.1 in|Cobble| |
|−5 to −6|32–64 mm|1.26–2.5 in|Very coarse gravel|Pebble|
|−4 to −5|16–32 mm|0.63–1.26 in|Coarse gravel|Pebble|
|−3 to −4|8–16 mm|0.31–0.63 in|Medium gravel|Pebble|
|−2 to −3|4–8 mm|0.157–0.31 in|Fine gravel|Pebble|
|−1 to −2|2–4 mm|0.079–0.157 in|Very fine gravel|Granule|
|0 to −1|1–2 mm|0.039–0.079 in|Very coarse sand| |
|1 to 0|0.5–1 mm|0.020–0.039 in|Coarse sand| |
|2 to 1|0.25–0.5 mm|0.010–0.020 in|Medium sand| |
|3 to 2|125–250 μm|0.0049–0.010 in|Fine sand| |
|4 to 3|62.5–125 μm|0.0025–0.0049 in|Very fine sand| |
|8 to 4|3.9–62.5 μm|0.00015–0.0025 in|Silt|Mud|
|> 8|< 3.9 μm|< 0.00015 in|Clay|Mud|
|> 10|< 1 μm|< 0.000039 in|Colloid|Mud|
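As a small illustration (not part of the original article), the φ value of a grain of diameter d millimetres is φ = −log₂(d / d₀) with reference diameter d₀ = 1 mm:

```python
import math

def phi(diameter_mm):
    """Convert a grain diameter in mm to the phi scale (reference d0 = 1 mm)."""
    return -math.log2(diameter_mm)

print(phi(256))     # -8.0 -> boulder/cobble boundary
print(phi(2))       # -1.0 -> gravel/sand boundary
print(phi(0.0625))  #  4.0 -> sand/silt boundary
```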
The shape of particles can be defined in terms of three parameters. The form is the overall shape of the particle, with common descriptions being spherical, platy, or rodlike. The roundness is a measure of how sharp grain corners are. This varies from well-rounded grains with smooth corners and edges to poorly rounded grains with sharp corners and edges. Finally, surface texture describes small-scale features such as scratches, pits, or ridges on the surface of the grain.
Form (also called sphericity) is determined by measuring the size of the particle on its major axes. William C. Krumbein proposed formulas for converting these numbers to a single measure of form, such as
Ψ = ((D_I · D_S) / D_L²)^(1/3)
where D_L, D_I and D_S are the long, intermediate, and short axis lengths of the particle. The form varies from 1 for a perfectly spherical particle to very small values for a platelike or rodlike particle.
An alternate measure was proposed by Sneed and Folk:
Ψ_p = (D_S² / (D_L · D_I))^(1/3)
which, again, varies from 0 to 1 with increasing sphericity.
Roundness describes how sharp the edges and corners of particle are. Complex mathematical formulas have been devised for its precise measurement, but these are difficult to apply, and most geologists estimate roundness from comparison charts. Common descriptive terms range from very angular to angular to subangular to subrounded to rounded to very rounded, with increasing degree of roundness.
Surface texture describes the small-scale features of a grain, such as pits, fractures, ridges, and scratches. These are most commonly evaluated on quartz grains, because these retain their surface markings for long periods of time. Surface texture varies from polished to frosted, and can reveal the history of transport of the grain; for example, frosted grains are particularly characteristic of aeolian sediments, transported by wind. Evaluation of these features often requires the use of a scanning electron microscope.
Composition of sediment can be measured in terms of:
This leads to an ambiguity in which clay can be used as both a size-range and a composition (see clay minerals).
Sediment is transported based on the strength of the flow that carries it and its own size, volume, density, and shape. Stronger flows will increase the lift and drag on the particle, causing it to rise, while larger or denser particles will be more likely to fall through the flow.
Rivers and streams carry sediment in their flows. This sediment can be in a variety of locations within the flow, depending on the balance between the upwards velocity on the particle (drag and lift forces), and the settling velocity of the particle. These relationships are shown in the following table for the Rouse number, which is a ratio of sediment settling velocity (fall velocity) to upwards velocity.
|Mode of transport||Rouse number|
|Suspended load: 50% Suspended||>1.2, <2.5|
|Suspended load: 100% Suspended||>0.8, <1.2|
If the upwards velocity is approximately equal to the settling velocity, sediment will be transported downstream entirely as suspended load. If the upwards velocity is much less than the settling velocity, but still high enough for the sediment to move (see Initiation of motion), it will move along the bed as bed load by rolling, sliding, and saltating (jumping up into the flow, being transported a short distance then settling again). If the upwards velocity is higher than the settling velocity, the sediment will be transported high in the flow as wash load.
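For reference, the Rouse number used in the table above is commonly defined as shown below; this standard form is assumed here rather than quoted from the article.

```latex
% Rouse number (standard definition, assumed):
%   w_s   : sediment settling (fall) velocity
%   u_*   : shear velocity, which sets the scale of the upward turbulent velocity
%   kappa : von Karman constant, approximately 0.4
P = \frac{w_s}{\kappa\, u_*}
```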
As there are generally a range of different particle sizes in the flow, it is common for material of different sizes to move through all areas of the flow for given stream conditions.
Sediment motion can create self-organized structures such as ripples, dunes, or antidunes on the river or stream bed. These bedforms are often preserved in sedimentary rocks and can be used to estimate the direction and magnitude of the flow that deposited the sediment.
Overland flow can erode soil particles and transport them downslope. The erosion associated with overland flow may occur through different methods depending on meteorological and flow conditions.
The major fluvial (river and stream) environments for deposition of sediments include:
Wind results in the transportation of fine sediment and the formation of sand dune fields and soils from airborne dust.
Glaciers carry a wide range of sediment sizes, and deposit it in moraines.
The overall balance between sediment in transport and sediment being deposited on the bed is given by the Exner equation. This expression states that the rate of increase in bed elevation due to deposition is proportional to the amount of sediment that falls out of the flow. This equation is important in that changes in the power of the flow change the ability of the flow to carry sediment, and this is reflected in the patterns of erosion and deposition observed throughout a stream. This can be localized, and simply due to small obstacles; examples are scour holes behind boulders, where flow accelerates, and deposition on the inside of meander bends. Erosion and deposition can also be regional; erosion can occur due to dam removal and base level fall. Deposition can occur due to dam emplacement that causes the river to pool and deposit its entire load, or due to base level rise.
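One common one-dimensional form of the Exner equation referred to above is shown below; this standard form is assumed rather than quoted from the article.

```latex
% One-dimensional Exner equation (standard form, assumed):
%   eta      : bed elevation
%   t        : time
%   lambda_p : bed porosity
%   q_s      : volumetric sediment transport rate per unit width
\frac{\partial \eta}{\partial t} = -\frac{1}{1-\lambda_p}\,\frac{\partial q_s}{\partial x}
```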
Seas, oceans, and lakes accumulate sediment over time. The sediment can consist of terrigenous material, which originates on land, but may be deposited in either terrestrial, marine, or lacustrine (lake) environments, or of sediments (often biological) originating in the body of water. Terrigenous material is often supplied by nearby rivers and streams or reworked marine sediment (e.g. sand). In the mid-ocean, the exoskeletons of dead organisms are primarily responsible for sediment accumulation.
Deposited sediments are the source of sedimentary rocks, which can contain fossils of the inhabitants of the body of water that were, upon death, covered by accumulating sediment. Lake bed sediments that have not solidified into rock can be used to determine past climatic conditions.
The major areas for deposition of sediments in the marine environment include:
One other depositional environment which is a mixture of fluvial and marine is the turbidite system, which is a major source of sediment to the deep sedimentary and abyssal basins as well as the deep oceanic trenches.
Any depression in a marine environment where sediments accumulate over time is known as a sediment trap.
The null point theory explains how sediment deposition undergoes a hydrodynamic sorting process within the marine environment leading to a seaward fining of sediment grain size.
One cause of high sediment loads is slash and burn and shifting cultivation of tropical forests. When the ground surface is stripped of vegetation and then seared of all living organisms, the upper soils are vulnerable to both wind and water erosion. In a number of regions of the earth, entire sectors of a country have become erodible. For example, on the Madagascar high central plateau, which constitutes approximately ten percent of that country's land area, most of the land area is devegetated, and gullies have eroded into the underlying soil, forming the distinctive features called lavakas. These are typically 40 meters (130 ft) wide, 80 meters (260 ft) long and 15 meters (49 ft) deep. Some areas have as many as 150 lavakas per square kilometer, and lavakas may account for 84% of all sediments carried off by rivers. This siltation results in discoloration of rivers to a dark red brown color and leads to fish kills.
Erosion is also an issue in areas of modern farming, where the removal of native vegetation for the cultivation and harvesting of a single type of crop has left the soil unsupported. Many of these regions are near rivers and drainages. Loss of soil due to erosion removes useful farmland, adds to sediment loads, and can help transport anthropogenic fertilizers into the river system, which leads to eutrophication.
The Sediment Delivery Ratio (SDR) is the fraction of gross erosion (interrill, rill, gully and stream erosion) that is expected to be delivered to the outlet of the river. Sediment transfer and deposition can be modelled with sediment distribution models such as WaTEM/SEDEM. In Europe, according to WaTEM/SEDEM model estimates, the Sediment Delivery Ratio is about 15%.
Watershed development near coral reefs is a primary cause of sediment-related coral stress. The stripping of natural vegetation in the watershed for development exposes soil to increased wind and rainfall, and as a result, can cause exposed sediment to become more susceptible to erosion and delivery to the marine environment during rainfall events. Sediment can negatively affect corals in many ways, such as by physically smothering them, abrading their surfaces, causing corals to expend energy during sediment removal, and causing algal blooms that can ultimately lead to less space on the seafloor where juvenile corals (polyps) can settle.
When sediments are introduced into the coastal regions of the ocean, the proportion of land-derived, marine and organic-derived sediment that characterizes the seafloor near sources of sediment output is altered. In addition, because the source of the sediment (land, marine or organic) is often correlated with the average grain size that characterizes an area, the grain size distribution of the sediment will shift according to the relative input of land-derived (typically fine), marine (typically coarse), and organically derived (variable with age) sediment. These alterations in marine sediment influence the amount of sediment suspended in the water column at any given time and the degree of sediment-related coral stress.
In July 2020, marine biologists reported that aerobic microorganisms (mainly), in "quasi-suspended animation", were found in organically-poor sediments, up to 101.5 million years old, 250 feet below the seafloor in the South Pacific Gyre (SPG) ("the deadest spot in the ocean"), and could be the longest-living life forms ever found.
Sedimentary rocks are types of rock that are formed by the accumulation or deposition of mineral or organic particles at Earth's surface, followed by cementation. Sedimentation is the collective name for processes that cause these particles to settle in place. The particles that form a sedimentary rock are called sediment, and may be composed of geological detritus (minerals) or biological detritus. The geological detritus originated from weathering and erosion of existing rocks, or from the solidification of molten lava blobs erupted by volcanoes. The geological detritus is transported to the place of deposition by water, wind, ice or mass movement, which are called agents of denudation. Biological detritus was formed by bodies and parts of dead aquatic organisms, as well as their fecal mass, suspended in water and slowly piling up on the floor of water bodies. Sedimentation may also occur as dissolved minerals precipitate from water solution.
Till or glacial till is unsorted glacial sediment.
An alluvial fan is an accumulation of sediments that fans outwards from a concentrated source of sediments, such as a narrow canyon emerging from an escarpment. They are characteristic of mountainous terrain in arid to semiarid climates, but are also found in more humid environments subject to intense rainfall and in areas of modern glaciation. They range in area from less than 1 square kilometer (0.4 sq mi) to almost 20,000 square kilometers (7,700 sq mi).
In geography and geology, fluvial processes are associated with rivers and streams and the deposits and landforms created by them. When the stream or rivers are associated with glaciers, ice sheets, or ice caps, the term glaciofluvial or fluvioglacial is used.
Deposition is the geological process in which sediments, soil and rocks are added to a landform or landmass. Wind, ice, water, and gravity transport previously weathered surface material, which, at the loss of enough kinetic energy in the fluid, is deposited, building up layers of sediment.
Aeolian processes, also spelled eolian, pertain to wind activity in the study of geology and weather and specifically to the wind's ability to shape the surface of the Earth. Winds may erode, transport, and deposit materials and are effective agents in regions with sparse vegetation, a lack of soil moisture and a large supply of unconsolidated sediments. Although water is a much more powerful eroding force than wind, aeolian processes are important in arid environments such as deserts.
Conglomerate is a clastic sedimentary rock that is composed of a substantial fraction of rounded to subangular gravel-size clasts. A conglomerate typically contains a matrix of finer grained sediments, such as sand, silt, or clay, which fills the interstices between the clasts. The clasts and matrix are typically cemented by calcium carbonate, iron oxide, silica, or hardened clay.
A turbidite is the geologic deposit of a turbidity current, which is a type of amalgamation of fluidal and sediment gravity flow responsible for distributing vast amounts of clastic sediment into the deep ocean.
Sedimentation is the deposition of sediments. It takes place when particles in suspension settle out of the fluid in which they are entrained and come to rest against a barrier. This is due to their motion through the fluid in response to the forces acting on them: these forces can be due to gravity, centrifugal acceleration, or electromagnetism. Settling is the falling of suspended particles through the liquid, whereas sedimentation is the final result of the settling process.
Mudrocks are a class of fine-grained siliciclastic sedimentary rocks. The varying types of mudrocks include siltstone, claystone, mudstone, slate, and shale. Most of the particles of which the stone is composed are less than 1⁄16 mm and are too small to study readily in the field. At first sight, the rock types appear quite similar; however, there are important differences in composition and nomenclature.
Clastic rocks are composed of fragments, or clasts, of pre-existing minerals and rock. A clast is a fragment of geological detritus, chunks and smaller grains of rock broken off other rocks by physical weathering. Geologists use the term clastic with reference to sedimentary rocks as well as to particles in sediment transport whether in suspension or as bed load, and in sediment deposits.
An overbank is an alluvial geological deposit consisting of sediment that has been deposited on the floodplain of a river or stream by flood waters that have broken through or overtopped the banks. The sediment is carried in suspension, and because it is carried outside of the main channel, away from faster flow, the sediment is typically fine-grained. An overbank deposit usually consists primarily of fine sand, silt and clay. Overbank deposits can be beneficial because they refresh valley soils.
In geology, cross-bedding, also known as cross-stratification, is layering within a stratum and at an angle to the main bedding plane. The sedimentary structures which result are roughly horizontal units composed of inclined layers. The original depositional layering is tilted, such tilting not being the result of post-depositional deformation. Cross-beds or "sets" are the groups of inclined layers, which are known as cross-strata.
In geology, a graded bed is one characterized by a systematic change in grain or clast size from one side of the bed to the other. Most commonly this takes the form of normal grading, with coarser sediments at the base, which grade upward into progressively finer ones. Such a bed is also described as fining upward. Normally graded beds generally represent depositional environments which decrease in transport energy as time passes, but these beds can also form during rapid depositional events. They are perhaps best represented in turbidite strata, where they indicate a sudden strong current that deposits heavy, coarse sediments first, with finer ones following as the current weakens. They can also form in terrestrial stream deposits.
Sediment transport is the movement of solid particles (sediment), typically due to a combination of gravity acting on the sediment, and/or the movement of the fluid in which the sediment is entrained. Sediment transport occurs in natural systems where the particles are clastic rocks, mud, or clay; the fluid is air, water, or ice; and the force of gravity acts to move the particles along the sloping surface on which they are resting. Sediment transport due to fluid motion occurs in rivers, oceans, lakes, seas, and other bodies of water due to currents and tides. Transport is also caused by glaciers as they flow, and on terrestrial surfaces under the influence of wind. Sediment transport due only to gravity can occur on sloping surfaces in general, including hillslopes, scarps, cliffs, and the continental shelf—continental slope boundary.
In geology, depositional environment or sedimentary environment describes the combination of physical, chemical, and biological processes associated with the deposition of a particular type of sediment and, therefore, the rock types that will be formed after lithification, if the sediment is preserved in the rock record. In most cases, the environments associated with particular rock types or associations of rock types can be matched to existing analogues. However, the further back in geological time sediments were deposited, the more likely that direct modern analogues are not available.
The suspended load of a flow of fluid, such as a river, is the portion of its sediment uplifted by the fluid's flow in the process of sediment transportation. It is kept suspended by the fluid's turbulence. The suspended load generally consists of smaller particles, like clay, silt, and fine sands.
Sedimentary structures include all kinds of features in sediments and sedimentary rocks, formed at the time of deposition.
A bedrock river is a river that has little to no alluvium mantling the bedrock over which it flows. However, most bedrock rivers are not pure forms; they are a combination of a bedrock channel and an alluvial channel. The way one can distinguish between bedrock rivers and alluvial rivers is through the extent of sediment cover.
In hydrology stream competency, also known as stream competence, is a measure of the maximum size of particles a stream can transport. The particles are made up of grain sizes ranging from large to small and include boulders, rocks, pebbles, sand, silt, and clay. These particles make up the bed load of the stream. Stream competence was originally simplified by the “sixth-power-law,” which states the mass of a particle that can be moved is proportional to the velocity of the river raised to the sixth power. This refers to the stream bed velocity which is difficult to measure or estimate due to the many factors that cause slight variances in stream velocities. |
Chapter 3: Applications of Differentiation
Section 3.5: Curvature of a Plane Curve
The curvature of a plane curve is a measure of how "curved" it is at each of its points. Table 3.5.1 lists formulas for the calculation of curvature of curves given in various formats.
For the explicit Cartesian curve y = y(x): κ = |y″| / (1 + y′²)^(3/2)
For the parametric Cartesian curve x = x(t), y = y(t): κ = |ẋ ÿ − ẏ ẍ| / (ẋ² + ẏ²)^(3/2)
For the polar curve r = r(θ): κ = |r² + 2 r′² − r r″| / (r² + r′²)^(3/2)
Table 3.5.1 Formulae for curvature of a plane curve
For the explicit Cartesian curve y = y(x), the primes in the formula for κ represent derivatives with respect to the independent variable x. For the parametric curve given in Cartesian coordinates, the overdots represent derivatives with respect to the parameter t. For the polar curve given in the form r = r(θ), the primes represent derivatives with respect to the independent variable θ.
Most modern calculus texts take the curvature as positive; hence, the absolute values in the numerators of the formulas for κ (the Greek letter "kappa"). Some older texts, and some applications in the sciences, use a signed curvature that omits this absolute value.
Curvature is a measure of the rate at which the tangent line turns as the point of contact moves along the curve. See Figure 3.5.1.
Specifically, κ = dθ/ds, where θ is the angle made by the tangent line and the horizontal, and s = s(x) is the "arc length" or distance along the curve.
Since y′ = tan(θ), it follows that θ = arctan(y′).
The differential of the arc length function is obtained from Figure 3.5.2 by approximating the arc length s by the hypotenuse of the dotted right triangle: ds = √(dx² + dy²) = dx √(1 + (dy/dx)²) = dx √(1 + y′²).
# Figure 3.5.1: the parabola y = x^2, its tangent line at x = 1, the point of tangency, and a label for the angle the tangent makes with the horizontal.
p1 := plot([x^2,Student:-Calculus1:-Tangent(x^2,1)],x=0..2, color=[red,blue], view=[0..1.5,0..2.5]):
# The letter q in the Symbol font renders as the Greek theta, labelling the angle.
p2 := plots:-textplot([.65,.11,q], font=[SYMBOL,12]):
# Mark the point of tangency (1, 1).
p3 := plot([[1,1]],style=point,symbol=solidcircle,symbolsize=15,color=green):
plots:-display([p1,p2,p3], scaling=constrained, tickmarks=[[0,2],[0,3]], labels=[x,y]);
Figure 3.5.1 Angle made by tangent line and horizontal
Figure 3.5.2 Element of arc length
The calculation of κ as the derivative of θ with respect to s is then as follows:
κ = dθ/ds = (dθ/dx) / (ds/dx) = [y″/(1 + y′²)] / √(1 + y′²) = y″/(1 + y′²)^(3/2),
which, with the absolute value taken so that curvature is positive, is the formula given in Table 3.5.1 for the explicit curve y = y(x).
The graphs of two functions f and g make second-order contact at x=a if the values of f and g, and their first two derivatives, agree at x=a. Table 3.5.2 lists these three conditions as equations, and provides amusing interpretations for this degree of contact between two curves.
f(a) = g(a),  f′(a) = g′(a),  f″(a) = g″(a)
Table 3.5.2 Conditions for second-order contact
Center and Circle of Curvature
The center of curvature for a plane curve that is the graph of y = f(x) is the center of the circle of curvature, the circle that makes second-order contact with the plane curve. The radius of the circle of curvature is the radius of curvature. Because the curvature of a circle of radius r is κ = 1/r, the radius of curvature is R = 1/κ.
Table 3.5.3 lists formulas for (h, k), the coordinates of the center of curvature, and for the radius of curvature. For the explicit curve y = y(x) these are the standard expressions
h = x − ẏ(1 + ẏ²)/ÿ,  k = y + (1 + ẏ²)/ÿ,  R = (1 + ẏ²)^(3/2)/|ÿ|
Table 3.5.3 Center and radius of curvature
The overdots represent differentiation with respect to the independent variable; because some of these derivatives are squared, this notation is used in place of the prime.
The trajectory traced by the center of curvature as the circle of curvature traverses the curve C is called the evolute of C. The curve C is called the involute.
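A short sketch (not part of the original help page) using SymPy to evaluate the curvature, radius and center of curvature for y = x² at x = 1, the case treated in the examples below; the variable names are illustrative only.

```python
import sympy as sp

x = sp.symbols('x')
y = x**2
yp, ypp = sp.diff(y, x), sp.diff(y, x, 2)

kappa = sp.Abs(ypp) / (1 + yp**2)**sp.Rational(3, 2)   # curvature of y = y(x)
R = 1 / kappa                                          # radius of curvature
h = x - yp*(1 + yp**2)/ypp                             # center of curvature (h, k)
k = y + (1 + yp**2)/ypp

print(sp.simplify(kappa))                   # curvature 2/(4*x**2 + 1)**(3/2)
print([v.subs(x, 1) for v in (R, h, k)])    # [5*sqrt(5)/2, -4, 7/2]
```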
Show that the curvature of the straight line y = mx + b is zero.
Show that the circle (x − h)² + (y − k)² = r² everywhere has constant curvature, that is, show κ = 1/r.
Obtain and graph the curvature κ(x) for y = x².
At x = 1, obtain the equation of the circle of curvature for y = x².
Show that at x = 1, the first and second derivatives for the curve and the circle of curvature agree.
Obtain the evolute for C, the graph of y(x) = x², and show that it is the locus of the center of curvature.
Use the appropriate formula from Table 3.5.1 to determine the curvature of y(x) = x^(3/2), x ≥ 0, then obtain the curvature from first principles, that is, by calculating the rate at which the tangent turns as arc length increases.
Not all oceanographers have their perspectives on the ocean shaped by experiences at sea. Take Kathie Kelly. She does not venture out on a ship, dive in a submarine, or even go to the beach to conduct her research. Instead, she experiences the ocean through satellite images and numbers on her computer. While she may miss out on the excitement of a life on the open ocean, the copious amounts of data at her fingertips more than make up for it. For Kathie, having enough data to pursue important questions about the ocean is what research is all about.
Observed variations in sea surface height (SSH) measured by the TOPEX/Poseidon altimeter (left). Two different model results of SSH (center and right).
These satellite data are helping Kathie gain insights into how the ocean influences the climate. At the simplest level, the ocean acts as a heat reservoir. It absorbs heat from the atmosphere when the atmosphere is warmer and releases heat to the atmosphere when the atmosphere is cooler. This transfer of heat between the ocean and atmosphere is called heat flux. The reality, however, is more complicated. Ocean currents redistribute the heat around the world by carrying warm water from the equator towards the poles. For example, the Gulf Stream moves huge amounts of warm water from the tropics up along the east coast of North America and across to northern Europe. The presence of this warm water in the North Atlantic helps explain why Scotland has a relatively mild climate when compared to places at similar latitudes in North America such as Churchill, Manitoba—a Canadian town famous for its seasonal polar bear population.
Kathie in her office at the Applied Physics Lab, University of Washington, looking at NASA scatterometer data.
An understanding of these interactions between air and water is critical to understanding the climate and predicting climate change. It requires sophisticated models both of the ocean and the atmosphere. Kathie is focusing her efforts on the ocean end. Specifically, she is trying to determine the most important factors influencing sea surface temperatures. These factors include heat flux from the atmosphere, ocean currents, winds, and the upwelling of water from beneath the surface.
Up until a decade ago, oceanographers had to go out to sea to measure parameters such as temperature, wind, and currents. These measurements were time-consuming, expensive, and infrequently repeated; therefore it was impossible to compare these parameters from year to year. Now Kathie receives satellite images from NASA containing this information every ten days. One satellite instrument called an altimeter detects currents by measuring horizontal differences in sea surface height. Changes in sea surface height can indicate the amount of heat in the ocean because the ocean expands when it is heated and contracts when it is cooled. So if you look downstream along a warm-water current, such as the Gulf Stream, the sea height on the right is higher (warm water) and the sea height on the left is lower (cool water). The altimeter measures sea surface height by bouncing radar signals off the ocean and timing their echo. These measurements have an accuracy of three centimeters—an amazing number considering that the satellites are 1,300 kilometers high.
Tracks (or paths) of buoys drifting at sea (green). Tracks of the TOPEX/Poseidon satellite (blue). Study region is marked by red rectangle.
A second satellite instrument called a scatterometer enables Kathie and her colleagues to calculate wind stress. The satellite sends down a pulse and measures how much it is scattered by ripples and waves created by wind stress. (Think about how wind creates ripples on a pond). Unlike instruments that just measure air speed, the scatterometer measures the movement in air relative to the movement of water. For example, if the wind was blowing at the same speed and direction as a current, it would create no stress on the water and there would be no ripples. Wind stress is an important factor in determining the transfer of heat between the ocean and atmosphere.
Kathie and her colleagues are working with an eight-year record of the currents. These records show changes in some of the major currents, including a change in the heat content of the Kuroshio Extension, a warm-water current off of the coast of Japan. It is unclear what is causing these changes and whether it is a cyclical event or represents a long-term trend.
The larger question is how changes in sea surface temperature affect the atmosphere, particularly surrounding the major currents. Some studies suggest that it is very important, but others suggest that the effect is small. Kathie’s studies of the factors affecting sea surface temperature will contribute to answering this question.
- Principal Oceanographer, Applied Physics Laboratory
- Professor (Affiliate), School of Oceanography University of Washington
More about Kathryn
Read an interview with Kathryn.
Get more info on Kathryn's background.
- Picture Gallery
See images of Kathryn at work.
- Learn More
Learn more about Kathryn's field
- Kathryn's Calendar
See Kathryn's typical work week.
- Related Links
Other sites related to Kathryn's career.
More Remarkable Careers
- Melanie Holland
- Faculty Research Associate, Microbial Ecology
Melanie Holland studies the microbes that thrive in scalding temperatures surrounding hydrothermal vents. These amazing organisms not only reveal important information about the vent communities, they may also provide insights into the origin of life on Earth and the possible existence of life on other planets.
- Rose Dufour
- Ship Scheduler and Clearance Officer, Ship Operations and Marine Technical Support
Rose Dufour and her job-share partner Elizabeth Brenner create the schedules for four research ships. The challenge is to keep the scientists, funding agencies, and foreign governments happy.
- Claudia Benitez-Nelson
- Assistant Professor, Chemical Oceanography
Claudia Benitez-Nelson uses radioactive isotopes to study the complex world of nutrient cycling in the oceans.
- Lauren Mullineaux
- Senior Scientist, Marine Biology
Lauren Mullineaux’s research group studies a side of benthic organisms (animals that live on the seafloor) that until recently has received little attention.
- Wen-lu Zhu
- Associate Scientist, Geology and Geophysics
Wen-lu Zhu studies the properties of rocks found deep in the ocean crust by recreating those conditions in the laboratory.
- Jo Griffith
- Principal Illustrator, Scientific and Oceanographic Data
Technical illustrator Jo Griffith hasn’t picked up a pen in over five years. Instead she uses a variety of computer programs to create graphs, maps, and illustrations for researchers.
- Emily Klein
- Professor of Geology, Geochemistry
Emily collects rocks from the deep seafloor. The chemicals that make up the rocks provide clues to how the oceanic crust is built.
- Margaret Leinen
- Assistant Director for Geosciences
As a scientist, Margaret Leinen studied sediments that have accumulated on the ocean floor. Now as the Assistant Director of Geosciences at the National Science Foundation, she oversees programs in Earth, Atmosphere, Ocean, and Environmental Sciences. She is also working on initiatives to bring more women and minorities into these fields.
- Maya Tolstoy
- Research Scientist, Geophysics
Marine seismologist Maya Tolstoy helps find active volcanoes on the seafloor by listening for their eruptions.
- Amy Bower
- Associate Scientist, Physical Oceanography
Amy studies the interactions between ocean currents and climate. These interactions are very complex.
- Kathryn Gillis
- Professor, Earth and Ocean Sciences
Kathryn Gillis dives to rifts in the seafloor that are as deep as six kilometers to learn about the processes taking place within the ocean crust.
- Dawn Wright
- Associate Scientist, Geography/Marine Geology
Master Lego-constructor and former bicycle-racer Dawn Wright has immersed herself in two disciplines. As a geologist, she is studying the cracks that form in the seafloor along the mid-ocean ridge. As a geographer, she is developing software that oceanographers are using to interpret seafloor data.
- Debby Ramsey
- Third Engineer, Marine Crew
As Third Engineer onboard the Research Vessel Thomas G. Thompson, Debby Ramsey helps keep all of the equipment that has moving parts running smoothly.
Computer programming, a.k.a. coding, used to be something done only by those in the know, involving complex strings of characters and more than a bit of magical mystique.
But that was before computer programming was made more accessible for everyone with block-based code: interlocking, graphical blocks representing text-based code. With this evolution, coding became less about the character strings and more about the problem solving. In essence, it simplified the process to the point where even a Kindergartener can do it.
Everybody in the country should learn how to program a computer… because it teaches you how to think – Steve Jobs
Teach Coding to Develop Creative Thinking Skills
The Partnership for 21st Century Skills (P21) defines creative thinking skills in two parts: thinking creatively as an individual and working with others creatively. It’s about generating new ideas, understanding that failure is part of the creative process, and considering diverse perspectives.
Individual Creative Thinking Skills: When we teach coding to students, they are given a digital toolbox, a coding stage, and unlimited possibilities for projects – creating games, writing songs, conjuring up characters, developing plotlines – there’s no end to what they can create.
Along the way, they invariably run into roadblocks as they work through the creative process: a tool doesn’t work the way they expect, the outcome is off base, or there’s a bug they can’t easily solve. To push past those challenges, they must seek alternate solutions, explore different possibilities, and once in a while start over from scratch.
Collaborative Creative Thinking Skills: When coding in the classroom in pairs or groups, a thought shared by one collaborator triggers an idea in someone else’s mind. By considering the issue from multiple angles and melding the ideas, the project grows stronger: one person’s big vision combines with another’s attention to detail and a third’s left (or right) of center idea that brings it all together. The creative process is stronger as a direct result of the contributions of more than one person.
Digging deeper, programmers often work to solve problems, and that’s when creative thinking skills are a game changer. The creative angle applies to every discipline from applied physics to zoology. It’s about thinking outside the proverbial box to find solutions that work.
When there’s more than one head in the game, the potential expands. Harness that potential to impact the biggest problems we face as humans, and the results can be seriously inspiring!
Teach Coding to Promote Student Collaboration
The organizations involved in P21 work know that students’ future success, regardless of their path, will depend on their ability to play well with others. Building essential collaboration skills for students like listening intently, considering ideas different from their own, and learning to be flexible is key. In today’s classrooms, teachers regularly create opportunities for student collaboration, and coding is another perfect fit.
- In paired programming, kids can learn to be the guide on the side rather than the one who jumps in to fix a string of code that needs to be debugged, allowing everyone to learn. They learn to sit side-by-side with their classmate, talking things through but allowing the puzzled classmate to do the work.
- The “Ask Three Then Me” strategy works wonders to facilitate student collaboration when coding. The rule: ask three classmates before asking the teacher for help. Invariably, someone in the classroom has figured out how to navigate that step.
- Imagine a gaggle of girls digging in together to find a bug and resolve it, the quietest student taking the lead in demonstrating how a loop makes code more efficient (a small sketch of that idea follows this list), or the celebration of a group that persevered past multiple points of frustration to the successful completion of their coded game.
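For readers who want a concrete picture of that "loop" moment, here is a minimal sketch in Python; block-based environments such as Scratch express the same idea with a "repeat" block, and the draw_square function below is just a hypothetical stand-in for a drawing block.

```python
# Hypothetical helper standing in for a "draw a square" block.
def draw_square():
    print("draw square")

# Without a loop: the same instruction repeated by hand.
draw_square()
draw_square()
draw_square()
draw_square()

# With a loop: one short block of code expresses the repetition,
# and changing range(4) to range(8) draws twice as many squares.
for _ in range(4):
    draw_square()
```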
Along those same lines, when we teach coding, it’s absolutely okay to say, “I don’t know how to code that yet. Let’s see if we can figure this out together.” That brave yet simple admission demonstrates in a very real way that student agency is key. There isn’t always going to be a teacher (or boss or leader) who has all the answers in coding – or in anything else.
In a true learning community, authentic student collaboration becomes part of the day-to-day business of the classroom allowing for those same skills and behaviors to be part of what students will one day bring to jobs we cannot yet imagine. Tackling coding together brings abundant opportunities for working together, for leading and following, and for listening to each voice.
Teach Coding to Cultivate Student Communication Skills
When teachers talk about student communication skills, they typically mean writing and speaking, and in coding, those are definitely part of the equation. But coding presents an opportunity to communicate in a new language.
As students learn to code, they pick up complex vocabulary – algorithm, sequence, computation – but in addition, they learn to order their thoughts logically. In paired programming, students are pushed beyond statements like "Put that there" to "Move the jump block in front of the draw block."
These communication skills, with an increased level of articulation and order, impact their learning in other areas as well. Their brains understand the logical sequence of events, leading them to be able to sequence a story. They grasp more firmly the idea of cause and effect, deepening their understanding of other texts they tackle, and they can communicate those ideas more effectively.
The coding guide-on-the-side must use clear vocabulary in a logical sequence to communicate to their partner which blocks need to be adjusted to debug the code. In creating a new idea, they must find words to convey their thoughts to their peers.
Teach Coding to Teach Critical Thinking
At the heart of coding is computational thinking, which encompasses logic, analysis, abstract representation, algorithms, and the ability to generalize and transfer a process to new situations. Computer programming is founded on logic and algorithms that are made stronger when critical thinking skills are applied to turn abstract ideas into real world solutions.
In Kindergarten, student critical thinking skills can be tested and challenged by troubleshooting a marble run that doesn’t work as expected. The thinking is the same as debugging code. It is a step-by-step process in which analysis is made at each juncture – it works here; it doesn’t work there.
Ultimately, the goal with teaching coding in the classroom is to transfer critical thinking skills to more sophisticated applications: the workings of a machine, troubleshooting technology devices, analyzing the logistics of a project, or examining lines of code.
As a serious bonus, the frustration of code that doesn’t work gives students ample opportunity to grow their persistence, to learn the grit that will give them the edge in every other aspect of their lives.
Diving in to Coding in the Classroom
Ready to take the plunge to start teaching coding in the classroom? Or perhaps just take things slowly, one step at a time? Either approach works. What’s important is to begin.
Even if devices aren’t available or a 1:1 student ratio isn’t possible, there are unplugged coding activities that require no electricity at all but deepen student learning nonetheless. And honestly, lots of ways to teach coding to students require no buttons – just a brain. It isn’t even necessary to overhaul the curriculum. So much of the work of computational thinking and coding begins with incorporating simple changes in instructional methodology.
To push student creative thinking skills, provide time for them to brainstorm solutions to a variety of problems, offer them many opportunities to work on open-ended questions, and celebrate the range of answers that results. Encourage them to find more than one way to solve a math problem or give them a pile of recycled material and see what they can conjure up. What matters here is the openness of creative thinking – more than one right answer, more than one way to one right answer, and more opportunities to create new ideas and products.
To encourage student collaboration skills, design learning experiences that require the input of everyone on the team. When brainstorming, give each student a different color of marker for the chart paper brainstorm, providing a visual representation of each student’s additions to the overall thought process. When designing group experiences, create an essential role for each student, rotating those roles each time, so that the successful completion of the task requires the contributions of all.
To enhance student communication skills, provide opportunities to share ideas in a variety of ways. Sure, discussion and conversation can play a part, but communicating also occurs via artistic expression, written language, and using manipulatives. Task one student with summarizing the group’s ideas, another with note-taking, and a third with timekeeping. What’s essential here is giving students a chance to contribute to the ideas of the group and to listen and process others’ thoughts as well.
To build students’ critical thinking skills, gently grow their thinking with questions like:
- What are the possible outcomes of that?
- What seems to be in the way?
- Can an idea be simplified?
- Is there another way to solve that problem? How?
The most powerful learning opportunities come when all four skills are tapped simultaneously. Imagine a project that requires out-of-the-box thinking, working together, sharing ideas, and solving an authentic problem. Coding delivers exactly that kind of experience.
Coding Their Way to 21st Century Learning
Computer languages will evolve and change. Apps will come and go. But the thinking behind how to write code transcends them. There are problems to be solved, and the minds sitting in our classrooms will be the ones who solve the world’s greatest challenges.
- Students must learn creative thinking skills to think beyond the box, to imagine the world in a different way, and to adjust when Plans A, B, and C haven’t worked.
- Students must cultivate collaboration skills to listen to one another, challenge each other, celebrate successes, and overcome challenges.
- Students must develop communication skills to share their ideas, communicate effectively, and articulate their thoughts.
- Students must apply their critical thinking skills to analyze data, examine narratives, find connections, and work through logic.
Want to do all of that? Teach students to code. Take one hour to try it out. For support, reach out to the coding communities and engage with other educators on social media. Take that first step. After all, it’s so easy even the Kindergarteners are doing it!
June 26, 2019
A regular polygon has equal angles and sides.
Regular polygons can be inscribed in circles.
Elements of a Regular Polygon
The center is the inner point equidistant from each vertex.
The radius, r, is the segment that goes from the center to each vertex.
The apothem, a, is the distance from the center to the midpoint of one side.
Angles of a regular polygon
Central Angle of a Regular Polygon
The central angle of a regular polygon is formed by two consecutive radii.
If n is the number of sides of a polygon:
Central angle = 360° / n
Central angle of a regular pentagon = 360° / 5 = 72°
Interior Angle of a Regular Polygon
The interior angle of a regular polygon is formed by two consecutive sides.
Interior angle = 180° − central angle
Interior angle of a regular pentagon = 180° − 72° = 108°
Exterior Angle of a Regular Polygon
The exterior angle of a regular polygon is formed by a side and the extension of a consecutive side.
The exterior and interior angles are supplementary; that is to say, they add up to 180°.
Exterior angle = central angle
Exterior angle of a regular pentagon = 72°
The perimeter is equal to the sum of the lengths of all sides or the length of a side multiplied by the number of sides.
P = n · l
Calculate the perimeter and area of the hexagon:
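The figure that accompanied this exercise is not reproduced here, so the hexagon's actual dimensions are unknown. The sketch below simply applies the formulas above to a regular hexagon with an assumed side length of 4 units, using the standard area formula for a regular polygon, area = (perimeter × apothem) / 2:

```python
import math

n = 6         # number of sides (hexagon)
side = 4.0    # assumed side length; the value from the original figure is not available

central_angle = 360 / n                # 60 degrees
interior_angle = 180 - central_angle   # 120 degrees
exterior_angle = central_angle         # 60 degrees

perimeter = n * side                              # P = n * l = 24
apothem = side / (2 * math.tan(math.pi / n))      # a = l / (2 * tan(pi / n)) ≈ 3.464
area = perimeter * apothem / 2                    # ≈ 41.57 square units

print(central_angle, interior_angle, exterior_angle)
print(perimeter, round(apothem, 3), round(area, 3))
```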
In mathematics, an abelian group, also called a commutative group, is a group in which the result of applying the group operation to two group elements does not depend on the order in which they are written. That is, the group operation is commutative. With addition as an operation, the integers and the real numbers form abelian groups, and the concept of an abelian group may be viewed as a generalization of these examples. Abelian groups are named after early 19th century mathematician Niels Henrik Abel.
The concept of an abelian group underlies many fundamental algebraic structures, such as fields, rings, vector spaces, and algebras. The theory of abelian groups is generally simpler than that of their non-abelian counterparts, and finite abelian groups are very well understood and fully classified.
A group in which the group operation is not commutative is called a "non-abelian group" or "non-commutative group".
There are two main notational conventions for abelian groups – additive and multiplicative.
Generally, the multiplicative notation is the usual notation for groups, while the additive notation is the usual notation for modules and rings. The additive notation may also be used to emphasize that a particular group is abelian whenever both abelian and non-abelian groups are considered; some notable exceptions are near-rings and partially ordered groups, where an operation is written additively even when non-abelian.
Camille Jordan named abelian groups after Norwegian mathematician Niels Henrik Abel, because Abel found that the commutativity of the group of a polynomial implies that the roots of the polynomial can be calculated by using radicals.
Somewhat akin to the dimension of vector spaces, every abelian group has a rank. It is defined as the maximal cardinality of a set of linearly independent (over the integers) elements of the group. Finite abelian groups and torsion groups have rank zero, and every abelian group of rank zero is a torsion group. The integers and the rational numbers have rank one, as well as every nonzero additive subgroup of the rationals. On the other hand, the multiplicative group of the nonzero rationals has an infinite rank, as it is a free abelian group with the set of the prime numbers as a basis (this results from the fundamental theorem of arithmetic).
The classification of finite abelian groups was proven by Leopold Kronecker in 1870, though it was not stated in modern group-theoretic terms until later, and was preceded by a similar classification of quadratic forms by Carl Friedrich Gauss in 1801; see history for details.
See also list of small groups for finite abelian groups of order 30 or less.
One can check that this yields the orders in the previous examples as special cases (see Hillar, C., & Rhea, D.).
This homomorphism is surjective, and its kernel is finitely generated (since integers form a Noetherian ring). Consider the matrix M with integer entries, such that the entries of its jth column are the coefficients of the jth generator of the kernel. Then, the abelian group is isomorphic to the cokernel of the linear map defined by M. Conversely, every integer matrix defines a finitely generated abelian group.
It follows that the study of finitely generated abelian groups is totally equivalent with the study of integer matrices. In particular, changing the generating set of A is equivalent with multiplying M on the left by a unimodular matrix (that is, an invertible integer matrix whose inverse is also an integer matrix). Changing the generating set of the kernel of M is equivalent with multiplying M on the right by a unimodular matrix.
where r is the number of zero rows at the bottom of the reduced matrix (and is also the rank of the group). This is the fundamental theorem of finitely generated abelian groups.
The existence of algorithms for Smith normal form shows that the fundamental theorem of finitely generated abelian groups is not only a theorem of abstract existence, but provides a way for computing expression of finitely generated abelian groups as direct sums.
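As an illustration of this computational viewpoint, the sketch below uses SymPy's Smith-normal-form helper (assumed to be available under sympy.matrices.normalforms in recent SymPy versions; the exact import path and signature may differ in yours) to read off the invariant factors of a finitely generated abelian group presented by an integer matrix:

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form  # assumption: available in your SymPy version

# Presentation matrix of an abelian group with two generators x and y,
# whose columns encode the relations 2x + 6y = 0 and 4x + 8y = 0.
M = Matrix([[2, 4],
            [6, 8]])

# The Smith normal form is a diagonal matrix with the same cokernel;
# its nonzero diagonal entries are the invariant factors of the group.
D = smith_normal_form(M, domain=ZZ)
print(D)  # expected: Matrix([[2, 0], [0, 4]]), i.e. the group is Z/2 + Z/4
```

Changing the generating sets corresponds to multiplying M by unimodular matrices on either side, which leaves the Smith normal form, and hence the group, unchanged.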
An abelian group is called torsion-free if every non-zero element has infinite order. Several classes of torsion-free abelian groups have been studied extensively:
The classification theorems for finitely generated, divisible, countable periodic, and rank 1 torsion-free abelian groups explained above were all obtained before 1950 and form a foundation of the classification of more general infinite abelian groups. Important technical tools used in classification of infinite abelian groups are pure and basic subgroups. Introduction of various invariants of torsion-free abelian groups has been one avenue of further progress. See the books by Irving Kaplansky, László Fuchs, Phillip Griffith, and David Arnold, as well as the proceedings of the conferences on Abelian Group Theory published in Lecture Notes in Mathematics for more recent findings.
The additive group of a ring is an abelian group, but not all abelian groups are additive groups of rings (with nontrivial multiplication). Some important topics in this area of study are:
Moreover, abelian groups of infinite order lead, quite surprisingly, to deep questions about the set theory commonly assumed to underlie all of mathematics. Take the Whitehead problem: are all Whitehead groups of infinite order also free abelian groups? In the 1970s, Saharon Shelah proved that the Whitehead problem is undecidable in ZFC: it can be neither proved nor disproved from the standard axioms of set theory.
Among mathematical adjectives derived from the proper name of a mathematician, the word "abelian" is rare in that it is often spelled with a lowercase a, rather than an uppercase A, the lack of capitalization being a tacit acknowledgement not only of the degree to which Abel's name has been institutionalized but also of how ubiquitous in modern mathematics are the concepts introduced by him.
Leaves are the powerhouses of plants, generating the energy a plant needs. They are the green parts of a plant and contain many veins; both net-like and parallel veins help carry water and nutrients. Functions such as food storage and the release of excess water in humid conditions help the plant flourish in all seasons.
What is a Leaf?
A leaf is the main part of a vascular plant responsible for making food. Vascular plants contain cells or vessels that carry fluid. Leaves get their green color from chlorophyll, which helps in making food. Leaves carry out many important functions, such as making food with the help of sunlight and handling the exchange of carbon dioxide and oxygen.
Some leaves are deciduous while others are evergreen. Deciduous trees shed their leaves when the seasons change to limit water loss, whereas evergreen leaves persist through all the seasons.
Parts of a Leaf and Their Functions
A leaf is composed of various parts that help it perform its tasks and functions. The parts of a leaf can be categorized into two types: internal parts and external parts.
Internal parts:
- Stomata
- Guard Cells
- Epidermal Cells
- Mesophyll Cells
- Vascular Bundles
External parts:
- Leaf Base
- Petiole
- Lamina
1. Stomata
A stoma is a small opening in the leaf that controls the inflow and outflow of water and gases for photosynthesis (the process by which the leaf uses sunlight to make the food the plant needs).
2. Guard Cells
Guard cells control the process of transpiration in which water passage takes place from roots through the vascular system. They are present around the stomata.
3. Epidermal Cells
Epidermal cells are present on the upper and lower surfaces of a leaf. They limit water loss by sticking closely to each other, helping the leaf retain water. Epidermal cells are also known as the skin of the leaf.
4. Mesophyll Cells
Mesophyll cells lie beneath the epidermal cells. They contain chloroplasts and carry out photosynthesis. The spaces among these cells allow carbon dioxide to move through the leaf and also let the leaf flex when the wind blows.
5. Vascular Bundles (Xylem, Phloem, Veins)
Xylem supplies leaves with water and nutrients drawn from the roots up through the stem. It carries dissolved minerals such as salts.
Phloem carries nutrients such as sugars made in the leaves to the rest of the plant. Xylem and phloem perform similar transport roles but carry different materials.
1. Leaf Base
A leaf base is the flat area that attaches the leaf to the stem of a plant. It is the lowest part of the leaf and supports all other parts of the leaf.
2. Petiole
The petiole is the stalk that connects the leaf to the stem and turns the leaf to face the sun. Leaves without a petiole are called sessile leaves.
3. Lamina
The lamina, or leaf blade, is the broad, flat part of the leaf that contains the veins and chloroplasts. The main functions of a leaf, such as photosynthesis and transpiration, take place in the lamina through its internal parts.
Venation of a Leaf
It is the arrangement of veins on a leaf blade that function as food and water carriers. There are mainly two types of venation: Reticulate Venation and Parallel Venation.
Reticulate Venation: In this type of venation, the veins form a web-like arrangement; they do not run straight but interconnect with other veins.
Parallel Venation: In this type of venation, the veins run in straight, parallel lines and do not cross each other.
Types of Leaves
There are two main types of leaves which are simple leaves and compound leaves.
Simple leaves are those which attach directly to the petiole and are not further subdivided into smaller leaflets. Examples of simple leaves include maple, pear, mango, and guava.
Compound leaves are those which are further subdivided into smaller leaves, or leaflets, that spread from a shared stalk. An example of a compound leaf is the chestnut leaf, which spreads five to seven leaflets.
Functions of a Leaf
The main function of the leaf is to make food for the plant through photosynthesis, using water and nutrients drawn up from the roots. The following are a few of its functions:
Transpiration
Transpiration is the process through which the leaf releases water vapor through its stomata, drawing water and nutrients up from the roots. Through this process, a leaf regulates its water content and controls evaporation.
Guttation
Guttation is the process through which the leaf discharges water droplets from its edges or tips. It generally takes place at night in vascular plants, when moisture is high and the stomata are closed.
Storage of Food
The food storage function of leaves often takes place in Cabbage, Lettuce, Spinach, and various other vegetable plants. A leaf stores starch as its food. The function of food storage is carried out by vascular bundles.
Exchange of Gases
The exchange of gases is mainly handled by the stomata in the leaf through their small openings. The mesophyll cells, which lie under the epidermal cells, then allow the diffusion of gases into and out of the leaf.
Hydrogen bond
A hydrogen bond is the electrostatic attraction between two polar groups that occurs when a hydrogen (H) atom covalently bound to a highly electronegative atom such as nitrogen (N), oxygen (O), or fluorine (F) experiences the electrostatic field of another highly electronegative atom nearby.
Hydrogen bonds can occur between molecules (intermolecular) or within different parts of a single molecule (intramolecular). Depending on geometry and environment, the hydrogen bond free energy content is between 1 and 5 kcal/mol. This makes it stronger than a van der Waals interaction, but weaker than covalent or ionic bonds. This type of bond can occur in inorganic molecules such as water and in organic molecules like DNA and proteins.
Intermolecular hydrogen bonding is responsible for the high boiling point of water (100 °C) compared to the other group 16 hydrides that have much weaker hydrogen bonds. Intramolecular hydrogen bonding is partly responsible for the secondary and tertiary structures of proteins and nucleic acids. It also plays an important role in the structure of polymers, both synthetic and natural.
The hydrogen bond is an attractive interaction between a hydrogen atom from a molecule or a molecular fragment X–H in which X is more electronegative than H, and an atom or a group of atoms in the same or a different molecule, in which there is evidence of bond formation.
An accompanying detailed technical report provides the rationale behind the new definition.
A hydrogen atom attached to a relatively electronegative atom will play the role of the hydrogen bond donor. This electronegative atom is usually fluorine, oxygen, or nitrogen. A hydrogen attached to carbon can also participate in hydrogen bonding when the carbon atom is bound to electronegative atoms, as is the case in chloroform, CHCl3. An example of a hydrogen bond donor is the hydrogen from the hydroxyl group of ethanol, which is bonded to an oxygen.
In a hydrogen bond, the electronegative atom not covalently attached to the hydrogen is named proton acceptor, whereas the one covalently bound to the hydrogen is named the proton donor.
In the donor molecule, the electronegative atom attracts the electron cloud from around the hydrogen nucleus of the donor, and, by decentralizing the cloud, leaves the atom with a positive partial charge. Because of the small size of hydrogen relative to other atoms and molecules, the resulting charge, though only partial, represents a large charge density. A hydrogen bond results when this strong positive charge density attracts a lone pair of electrons on another heteroatom, which then becomes the hydrogen-bond acceptor.
The hydrogen bond is often described as an electrostatic dipole-dipole interaction. However, it also has some features of covalent bonding: it is directional and strong, produces interatomic distances shorter than the sum of the van der Waals radii, and usually involves a limited number of interaction partners, which can be interpreted as a type of valence. These covalent features are more substantial when acceptors bind hydrogens from more electronegative donors.
The partially covalent nature of a hydrogen bond raises the following questions: "To which molecule or atom does the hydrogen nucleus belong?" and "Which should be labeled 'donor' and which 'acceptor'?" Usually, this is simple to determine on the basis of interatomic distances in the X−H···Y system, where the dots represent the hydrogen bond: the X−H distance is typically ≈110 pm, whereas the H···Y distance is ≈160 to 200 pm. Liquids that display hydrogen bonding (such as water) are called associated liquids.
Some typical hydrogen bond energies for common donor–acceptor pairs:
- F−H···:F (161.5 kJ/mol or 38.6 kcal/mol)
- O−H···:N (29 kJ/mol or 6.9 kcal/mol)
- O−H···:O (21 kJ/mol or 5.0 kcal/mol)
- N−H···:N (13 kJ/mol or 3.1 kcal/mol)
- N−H···:O (8 kJ/mol or 1.9 kcal/mol)
- HO−H···:OH3+ (18 kJ/mol or 4.3 kcal/mol; data obtained using molecular dynamics as detailed in the reference and should be compared to 7.9 kJ/mol for bulk water, obtained using the same molecular dynamics.)
Quantum chemical calculations of the relevant interresidue potential constants (compliance constants) revealed[how?] large differences between individual H bonds of the same type. For example, the central interresidue N−H···N hydrogen bond between guanine and cytosine is much stronger in comparison to the N−H···N bond between the adenine-thymine pair.
The length of hydrogen bonds depends on bond strength, temperature, and pressure. The bond strength itself is dependent on temperature, pressure, bond angle, and environment (usually characterized by local dielectric constant). The typical length of a hydrogen bond in water is 197 pm. The ideal bond angle depends on the nature of the hydrogen bond donor. The following hydrogen bond angles between a hydrofluoric acid donor and various acceptors have been determined experimentally:
[Table omitted: acceptor···donor pair, VSEPR geometry of the acceptor, and measured angle in degrees.]
In the book The Nature of the Chemical Bond, Linus Pauling credits T. S. Moore and T. F. Winmill with the first mention of the hydrogen bond, in 1912. Moore and Winmill used the hydrogen bond to account for the fact that trimethylammonium hydroxide is a weaker base than tetramethylammonium hydroxide. The description of hydrogen bonding in its better-known setting, water, came some years later, in 1920, from Latimer and Rodebush. In that paper, Latimer and Rodebush cite work by a fellow scientist at their laboratory, Maurice Loyal Huggins, saying, "Mr. Huggins of this laboratory in some work as yet unpublished, has used the idea of a hydrogen kernel held between two atoms as a theory in regard to certain organic compounds."
Hydrogen bonds in water
The most ubiquitous and perhaps simplest example of a hydrogen bond is found between water molecules. In a discrete water molecule, there are two hydrogen atoms and one oxygen atom. Two molecules of water can form a hydrogen bond between them; the simplest case, when only two molecules are present, is called the water dimer and is often used as a model system. When more molecules are present, as is the case with liquid water, more bonds are possible because the oxygen of one water molecule has two lone pairs of electrons, each of which can form a hydrogen bond with a hydrogen on another water molecule. This can repeat such that every water molecule is H-bonded with up to four other molecules, as shown in the figure (two through its two lone pairs, and two through its two hydrogen atoms). Hydrogen bonding strongly affects the crystal structure of ice, helping to create an open hexagonal lattice. The density of ice is less than the density of water at the same temperature; thus, the solid phase of water floats on the liquid, unlike most other substances.
Liquid water's high boiling point is due to the high number of hydrogen bonds each molecule can form, relative to its low molecular mass. Owing to the difficulty of breaking these bonds, water has a very high boiling point, melting point, and viscosity compared to otherwise similar liquids not conjoined by hydrogen bonds. Water is unique because its oxygen atom has two lone pairs and two hydrogen atoms, meaning that the total number of bonds of a water molecule is up to four. For example, hydrogen fluoride—which has three lone pairs on the F atom but only one H atom—can form only two bonds (ammonia has the opposite problem: three hydrogen atoms but only one lone pair).
The exact number of hydrogen bonds formed by a molecule of liquid water fluctuates with time and depends on the temperature. From TIP4P liquid water simulations at 25 °C, it was estimated that each water molecule participates in an average of 3.59 hydrogen bonds. At 100 °C, this number decreases to 3.24 due to the increased molecular motion and decreased density, while at 0 °C, the average number of hydrogen bonds increases to 3.69. A more recent study found a much smaller number of hydrogen bonds: 2.357 at 25 °C. The differences may be due to the use of a different method for defining and counting the hydrogen bonds.
Where the bond strengths are more equivalent, one might instead find the atoms of two interacting water molecules partitioned into two polyatomic ions of opposite charge, specifically hydroxide (OH−) and hydronium (H3O+). (Hydronium ions are also known as "hydroxonium" ions.)
- 2 H2O ⇌ OH− + H3O+
Indeed, in pure water under conditions of standard temperature and pressure, this latter formulation is applicable only rarely; on average about one in every 5.5 × 10^8 molecules gives up a proton to another water molecule, in accordance with the value of the dissociation constant for water under such conditions. It is a crucial part of the uniqueness of water.
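That "1 in 5.5 × 10^8" figure follows directly from the ion-product constant of water (Kw ≈ 1.0 × 10^−14 at 25 °C) and the molar concentration of liquid water (≈ 55.5 mol/L); the quick check below is a sketch, not part of the original article:

```python
# Fraction of water molecules that are dissociated at 25 degrees C.
Kw = 1.0e-14        # ion-product constant of water, (mol/L)^2
h3o = Kw ** 0.5     # [H3O+] in pure water = 1.0e-7 mol/L
water = 55.5        # molar concentration of liquid water, mol/L

print(f"Dissociated fraction: {h3o / water:.2e}")        # ~1.8e-09
print(f"About 1 in {water / h3o:.2e} water molecules")   # ~5.5e+08
```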
Because water may form hydrogen bonds with solute proton donors and acceptors, it may competitively inhibit the formation of solute intermolecular or intramolecular hydrogen bonds. Consequently, hydrogen bonds between or within solute molecules dissolved in water are almost always unfavorable relative to hydrogen bonds between water and the donors and acceptors for hydrogen bonds on those solutes. Hydrogen bonds between water molecules have an average lifetime of 10^−11 seconds, or 10 picoseconds.
Bifurcated and over-coordinated hydrogen bonds in water
A single hydrogen atom can participate in two hydrogen bonds, rather than one. This type of bonding is called "bifurcated" (split in two or "two-forked"). It can exist, for instance, in complex natural or synthetic organic molecules. It has been suggested that a bifurcated hydrogen atom is an essential step in water reorientation.
Acceptor-type hydrogen bonds (terminating on an oxygen's lone pairs) are more likely to form bifurcation (it is called overcoordinated oxygen, OCO) than are donor-type hydrogen bonds, beginning on the same oxygen's hydrogens.
Hydrogen bonds in DNA and proteins
Hydrogen bonding also plays an important role in determining the three-dimensional structures adopted by proteins and nucleic bases. In these macromolecules, bonding between parts of the same macromolecule causes it to fold into a specific shape, which helps determine the molecule's physiological or biochemical role. For example, the double helical structure of DNA is due largely to hydrogen bonding between its base pairs (as well as pi stacking interactions), which link one complementary strand to the other and enable replication.
In the secondary structure of proteins, hydrogen bonds form between the backbone oxygens and amide hydrogens. When the spacing of the amino acid residues participating in a hydrogen bond occurs regularly between positions i and i + 4, an alpha helix is formed. When the spacing is less, between positions i and i + 3, then a 3₁₀ helix is formed. When two strands are joined by hydrogen bonds involving alternating residues on each participating strand, a beta sheet is formed. Hydrogen bonds also play a part in forming the tertiary structure of protein through interaction of R-groups. (See also protein folding).
The role of hydrogen bonds in protein folding has also been linked to osmolyte-induced protein stabilization. Protective osmolytes, such as trehalose and sorbitol, shift the protein folding equilibrium toward the folded state in a concentration-dependent manner. While the prevalent explanation for osmolyte action relies on excluded-volume effects that are entropic in nature, recent circular dichroism (CD) experiments have shown osmolytes to act through an enthalpic effect. The molecular mechanism for their role in protein stabilization is still not well established, though several mechanisms have been proposed. Recently, computer molecular dynamics simulations suggested that osmolytes stabilize proteins by modifying the hydrogen bonds in the protein hydration layer.
Several studies have shown that hydrogen bonds play an important role for the stability between subunits in multimeric proteins. For example, a study of sorbitol dehydrogenase displayed an important hydrogen bonding network which stabilizes the tetrameric quaternary structure within the mammalian sorbitol dehydrogenase protein family.
A protein backbone hydrogen bond incompletely shielded from water attack is a dehydron. Dehydrons promote the removal of water through proteins or ligand binding. The exogenous dehydration enhances the electrostatic interaction between the amide and carbonyl groups by de-shielding their partial charges. Furthermore, the dehydration stabilizes the hydrogen bond by destabilizing the nonbonded state consisting of dehydrated isolated charges.
Hydrogen bonds in polymers
Many polymers are strengthened by hydrogen bonds in their main chains. Among the synthetic polymers, the best known example is nylon, where hydrogen bonds occur in the repeat unit and play a major role in crystallization of the material. The bonds occur between carbonyl and amine groups in the amide repeat unit. They effectively link adjacent chains to create crystals, which help reinforce the material. The effect is greatest in aramid fibre, where hydrogen bonds stabilize the linear chains laterally. The chain axes are aligned along the fibre axis, making the fibres extremely stiff and strong. Hydrogen bonds are also important in the structure of cellulose and derived polymers in its many different forms in nature, such as wood and natural fibres such as cotton and flax.
The hydrogen bond networks make both natural and synthetic polymers sensitive to humidity levels in the atmosphere because water molecules can diffuse into the surface and disrupt the network. Some polymers are more sensitive than others. Thus nylons are more sensitive than aramids, and nylon 6 more sensitive than nylon-11.
Symmetric hydrogen bond
A symmetric hydrogen bond is a special type of hydrogen bond in which the proton is spaced exactly halfway between two identical atoms. The strength of the bond to each of those atoms is equal. It is an example of a three-center four-electron bond. This type of bond is much stronger than a "normal" hydrogen bond. The effective bond order is 0.5, so its strength is comparable to a covalent bond. It is seen in ice at high pressure, and also in the solid phase of many anhydrous acids such as hydrofluoric acid and formic acid at high pressure. It is also seen in the bifluoride ion [F−H−F]−.
Symmetric hydrogen bonds have been observed recently spectroscopically in formic acid at high pressure (>GPa). Each hydrogen atom forms a partial covalent bond with two atoms rather than one. Symmetric hydrogen bonds have been postulated in ice at high pressure (Ice X). Low-barrier hydrogen bonds form when the distance between two heteroatoms is very small.
The hydrogen bond can be compared with the closely related dihydrogen bond, which is also an intermolecular bonding interaction involving hydrogen atoms. These structures have been known for some time, and well characterized by crystallography; however, an understanding of their relationship to the conventional hydrogen bond, ionic bond, and covalent bond remains unclear. Generally, the hydrogen bond is characterized by a proton acceptor that is a lone pair of electrons in nonmetallic atoms (most notably in the nitrogen, and chalcogen groups). In some cases, these proton acceptors may be pi-bonds or metal complexes. In the dihydrogen bond, however, a metal hydride serves as a proton acceptor, thus forming a hydrogen-hydrogen interaction. Neutron diffraction has shown that the molecular geometry of these complexes is similar to hydrogen bonds, in that the bond length is very adaptable to the metal complex/hydrogen donor system.
Advanced theory of the hydrogen bond
In 1999, Isaacs et al. showed from interpretations of the anisotropies in the Compton profile of ordinary ice that the hydrogen bond is partly covalent. However, this interpretation was challenged by Ghanty et al., who concluded that considering electrostatic forces alone could explain the experimental results. Some NMR data on hydrogen bonds in proteins also indicate covalent bonding.
Most generally, the hydrogen bond can be viewed as a metric-dependent electrostatic scalar field between two or more intermolecular bonds. This is slightly different from the intramolecular bound states of, for example, covalent or ionic bonds; however, hydrogen bonding is generally still a bound state phenomenon, since the interaction energy has a net negative sum. The initial theory of hydrogen bonding proposed by Linus Pauling suggested that the hydrogen bonds had a partial covalent nature. This remained a controversial conclusion until the late 1990s when NMR techniques were employed by F. Cordier et al. to transfer information between hydrogen-bonded nuclei, a feat that would only be possible if the hydrogen bond contained some covalent character. While much experimental data has been recovered for hydrogen bonds in water, for example, that provide good resolution on the scale of intermolecular distances and molecular thermodynamics, the kinetic and dynamical properties of the hydrogen bond in dynamic systems remain unchanged.
Dynamics probed by spectroscopic means
The dynamics of hydrogen bond structures in water can be probed by the IR spectrum of OH stretching vibration. In the hydrogen bonding network in protic organic ionic plastic crystals (POIPCs), which are a type of phase change material exhibiting solid-solid phase transitions prior to melting, variable-temperature infrared spectroscopy can reveal the temperature dependence of hydrogen bonds and the dynamics of both the anions and the cations. The sudden weakening of hydrogen bonds during the solid-solid phase transition seems to be coupled with the onset of orientational or rotational disorder of the ions.
Hydrogen bonding phenomena
- Dramatically higher boiling points of NH3, H2O, and HF compared to the heavier analogues PH3, H2S, and HCl.
- Increase in the melting point, boiling point, solubility, and viscosity of many compounds can be explained by the concept of hydrogen bonding.
- Occurrence of proton tunneling during DNA replication is believed to be responsible for cell mutations.
- Viscosity of anhydrous phosphoric acid and of glycerol
- Dimer formation in carboxylic acids and hexamer formation in hydrogen fluoride, which occur even in the gas phase, resulting in gross deviations from the ideal gas law.
- Pentamer formation of water and alcohols in apolar solvents.
- High water solubility of many compounds such as ammonia is explained by hydrogen bonding with water molecules.
- Negative azeotropy of mixtures of HF and water
- Deliquescence of NaOH is caused in part by reaction of OH− with moisture to form hydrogen-bonded H3O2− species. An analogous process happens between NaNH2 and NH3, and between NaF and HF.
- The fact that ice is less dense than liquid water is due to a crystal structure stabilized by hydrogen bonds.
- The presence of hydrogen bonds can cause an anomaly in the normal succession of states of matter for certain mixtures of chemical compounds as temperature increases or decreases. These compounds can be liquid until a certain temperature, then solid even as the temperature increases, and finally liquid again as the temperature rises over the "anomaly interval".
- Smart rubber utilizes hydrogen bonding as its sole means of bonding, so that it can "heal" when torn, because hydrogen bonding can occur on the fly between two surfaces of the same polymer.
- Strength of nylon and cellulose fibres.
- Wool, being a protein fibre, is held together by hydrogen bonds, causing wool to recoil when stretched. However, washing at high temperatures can permanently break the hydrogen bonds and a garment may permanently lose its shape.
- Sweetman, A. M.; Jarvis, S. P.; Sang, Hongqian; Lekkas, I.; Rahe, P.; Wang, Yu; Wang, Jianbo; Champness, N.R.; Kantorovich, L.; Moriarty, P. (2014). "Mapping the force field of a hydrogen-bonded assembly". Nature Communications. 5. Bibcode:2014NatCo...5E3931S. PMC . PMID 24875276. doi:10.1038/ncomms4931.
- Hapala, Prokop; Kichin, Georgy; Wagner, Christian; Tautz, F. Stefan; Temirov, Ruslan; Jelínek, Pavel (2014-08-19). "Mechanism of high-resolution STM/AFM imaging with functionalized tips". Physical Review B. 90 (8): 085421. doi:10.1103/PhysRevB.90.085421.
- Hämäläinen, Sampsa K.; van der Heijden, Nadine; van der Lit, Joost; den Hartog, Stephan; Liljeroth, Peter; Swart, Ingmar (2014-10-31). "Intermolecular Contrast in Atomic Force Microscopy Images without Intermolecular Bonds". Physical Review Letters. 113 (18): 186102. PMID 25396382. doi:10.1103/PhysRevLett.113.186102.
- IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (2006–) "hydrogen bond".
- John R. Sabin (1971). "Hydrogen bonds involving sulfur. I. Hydrogen sulfide dimer". J. Am. Chem. Soc. 93 (15): 3613–3620. doi:10.1021/ja00744a012.
- Arunan, Elangannan; Desiraju, Gautam R.; Klein, Roger A.; Sadlej, Joanna; Scheiner, Steve; Alkorta, Ibon; Clary, David C.; Crabtree, Robert H.; Dannenberg, Joseph J.; Hobza, Pavel; Kjaergaard, Henrik G.; Legon, Anthony C.; Mennucci, Benedetta; Nesbitt, David J. (2011). "Definition of the hydrogen bond". Pure Appl. Chem. 83 (8): 1637–1641. doi:10.1351/PAC-REC-10-01-02.
- Arunan, Elangannan; Desiraju, Gautam R.; Klein, Roger A.; Sadlej, Joanna; Scheiner, Steve; Alkorta, Ibon; Clary, David C.; Crabtree, Robert H.; Dannenberg, Joseph J.; Hobza, Pavel; Kjaergaard, Henrik G.; Legon, Anthony C.; Mennucci, Benedetta; Nesbitt, David J. (2011). "Defining the hydrogen bond: An Account". Pure Appl. Chem. 83 (8): 1619–1636. doi:10.1351/PAC-REP-10-01-01.
- Beijer, Felix H.; Kooijman, Huub; Spek, Anthony L.; Sijbesma, Rint P.; Meijer, E. W. (1998). "Self-Complementarity Achieved through Quadruple Hydrogen Bonding". Angew. Chem. Int. Ed. 37 (1–2): 75–78. doi:10.1002/(SICI)1521-3773(19980202)37:1/2<75::AID-ANIE75>3.0.CO;2-R.
- Campbell, Neil A.; Brad Williamson; Robin J. Heyden (2006). Biology: Exploring Life. Boston, Massachusetts: Pearson Prentice Hall. ISBN 0-13-250882-6.
- Wiley, G.R.; Miller, S.I. (1972). "Thermodynamic parameters for hydrogen bonding of chloroform with Lewis bases in cyclohexane. Proton magnetic resonance study". Journal of the American Chemical Society. 94 (10): 3287. doi:10.1021/ja00765a001.
- Kwak, K; Rosenfeld, DE; Chung, JK; Fayer, MD (2008). "Solute-solvent complex switching dynamics of chloroform between acetone and dimethylsulfoxide-two-dimensional IR chemical exchange spectroscopy". The Journal of Physical Chemistry B. 112 (44): 13906–15. PMC . PMID 18855462. doi:10.1021/jp806035w.
- Romańczyk, P. P.; Radoń, M.; Noga, K.; Kurek, S. S. (2013). "Autocatalytic cathodic dehalogenation triggered by dissociative electron transfer through a C-H...O hydrogen bond". Physical Chemistry Chemical Physics. 15 (40): 17522–17536. PMID 24030591. doi:10.1039/C3CP52933A.
- Larson, J. W.; McMahon, T. B. (1984). "Gas-phase bihalide and pseudobihalide ions. An ion cyclotron resonance determination of hydrogen bond energies in XHY- species (X, Y = F, Cl, Br, CN)". Inorganic Chemistry. 23 (14): 2029–2033. doi:10.1021/ic00182a010.
- Emsley, J. (1980). "Very Strong Hydrogen Bonds". Chemical Society Reviews. 9 (1): 91–124. doi:10.1039/cs9800900091.
- Markovitch, Omer; Agmon, Noam (2007). "Structure and energetics of the hydronium hydration shells". J. Phys. Chem. A. 111 (12): 2253–2256. PMID 17388314. doi:10.1021/jp068960g.
- Grunenberg, Jörg (2004). "Direct Assessment of Interresidue Forces in Watson−Crick Base Pairs Using Theoretical Compliance Constants". Journal of the American Chemical Society. 126 (50): 16310–1. PMID 15600318. doi:10.1021/ja046282a.
- Legon, A. C.; Millen, D. J. (1987). "Angular geometries and other properties of hydrogen-bonded dimers: a simple electrostatic interpretation of the success of the electron-pair model". Chemical Society Reviews. 16: 467. doi:10.1039/CS9871600467.
- Pauling, L. (1960). The nature of the chemical bond and the structure of molecules and crystals; an introduction to modern structural chemistry (3rd ed.). Ithaca (NY): Cornell University Press. p. 450. ISBN 0-8014-0333-2.
- Moore, T. S.; Winmill, T. F. (1912). "The state of amines in aqueous solution". J. Chem. Soc. 101: 1635. doi:10.1039/CT9120101635.
- Latimer, Wendell M.; Rodebush, Worth H. (1920). "Polarity and ionization from the standpoint of the Lewis theory of valence.". Journal of the American Chemical Society. 42 (7): 1419–1433. doi:10.1021/ja01452a015.
- Jorgensen, W. L.; Madura, J. D. (1985). "Temperature and size dependence for Monte Carlo simulations of TIP4P water". Mol. Phys. 56 (6): 1381. Bibcode:1985MolPh..56.1381J. doi:10.1080/00268978500103111.
- Zielkiewicz, Jan (2005). "Structural properties of water: Comparison of the SPC, SPCE, TIP4P, and TIP5P models of water". J. Chem. Phys. 123 (10): 104501. Bibcode:2005JChPh.123j4501Z. PMID 16178604. doi:10.1063/1.2018637.
- Jencks, William; Jencks, William P. (1986). "Hydrogen Bonding between Solutes in Aqueous Solution". J. Amer. Chem. Soc. 108 (14): 4196. doi:10.1021/ja00274a058.
- Dillon, P. F. (2012). Biophysics. Cambridge University Press. p. 37. ISBN 978-1-139-50462-1.
- Baron, Michel; Giorgi-Renault, Sylviane; Renault, Jean; Mailliet, Patrick; Carré, Daniel; Etienne, Jean (1984). "Hétérocycles à fonction quinone. V. Réaction anormale de la butanedione avec la diamino-1,2 anthraquinone; structure cristalline de la naphto \2,3-f] quinoxalinedione-7,12 obtenue". Can. J. Chem. 62 (3): 526–530. doi:10.1139/v84-087.
- Laage, Damien; Hynes, James T. (2006). "A Molecular Jump Mechanism for Water Reorientation". Science. 311 (5762): 832–5. Bibcode:2006Sci...311..832L. PMID 16439623. doi:10.1126/science.1122154.
- Markovitch, Omer; Agmon, Noam (2008). "The Distribution of Acceptor and Donor Hydrogen-Bonds in Bulk Liquid Water". Molecular Physics. 106 (2): 485. Bibcode:2008MolPh.106..485M. doi:10.1080/00268970701877921.
- Politi, Regina; Harries, Daniel (2010). "Enthalpically driven peptide stabilization by protective osmolytes". ChemComm. 46 (35): 6449–6451. doi:10.1039/C0CC01763A.
- Gilman-Politi, Regina; Harries, Daniel (2011). "Unraveling the Molecular Mechanism of Enthalpy Driven Peptide Folding by Polyol Osmolytes". Journal of Chemical Theory and Computation. 7 (11): 3816–3828. PMID 26598272. doi:10.1021/ct200455n.
- Hellgren, M; Kaiser, C; de Haij, S; Norberg, A; Höög, JO (December 2007). "A hydrogen-bonding network in mammalian sorbitol dehydrogenase stabilizes the tetrameric state and is essential for the catalytic power.". Cellular and molecular life sciences : CMLS. 64 (23): 3129–38. PMID 17952367. doi:10.1007/s00018-007-7318-1.
- Fernández, A; Rogale K; Scott Ridgway; Scheraga H.A. (June 2004). "Inhibitor design by wrapping packing defects in HIV-1 proteins.". Proceedings of the National Academy of Sciences. 101 (32): 11640–5. PMC . PMID 15289598. doi:10.1073/pnas.0404641101.
- Crabtree, Robert H.; Siegbahn, Per E. M.; Eisenstein, Odile; Rheingold, Arnold L.; Koetzle, Thomas F. (1996). "A New Intermolecular Interaction: Unconventional Hydrogen Bonds with Element-Hydride Bonds as Proton Acceptor". Acc. Chem. Res. 29 (7): 348–354. PMID 19904922. doi:10.1021/ar950150s.
- Isaacs, E.D.; et al. (1999). "Covalency of the Hydrogen Bond in Ice: A Direct X-Ray Measurement". Physical Review Letters. 82 (3): 600–603. Bibcode:1999PhRvL..82..600I. doi:10.1103/PhysRevLett.82.600.
- Ghanty, Tapan K.; Staroverov, Viktor N.; Koren, Patrick R.; Davidson, Ernest R. (2000-02-01). "Is the Hydrogen Bond in Water Dimer and Ice Covalent?". Journal of the American Chemical Society. 122 (6): 1210–1214. ISSN 0002-7863. doi:10.1021/ja9937019.
- Cordier, F; Rogowski, M; Grzesiek, S; Bax, A (1999). "Observation of through-hydrogen-bond (2h)J(HC') in a perdeuterated protein". J Magn Reson. 140 (2): 510–2. Bibcode:1999JMagR.140..510C. PMID 10497060. doi:10.1006/jmre.1999.1899.
- Cowan ML; Bruner BD; Huse N; et al. (2005). "Ultrafast memory loss and energy redistribution in the hydrogen bond network of liquid H2O". Nature. 434 (7030): 199–202. Bibcode:2005Natur.434..199C. PMID 15758995. doi:10.1038/nature03383.
- Luo, Jiangshui; Jensen, Annemette H.; Brooks, Neil R.; Sniekers, Jeroen; Knipper, Martin; Aili, David; Li, Qingfeng; Vanroy, Bram; Wübbenhorst, Michael; Yan, Feng; Van Meervelt, Luc; Shao, Zhigang; Fang, Jianhua; Luo, Zheng-Hong; De Vos, Dirk E.; Binnemans, Koen; Fransaer, Jan (2015). "1,2,4-Triazolium perfluorobutanesulfonate as an archetypal pure protic organic ionic plastic crystal electrolyte for all-solid-state fuel cells". Energy & Environmental Science. 8 (4): 1276. doi:10.1039/C4EE02280G.
- Löwdin, P. O. (1963). "Proton Tunneling in DNA and its Biological Implications". Rev. Mod. Phys. 35 (3): 724. Bibcode:1963RvMP...35..724L. doi:10.1103/RevModPhys.35.724.
- Law-breaking liquid defies the rules. Physicsworld.com (September 24, 2004).
The circumference of a circle with radius r is given by 2*pi*r. The area of a circle with radius r is pi*r^2.
Here, circumference is 24*pi = 2*pi*r
=> r = 24*pi / 2*pi
=> r = 12
The area of the circle is pi*r^2 = pi*12^2 = pi*144
The required area is pi*144
Given the circumference of a circle is 24 pi.
We will use the circumference formula to find the radius.
We know that C = 2*pi* r = 24*pi
==> r = 24pi/2pi = 12
Then the radius is 12.
Now we will calculate the area of the circle.
==> A = pi * r^2 = pi * 12^2 = 144pi ≈ 452.39
Then the area of the circle is A = 144pi ≈ 452.39 square units.
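A quick numeric check of the worked answer above (a sketch, not part of the original solution):

```python
import math

circumference = 24 * math.pi
r = circumference / (2 * math.pi)   # r = 12
area = math.pi * r ** 2             # pi * 144

print(r)                                   # 12.0
print(area)                                # 452.389...
print(math.isclose(area, 144 * math.pi))   # True: area = 144*pi
```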
Once known as the college entrance exam of choice for strong math students, the ACT has always demanded both broad and deep mastery of math concepts learned from grade school to high school. The SAT may currently hold the crown for the test best suited for math whizzes, but ACT Math is tough and getting tougher. Understanding the new ACT Mathematics Reporting Category provides useful insights into the test maker’s assessment goals for this part of the test.
Preparing for Higher Math
Of the 60 questions on the ACT Math test, roughly 36 (57–60%) evaluate what is considered high school math, spanning from the point where students learn to use algebra as a general way of expressing and solving equations to advanced topics in Algebra 2 and Trigonometry:
Number & Quantity (7–10%)
Students must demonstrate knowledge of real and complex number systems, integer and rational exponents, and vectors and matrices.
Algebra (12–15%)
Students must be able to solve, graph, and model multiple types of expressions, using different kinds of equations, including but not limited to linear, polynomial, radical, and exponential relationships.
Functions (12–15%)
Students must demonstrate knowledge of function definition, notation, graphing, representation, and application, across linear, radical, piecewise, polynomial, and logarithmic functions.
Geometry (12–15%)
Students must demonstrate knowledge of shapes and solids, such as congruence and similarity relationships or surface area and volume measurements, and solve for missing values in triangles, circles, and other figures, including using trigonometric ratios and equations of conic sections.
Statistics & Probability (8–12%)
Students must be able to describe center and spread of distributions, apply and analyze data collection methods, understand and model relationships in bivariate data, and calculate probabilities, including the related sample spaces.
Integrating Essential Skills
The other 24 or so (40–43%) ACT Math questions address concepts typically learned before 8th grade, such as rates and percentages; proportional relationships; area, surface area, and volume; average and median; and expressing numbers in different ways. This portion of the math test offers hope for every student, even those who struggle with math in school. Essential Skills concepts are relatively simple and familiar, even if the word problems testing them can be wordy and complex.
Modeling
This additional category does not introduce additional questions; instead, it quantifies achievement on those Higher Math and Essential Skills questions that involve producing, interpreting, understanding, evaluating, and improving models. Modeling skill is tested across mathematical topics through word problems, though not every ACT Math word problem counts as a modeling problem.
**Also learn about ACT Reporting Categories in English and Reading & Science**
- Descriptive statistics
- Hypothesis testing
- Bayesian methods
- Experimental design
- Time series and forecasting
- Nonparametric methods
- Statistical quality control
- Sample survey methods
- Decision analysis
A variety of numerical measures are used to summarize data. The proportion, or percentage, of data values in each category is the primary numerical measure for qualitative data. The mean, median, mode, percentiles, range, variance, and standard deviation are the most commonly used numerical measures for quantitative data. The mean, often called the average, is computed by adding all the data values for a variable and dividing the sum by the number of data values. The mean is a measure of the central location for the data. The median is another measure of central location that, unlike the mean, is not affected by extremely large or extremely small data values. When determining the median, the data values are first ranked in order from the smallest value to the largest value. If there is an odd number of data values, the median is the middle value; if there is an even number of data values, the median is the average of the two middle values. The third measure of central tendency is the mode, the data value that occurs with greatest frequency.
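As a minimal illustration (using Python's standard library and an invented data set), the three measures of central tendency described above can be computed as follows:

```python
import statistics

data = [4, 8, 6, 5, 3, 8, 9, 5, 8]      # hypothetical sample of quantitative data

print(statistics.mean(data))            # sum of values / number of values = 6.22...
print(statistics.median(data))          # middle value of the sorted data = 6
print(statistics.mode(data))            # most frequently occurring value = 8
```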
Percentiles provide an indication of how the data values are spread over the interval from the smallest value to the largest value. Approximately p percent of the data values fall below the pth percentile, and roughly 100 − p percent of the data values are above the pth percentile. Percentiles are reported, for example, on most standardized tests. Quartiles divide the data values into four parts; the first quartile is the 25th percentile, the second quartile is the 50th percentile (also the median), and the third quartile is the 75th percentile.
The range, the difference between the largest value and the smallest value, is the simplest measure of variability in the data. The range is determined by only the two extreme data values. The variance (s²) and the standard deviation (s), on the other hand, are measures of variability that are based on all the data and are more commonly used. For a sample of n items, the variance is computed as follows: the deviation (difference) of each data value from the sample mean is computed and squared, and the squared deviations are then summed and divided by n − 1 to provide the sample variance.
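In symbols, the sample variance just described is

$$ s^{2} = \frac{\sum_{i=1}^{n}\left(x_{i} - \bar{x}\right)^{2}}{n - 1}, $$

where x̄ is the sample mean and n is the number of data values; the standard deviation s is the square root of this quantity.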
The standard deviation is the square root of the variance. Because the unit of measure for the standard deviation is the same as the unit of measure for the data, many individuals prefer to use the standard deviation as the descriptive measure of variability.
Sometimes data for a variable will include one or more values that appear unusually large or small and out of place when compared with the other data values. These values are known as outliers and often have been erroneously included in the data set. Experienced statisticians take steps to identify outliers and then review each one carefully for accuracy and the appropriateness of its inclusion in the data set. If an error has been made, corrective action, such as rejecting the data value in question, can be taken. The mean and standard deviation are used to identify outliers. A z-score can be computed for each data value. With x representing the data value, x̄ the sample mean, and s the sample standard deviation, the z-score is given by z = (x − x̄)/s. The z-score represents the relative position of the data value by indicating the number of standard deviations it is from the mean. A rule of thumb is that any value with a z-score less than −3 or greater than +3 should be considered an outlier.
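A short Python sketch of the z-score rule of thumb just described (the data values here are invented for illustration):

```python
import statistics

def find_outliers(values, threshold=3.0):
    """Return the values whose z-score lies below -threshold or above +threshold."""
    mean = statistics.mean(values)
    s = statistics.stdev(values)        # sample standard deviation (n - 1 denominator)
    return [x for x in values if abs((x - mean) / s) > threshold]

data = [48, 50, 52, 49, 51, 50, 47, 53, 50, 49, 51, 50, 52, 48, 120]
print(find_outliers(data))              # [120] -- its z-score exceeds +3
```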
Exploratory data analysis provides a variety of tools for quickly summarizing and gaining insight about a set of data. Two such methods are the five-number summary and the box plot. A five-number summary simply consists of the smallest data value, the first quartile, the median, the third quartile, and the largest data value. A box plot is a graphical device based on a five-number summary. A rectangle (i.e., the box) is drawn with the ends of the rectangle located at the first and third quartiles. The rectangle represents the middle 50 percent of the data. A vertical line is drawn in the rectangle to locate the median. Finally lines, called whiskers, extend from one end of the rectangle to the smallest data value and from the other end of the rectangle to the largest data value. If outliers are present, the whiskers generally extend only to the smallest and largest data values that are not outliers. Dots, or asterisks, are then placed outside the whiskers to denote the presence of outliers.
Probability is a subject that deals with uncertainty. In everyday terminology, probability can be thought of as a numerical measure of the likelihood that a particular event will occur. Probability values are assigned on a scale from 0 to 1, with values near 0 indicating that an event is unlikely to occur and those near 1 indicating that an event is likely to take place. A probability of 0.50 means that an event is equally likely to occur as not to occur.
Oftentimes probabilities need to be computed for related events. For instance, advertisements are developed for the purpose of increasing sales of a product. If seeing the advertisement increases the probability of a person buying the product, the events “seeing the advertisement” and “buying the product” are said to be dependent. If two events are independent, the occurrence of one event does not affect the probability of the other event taking place. When two or more events are independent, the probability of their joint occurrence is the product of their individual probabilities. Two events are said to be mutually exclusive if the occurrence of one event means that the other event cannot occur; in this case, when one event takes place, the probability of the other event occurring is zero.
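In symbols, the two rules just described are

$$ P(A \cap B) = P(A)\,P(B) \quad \text{(independent events)}, \qquad P(A \cap B) = 0 \quad \text{(mutually exclusive events)}. $$

For example, with hypothetical probabilities of 0.3 for seeing an advertisement and 0.1 for buying the product, if the two events were independent the probability of both occurring would be 0.3 × 0.1 = 0.03.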
Systems of Linear Equations
In this systems of equations worksheet, students complete 14 multiple choice questions involving systems of equations and determining how many solutions a given system has.
3 Views 0 Downloads
Introduction to Systems of Linear Equations
Here is an instructional activity that really delivers! Middle schoolers collaborate to consider pizza prices from four different pizza parlors. Using systems of simultaneous equations, they graph each scenario to determine the best...
7th - 9th Math CCSS: Designed
Using Linear Equations to Define Geometric Solids
Making the transition from two-dimensional shapes to three-dimensional solids can be difficult for many geometry students. This comprehensive Common Core lesson plan starts with writing and graphing linear equations to define a bounded...
9th - 11th Math CCSS: Designed
Solving General Systems of Linear Equations
Examine the usefulness of matrices when solving linear systems of higher dimensions. The instructional activity asks learners to write and solve systems of linear equations in four and five variables. Using matrices, pupils solve the...
11th - 12th Math CCSS: Designed
Simultaneous Linear Equations
Solve simultaneous linear equations, otherwise known as systems of linear equations. Pupils practice solving systems of linear equations by graphing, substitution, and elimination. The workbook provides a class activity and homework for...
7th - 10th Math CCSS: Designed
Model a Real-Life Situation Using a System of Linear Equations
Ah, the dreaded systems of equations word problem. Probably one of the best ways to connect algebra to the real world, here is a video that goes through the steps of detecting key words and creating two equations that model the scenario; a small worked example follows this listing.
7 mins 8th - 10th Math CCSS: Designed
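As a hypothetical illustration of the kind of modeling the video above describes (the prices are invented, not taken from any of these lessons): suppose one pizzeria charges $10 plus $1.50 per topping and another charges $8 plus $2.00 per topping. Writing each price as an equation in the number of toppings t and the total cost y, then solving the pair, shows where the two costs are equal.

```python
import numpy as np

# Shop A: y = 10 + 1.5*t   ->  -1.5*t + y = 10
# Shop B: y =  8 + 2.0*t   ->  -2.0*t + y =  8
A = np.array([[-1.5, 1.0],
              [-2.0, 1.0]])
b = np.array([10.0, 8.0])

t, y = np.linalg.solve(A, b)   # intersection of the two pricing lines
print(t, y)                    # 4.0 16.0 -> equal cost at 4 toppings, $16
```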
Human height or stature is the distance from the bottom of the feet to the top of the head in a human body, standing erect. It is measured using a stadiometer, usually in centimetres when using the metric system, or feet and inches when using the imperial system.
In the early phase of anthropometric research, questions about using height as a technique for measuring nutritional status often concerned genetic differences. A particular genetic profile in men called Y haplotype I-M170 is correlated with height. Ecological data show that as the frequency of this genetic profile increases in the population, the average male height in a country also increases.
Height is also important, because it is closely correlated with other health components, such as life expectancy. Studies show that there is a correlation between small stature and a longer life expectancy. Individuals of small stature are also more likely to have lower blood pressure and are less likely to acquire cancer. The University of Hawaii has found that the “longevity gene” FOXO3 that reduces the effects of aging is more commonly found in individuals of a small body size. Short stature decreases the risk of venous insufficiency.
When populations share genetic background and environmental factors, average height is frequently characteristic within the group. Exceptional height variation (around 20% deviation from average) within such a population is sometimes due to gigantism or dwarfism, which are medical conditions caused by specific genes or endocrine abnormalities.
The development of human height can serve as an indicator of two key welfare components, namely nutritional quality and health. In regions of poverty or warfare, environmental factors like chronic malnutrition during childhood or adolescence may result in delayed growth and/or marked reductions in adult stature even without the presence of any of these medical conditions. Some research indicates that a greater height correlates with greater success in dating and earning in men, although other research indicates that this does not apply to non-white men.
Height is a sexually dimorphic trait in humans. A study of 20th century British natality trends indicated that while tall men tended to reproduce more than short men, women of below average height had more children than taller women.
- 1 Determinants of growth and height
- 2 Process of growth
- 3 Height abnormalities
- 4 Role of an individual's height
- 5 History of human height
- 6 Average height around the world
- 7 See also
- 8 References
- 9 Bibliography
- 10 Further reading
- 11 External links
Determinants of growth and height
The study of height is known as auxology. Growth has long been recognized as a measure of the health of individuals, hence part of the reasoning for the use of growth charts. For individuals, as indicators of health problems, growth trends are tracked for significant deviations and growth is also monitored for significant deficiency from genetic expectations. Genetics is a major factor in determining the height of individuals, though it is far less influential in regard to differences among populations. Average height is relevant to the measurement of the health and wellness (standard of living and quality of life) of populations.
A significant reason attributed to the trend of increasing height in parts of Europe is the presence of egalitarian populations where proper medical care and adequate nutrition are relatively equally distributed. Average (male) height in a nation is correlated with protein quality. Nations that consume more protein in the form of meat, dairy, eggs, and fish tend to be taller, while those that obtain more of their protein from cereals tend to be shorter. Therefore, populations with high cattle per capita and high consumption of dairy live longer and are taller. Historically, this can be seen in the cases of the United States, Argentina, New Zealand and Australia in the beginning of the 19th century.
Changes in diet (nutrition) and a general rise in the quality of health care and standard of living are the cited factors in the Asian populations. Malnutrition, including chronic undernutrition and acute malnutrition, is known to have caused stunted growth in various populations. This has been seen in North Korea, parts of Africa, certain historical European populations, and elsewhere. Developing countries such as Guatemala have rates of stunting in children under 5 as high as 82.2% in Totonicapán, and 49.8% nationwide.
Height measurements are by nature subject to statistical sampling errors even for a single individual. In a clinical situation, height measurements are seldom taken more often than once per office visit, which may mean that measurements are taken a week to several months apart. Smooth 50th percentile male and female growth curves are aggregate values from thousands of individuals sampled at ages from birth to age 20. In reality, a single individual's growth curve shows large upward and downward spikes, partly due to actual differences in growth velocity, and partly due to small measurement errors.
For example, a typical measurement error of plus or minus 0.5 cm may completely mask 0.5 cm of actual growth, yielding anywhere from an apparent "negative" 0.5 cm of growth (overestimation at the earlier visit combined with underestimation at the later one) to an apparent 1.5 cm of growth (underestimation at the earlier visit and overestimation at the later one) over the same elapsed time between measurements. Note that there is a discontinuity in the growth curves at age 2, which reflects the difference between recumbent length (measured with the child on his or her back), used for infants and toddlers, and standing height, typically measured from age 2 onwards.
Height, like other phenotypic traits, is determined by a combination of genetics and environmental factors. A child's height based on parental heights is subject to regression toward the mean: extremely tall or short parents will likely have correspondingly taller or shorter offspring, but their offspring will also likely be closer to average height than the parents themselves. Genetic potential and a number of hormones, in the absence of illness, are basic determinants of height. Other factors include the genetic response to external factors such as diet, exercise, environment, and life circumstances.
Humans grow fastest (other than in the womb) as infants and toddlers, rapidly declining from a maximum at birth to roughly age 2, tapering to a slowly declining rate, and then during the pubertal growth spurt, a rapid rise to a second maximum (at around 11–12 years for female, and 13–14 years for male), followed by a steady decline to zero. On average, female growth speed trails off to zero at about 15 or 16 years, whereas the male curve continues for approximately 3 more years, going to zero at about 18–19. These are also critical periods where stressors such as malnutrition (or even severe child neglect) have the greatest effect.
Moreover, the health of a mother throughout her life, especially during her critical period and pregnancy, has a role. A healthier child and adult develops a body that is better able to provide optimal prenatal conditions. The pregnant mother's health is important for herself but also for the fetus as gestation is itself a critical period for an embryo/fetus, though some problems affecting height during this period are resolved by catch-up growth assuming childhood conditions are good. Thus, there is a cumulative generation effect such that nutrition and health over generations influences the height of descendants to varying degrees.
The age of the mother also has some influence on her child's height. Studies in modern times have observed a gradual increase in height with maternal age, though these studies suggest that the trend is due to various socio-economic situations that select certain demographics as being more likely to have a first birth early in the mother's life. These same studies show that children born to a young mother are more likely to have below-average educational and behavioural development, again suggesting an ultimate cause of resources and family status rather than a purely biological explanation.
It has been observed that first-born males are shorter than later-born males. However, more recently the reverse observation was made. The study authors suggest that the cause may be socio-economic in nature.
Nature versus nurture
The precise relationship between genetics and environment is complex and uncertain. Differences in human height are 60–80% heritable, according to several twin studies, and height has been considered polygenic since the Mendelian-biometrician debate a hundred years ago. A genome-wide association (GWA) study of more than 180,000 individuals has identified hundreds of genetic variants in at least 180 loci associated with adult human height. The number of individuals has since been expanded to 253,288 individuals and the number of genetic variants identified is 697 in 423 genetic loci. A separate study of body proportion using sitting-height ratio reports that these 697 variants can be partitioned into 3 specific classes: (1) variants that primarily determine leg length, (2) variants that primarily determine spine and head length, and (3) variants that affect overall body size. This gives insights into the biological mechanisms underlying how these 697 genetic variants affect overall height. These loci determine not only height but also other features and characteristics. As an example, 4 of the 7 loci identified for intracranial volume had previously been discovered for human height.
The effect of environment on height is illustrated by studies performed by anthropologist Barry Bogin and coworkers of Guatemala Mayan children living in the United States. In the early 1970s, when Bogin first visited Guatemala, he observed that Mayan Indian men averaged 157.5 centimetres (5 ft 2 in) in height and the women averaged 142.2 centimetres (4 ft 8 in). Bogin took another series of measurements after the Guatemalan Civil War, during which up to a million Guatemalans fled to the United States. He discovered that Maya refugees, who ranged from six to twelve years old, were significantly taller than their Guatemalan counterparts. By 2000, the American Maya were 10.24 cm (4.03 in) taller than the Guatemalan Maya of the same age, largely due to better nutrition and health care. Bogin also noted that American Maya children had relatively longer legs, averaging 7.02 cm (2.76 in) longer than the Guatemalan Maya (a significantly lower sitting height ratio).
The Nilotic peoples of Sudan such as the Shilluk and Dinka have been described as some of the tallest in the world. Dinka Ruweng males investigated by Roberts in 1953–54 were on average 181.3 centimetres (5 ft 11 1⁄2 in) tall, and Shilluk males averaged 182.6 centimetres (6 ft 0 in). The Nilotic people are characterized as having long legs, narrow bodies and short trunks, an adaptation to hot weather. However, male Dinka and Shilluk refugees measured in 1995 in Southwestern Ethiopia were on average only 176.4 cm and 172.6 cm tall, respectively. As the study points out, Nilotic people "may attain greater height if privileged with favourable environmental conditions during early childhood and adolescence, allowing full expression of the genetic material." Before fleeing, these refugees were subject to privation as a consequence of the succession of civil wars in their country from 1955 to the present.
The tallest living married couple are ex-basketball players Yao Ming and Ye Li (both of China) who measure 228.6 cm (7 ft 6 in) and 190.5 cm (6 ft 3 in) respectively, giving a combined height of 419.1 cm (13 ft 9 in). They married in Shanghai, China, on 6 August 2007.
The people of the Dinaric Alps, mainly South Slavs (Montenegro and East Herzegovina), are on record as being the tallest in the world, with a male average height of 185.6 cm (6 ft 1.1 in) and female average height of 170.9 cm (5 ft 7.3 in).
Process of growth
Growth in stature, determined by its various factors, results from the lengthening of bones via cellular divisions chiefly regulated by somatotropin (human growth hormone (hGH)) secreted by the anterior pituitary gland. Somatotropin also stimulates the release of another growth inducing hormone Insulin-like growth factor 1 (IGF-1) mainly by the liver. Both hormones operate on most tissues of the body, have many other functions, and continue to be secreted throughout life; with peak levels coinciding with peak growth velocity, and gradually subsiding with age after adolescence. The bulk of secretion occurs in bursts (especially for adolescents) with the largest during sleep.
The majority of linear growth occurs as growth of cartilage at the epiphysis (ends) of the long bones which gradually ossify to form hard bone. The legs compose approximately half of adult human height, and leg length is a somewhat sexually dimorphic trait, with men having proportionately longer legs. Some of this growth occurs after the growth spurt of the long bones has ceased or slowed. The majority of growth during growth spurts is of the long bones. Additionally, the variation in height between populations and across time is largely due to changes in leg length. The remainder of height consists of the cranium. Height is sexually dimorphic and statistically it is more or less normally distributed, but with heavy tails. It has been shown that a log-normal distribution fits the data equally well, besides guaranteeing a non-negative lower confidence limit, which could otherwise attain a non-physical negative height value for arbitrarily large confidence levels.
Height abnormalities
Most intra-population variance of height is genetic. Short stature and tall stature are usually not a health concern. If the degree of deviation from normal is significant, hereditary short stature is known as familial short stature and tall stature is known as familial tall stature. Confirmation that exceptional height is normal for a respective person can be ascertained from comparing stature of family members and analyzing growth trends for abrupt changes, among others. There are, however, various diseases and disorders that cause growth abnormalities.
Most notably, extreme height may be pathological, such as gigantism resulting from childhood hyperpituitarism, and dwarfism which has various causes. Rarely, no cause can be found for extreme height; very short persons may be termed as having idiopathic short stature. The United States Food and Drug Administration (FDA) in 2003 approved hGH treatment for those 2.25 standard deviations below the population mean (approximately the lowest 1.2% of the population). An even rarer occurrence, or at least less used term and recognized "problem", is idiopathic tall stature.
If not enough growth hormone is produced and/or secreted by the pituitary gland, then a patient with growth hormone deficiency can undergo treatment. This treatment involves the injection of pure growth hormone into thick tissue to promote growth.
Role of an individual's height
Height and health
As noted earlier, studies show a correlation between small stature and longer life expectancy, lower blood pressure, lower cancer risk, a higher frequency of the longevity-associated FOXO3 gene, and a reduced risk of venous insufficiency. Certain studies have shown that height is a factor in overall health, while some suggest tallness is associated with better cardiovascular health and shortness with longevity. Cancer risk has also been found to grow with height.
Nonetheless, modern westernized interpretations of the relationship between height and health fail to account for the observed height variations worldwide. Cavalli-Sforza and Cavalli-Sforza note that variations in height worldwide can be partly attributed to evolutionary pressures resulting from differing environments. These evolutionary pressures result in height related health implications. While tallness is an adaptive benefit in colder climates such as found in Europe, shortness helps dissipate body heat in warmer climatic regions. Consequently, the relationships between health and height cannot be easily generalized since tallness and shortness can both provide health benefits in different environmental settings.
At the extreme end, being excessively tall can cause various medical problems, including cardiovascular problems, because of the increased load on the heart to supply the body with blood, and problems resulting from the increased time it takes the brain to communicate with the extremities. For example, Robert Wadlow, the tallest man known to verifiable history, developed trouble walking as his height increased throughout his life. In many of the pictures of the later portion of his life, Wadlow can be seen gripping something for support. Late in his life, although he died at age 22, he had to wear braces on his legs and walk with a cane; and he died after developing an infection in his legs because he was unable to feel the irritation and cutting caused by his leg braces.
Sources are in disagreement about the overall relationship between height and longevity. Samaras and Elrick, in the Western Journal of Medicine, demonstrate an inverse correlation between height and longevity in several mammals including humans.
A study done in Sweden in 2005 has shown that there is a strong inverse correlation between height and suicide among Swedish men.
A large body of human and animal evidence indicates that shorter, smaller bodies age slower, and have fewer chronic diseases and greater longevity. For example, a study found eight areas of support for the "smaller lives longer" thesis. These areas of evidence include studies involving longevity, life expectancy, centenarians, male vs. female longevity differences, mortality advantages of shorter people, survival findings, smaller body size due to calorie restriction, and within species body size differences. They all support the conclusion that smaller individuals live longer in healthy environments and with good nutrition. However, the difference in longevity is modest. Several human studies have found a loss of 0.5 year/centimetre of increased height (1.2 yr/inch). But these findings do not mean that all tall people die young. Many live to advanced ages and some become centenarians.
Height and occupational success
There is a large body of research in psychology, economics, and human biology that has assessed the relationship between several seemingly innocuous physical features (e.g., body height) and occupational success. The correlation between height and success was explored decades ago. Shorter people are considered to have an advantage in certain sports (e.g., gymnastics, race car driving, etc.), whereas in many other sports taller people have a major advantage. In most occupational fields, body height is not relevant to how well people are able to perform; nonetheless several studies found that success was positively correlated with body height, although there may be other factors such as gender or socioeconomic status that are correlated with height which may account for the difference in success.
A demonstration of the height-success association can be found in the realm of politics. In the United States presidential elections, the taller candidate won 22 out of 25 times in the 20th century. Nevertheless, Ignatius Loyola, founder of the Jesuits, was 150 cm (4 ft 11 in) and several prominent world leaders of the 20th century, such as Vladimir Lenin, Benito Mussolini, Nicolae Ceaușescu and Joseph Stalin were of below average height. These examples, however, were all before modern forms of multi-media, i.e., television, which may further height discrimination in modern society. Further, growing evidence suggests that height may be a proxy for confidence, which is likewise strongly correlated with occupational success.
History of human height
In the 150 years since the mid-nineteenth century, the average human height in industrialised countries has increased by up to 10 centimetres (3.9 in). However, these increases appear to have largely levelled off. Before the mid-nineteenth century, there were cycles in height, with periods of increase and decrease; however, examinations of skeletons show no significant differences in height from the Stone Age through the early-1800s.
In general, there were no big differences in regional height levels throughout the nineteenth century. The only exceptions to this rather uniform height distribution were people in the Anglo-Saxon settlement regions, who were taller than the average, and people from Southeast Asia, with below-average heights. However, at the end of the nineteenth century and in the middle of the first globalisation period, heights between rich and poor countries began to diverge. These differences did not disappear in the deglobalisation period of the two World Wars. Baten and Blum (2014) find that in the nineteenth century, important determinants of height were the local availability of cattle, meat and milk as well as the local disease environment. In the late-twentieth century, however, technologies and trade became more important, decreasing the impact of local availability of agricultural products.
In the eighteenth and nineteenth centuries, people of European descent in North America were far taller than those in Europe and were the tallest in the world. The original indigenous population of Plains Native Americans was also among the tallest populations of the world at the time.
Some studies also suggest that there was a correlation between height and real wages; moreover, the correlation was stronger in less developed countries. Notably, differences in height between children from different social classes could already be observed by the time a child was around two years old.
In the late-nineteenth century, the Netherlands was a land renowned for its short population, but today Dutch people are among the world's tallest with young men averaging 183.8 cm (6 ft 0.4 in) tall.
According to a study by economist John Komlos and Francesco Cinnirella, in the first half of the eighteenth century, the average height of an English male was 165 cm (5 ft 5 in), and the average height of an Irish male was 168 cm (5 ft 6 in). The estimated mean height of English, German, and Scottish soldiers was 163.6 cm – 165.9 cm (5 ft 4.4 in – 5 ft 5.3 in) for the period as a whole, while that of Irish was 167.9 cm (5 ft 6.1 in). The average height of male slaves and convicts in North America was 171 cm (5 ft 7 in).
American-born colonial soldiers of the late-1770s were on average more than 7.6 cm (3 inches) taller than their English counterparts who served in the Royal Marines at the same time.
The average height of Americans and Europeans decreased during periods of rapid industrialisation, possibly due to rapid population growth and broad decreases in economic status. This has become known as the early-industrial growth puzzle or in the U.S. context the Antebellum Puzzle. In England during the early-nineteenth century, the difference between average height of English upper-class youth (students of Sandhurst Military Academy) and English working-class youth (Marine Society boys) reached 22 cm (8.7 in), the highest that has been observed.
Data derived from burials show that before 1850, the mean stature of males and females in Leiden, The Netherlands was respectively 166.7 cm (5 ft 5.6 in) and 156.7 cm (5 ft 1.7 in). The average height of 19-year-old Dutch orphans in 1865 was 160 cm (5 ft 3 in).
According to a study by J.W. Drukker and Vincent Tassenaar, the average height of a Dutch person decreased from 1830 to 1857, even while Dutch real GNP per capita was growing at an average rate of more than 0.5% per year. The worst declines were in urban areas; in 1847, the urban height penalty was 2.5 cm (1 in). Urban mortality was also much higher than in rural regions. In 1829, the average urban and rural Dutchman was 164 cm (5 ft 4.6 in). By 1856, the average rural Dutchman was 162 cm (5 ft 3.8 in) and the average urban Dutchman was 158.5 cm (5 ft 2.4 in).
A 2004 report citing a 2003 UNICEF study on the effects of malnutrition in North Korea, due to "successive famines," found young adult males to be significantly shorter. In contrast, South Koreans, "feasting on an increasingly Western-influenced diet," without famine, were growing taller. The height difference is minimal for Koreans over forty years old, who grew up at a time when economic conditions in the North were roughly comparable to those in the South, while height disparities are most acute for Koreans who grew up in the mid-1990s – a demographic in which South Koreans are about 12 cm (4.7 in) taller than their North Korean counterparts – as this was a period during which the North was affected by a harsh famine where hundreds of thousands, if not millions, died of hunger. A study by South Korean anthropologists of North Korean children who had defected to China found that eighteen-year-old males were 5 inches (13 cm) shorter than South Koreans their age due to malnutrition.
The tallest living man is Sultan Kösen of Turkey, at 251 cm (8 ft 3 in). The tallest man in modern history was Robert Pershing Wadlow (1918–1940), from Illinois, United States, who was 272 cm (8 ft 11 in) at the time of his death. The tallest woman in medical history was Zeng Jinlian of Hunan, China, who stood 248 cm (8 ft 1 1⁄2 in) when she died at the age of seventeen. The shortest adult human on record was Chandra Bahadur Dangi of Nepal at 54.6 cm (1 ft 9 1⁄2 in).
Adult height between populations often differs significantly. For example, the average height of women from the Czech Republic is greater than that of men from Malawi. This may be caused by genetic differences, childhood lifestyle differences (nutrition, sleep patterns, physical labor), or both.
Depending on sex, genetic and environmental factors, shrinkage of stature may begin in middle age in some individuals but tends to be universal in the extremely aged. This decrease in height is due to such factors as decreased height of inter-vertebral discs because of desiccation, atrophy of soft tissues and postural changes secondary to degenerative disease.
Working with data from Indonesia, the study by Baten, Stegl and van der Eng suggests a positive relationship between economic development and average height. In Indonesia, human height has decreased in coincidence with natural or political shocks.
Average height around the world
As with any statistical data, the accuracy of such data may be questionable for various reasons:
- Some studies may allow subjects to self-report values. Generally speaking, self-reported height tends to be taller than measured height, although the overestimation depends on the reporting subject's height, age, gender and region.
- Test subjects may have been invited instead of chosen at random, resulting in sampling bias.
- Some countries may have significant height gaps between different regions. For instance, one survey shows there is a 10.8 cm (4 1⁄2 in) gap between the tallest state and the shortest state in Germany. Under such circumstances, the mean height may not represent the total population unless sample subjects are appropriately drawn from all regions, using a weighted average across the different regional groups.
- Different social groups can show different mean height. According to a study in France, executives and professionals are 2.6 cm (1 in) taller, and university students are 2.55 cm (1 in) taller than the national average. As this case shows, data taken from a particular social group may not represent a total population in some countries.
- A relatively small sample of the population may have been measured, which makes it uncertain whether this sample accurately represents the entire population.
- The height of persons can vary over the course of a day: activity and time spent upright compress the spine and reduce measured height, while lying down for a significant period of time allows it to recover. For example, one study revealed a mean decrease of 1.54 centimetres (0.61 in) in the heights of 100 children from getting out of bed in the morning to between 4 and 5 p.m. that same day. Such factors may not have been controlled in some of the studies.
- Men from Bosnia and Herzegovina, the Netherlands, Croatia, Serbia, and Montenegro have the tallest average height. Data suggests that Herzegovinians have the genetic potential to be more than two inches taller than the Dutch. In the Netherlands, about 35% of men have the genetic profile Y haplogroup I-M170, but in Herzegovina, the frequency is over 70%. Extrapolating the genetic trend line suggests that the average Herzegovinian man could possibly be as tall as 190 cm (nearly 6′ 3″). Many Herzegovinians do not achieve this potential due to poverty (citizens of Bosnia and Herzegovina were 1.9 cm taller if both of their parents went to university, which is considered as a wealth indicator) and to nutritional choices: religious prohibition on pork may be largely to blame for the shorter average stature of Muslim Herzegovinians.
- Anthropometry, the measurement of the human individual
- Body weight
- History of anthropometry
- Human physical appearance
- Human variability
- Pygmy peoples
- Economics and Human Biology
- "Stadiometers and Height Measurement Devices". stadiometer.com. stadiometer.com.
- "Using the BMI-for-Age Growth Charts". cdc.gov. Center for Disease Control. Archived from the original on 30 January 2014. Retrieved 5 July 2014.
- Price, Beth; et al. (2009). MathsWorld Year 8 VELS Edition. Australia: MacMillan. p. 626. ISBN 9780732992514.
- Lapham, Robert; Agar, Heather (2009). Drug Calculations for Nurses. USA: Taylor & Francis. p. 223. ISBN 9780340987339.
- Carter, Pamela J. (2008). Lippincott's Textbook for Nursing Assistants: A Humanistic Approach to Caregiving. USA: Lippincott, Williams & Wilkins. p. 306. ISBN 9780781766852.
- Baten, Joerg; Matthias, Blum (2012). "Growing Tall: Anthropometric Welfare of World Regions and its Determinants, 1810-1989". Economic History of Developing Regions. 27. doi:10.1080/20780389.2012.657489 – via Researchgate.
- "'Shorter men live longer, study shows'".
- "Tall height".
- Ganong, William F. (2001) Review of Medical Physiology, Lange Medical, pp. 392–397, ISBN 0071605673.
- Baten, Jörg (2016). A History of the Global Economy. From 1500 to the Present. Cambridge University Press. ISBN 9781107507180.
- "The Problem with Being Tall, Male, and Black".
- Hermanussen, Michael (ed) (2013) Auxology – Studying Human Growth and Development, Schweizerbart, ISBN 9783510652785.
- Bolton-Smith, C. (2000). "Accuracy of the estimated prevalence of obesity from self reported height and weight in an adult Scottish population". Journal of Epidemiology & Community Health. 54 (2): 143–148. doi:10.1136/jech.54.2.143. PMC 1731630. PMID 10715748.
- Komlos, J.; Baur, M. (2004). "From the tallest to (one of) the fattest: The enigmatic fate of the American population in the 20th century". Economics & Human Biology. 2 (1): 57–74. CiteSeerX 10.1.1.651.9270. doi:10.1016/j.ehb.2003.12.006. PMID 15463993.
- Baten, Jörg; Blum, Matthias (2012). "An Anthropometric History of the World, 1810-1980: Did Migration and Globalization Influence Country Trends?". Journal of Anthropological Sciences. doi:10.4436/jass.90011.
- De Onis, M.; Blössner, M.; Borghi, E. (2011). "Prevalence and trends of stunting among pre-school children, 1990–2020". Public Health Nutrition. 15 (1): 142–148. doi:10.1017/S1368980011001315. PMID 21752311.
- Grantham-Mcgregor, S.; Cheung, Y. B.; Cueto, S.; Glewwe, P.; Richter, L.; Strupp, B. (2007). "Developmental potential in the first 5 years for children in developing countries". The Lancet. 369 (9555): 60–70. doi:10.1016/S0140-6736(07)60032-4. PMC 2270351. PMID 17208643.
- Encuesta Nacional de Salud Materno Infantil, 2008‒2009 (English: Guatemala Reproductive Health Survey 2008‒2009) (PDF). Guatemala City, Guatemala: Ministerio de Salud Pública y Asistencia Social. December 2010. p. 670. Archived from the original (PDF) on 13 November 2011. Retrieved 26 April 2013.
- Table 1. Association of 'biological' and demographic variables and height. Figures are coefficients (95% confidence intervals) adjusted for each of the variables shown in Rona RJ, Mahabir D, Rocke B, Chinn S, Gulliford MC (2003). "Social inequalities and children's height in Trinidad and Tobago". European Journal of Clinical Nutrition. 57 (1): 143–50. doi:10.1038/sj.ejcn.1601508. PMID 12548309.
- Miller, Jane E. (1993). "Birth Outcomes by Mother's Age At First Birth in the Philippines". International Family Planning Perspectives. 19 (3): 98–102. doi:10.2307/2133243. JSTOR 2133243.
- Pevalin, David J. (2003). "Outcomes in Childhood and Adulthood by Mother's Age at Birth: evidence from the 1970 British Cohort Study". ISER Working Papers.
- Hermanussen, M.; Hermanussen, B.; Burmeister, J. (1988). "The association between birth order and adult stature". Annals of Human Biology. 15 (2): 161–165. doi:10.1080/03014468800009581. PMID 3355105.
- Myrskyla, M (July 2013). "The association between height and birth order: evidence from 652,518 Swedish men". Journal of Epidemiology and Community Health. 67 (7): 571–7. doi:10.1136/jech-2012-202296. PMID 23645856.
- Lai, Chao-Qiang (11 December 2006). "How much of human height is genetic and how much is due to nutrition?". Scientific American.
- Lango Allen H, et al. (2010). "Hundreds of variants clustered in genomic loci and biological pathways affect human height". Nature. 467 (7317): 832–838. doi:10.1038/nature09410. PMC 2955183. PMID 20881960.
- Wood AR, et al. (2014). "Defining the role of common variation in the genomic and biological architecture of adult human height". Nature Genetics. 46 (11): 1173–1186. doi:10.1038/ng.3097. PMC 4250049. PMID 25282103.
- Chan Y, et al. (2015). "Genome-wide Analysis of Body Proportion Classifies Height-Associated Variants by Mechanism of Action and Implicates Genes Important for Skeletal Development". American Journal of Human Genetics. 96 (5): 695–708. doi:10.1016/j.ajhg.2015.02.018. PMC 4570286. PMID 25865494.
- Adams, Hieab H H; Hibar, Derrek P; Chouraki, Vincent; Stein, Jason L; Nyquist, Paul A; Rentería, Miguel E; Trompet, Stella; Arias-Vasquez, Alejandro; Seshadri, Sudha (2016). "Novel genetic loci underlying human intracranial volume identified through genome-wide association". Nature Neuroscience. 19 (12): 1569–1582. doi:10.1038/nn.4398. PMC 5227112. PMID 27694991.
- Bogin, Barry (1998). "The tall and the short of it" (PDF). Discover. 19 (2): 40–44. Retrieved 26 April 2013.
- Bogin, B.; Rios, L. (2003). "Rapid morphological change in living humans: Implications for modern human origins". Comparative Biochemistry and Physiology A. 136 (1): 71–84. doi:10.1016/S1095-6433(02)00294-5. PMID 14527631.
- Krawitz, Jan (28 June 2006). "P.O.V. – Big Enough". PBS. Retrieved 22 January 2011.
- Roberts, D. F.; Bainbridge, D. R. (1963). "Nilotic physique". American Journal of Physical Anthropology. 21 (3): 341–370. doi:10.1002/ajpa.1330210309.
- Stock, Jay (Summer 2006). "Skeleton key" (PDF). Planet Earth: 26. Archived from the original (PDF) on 10 August 2007.
- Chali D (1995). "Anthropometric measurements of the Nilotic tribes in a refugee camp". Ethiopian Medical Journal. 33 (4): 211–7. PMID 8674486.
- Guinness World Records 2014. The Jim Pattison Group. 2013. p. 49.
- Subba, Tanka Bahadur (1999). Politics of Culture: A Study of Three Kirata Communities in the Eastern Himalayas. Orient Blackswan. ISBN 978-81-250-1693-9.
- Peissel, Michel (1967). Mustang: A Lost Tibetan Kingdom. Book Faith India. ISBN 978-81-7303-002-4.
- Limpert, E; Stahel, W; Abbt, M (2001). "Lognormal distributions across the sciences: keys and clues". BioScience. 51 (5): 341–352. doi:10.1641/0006-3568(2001)051[0341:LNDATS]2.0.CO;2.
- Samaras TT, Elrick H (2002). "Height, body size, and longevity: is smaller better for the humanbody?". The Western Journal of Medicine. 176 (3): 206–8. doi:10.1136/ewjm.176.3.206. PMC 1071721. PMID 12016250.
- "Cancer risk may grow with height". CBC News. 21 July 2011.
- Cavalli-Sforza, L.L., & Cavalli-Sforza, F., 1995, The Great Human Diasporas,
- Merck. "Risk factors present before pregnancy". Merck Manual Home Edition. Merck Sharp & Dohme.
- Magnusson PK, Gunnell D, Tynelius P, Davey Smith G, Rasmussen F (2005). "Strong inverse association between height and suicide in a large cohort of Swedish men: evidence of early life origins of suicidal behavior?". The American Journal of Psychiatry. 162 (7): 1373–5. doi:10.1176/appi.ajp.162.7.1373. PMID 15994722.
- Samaras TT 2014, Evidence from eight different types of studies showing smaller body size is related to greater longevity JSRR 3(16):2150-2160. 2014: article no. JSRR.2014.16.003
- Stefan, Stieger; Christoph, Burger (2010). "Body height and occupational success for actors and actresses". Psychological Reports. 107 (1): 25–38. doi:10.2466/pr0.107.1.25-38. PMID 20923046.
- W. E., Hensley; R., Cooper (1987). "Height and occupational success: a review and critique". Psychological Reports. 60 (3 Pt 1): 843–849. PMID 3303094.
- Judge, T. A.; Cable, D. M. (2004). "The Effect of Physical Height on Workplace Success and Income: Preliminary Test of a Theoretical Model" (PDF). Journal of Applied Psychology. 89 (3): 428–441. doi:10.1037/0021-9010.89.3.428. PMID 15161403. Archived from the original (PDF) on 14 September 2012.
- Nicola, Persico; Andrew, Postlewaite; Silverman, Dan (2004). "The Effect of Adolescent Experience on Labor Market Outcomes: The Case of Height" (PDF). Journal of Political Economy. 112 (5): 1019–1053. doi:10.1086/422566.
- Heineck G. (2005). "Up in the skies? The relationship between body height and earnings in Germany" (PDF). Labour. 19 (3): 469–489. doi:10.1111/j.1467-9914.2005.00302.x.
- Piotr, Sorokowski (2010). "Politicians' estimated height as an indicator of their popularity". European Journal of Social Psychology. 40 (7): 1302–1309. doi:10.1002/ejsp.710.
- Nickless, Rachel (28 November 2012) Lifelong confidence rewarded in bigger pay packets. Afr.com. Retrieved on 2 September 2013.
- Adam Hadhazy (14 May 2015). "Will humans keep getting taller?". BBC. Retrieved 28 March 2017.
- Michael J. Dougherty. "Why are we getting taller as a species?". Scientific American. Retrieved 28 March 2017.
- Laura Blue (8 July 2008). "Why Are People Taller Today Than Yesterday?". Time. Retrieved 28 March 2017.
- Baten, Joerg; Blum, Matthias (2012). "Growing tall but unequal: new findings and new background evidence on anthropometric welfare in 156 countries, 1810–1989". Economic History of Developing Regions. 27: S66–S85. doi:10.1080/20780389.2012.657489.
- Baten, Joerg (2006). "Global Height Trends in Industrial and Developing Countries, 1810-1984: An Overview". Recuperado el. 20.
- Baten, Joerg; Blum, Matthias (2014). "Why are you tall while others are short? Agricultural production and other proximate determinants of global heights". European Review of Economic History. 18 (2): 144–165. doi:10.1093/ereh/heu003.
- Prince, Joseph M.; Steckel, Richard H. (December 1998). "The Tallest in the World: Native Americans of the Great Plains in the Nineteenth Century". NBER Historical Working Paper No. 112. doi:10.3386/h0112.
- Baten, Jörg (June 2000). "Heights and Real Wages in the 18th and 19th Centuries: An International Overview". Economic History Yearbook. 41 (1).
- Schönbeck, Yvonne; Talma, Henk; Van Dommelen, Paula; Bakker, Boudewijn; Buitendijk, Simone E.; Hirasing, Remy A.; Van Buuren, Stef (2012). "The world's tallest nation has stopped growing taller: The height of Dutch children from 1955 to 2009". Pediatric Research. 73 (3): 371–7. doi:10.1038/pr.2012.189. PMID 23222908.
- Komlos, John; Francesco Cinnirella (2007). "European heights in the early 18th century". Vierteljahrschrift für Sozial-und Wirtschaftsgeschichte. 94 (3): 271–284. Retrieved 26 April 2013.
- Engerman, Stanley L.; Gallman, Robert E. (2000). The Cambridge Economic History of the United States. Cambridge University Press. ISBN 978-0-521-55308-7.
- Komlos, John (1998). "Shrinking in a growing economy? The mystery of physical stature during the industrial revolution". Journal of Economic History. 58 (3): 779–802. doi:10.1017/S0022050700021161.
- Komlos, J. (2007). On English Pygmies and giants: The physical stature of English youth in the late 18th and early 19th centuries. Research in Economic History. 25. pp. 149–168. CiteSeerX 10.1.1.539.620. doi:10.1016/S0363-3268(07)25003-7. ISBN 978-0-7623-1370-9.
- Fredriks, Anke Maria (2004). Growth diagrams: fourth Dutch nation-wide survey. Houten: Bohn Stafleu van Loghum. ISBN 9789031343478.
- Drukker, J. W.; Vincent Tassenaar (2000). "Shrinking Dutchmen in a growing economy: the early industrial growth paradox in the Netherlands" (PDF). Jahrbuch für Wirtschaftsgeschichte. 2000: 77–94. ISSN 0075-2800. Retrieved 26 April 2013.
- Demick, Barbara (14 February 2004). "Effects of famine: Short stature evident in North Korean generation". Seattle Times. Seattle, Wash. Retrieved 26 April 2013.
- Demick, Barbara (8 October 2011). "The unpalatable appetites of Kim Jong-il". Retrieved 8 October 2011.
- Baten, Jörg; Stegl, Mojgan; van der Eng, Pierre: “Long-Term Economic Growth and the Standard of Living in Indonesia
- Arno J. Krul; Hein A. M. Daanen; Hyegjoo Choi (2010). "Self-reported and measured weight, height and body mass index (BMI) in Italy, the Netherlands and North America". European Journal of Public Health. 21 (4): 414–419. doi:10.1093/eurpub/ckp228. PMID 20089678.
- Lucca A, JMoura EC (2010). "Validity and reliability of self-reported weight, height and body mass index from telephone interviews" (PDF). Cadernos de Saúde Pública. 26 (1): 110–22. doi:10.1590/s0102-311x2010000100012. PMID 20209215.
- Shields, Margot; Gorber, Sarah Connor; Tremblay, Mark S. (2009). "Methodological Issues in Anthropometry: Self-reported versus Measured Height and Weight" (PDF). Proceedings of Statistics Canada Symposium 2008. Data Collection: Challenges, Achievements and New Directions.
- Moody, Alison (18 December 2013). "10: Adult anthropometric measures, overweight and obesity". In Craig, Rachel; Mindell, Jennifer (eds.). Health Survey for England – 2012 (PDF) (Report). Volume 1: Health, social care and lifestyles. Health and Social Care Information Centre. p. 20. Retrieved 31 July 2014.
- WWC Web World Center GmbH G.R.P. Institut für Rationelle Psychologie KÖRPERMASSE BUNDESLÄNDER & STÄDTE Archived 16 August 2012 at the Wayback Machine 31. Oktober 2007
- Although the mean height of university students is slightly less than the national mean height for ages 20–29 in this study.
- Herpin, Nicolas (2003). "La taille des hommes: son incidence sur la vie en couple et la carrière professionnelle" (PDF). Économie et Statistique. 361 (1): 71–90. doi:10.3406/estat.2003.7355.
- Buckler, JM (1978). "Variations in height throughout the day". Arch Dis Child. 53 (9): 762. doi:10.1136/adc.53.9.762. PMC 1545095. PMID 568918.
- Grasgruber, Pavel; Popović, Stevo; Bokuvka, Dominik; Davidović, Ivan; Hřebíčková, Sylva; Ingrová, Pavlína; Potpara, Predrag; Prce, Stipan; Stračárová, Nikola (1 April 2017). "The mountains of giants: an anthropometric survey of male youths in Bosnia and Herzegovina". Royal Society Open Science. 4 (4): 161054. doi:10.1098/rsos.161054. ISSN 2054-5703. PMC 5414258. PMID 28484621.
- Viegas, Jen (11 April 2017). "The Tallest Men in the World Trace Back to Paleolithic Mammoth Hunters". seeker. Retrieved 12 April 2017.
- "Move Over, Dutch Men. Herzegovinians May Be Tallest in World"
- Grandjean, Etienne (1987). Fitting the Task to the Man: An Ergonomic Approach. London, UK: Taylor & Francis. ISBN 978-0-85066-192-7. (for heights in U.S. and Japan)
- Eurostat Statistical Yearbook 2004. Luxembourg: Eurostat. ISBN 978-92-79-38906-1. (for heights in Germany)
- Netherlands Central Bureau for Statistics, 1996 (for average heights)
- Ogden, Cynthia L.; Fryar, Cheryl D.; Carroll, Margaret D. & Flegal, Katherine M. (27 October 2004). "Mean Body Weight, Height, and Body Mass Index, United States 1960–2002" (PDF). Advance Data from Vital and Health Statistics (347).
- "Health Survey for England - trend data". Department of Health and Social Care. Archived from the original on 10 October 2004.
- Bilger, Burkhard (29 March 2004). "The Height Gap". The New Yorker. Archived from the original on 2 April 2004.
- A collection of data on human height, referred to here as "karube" but originally collected from other sources, is archived here. A copy is available here (an English translation of this Japanese page would make it easier to evaluate the quality of the data...)
- "Americans Slightly Taller, Much Heavier Than Four Decades Ago". National Center for Health Statistics. 27 October 2004.
- Aminorroaya, A.; Amini, M.; Naghdi, H. & Zadeh, A. H. (2003). "Growth charts of heights and weights of male children and adolescents of Isfahan, Iran" (PDF). Journal of Health, Population, and Nutrition. 21 (4): 341–346. PMID 15038589.
- 6. Celostátní antropologický výzkum dětí a mládeže 2001, Česká republika [6th Nationwide anthropological research of children and youth 2001, Czech Republic] (in Czech). Prague: State Health Institute (SZÚ). 2005. ISBN 978-8-07071-251-1.
- Bogin, Barry (2001). The Growth of Humanity. Hoboken, NJ: Wiley-Liss. ISBN 978-0-47135-448-2.
- Eveleth, P.B.; Tanner, J.M. (1990). Worldwide Variation in Human Growth (2nd ed.). Cambridge University Press. ISBN 978-0-52135-916-0.
- Miura, K.; Nakagawa, H. & Greenland, P. (2002). "Invited commentary: Height-cardiovascular disease relation: where to go from here?". American Journal of Epidemiology. 155 (8): 688–689. doi:10.1093/aje/155.8.688. PMID 11943684.
- Ruff, Christopher (October 2002). "Variation in human body size and shape". Annual Review of Anthropology. 31: 211–232. doi:10.1146/annurev.anthro.31.040402.085407.
- "Los españoles somos 3,5 cm más altos que hace 20 años" [Spaniards are 3.5 cm taller than 20 years ago]. 20 minutos (in Spanish). 31 July 2006.
- Krishan, K. & Sharma, J. C. (2002). "Intra-individual difference between recumbent length and stature among growing children". Indian Journal of Pediatrics. 69 (7): 565–569. doi:10.1007/BF02722678. PMID 12173694.
- Case, A. & Paxson, C. (2008). "Stature and Status: Height, ability, and labor market outcomes". The Journal of Political Economy. 116 (3): 499–532. doi:10.1086/589524. PMC 2709415. PMID 19603086.
- Sakamaki, R.; Amamoto, R.; Mochida, Y.; Shinfuku, N. & Toyama, K. (2005). "A comparative study of food habits and body shape perception of university students in Japan and Korea". Nutrition Journal. 4: 31. doi:10.1186/1475-2891-4-31. PMC 1298329. PMID 16255785.
- Habicht, Michael E.; Henneberg, Maciej; Öhrström, Lena M.; Staub, Kaspar & Rühli, Frank J. (27 April 2015). "Body height of mummified pharaohs supports historical suggestions of sibling marriages". American Journal of Physical Anthropology. 157 (3): 519–525. doi:10.1002/ajpa.22728. PMID 25916977.
- Marouli, Eirini; et al. (9 February 2017). "Rare and low-frequency coding variants alter human adult height". Nature. 542 (7640): 186–190. doi:10.1038/nature21039. PMC 5302847. PMID 28146470.
|Wikimedia Commons has media related to Human height.|
- CDC National Center for Health Statistics: Growth Charts of American Percentiles
- fao.org, Body Weights and Heights by Countries (given in percentiles)
- The Height Gap, Article discussing differences in height around the world
- Tallest in the World: Native Americans of the Great Plains in the Nineteenth Century
- European Heights in the Early eighteenth Century
- Spatial Convergence in Height in East-Central Europe, 1890–1910
- The Biological Standard of Living in Europe During the Last Two Millennia
- HEALTH AND NUTRITION IN THE PREINDUSTRIAL ERA: INSIGHTS FROM A MILLENNIUM OF AVERAGE HEIGHTS IN NORTHERN EUROPE
- Our World In Data – Human Height – Visualizations of how human height around the world has changed historically (by Max Roser). Charts for all countries, world maps, and links to more data sources.
- What Has Happened to the Quality of Life in the Advanced Industrialized Nations?
- A century of trends in adult human height, NCD Risk Factor Collaboration (NCD-RisC), DOI: 10.7554/eLife.13410, 25 July 2016 |
In mathematics, matrix multiplication is a binary operation that takes a pair of matrices, and produces another matrix. Numbers such as the real or complex numbers can be multiplied according to elementary arithmetic. On the other hand, matrices are arrays of numbers, so there is no unique way to define "the" multiplication of matrices. As such, in general the term "matrix multiplication" refers to a number of different ways to multiply matrices. The key features of any matrix multiplication include: the number of rows and columns the original matrices have (called the "size", "order" or "dimension"), and specifying how the entries of the matrices generate the new matrix.
Like vectors, matrices of any size can be multiplied by scalars, which amounts to multiplying every entry of the matrix by the same number. Similar to the entrywise definition of adding or subtracting matrices, multiplication of two matrices of the same size can be defined by multiplying the corresponding entries, and this is known as the Hadamard product. Another definition is the Kronecker product of two matrices, to obtain a block matrix.
One can form many other definitions. However, the most useful definition can be motivated by linear equations and linear transformations on vectors, which have numerous applications in applied mathematics, physics, and engineering. This definition is often called the matrix product. In words, if A is an n × m matrix and B is an m × p matrix, their matrix product AB is an n × p matrix, in which the m entries across the rows of A are multiplied with the m entries down the columns of B (the precise definition is below).
This definition is not commutative, although it still retains the associative property and is distributive over entrywise addition of matrices. The identity element of the matrix product is the identity matrix (analogous to multiplying numbers by 1), and a square matrix may have an inverse matrix (analogous to the multiplicative inverse of a number). Determinant multiplicativity applies to the matrix product. The matrix product is an important operation in linear transformations, matrix groups, and the theory of group representations and irreps.
Computing matrix products is both a central operation in many numerical algorithms and potentially time consuming, making it one of the most well-studied problems in numerical computing. Various algorithms have been devised for computing C = AB, especially for large matrices.
This article will use the following notational conventions: matrices are represented by capital letters in bold, e.g. A, vectors in lowercase bold, e.g. a, and entries of vectors and matrices are italic (since they are numbers from a field), e.g. A and a. Index notation is often the clearest way to express definitions, and is used as standard in the literature. The i, j entry of matrix A is indicated by (A)ij or Aij, whereas a numerical label (not matrix entries) on a collection of matrices is subscripted only, e.g. A1, A2, etc.
- 1 Scalar multiplication
- 2 Matrix product (two matrices)
- 2.1 General definition of the matrix product
- 2.2 Illustration
- 2.3 Examples of matrix products
- 2.4 Properties of the matrix product (two matrices)
- 3 Matrix product (any number)
- 4 Operations derived from the matrix product
- 5 Applications of the matrix product
- 6 The inner and outer products
- 7 Algorithms for efficient matrix multiplication
- 8 Other forms of multiplication
- 9 See also
- 10 Notes
- 11 References
- 12 External links
The simplest form of multiplication associated with matrices is scalar multiplication, which is a special case of the Kronecker product.
The left scalar multiplication of a matrix A with a scalar λ gives another matrix λA of the same size as A. The entries of λA are defined by (λA)ij = λ(A)ij.
Similarly, the right scalar multiplication of a matrix A with a scalar λ is defined to be (Aλ)ij = (A)ijλ.
When the underlying ring is commutative, for example, the real or complex number field, these two multiplications are the same, and are simply called scalar multiplication. However, for matrices over a more general ring that are not commutative, such as the quaternions, they may not be equal.
For a real scalar and matrix:
For quaternion scalars and matrices:
where i, j, k are the quaternion units. The non-commutativity of quaternion multiplication means that ij = +k cannot simply be replaced by ji = −k.
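As a minimal illustration (a sketch in Python with NumPy, which is not part of the original article; the matrix and scalar are illustrative), left and right scalar multiplication coincide for a real scalar:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
lam = 2.5

left = lam * A    # (λA)ij = λ * Aij
right = A * lam   # (Aλ)ij = Aij * λ

# For real (commutative) scalars the two products agree.
assert np.array_equal(left, right)
print(left)
```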
Matrix product (two matrices)
Assume two matrices are to be multiplied (the generalization to any number is discussed below).
General definition of the matrix product
If A is an n × m matrix and B is an m × p matrix, their product AB is the n × p matrix whose i, j entry is given by multiplying the entries Aik (across row i of A) by the entries Bkj (down column j of B), for k = 1, 2, ..., m, and summing the results over k: (AB)ij = Ai1B1j + Ai2B2j + ⋯ + AimBmj = Σk AikBkj.
Thus the product AB is defined only if the number of columns in A is equal to the number of rows in B, in this case m. Each entry may be computed one at a time. Sometimes, the summation convention is used as it is understood to sum over the repeated index k. To prevent any ambiguity, this convention will not be used in the article.
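A sketch of this definition in plain Python (the function name and example matrices are illustrative, not from the original article); it mirrors the sum over k and is not meant to be efficient:

```python
def matmul(A, B):
    """Naive matrix product: (AB)[i][j] = sum over k of A[i][k] * B[k][j]."""
    n, m = len(A), len(A[0])
    m2, p = len(B), len(B[0])
    if m != m2:
        raise ValueError("number of columns of A must equal number of rows of B")
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            s = 0
            for k in range(m):
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C

A = [[1, 2, 3],
     [4, 5, 6]]          # 2 x 3
B = [[7, 8],
     [9, 10],
     [11, 12]]           # 3 x 2
print(matmul(A, B))      # [[58, 64], [139, 154]]
```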
Usually the entries are numbers or expressions, but can even be matrices themselves (see block matrix). The matrix product can still be calculated exactly the same way. See below for details on how the matrix product can be calculated in terms of blocks taking the forms of rows and columns.
Diagrammatically, each entry of the product matrix corresponds to the intersection of a row of A with a column of B: the value at the intersection of row i and column j of AB is computed from row i of A and column j of B, as in the definition above.
Examples of matrix products
Row vector and column vector
For a 1 × 3 row vector A and a 3 × 1 column vector B, both matrix products AB and BA are defined.
Note AB and BA are two different matrices: the first is a 1 × 1 matrix while the second is a 3 × 3 matrix. Such expressions occur for real-valued Euclidean vectors in Cartesian coordinates, displayed as row and column matrices, in which case AB is the matrix form of their dot product, while BA is the matrix form of their dyadic or tensor product.
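A small NumPy sketch of this example (the specific numbers are illustrative assumptions):

```python
import numpy as np

A = np.array([[1, 2, 3]])      # 1 x 3 row vector
B = np.array([[4], [5], [6]])  # 3 x 1 column vector

AB = A @ B   # 1 x 1 matrix: the dot product, [[32]]
BA = B @ A   # 3 x 3 matrix: the dyadic (outer) product

print(AB.shape, BA.shape)      # (1, 1) (3, 3)
```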
Square matrix and column vector
For a 3 × 3 square matrix A and a 3 × 1 column vector B, their matrix product AB is a 3 × 1 column vector;
however BA is not defined.
The product of a square matrix multiplied by a column matrix arises naturally in linear algebra; for solving linear equations and representing linear transformations. By choosing a, b, c, p, q, r, u, v, w in A appropriately, A can represent a variety of transformations such as rotations, scaling and reflections, shears, of a geometric shape in space.
For two square matrices A and B of the same size, both matrix products AB and BA are defined.
In this case, both products AB and BA are defined, and the entries show that AB and BA are not equal in general. Multiplying square matrices which represent linear transformations corresponds to the composite transformation (see below for details).
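A small NumPy sketch illustrating the point (the matrices chosen here are illustrative assumptions):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])   # swaps columns when applied from the right, rows from the left

print(A @ B)                          # [[2 1], [4 3]]  -> columns of A swapped
print(B @ A)                          # [[3 4], [1 2]]  -> rows of A swapped
print(np.array_equal(A @ B, B @ A))   # False
```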
Row vector, square matrix, and column vector
For a 1 × 3 row vector A, a 3 × 3 square matrix B, and a 3 × 1 column vector C, their matrix product ABC is a 1 × 1 matrix;
however CBA is not defined. Note that A(BC) = (AB)C; this is one of many general properties listed below. Expressions of the form ABC occur when calculating the inner product of two vectors displayed as row and column vectors in an arbitrary coordinate system, and the metric tensor in these coordinates written as the square matrix.
Properties of the matrix product (two matrices)
- Not commutative:
because AB and BA may not be simultaneously defined, and even if they are, they may still not be equal. This is contrary to ordinary multiplication of numbers. To specify the ordering of matrix multiplication in words: "pre-multiply (or left multiply) A by B" means BA, while "post-multiply (or right multiply) A by C" means AC. As long as the entries of the matrix come from a ring that has an identity, and n > 1, there is a pair of n × n noncommuting matrices over the ring. A notable exception is that the identity matrix (or any scalar multiple of it) commutes with every square matrix.
In index notation: in general, Σk AikBkj ≠ Σk BikAkj, even when both sums are defined.
- Distributive over matrix addition: A(B + C) = AB + AC (left distributivity) and (A + B)C = AC + BC (right distributivity).
In index notation, these are respectively: Σk Aik(Bkj + Ckj) = Σk AikBkj + Σk AikCkj and Σk (Aik + Bik)Ckj = Σk AikCkj + Σk BikCkj.
- Scalar multiplication is compatible with matrix multiplication: λ(AB) = (λA)B and (AB)λ = A(Bλ),
where λ is a scalar. If the entries of the matrix are real or complex numbers (or from any other commutative ring), then all four quantities are equal. More generally, all four are equal if λ belongs to the center of the ring of entries of the matrix, because in this case λX = Xλ for all matrices X. In index notation, these are respectively: λ(Σk AikBkj) = Σk (λAik)Bkj and (Σk AikBkj)λ = Σk Aik(Bkjλ).
- Transpose: (AB)T = BTAT, where T denotes the transpose, the interchange of row i with column i in a matrix. This identity holds for any matrices over a commutative ring, but not for all rings in general. Note that A and B are reversed.
In index notation: ((AB)T)ij = (AB)ji = Σk AjkBki = Σk (BT)ik(AT)kj.
- Complex conjugate:
If A and B have complex entries, then (AB)* = A*B*,
where * denotes the complex conjugate of a matrix.
In index notation: ((AB)*)ij = (Σk AikBkj)* = Σk (Aik)*(Bkj)*.
- Conjugate transpose:
If A and B have complex entries, then (AB)† = B†A†,
where † denotes the Conjugate transpose of a matrix (complex conjugate and transposed).
In index notation: ((AB)†)ij = ((AB)*)ji = Σk (Ajk)*(Bki)* = Σk (B†)ik(A†)kj.
The trace of a product AB is independent of the order of A and B: tr(AB) = tr(BA).
In index notation: tr(AB) = Σi Σk AikBki = Σk Σi BkiAik = tr(BA).
Square matrices only
- Identity element:
If A is a square matrix, then AI = IA = A, where I is the identity matrix of the same size as A.
- Inverse matrix:
If A is a square matrix, there may be an inverse matrix A−1 of A such that AA−1 = A−1A = I.
When a determinant of a matrix is defined (i.e., when the underlying ring is commutative), if A and B are square matrices of the same order, the determinant of their product AB equals the product of their determinants: det(AB) = det(A) det(B).
Matrix product (any number)
Matrix multiplication can be extended to the case of more than two matrices, provided that for each sequential pair, their dimensions match.
The product of n matrices A1, A2, ..., An with sizes s0 × s1, s1 × s2, ..., sn − 1 × sn (where s0, s1, s2, ..., sn are all simply positive integers and the subscripts are labels corresponding to the matrices, nothing more) is the s0 × sn matrix A1A2⋯An.
In index notation: (A1A2⋯An)i,j = Σk1 Σk2 ⋯ Σkn−1 (A1)i,k1(A2)k1,k2 ⋯ (An)kn−1,j.
Properties of the matrix product (any number)
The same properties will hold, as long as the ordering of matrices is not changed. Some of the previous properties for more than two matrices generalize as follows.
The matrix product is associative. If three matrices A, B, and C are respectively m × p, p × q, and q × r matrices, then there are two ways of grouping them without changing their order, and (AB)C = A(BC)
is an m × r matrix.
If four matrices A, B, C, and D are respectively m × p, p × q, q × r, and r × s matrices, then there are five ways of grouping them without changing their order, and ((AB)C)D = (A(BC))D = (AB)(CD) = A((BC)D) = A(B(CD))
is an m × s matrix.
In general, the number of possible ways of grouping n matrices for multiplication is equal to the (n − 1)th Catalan number
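A short sketch (Python with NumPy; the matrix sizes are illustrative assumptions) showing that two groupings of a three-matrix chain give the same result while requiring very different numbers of scalar multiplications, which is why the choice of grouping matters in practice:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((10, 100))   # 10 x 100
B = rng.random((100, 5))    # 100 x 5
C = rng.random((5, 50))     # 5 x 50

# Both groupings give the same 10 x 50 matrix (associativity) ...
assert np.allclose((A @ B) @ C, A @ (B @ C))

# ... but the number of scalar multiplications differs
# (multiplying an m x p by a p x q matrix costs m*p*q multiplications):
# (AB)C: 10*100*5 + 10*5*50   =  5,000 +  2,500 =  7,500
# A(BC): 100*5*50 + 10*100*50 = 25,000 + 50,000 = 75,000
```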
The trace of a product of n matrices A1, A2, ..., An is invariant under cyclic permutations of the matrices in the product: tr(A1A2⋯An) = tr(A2⋯AnA1).
For square matrices only, the determinant of a product is the product of determinants: det(A1A2⋯An) = det(A1) det(A2) ⋯ det(An).
Examples of chain multiplication
Similarity transformations involving similar matrices are matrix products of three square matrices, in the form B = P−1AP,
where P is the similarity matrix and A and B are said to be similar if this relation holds. This product appears frequently in linear algebra and applications, such as diagonalizing square matrices and the equivalence between different matrix representations of the same linear operator.
Operations derived from the matrix product
More operations on square matrices can be defined using the matrix product, such as powers and nth roots by repeated matrix products, the matrix exponential can be defined by a power series, the matrix logarithm is the inverse of matrix exponentiation, and so on.
Powers of matrices
Square matrices can be multiplied by themselves repeatedly in the same way as ordinary numbers, because they always have the same number of rows and columns. This repeated multiplication can be described as a power of the matrix, a special case of the ordinary matrix product. On the contrary, rectangular matrices do not have the same number of rows and columns so they can never be raised to a power. An n × n matrix A raised to a positive integer k is defined as Ak = AA⋯A (k factors),
and the following identities hold, where λ is a scalar: A0 = I (the zero power is the identity matrix), (λA)k = λkAk, and det(Ak) = det(A)k.
The naive computation of matrix powers is to multiply the running result by A, k times, starting with the identity matrix, just as in the scalar case. This can be improved using exponentiation by squaring, a method commonly used for scalars. For diagonalizable matrices, an even better method is to use the eigenvalue decomposition of A. Another method based on the Cayley–Hamilton theorem finds an identity using the matrices' characteristic polynomial, producing a more effective equation for Ak in which a scalar is raised to the required power, rather than an entire matrix.
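A sketch of exponentiation by squaring in Python with NumPy (the function name and test matrix are illustrative assumptions, not from the original article):

```python
import numpy as np

def matrix_power(A, k):
    """Compute A**k for a square matrix A and integer k >= 0 by repeated squaring."""
    n = A.shape[0]
    result = np.eye(n, dtype=A.dtype)   # A**0 is the identity matrix
    base = A.copy()
    while k > 0:
        if k & 1:                        # current bit set: multiply in this power of A
            result = result @ base
        base = base @ base               # square
        k >>= 1
    return result

A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
print(matrix_power(A, 10))              # agrees with repeated multiplication
print(np.linalg.matrix_power(A, 10))    # NumPy's built-in, for comparison
```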
A special case is the power of a diagonal matrix. Since the product of diagonal matrices amounts to simply multiplying corresponding diagonal elements together, the power k of a diagonal matrix A will have its entries raised to the power. Explicitly, if A = diag(a1, a2, ..., an), then Ak = diag(a1k, a2k, ..., ank),
meaning it is easy to raise a diagonal matrix to a power. When raising an arbitrary matrix (not necessarily a diagonal matrix) to a power, it is often helpful to exploit this property by diagonalizing the matrix first.
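A short NumPy sketch of both observations (the matrices are illustrative assumptions): powers of a diagonal matrix act entrywise on the diagonal, and for a diagonalizable matrix A = PDP−1 we can compute Ak = PDkP−1.

```python
import numpy as np

# Powers of a diagonal matrix: just raise the diagonal entries to the power.
D = np.diag([2.0, 3.0, 5.0])
print(np.linalg.matrix_power(D, 4))          # diag(16, 81, 625)

# For a diagonalizable matrix A = P D P^-1, we have A^k = P D^k P^-1.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, P = np.linalg.eig(A)
Dk = np.diag(eigvals ** 5)
Ak = P @ Dk @ np.linalg.inv(P)
print(np.allclose(Ak, np.linalg.matrix_power(A, 5)))  # True
```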
Applications of the matrix product
Matrices offer a concise way of representing linear transformations between vector spaces, and matrix multiplication corresponds to the composition of linear transformations. The matrix product of two matrices can be defined when their entries belong to the same ring, and hence can be added and multiplied.
Suppose that A, B, and C are the matrices representing the transformations S, T, and ST with respect to the given bases.
Then AB = C, that is, the matrix of the composition (or the product) of linear transformations is the product of their matrices with respect to the given bases.
Linear systems of equations
A system of linear equations with the same number of equations as variables can be solved by collecting the coefficients of the equations into a square matrix A, writing the system as a single matrix equation Ax = b, and then solving it, for example by multiplying both sides by the inverse A−1.
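A minimal NumPy sketch, assuming a small 2 × 2 system chosen only for illustration:

```python
import numpy as np

# Solve the system  2x + y = 5,  x + 3y = 10  written as A x = b.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

x = np.linalg.solve(A, b)              # preferred: solves without forming A^-1 explicitly
x_via_inverse = np.linalg.inv(A) @ b   # the "multiply by the inverse" route

print(x)                               # [1. 3.]
print(np.allclose(A @ x, b))           # True
print(np.allclose(x, x_via_inverse))   # True
```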
Group theory and representation theory
The inner and outer products
The inner product of two vectors in matrix form is equivalent to a column vector multiplied on its left by a row vector: a · b = aTb,
where aT denotes the transpose of a.
The matrix product itself can be expressed in terms of inner products. Suppose that the first n × m matrix A is decomposed into its row vectors ai, and the second m × p matrix B into its column vectors bi; then the i, j entry of AB is the inner product of the i-th row of A with the j-th column of B: (AB)ij = aibj.
It is also possible to express a matrix product in terms of concatenations of products of matrices and row or column vectors: for example, AB = A[b1 b2 ⋯ bp] = [Ab1 Ab2 ⋯ Abp], so each column of AB is A applied to the corresponding column of B.
These decompositions are particularly useful for matrices that are envisioned as concatenations of particular types of row vectors or column vectors, e.g. orthogonal matrices (whose rows and columns are unit vectors orthogonal to each other) and Markov matrices (whose rows or columns sum to 1).
An alternative method is to express the matrix product in terms of the outer product. The decomposition is done the other way around: the first matrix A is decomposed into column vectors ai and the second matrix B into row vectors bi, so that AB = a1b1 + a2b2 + ⋯ + ambm,
where this time each aibi is an n × p outer product of an n × 1 column with a 1 × p row.
This method emphasizes the effect of individual column/row pairs on the result, which is a useful point of view with e.g. covariance matrices, where each such pair corresponds to the effect of a single sample point.
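A short NumPy sketch of the outer-product decomposition (the matrix sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((4, 3))   # columns a_1, a_2, a_3
B = rng.random((3, 5))   # rows    b_1, b_2, b_3

# AB equals the sum over i of the outer products a_i b_i.
outer_sum = sum(np.outer(A[:, i], B[i, :]) for i in range(A.shape[1]))

print(np.allclose(A @ B, outer_sum))  # True
```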
Algorithms for efficient matrix multiplication
The running time of square matrix multiplication, if carried out naïvely, is O(n3). The running time for multiplying rectangular matrices (one m × p-matrix with one p × n-matrix) is O(mnp); however, more efficient algorithms exist, such as Strassen's algorithm, devised by Volker Strassen in 1969 and often referred to as "fast matrix multiplication". It is based on a way of multiplying two 2 × 2-matrices which requires only 7 multiplications (instead of the usual 8), at the expense of several additional addition and subtraction operations. Applying this recursively gives an algorithm with a multiplicative cost of O(nlog2 7) ≈ O(n2.807). Strassen's algorithm is more complex, and the numerical stability is reduced compared to the naïve algorithm. Nevertheless, it appears in several libraries, such as BLAS, where it is significantly more efficient for matrices with dimensions n > 100, and is very useful for large matrices over exact domains such as finite fields, where numerical stability is not an issue.
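A compact, purely educational sketch of Strassen's seven-product scheme in Python with NumPy, assuming n is a power of two and recursing all the way to 1 × 1 blocks (a practical implementation would switch to the ordinary product below a cutoff size):

```python
import numpy as np

def strassen(A, B):
    """Strassen's algorithm for n x n matrices with n a power of two (educational sketch)."""
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

    # Seven recursive products instead of eight.
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)

    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])

rng = np.random.default_rng(0)
A = rng.random((8, 8))
B = rng.random((8, 8))
print(np.allclose(strassen(A, B), A @ B))  # True
```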
The current O(nk) algorithm with the lowest known exponent k is a generalization of the Coppersmith–Winograd algorithm that has an asymptotic complexity of O(n2.3728639), by François Le Gall. This algorithm, and the Coppersmith–Winograd algorithm on which it is based, are similar to Strassen's algorithm: a way is devised for multiplying two k × k-matrices with fewer than k3 multiplications, and this technique is applied recursively. However, the constant coefficient hidden by the Big O notation is so large that these algorithms are only worthwhile for matrices that are too large to handle on present-day computers.
Since any algorithm for multiplying two n × n-matrices has to process all 2n2 entries, there is an asymptotic lower bound of Ω(n2) operations. Raz (2002) proves a lower bound of Ω(n2 log(n)) for bounded coefficient arithmetic circuits over the real or complex numbers.
Cohn et al. (2003, 2005) put methods such as the Strassen and Coppersmith–Winograd algorithms in an entirely different group-theoretic context, by utilising triples of subsets of finite groups which satisfy a disjointness property called the triple product property (TPP). They show that if families of wreath products of Abelian groups with symmetric groups realise families of subset triples with a simultaneous version of the TPP, then there are matrix multiplication algorithms with essentially quadratic complexity. Most researchers believe that this is indeed the case. However, Alon, Shpilka and Chris Umans have recently shown that some of these conjectures implying fast matrix multiplication are incompatible with another plausible conjecture, the sunflower conjecture.
Freivalds' algorithm is a simple Monte Carlo algorithm that, given matrices A, B, C, verifies in Θ(n2) time whether AB = C.
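A sketch of Freivalds' check in Python with NumPy (the trial count and test matrices are illustrative assumptions): each round uses only matrix-vector products, so the cost per round is Θ(n2).

```python
import numpy as np

def freivalds(A, B, C, trials=10):
    """Probabilistically check whether AB == C using O(n^2) work per trial.

    If AB != C, each trial detects the mismatch with probability at least 1/2,
    so the error probability after `trials` rounds is at most 2**-trials.
    """
    n = C.shape[1]
    rng = np.random.default_rng()
    for _ in range(trials):
        r = rng.integers(0, 2, size=n)           # random 0/1 vector
        if not np.allclose(A @ (B @ r), C @ r):  # two matrix-vector products
            return False
    return True

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
print(freivalds(A, B, A @ B))        # True
print(freivalds(A, B, A @ B + 1))    # almost certainly False
```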
Parallel matrix multiplication
Because of the nature of matrix operations and the layout of matrices in memory, it is typically possible to achieve substantial performance gains through the use of parallelization and vectorization. Several algorithms are possible, among them divide and conquer algorithms based on the block matrix decomposition of A, B, and C into 2 × 2 grids of submatrices
that also underlies Strassen's algorithm. Here, A, B and C are presumed to be n by n (square) matrices, and C11 etc. are n/2 by n/2 submatrices. From this decomposition, one derives C11 = A11B11 + A12B21, C12 = A11B12 + A12B22, C21 = A21B11 + A22B21, and C22 = A21B12 + A22B22,
which consists of eight multiplications of pairs of submatrices, which can all be performed in parallel, followed by an addition step. Applying this recursively, and performing the additions in parallel as well, one obtains an algorithm that runs in Θ(log2 n) time on an ideal machine with an infinite number of processors, and has a maximum possible speedup of Θ(n3/(log2 n)) on any real computer (although the algorithm isn't practical, a more practical variant achieves Θ(n2) speedup).
Note that some algorithms with lower time complexity on paper may have indirect time-complexity costs on real machines.
Communication-avoiding and distributed algorithms
On modern architectures with hierarchical memory, the cost of loading and storing input matrix elements tends to dominate the cost of arithmetic. On a single machine this is the amount of data transferred between RAM and cache, while on a distributed memory multi-node machine it is the amount transferred between nodes; in either case it is called the communication bandwidth. The naïve algorithm using three nested loops uses Ω(n3) communication bandwidth.
Cannon's algorithm, also known as the 2D algorithm, partitions each input matrix into a block matrix whose elements are submatrices of size √(M/3) by √(M/3), where M is the size of fast memory. The naïve algorithm is then used over the block matrices, computing products of submatrices entirely in fast memory. This reduces communication bandwidth to O(n3/√M), which is asymptotically optimal (for algorithms performing Ω(n3) computation).
In a distributed setting with p processors arranged in a √p by √p 2D mesh, one submatrix of the result can be assigned to each processor, and the product can be computed with each processor transmitting O(n2/√p) words, which is asymptotically optimal assuming that each node stores the minimum O(n2/p) elements. This can be improved by the 3D algorithm, which arranges the processors in a 3D cube mesh, assigning every product of two input submatrices to a single processor. The result submatrices are then generated by performing a reduction over each row. This algorithm transmits O(n2/p2/3) words per processor, which is asymptotically optimal. However, this requires replicating each input matrix element p1/3 times, and so requires a factor of p1/3 more memory than is needed to store the inputs. This algorithm can be combined with Strassen to further reduce runtime. "2.5D" algorithms provide a continuous tradeoff between memory usage and communication bandwidth. On modern distributed computing environments such as MapReduce, specialized multiplication algorithms have been developed.
Other forms of multiplication
Some other ways to multiply two matrices are given below; some, in fact, are simpler than the definition above. The Cracovian product is yet another form.
For two matrices A and B of the same dimensions, there is the Hadamard product, also known as the element-wise product, pointwise product, entrywise product and the Schur product. The Hadamard product A ○ B is a matrix of the same dimensions, in which the i, j element of A is multiplied with the i, j element of B; that is, (A ○ B)ij = (A)ij(B)ij.
This operation is identical to multiplying many ordinary numbers (mn of them) all at once; thus the Hadamard product is commutative, associative and distributive over entrywise addition. It is also a principal submatrix of the Kronecker product. It appears in lossy compression algorithms such as JPEG.
The Frobenius inner product, sometimes denoted A : B or ⟨A, B⟩F, is the component-wise inner product of two matrices as though they are vectors. It is also the sum of the entries of the Hadamard product. Explicitly, A : B = Σi Σj AijBij = tr(ATB) (for real matrices).
For two matrices A and B of any dimensions m × n and p × q respectively (no constraints on the dimensions of each matrix), the Kronecker product A ⊗ B is the mp × nq block matrix obtained by multiplying each entry of A by the entire matrix B.
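A short NumPy sketch of the three products described above (Hadamard, Frobenius, Kronecker); the example matrices are illustrative assumptions:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

hadamard = A * B                    # entrywise (Schur) product: [[5, 12], [21, 32]]
frobenius = np.sum(A * B)           # sum of the Hadamard entries: 70
same = np.trace(A.T @ B)            # equivalently tr(A^T B): 70
kron = np.kron(A, B)                # 4 x 4 block matrix [[1*B, 2*B], [3*B, 4*B]]

print(hadamard)
print(frobenius, same)
print(kron.shape)                   # (4, 4)
```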
- Lerner, R. G.; Trigg, G. L. (1991). Encyclopaedia of Physics (2nd ed.). VHC publishers. ISBN 3-527-26954-1.
- Parker, C. B. (1994). McGraw Hill Encyclopaedia of Physics (2nd ed.). ISBN 0-07-051400-3.
- Lipschutz, S.; Lipson, M. (2009). Linear Algebra. Schaum's Outlines (4th ed.). McGraw Hill (USA). pp. 30–31. ISBN 978-0-07-154352-1.
- Riley, K. F.; Hobson, M. P.; Bence, S. J. (2010). Mathematical methods for physics and engineering. Cambridge University Press. ISBN 978-0-521-86153-3.
- Adams, R. A. (1995). Calculus, A Complete Course (3rd ed.). Addison Wesley. p. 627. ISBN 0 201 82823 5.
- Horn, Johnson (2013). Matrix Analysis (2nd ed.). Cambridge University Press. p. 6. ISBN 978 0 521 54823 6.
- Lipschutz, S.; Lipson, M. (2009). "2". Linear Algebra. Schaum's Outlines (4th ed.). McGraw Hill (USA). ISBN 978-0-07-154352-1.
- Horn, Johnson (2013). "0". Matrix Analysis (2nd ed.). Cambridge University Press. ISBN 978 0 521 54823 6.
- Mathematical methods for physics and engineering, K.F. Riley, M.P. Hobson, S.J. Bence, Cambridge University Press, 2010, ISBN 978-0-521-86153-3
- Miller, Webb (1975), "Computational complexity and numerical stability", SIAM News, 4: 97–107, doi:10.1137/0204009, CiteSeerX: 10.1.1.148.9947
- Press 2007, p. 108.
- Le Gall, François (2014), "Powers of tensors and fast matrix multiplication", Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation (ISSAC 2014), arXiv:. The original algorithm was presented by Don Coppersmith and Shmuel Winograd in 1990 and has an asymptotic complexity of O(n2.376). It was improved in 2013 to O(n2.3729) by Virginia Vassilevska Williams, giving a time only slightly worse than Le Gall's improvement: Williams, Virginia Vassilevska. "Multiplying matrices faster than Coppersmith-Winograd" (PDF).
- Iliopoulos, Costas S. (1989), "Worst-case complexity bounds on algorithms for computing the canonical structure of finite abelian groups and the Hermite and Smith normal forms of an integer matrix" (PDF), SIAM Journal on Computing, 18 (4): 658–669, doi:10.1137/0218045, MR 1004789,
The Coppersmith–Winograd algorithm is not practical, due to the very large hidden constant in the upper bound on the number of multiplications required.
- Robinson, Sara (2005), "Toward an Optimal Algorithm for Matrix Multiplication" (PDF), SIAM News, 38 (9)
- Robinson, 2005.
- Alon, Shpilka, Umans, On Sunflowers and Matrix Multiplication
- Randall, Keith H. (1998). Cilk: Efficient Multithreaded Computing (PDF) (Ph.D.). Massachusetts Institute of Technology. pp. 54–57.
- Lynn Elliot Cannon, A cellular computer to implement the Kalman Filter Algorithm, Technical report, Ph.D. Thesis, Montana State University, 14 July 1969.
- Hong, J.W.; Kung, H. T. (1981). "I/O complexity: The red-blue pebble game". STOC ’81: Proceedings of the thirteenth annual ACM symposium on Theory of computing: 326–333.
- Irony, Dror; Toledo, Sivan; Tiskin, Alexander (September 2004). "Communication lower bounds for distributed-memory matrix multiplication". J. Parallel Distrib. Comput. 64 (9): 1017–1026. doi:10.1016/j.jpdc.2004.03.021.
- Agarwal, R.C.; Balle, S. M.; Gustavson, F. G.; Joshi, M.; Palkar, P. (September 1995). "A three-dimensional approach to parallel matrix multiplication". IBM J. Res. Dev. 39 (5): 575–582. doi:10.1147/rd.395.0575.
- Solomonik, Edgar; Demmel, James (2011). "Communication-optimal parallel 2.5D matrix multiplication and LU factorization algorithms". Proceedings of the 17th international conference on Parallel processing. Part II: 90–109. doi:10.1007/978-3-642-23397-5_10.
- Pietracaprina, A.; Pucci, G.; Riondato, M.; Silvestri, F.; Upfal, E. (2012). "Space-Round Tradeoffs for MapReduce Computations". Proc. of 26th ACM International Conference on Supercomputing. Venice (Italy): ACM. pp. 235–244. doi:10.1145/2304576.2304607.
- (Horn & Johnson 1985, Ch. 5)
- Steeb, Willi-Hans (1997), Matrix Calculus and Kronecker Product with Applications and C++ Programs, World Scientific, p. 55, ISBN 9789810232412.
- Henry Cohn, Robert Kleinberg, Balázs Szegedy, and Chris Umans. Group-theoretic Algorithms for Matrix Multiplication. arXiv:math.GR/0511460. Proceedings of the 46th Annual Symposium on Foundations of Computer Science, 23–25 October 2005, Pittsburgh, PA, IEEE Computer Society, pp. 379–388.
- Henry Cohn, Chris Umans. A Group-theoretic Approach to Fast Matrix Multiplication. arXiv:math.GR/0307321. Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, 11–14 October 2003, Cambridge, MA, IEEE Computer Society, pp. 438–449.
- Coppersmith, D.; Winograd, S. (1990). "Matrix multiplication via arithmetic progressions". J. Symbolic Comput. 9: 251–280. doi:10.1016/s0747-7171(08)80013-2.
- Horn, Roger A.; Johnson, Charles R. (1991), Topics in Matrix Analysis, Cambridge University Press, ISBN 978-0-521-46713-1
- Knuth, D.E., The Art of Computer Programming Volume 2: Seminumerical Algorithms. Addison-Wesley Professional; 3 edition (November 14, 1997). ISBN 978-0-201-89684-8. pp. 501.
- Press, William H.; Flannery, Brian P.; Teukolsky, Saul A.; Vetterling, William T. (2007), Numerical Recipes: The Art of Scientific Computing (3rd ed.), Cambridge University Press, ISBN 978-0-521-88068-8.
- Ran Raz. On the complexity of matrix product. In Proceedings of the thirty-fourth annual ACM symposium on Theory of computing. ACM Press, 2002. doi:10.1145/509907.509932.
- Robinson, Sara, Toward an Optimal Algorithm for Matrix Multiplication, SIAM News 38(9), November 2005. PDF
- Strassen, Volker, Gaussian Elimination is not Optimal, Numer. Math. 13, p. 354-356, 1969.
- Styan, George P. H. (1973), "Hadamard Products and Multivariate Statistical Analysis", Linear Algebra and its Applications, 6: 217–240, doi:10.1016/0024-3795(73)90023-2
- Vassilevska Williams, Virginia, Multiplying matrices faster than Coppersmith-Winograd, Manuscript, May 2012. PDF
- How to Multiply Matrices
- Matrix Multiplication Calculator Online
- The Simultaneous Triple Product Property and Group-theoretic Results for the Exponent of Matrix Multiplication
- WIMS Online Matrix Multiplier
- Wijesuriya, Viraj B., Daniweb: Sample Code for Matrix Multiplication using MPI Parallel Programming Approach, retrieved 2010-12-29
- Linear algebra: matrix operations Multiply or add matrices of a type and with coefficients you choose and see how the result was computed.
- Matrix Multiplication in Java – Dr. P. Viry |
Over 40 percent of the cereal crop’s genes respond to drought stress.
Fields of drooping stalks and cracked earth are becoming common images in many regions due to more extreme weather events such as heat waves, droughts and floods. The planet’s resources are being stretched by a growing human population and increasing demand for agricultural products. Sorghum bicolor (L.) Moench is an African grass that adroitly handles droughts, floods and poor soils. This is the first paper to describe sorghum’s response to drought in a large-scale field experiment designed to uncover the mechanisms behind sorghum’s capacity to produce high yields despite drought conditions. The field experiment is led by a consortium involving researchers from the University of California (UC), Berkeley, UC Agriculture and Natural Resources (ANR), the US Department of Agriculture Plant Gene Expression Center, Pacific Northwest National Laboratory, and the U.S. Department of Energy Joint Genome Institute (JGI), a DOE Office of Science User Facility located at Lawrence Berkeley National Laboratory (Berkeley Lab).
Half a billion people consider sorghum a staple food in their diet, and the Department of Energy (DOE) also considers this to be a candidate bioenergy crop for its potential production of biomass on marginal lands not usable for conventional food crops. For that reason, its genome was sequenced by the JGI in 2009 and it is considered to be a Flagship Plant. By uncovering and characterizing the mechanisms through which sorghum is equipped to deal with adverse environmental conditions, researchers hope to improve yields for other crops under water-limited conditions.
In humans, traits such as height or susceptibility to certain diseases are well known to be partially due to the DNA sequence that makes up an individual’s genome. The genes of sorghum plants are likely responsible for the crop’s ability to produce good yields under water-limited conditions. Epigenetic Control of Drought Response in Sorghum (EPICON) is a five-year, multi-institution project funded by the DOE Office of Biological and Environmental Research to determine which genes respond to drought conditions, and how these conditions impact the crop and its microbiome. For three consecutive years (2016-2018), researchers conducted large-scale experiments in the field at the UC Kearney Agricultural Research and Extension Center (KARE) in Parlier, Calif. They grew two sorghum cultivars under three watering conditions, and then collected root, leaf and rhizosphere (soil surrounding the root) samples from the plants at the same time each week over the full life cycle of the plant. The first year’s gene expression results were published in the Proceedings of the National Academy of Sciences the week of December 2, 2019.
While sorghum is drought-tolerant, the crop’s precise response depends on when exactly water becomes a limiting factor – before or after flowering. For the study, the team grew two sorghum varieties: one that better tolerates pre-flowering drought stress and a “stay-green” variety that better tolerates drought conditions after flowering. All plants were subjected to one of three conditions during their life cycles. In the pre-flowering drought condition, no water was given during weeks 3 to 8, before flowering. In the post-flowering drought condition, irrigation was halted after the weekly watering applied before week 9 (flowering). In the control condition, water was applied weekly throughout the duration of the experiment, equal to the amount of water lost through evaporation in the leaves. Each week the team would set up a makeshift lab powered by a generator at the KARE field site and then collect and process the samples starting at 10am Pacific time.
JGI researchers helped develop the overall experimental design, sequenced the RNA from nearly 400 root and leaf samples and helped analyze the sequencing datasets. The results from the first year showed that 10,727 genes, or 44% of expressed genes, respond to drought stress regardless of whether drought was applied before or after the plants flower. More genes responded to drought in the roots than in the leaves, suggesting that roots are more affected by the lack of water than the leaves. The team found sets of genes whose expression changed in the same manner in the two cultivars, and some that responded differently. Some of the differing responses relate to the functioning of the photosynthetic machinery, which requires water. Other responses help the plant deal with excess solar radiation.
Additionally, in both cultivars, the team found that a large set of genes impacted by drought is associated with the symbiotic relationship between plant roots and arbuscular mycorrhizal fungi (AMF), known to provide plants with nutrients and pathogen protection. Specifically, gene expression in that set, implicated in AMF interactions, was dramatically reduced as a result of pre-flowering drought stress.
The first year’s results will be compared to data from additional years of sampling, which are currently being analyzed. All EPICON data collected, along with methodology and results, will ultimately be published on the JGI plant data portal Phytozome.
Ramana Madupu, Ph.D.
Biological Systems Sciences Division
Office of Biological and Environmental Research
Office of Science
US Department of Energy
University of California, Berkeley
DOE Joint Genome Institute
This research was funded in part by DOE Grant DE-SC0014081 (to N.V., B.C., C.G., G.P., M.M., J.H., J.S., Y.Y., J.A.O., V.R.S., S.D., L.X., M.J.B., A.V., C.J., R.H., D.C.-D., R.O., J.W.T., J.D., J.P.V., P.G.L., and E.P.); Gordon and Betty Moore Foundation Grant GBMF3834 and Alfred P. Sloan Foundation Grant 2013-10-27 (to the University of California, Berkeley [N.V.]); L’Ecole Normale Superieure-CFM Data Science Chair (E.P.); and the Office of Science (BER), DOE Grant DE-SC0012460 (to M.J.H.). Work conducted by the DOE Joint Genome Institute is supported by the Office of Science of the DOE Contract DE-AC02-05CH11231. D.P. is supported in part by the Berkeley Fellowship and NSF Graduate Research Fellowship Program Grant DGE 1752814. K.K.N. is an investigator of the Howard Hughes Medical Institute.
- Varoquaux N et al. Transcriptomic analysis of field-droughted sorghum from seedling to maturity reveals biotic and metabolic responses. Proc. Natl. Acad. Sci. U.S.A. 2019 Dec 5. doi: .
- UC Berkeley News Release: “Genomic gymnastics help sorghum plant survive drought“
- UCANR News Release: “Genomic gymnastics help sorghum plant survive drought“
- ABC30 Story: “Drought tolerant crop being studied in the Valley”
- EPICON on JGI Phytozome portal
- JGI Plant Flagship Genomes
- JGI News Release: Scientists Publish Genetic Blueprint of Key Biofuels Crop
- Sorghum bicolor genome on JGI Phytozome portal
- JGI Feature: Studying Drought Tolerance in Sorghum
by Massie S. Ballon |
MCQs Basic Statistics 12
This quiz contains MCQs about Basic Statistics with answers covering variable and type of variable, Measure of central tendency such as mean, median, mode, Weighted mean, data and type of data, sources of data, Measure of Dispersion/ Variation, Standard Deviation, Variance, Range, etc. Let us start the MCQs Basic Statistics Quiz.
MCQs about Introductory Statistics
If you find that any posted MCQ is wrong, please comment below the MCQ with the correct answer and a detailed explanation. Don’t forget to mention the MCQ statement (or a screenshot), because MCQs and their answers are generated randomly.
Basic statistics deals with measures of central tendency (such as the mean, median, mode, weighted mean, geometric mean, and harmonic mean) and measures of dispersion (such as the range, standard deviation, and variance).
Basic statistical methods include planning and designing the study, collecting data, and arranging and summarizing the collected data numerically and graphically.
Basic statistics are also used to perform statistical analysis to draw meaningful inferences.
A basic visual inspection of data using graphical and numerical statistics may reveal useful information hidden in the data. Graphical representations include the bar chart, pie chart, dot chart, box plot, etc.
Companies related to finance, communication, manufacturing, charity organizations, government institutes, and businesses from small to large are all examples of organizations with a massive interest in collecting data and computing different sorts of statistical findings. This helps them learn from the past, notice trends, and plan for the future. |
Everything around us came from an ancient swirling disk of dust. Our sun, our planet, and the planets of our solar system all formed from the remnants of this disk. Our mountains, our animals, and even us humans came from the sun that formed from this disk. Nobody on Earth has ever handled anything that came from anywhere else.
Until recently, that is. Scientists think that a few microscopic grains caught by a NASA spacecraft might actually be from outside of our solar system. This interstellar dust – that’s the fancy name for it – was discovered thanks to both NASA and a group of citizen scientists who volunteered their time for the sake of science.
In early 1999, NASA launched a mission called Stardust, whose job was to travel to a comet, collect its dust, and return to Earth. Stardust used a trap with a special gel to collect tiny dust grains from the glowing area around the comet. On its way there, it also collected particles of dust floating around our solar system. After reaching the comet, it headed home and in 2006 safely returned to Earth with its precious cargo. A great success!
But analyzing all those dust grains is really hard. Some of the grains are a thousand times smaller than a grain of sand! Once the material was returned to Earth, scientists had to take millions of close-up pictures of the gel to help them locate and analyze the dust.
There were too many pictures for the Stardust team to analyze in their lifetime. So they uploaded these super zoomed-in dust trap pictures to the Internet and let people – nicknamed ‘Dusters’ – assist in the search. That really sped things along!
Thanks to the volunteers’ work and the hard work of the scientists involved with the project, NASA reported that they might have found seven grains of dust that came from outside our solar system. They think the grains came from somewhere else because their chemistry is very different from that of usual space dust.
Where did they come from? They might have come from a huge supernova explosion millions of years in the past. Or they could have come from massive faraway stars. Either way, if the scientists are right, and it does turn out to be interstellar dust, it would be very exciting. These seven small grains could teach us about something that astronomers see all over space, but have never seen up close.
Learn more about the disk of dust that formed our solar system at NASA’s Space Place. Check out http://spaceplace.nasa.gov/solar-system-formation.
The LaRue County Herald News will run a monthly NASA column for children. The articles are written at upper-elementary grade level about specific topics related to space and Earth science, as well as space exploration technology. The articles are provided by the creators of The Space Place, a popular and award-winning NASA website for kids. |
A lichen (LEYE-ken or, sometimes in the UK, LICH-en) is a composite organism that arises from algae or cyanobacteria living among filaments of multiple fungi species in a mutualistic relationship. Lichens have properties different from those of their component organisms. Lichens come in many colors, sizes, and forms and are sometimes plant-like, but lichens are not plants. Lichens may have tiny, leafless branches (fruticose), flat leaf-like structures (foliose), flakes that lie on the surface like peeling paint (crustose), a powder-like appearance (leprose), or other growth forms.
A macrolichen is a lichen that is either bush-like or leafy; all other lichens are termed microlichens. Here, "macro" and "micro" do not refer to size, but to the growth form. Common names for lichens may contain the word moss (e.g., "reindeer moss", "Iceland moss"), and lichens may superficially look like and grow with mosses, but lichens are not related to mosses or any plant. Lichens do not have roots that absorb water and nutrients as plants do, but like plants, they produce their own nutrition by photosynthesis. When they grow on plants, they do not live as parasites, but instead use the plants as a substrate.
Lichens occur from sea level to high alpine elevations, in many environmental conditions, and can grow on almost any surface. Lichens are abundant growing on bark, leaves, mosses, on other lichens, and hanging from branches "living on thin air" (epiphytes) in rain forests and in temperate woodland. They grow on rock, walls, gravestones, roofs, exposed soil surfaces, and in the soil as part of a biological soil crust. Different kinds of lichens have adapted to survive in some of the most extreme environments on Earth: arctic tundra, hot dry deserts, rocky coasts, and toxic slag heaps. They can even live inside solid rock, growing between the grains.
It is estimated that 6% of Earth's land surface is covered by lichens. There are about 20,000 known species of lichens. Some lichens have lost the ability to reproduce sexually, yet continue to speciate. Lichens can be seen as being relatively self-contained miniature ecosystems, where the fungi, algae, or cyanobacteria have the potential to engage with other microorganisms in a functioning system that may evolve as an even more complex composite organism.
Lichens may be long-lived, with some considered to be among the oldest living things. They are among the first living things to grow on fresh rock exposed after an event such as a landslide. The long life-span and slow and regular growth rate of some lichens can be used to date events (lichenometry).
Pronunciation and etymology
English lichen derives from Greek λειχήν leichēn ("tree moss, lichen, lichen-like eruption on skin") via Latin lichen. The Greek noun, which literally means "licker", derives from the verb λείχειν leichein, "to lick".
Lichens grow in a wide range of shapes and forms (morphologies). The shape of a lichen is usually determined by the organization of the fungal filaments. The nonreproductive tissues, or vegetative body parts, are called the thallus. Lichens are grouped by thallus type, since the thallus is usually the most visually prominent part of the lichen. Thallus growth forms typically correspond to a few basic internal structure types. Common names for lichens often come from a growth form or color that is typical of a lichen genus.
Common groupings of lichen thallus growth forms are:
- fruticose – growing like a tuft or multiple-branched leafless mini-shrub, upright or hanging down, 3-dimensional branches with nearly round cross section (terete) or flattened
- foliose – growing in 2-dimensional, flat, leaf-like lobes
- crustose – crust-like, adhering tightly to a surface (substrate) like a thick coat of paint
- squamulose – formed of small leaf-like scales crustose below but free at the tips
- leprose – powdery
- gelatinous – jelly-like
- filamentous – stringy or like matted hair
- byssoid – wispy, like teased wool
There are variations in growth types in a single lichen species, grey areas between the growth type descriptions, and overlapping between growth types, so some authors might describe lichens using different growth type descriptions.
When a crustose lichen gets old, the center may start to crack up like old-dried paint, old-broken asphalt paving, or like the polygonal "islands" of cracked-up mud in a dried lakebed. This is called being rimose or areolate, and the "island" pieces separated by the cracks are called areolas. The areolas appear separated, but are (or were) connected by an underlying "prothallus" or "hypothallus". When a crustose lichen grows from a center and appears to radiate out, it is called crustose placodioid. When the edges of the areolas lift up from the substrate, it is called squamulose.
These growth form groups are not precisely defined. Foliose lichens may sometimes branch and appear to be fruticose. Fruticose lichens may have flattened branching parts and appear leafy. Squamulose lichens may appear where the edges lift up. Gelatinous lichens may appear leafy when dry. Means of telling them apart in these cases are in the sections below.
Structures involved in reproduction often appear as discs, bumps, or squiggly lines on the surface of the thallus. The thallus is not always the part of the lichen that is most visually noticeable. Some lichens can grow inside solid rock between the grains (endolithic lichens), with only the sexual fruiting part visible growing outside the rock. These may be dramatic in color or appearance. Forms of these sexual parts are not in the above growth form categories. The most visually noticeable reproductive parts are often circular, raised, plate-like or disc-like outgrowths, with crinkly edges, and are described in sections below.
Lichens come in many colors. Coloration is usually determined by the photosynthetic component. Special pigments, such as yellow usnic acid, give lichens a variety of colors, including reds, oranges, yellows, and browns, especially in exposed, dry habitats. In the absence of special pigments, lichens are usually bright green to olive gray when wet, gray or grayish-green to brown when dry. This is because moisture causes the surface skin (cortex) to become more transparent, exposing the green photobiont layer. Different colored lichens covering large areas of exposed rock surfaces, or lichens covering or hanging from bark can be a spectacular display when the patches of diverse colors "come to life" or "glow" in brilliant displays following rain.
Different colored lichens may inhabit different adjacent sections of a rock face, depending on the angle of exposure to light. Colonies of lichens may be spectacular in appearance, dominating much of the surface of the visual landscape in forests and natural places, such as the vertical "paint" covering the vast rock faces of Yosemite National Park.
Color is used in identification. The color of a lichen changes depending on whether the lichen is wet or dry. Color descriptions used for identification are based on the color that shows when the lichen is dry. Dry lichens with a cyanobacterium as the photosynthetic partner tend to be dark grey, brown, or black.
The underside of the leaf-like lobes of foliose lichens is a different color from the top side (dorsiventral), often brown or black, sometimes white. A fruticose lichen may have flattened "branches", appearing similar to a foliose lichen, but the underside of a leaf-like structure on a fruticose lichen is the same color as the top side. The leaf-like lobes of a foliose lichen may branch, giving the appearance of a fruticose lichen, but the underside will be a different color from the top side.
Internal structure and growth forms
A lichen consists of a simple photosynthesizing organism, usually a green alga or cyanobacterium, surrounded by filaments of a fungus. Generally, most of a lichen's bulk is made of interwoven fungal filaments, although in filamentous and gelatinous lichens this is not the case. The fungus is called a mycobiont. The photosynthesizing organism is called a photobiont. Algal photobionts are called phycobionts. Cyanobacteria photobionts are called cyanobionts.
The part of a lichen that is not involved in reproduction, the "body" or "vegetative tissue" of a lichen, is called the thallus. The thallus form is very different from any form where the fungus or alga are growing separately. The thallus is made up of filaments of the fungus called hyphae. The filaments grow by branching then rejoining to create a mesh, which is called being "anastomose". The mesh of fungal filaments may be dense or loose.
Generally, the fungal mesh surrounds the algal or cyanobacterial cells, often enclosing them within complex fungal tissues that are unique to lichen associations. The thallus may or may not have a protective "skin" of densely packed fungal filaments, often containing a second fungal species, which is called a cortex. Fruticose lichens have one cortex layer wrapping around the "branches". Foliose lichens have an upper cortex on the top side of the "leaf", and a separate lower cortex on the bottom side. Crustose and squamulose lichens have only an upper cortex, with the "inside" of the lichen in direct contact with the surface they grow on (the substrate). Even if the edges peel up from the substrate and appear flat and leaf-like, they lack a lower cortex, unlike foliose lichens. Filamentous, byssoid, leprose, gelatinous, and other lichens do not have a cortex, which is called being ecorticate.
Fruticose, foliose, crustose, and squamulose lichens generally have up to three different types of tissue, differentiated by having different densities of fungal filaments. The top layer, where the lichen contacts the environment, is called a cortex. The cortex is made of densely tightly woven, packed, and glued together (agglutinated) fungal filaments. The dense packing makes the cortex act like a protective "skin", keeping other organisms out, and reducing the intensity of sunlight on the layers below. The cortex layer can be up to several hundred micrometers (μm) in thickness (less than a millimeter). The cortex may be further topped by an epicortex of secretions, not cells, 0.6–1 μm thick in some lichens. This secretion layer may or may not have pores.
Below the cortex layer is a layer called the photobiontic layer or symbiont layer. The symbiont layer has less densely packed fungal filaments, with the photosynthetic partner embedded in them. The less dense packing allows air circulation during photosynthesis, similar to the anatomy of a leaf. Each cell or group of cells of the photobiont is usually individually wrapped by hyphae, and in some cases penetrated by a haustorium. In crustose and foliose lichens, algae in the photobiontic layer are diffuse among the fungal filaments, decreasing in gradation into the layer below. In fruticose lichens, the photobiontic layer is sharply distinct from the layer below.
The layer beneath the symbiont layer is called the medulla. The medulla is less densely packed with fungal filaments than the layers above. In foliose lichens, there is usually, as in Peltigera, another densely packed layer of fungal filaments called the lower cortex. Root-like fungal structures called rhizines (usually) grow from the lower cortex to attach or anchor the lichen to the substrate. Fruticose lichens have a single cortex wrapping all the way around the "stems" and "branches". The medulla is the lowest layer, and may form a cottony white inner core for the branchlike thallus, or it may be hollow. Crustose and squamulose lichens lack a lower cortex, and the medulla is in direct contact with the substrate that the lichen grows on.
In crustose areolate lichens, the edges of the areolas peel up from the substrate and appear leafy. In squamulose lichens the part of the lichen thallus that is not attached to the substrate may also appear leafy. But these leafy parts lack a lower cortex, which distinguishes crustose and squamulose lichens from foliose lichens. Conversely, foliose lichens may appear flattened against the substrate like a crustose lichen, but most of the leaf-like lobes can be lifted up from the substrate because it is separated from it by a tightly packed lower cortex.
In lichens that include both green algal and cyanobacterial symbionts, the cyanobacteria may be held on the upper or lower surface in small pustules called cephalodia.
In August 2016, it was reported that macrolichens have more than one species of fungus in their tissues.
"Lichens are fungi that have discovered agriculture" — Trevor Goward
A lichen is a composite organism that emerges from algae or cyanobacteria living among the filaments (hyphae) of the fungi in a mutually beneficial symbiotic relationship. The fungi benefit from the carbohydrates produced by the algae or cyanobacteria via photosynthesis. The algae or cyanobacteria benefit by being protected from the environment by the filaments of the fungi, which also gather moisture and nutrients from the environment, and (usually) provide an anchor to it. Although some photosynthetic partners in a lichen can survive outside the lichen, the lichen symbiotic association extends the ecological range of both partners, whereby most descriptions of lichen associations describe them as symbiotic. However, while symbiotic, the relationship is probably not mutualistic, since the algae give up a disproportionate amount of their sugars (see below). Both partners gain water and mineral nutrients mainly from the atmosphere, through rain and dust. The fungal partner protects the alga by retaining water, serving as a larger capture area for mineral nutrients and, in some cases, provides minerals obtained from the substrate. If a cyanobacterium is present, as a primary partner or another symbiont in addition to a green alga as in certain tripartite lichens, they can fix atmospheric nitrogen, complementing the activities of the green alga.
In three different lineages the fungal partner has independently lost the mitochondrial gene atp9, which has key functions in mitochondrial energy production. The loss makes the fungi completely dependent on their symbionts.
The algal or cyanobacterial cells are photosynthetic and, as in plants, they reduce atmospheric carbon dioxide into organic carbon sugars to feed both symbionts. Phycobionts (algae) produce sugar alcohols (ribitol, sorbitol, and erythritol), which are absorbed by the mycobiont (fungus). Cyanobionts produce glucose. Lichenized fungal cells can make the photobiont "leak" out the products of photosynthesis, where they can then be absorbed by the fungus.
It appears that many, probably the majority, of lichens also live in a symbiotic relationship with an order of basidiomycete yeasts called Cyphobasidiales. The absence of this third partner could explain the difficulties of growing lichens in the laboratory. The yeast cells are responsible for the formation of the characteristic cortex of the lichen thallus, and could also be important for its shape.
The lichen combination of alga or cyanobacterium with a fungus has a very different form (morphology), physiology, and biochemistry than the component fungus, alga, or cyanobacterium growing by itself, naturally or in culture. The body (thallus) of most lichens is different from those of either the fungus or alga growing separately. When grown in the laboratory in the absence of its photobiont, a lichen fungus develops as a structureless, undifferentiated mass of fungal filaments (hyphae). If combined with its photobiont under appropriate conditions, its characteristic form associated with the photobiont emerges, in the process called morphogenesis. In a few remarkable cases, a single lichen fungus can develop into two very different lichen forms when associating with either a green algal or a cyanobacterial symbiont. Quite naturally, these alternative forms were at first considered to be different species, until they were found growing in a conjoined manner.
Evidence that lichens are examples of successful symbiosis is the fact that lichens can be found in almost every habitat and geographic area on the planet. Two species in two genera of green algae are found in over 35% of all lichens, but can only rarely be found living on their own outside of a lichen.
A case in which one fungal partner simultaneously hosted two green algal partners that outperform each other in different climates may indicate that having more than one photosynthetic partner at the same time enables the lichen to exist in a wider range of habitats and geographic locations.
At least one form of lichen, the North American beard-like lichens, is constituted of not two but three symbiotic partners: an ascomycetous fungus, a photosynthetic alga, and, unexpectedly, a basidiomycetous yeast.
Algae produce sugars that are absorbed by the fungus by diffusion into special fungal hyphae called appressoria or haustoria in contact with the wall of the algal cells. The appressoria or haustoria may produce a substance that increases permeability of the algal cell walls, and may penetrate the walls. The algae may lose up to 80% of their sugar production to the fungus.
Lichen associations may be examples of mutualism, commensalism or even parasitism, depending on the species. There is evidence to suggest that the lichen symbiosis is parasitic or commensalistic, rather than mutualistic. The photosynthetic partner can exist in nature independently of the fungal partner, but not vice versa. Photobiont cells are routinely destroyed in the course of nutrient exchange. The association is able to continue because reproduction of the photobiont cells matches the rate at which they are destroyed. The fungus surrounds the algal cells, often enclosing them within complex fungal tissues unique to lichen associations. In many species the fungus penetrates the algal cell wall, forming penetration pegs (haustoria) similar to those produced by pathogenic fungi that feed on a host. Cyanobacteria in laboratory settings can grow faster when they are alone rather than when they are part of a lichen.
Miniature ecosystem and holobiont theory
Symbiosis in lichens is so well-balanced that lichens have been considered to be relatively self-contained miniature ecosystems in and of themselves. It is thought that lichens may be even more complex symbiotic systems that include non-photosynthetic bacterial communities performing other functions as partners in a holobiont.
Many lichens are very sensitive to environmental disturbances and can be used to cheaply assess air pollution, ozone depletion, and metal contamination. Lichens have been used in making dyes, perfumes, and in traditional medicines. A few lichen species are eaten by insects or larger animals, such as reindeer. Lichens are widely used as environmental indicators or bio-indicators. When air is very badly polluted with sulphur dioxide, there may be no lichens present; only some green algae can tolerate those conditions. If the air is clean, then shrubby, hairy and leafy lichens become abundant. A few lichen species can tolerate fairly high levels of pollution, and are commonly found in urban areas, on pavements, walls and tree bark. The most sensitive lichens are shrubby and leafy, while the most tolerant lichens are all crusty in appearance. Since industrialisation, many of the shrubby and leafy lichens such as Ramalina, Usnea and Lobaria species have very limited ranges, often being confined to the areas which have the cleanest air.
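To make the bio-indication idea above concrete, the minimal Python sketch below maps the most sensitive lichen growth form observed at a site to a rough air-quality class, following the qualitative pattern just described: crusty (crustose) lichens are the most pollution-tolerant and shrubby or leafy ones the most sensitive. The thresholds, class labels, and the function name assess_air_quality are illustrative simplifications, not an established survey scale.

```python
# Illustrative only: a simplified mapping from lichen growth forms observed
# at a site to a rough air-quality class. Real survey scales are far more
# detailed; the labels and ranking here are hypothetical.

# Rank growth forms from most pollution-tolerant to most sensitive.
TOLERANCE_RANK = {"none": 0, "crustose": 1, "foliose": 2, "fruticose": 3}

AIR_QUALITY_CLASS = {
    0: "very heavily polluted (no lichens, green algae only)",
    1: "polluted (crustose lichens only)",
    2: "moderately clean (foliose lichens present)",
    3: "clean (shrubby/fruticose lichens present)",
}

def assess_air_quality(observed_growth_forms: set) -> str:
    """Return a rough air-quality class from the most sensitive
    lichen growth form recorded at a site."""
    best = max((TOLERANCE_RANK.get(form, 0) for form in observed_growth_forms),
               default=0)
    return AIR_QUALITY_CLASS[best]

# Example: a site with crustose and foliose lichens but no beard lichens.
print(assess_air_quality({"crustose", "foliose"}))
# -> "moderately clean (foliose lichens present)"
```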
Some fungi can only be found living on lichens as obligate parasites. These are referred to as lichenicolous fungi, and are a different species from the fungus living inside the lichen; thus they are not considered to be part of the lichen.
Reaction to water
Moisture makes the cortex become more transparent. This way, the algae can conduct photosynthesis when moisture is available and are protected at other times. When the cortex is more transparent, the algae show more clearly and the lichen looks greener.
Metabolites, metabolite structures and bioactivity
Lichens can show intense antioxidant activity. Secondary metabolites are often deposited as crystals in the apoplast. Secondary metabolites are thought to play a role in preference for some substrates over others.
Lichens often have a regular but very slow growth rate of less than a millimeter per year.
In crustose lichens, the area along the margin is where the most active growth is taking place. Most crustose lichens grow only 1–2 mm in diameter per year.
Lichens may be long-lived, with some considered to be among the oldest living organisms. Lifespan is difficult to measure because what defines the "same" individual lichen is not precise. Lichens grow by vegetatively breaking off a piece, which may or may not be defined as the "same" lichen, and two lichens can merge, then becoming the "same" lichen. An Arctic species called "map lichen" (Rhizocarpon geographicum) has been dated at 8,600 years, apparently the world's oldest living organism.
Response to environmental stress
Unlike simple dehydration in plants and animals, lichens may experience a complete loss of body water in dry periods. Lichens are capable of surviving extremely low levels of water content (poikilohydric). They quickly absorb water when it becomes available again, becoming soft and fleshy. Reconfiguration of membranes following a period of dehydration requires several minutes or more.
In tests in the Mars Simulation Laboratory (MSL) maintained by the German Aerospace Center (DLR), lichens survived a 34-day simulation under Martian conditions and showed a remarkable capacity to adapt their photosynthetic activity.
The European Space Agency has discovered that lichens can survive unprotected in space. In an experiment led by Leopoldo Sancho from the Complutense University of Madrid, two species of lichen—Rhizocarpon geographicum and Xanthoria elegans—were sealed in a capsule and launched on a Russian Soyuz rocket 31 May 2005. Once in orbit, the capsules were opened and the lichens were directly exposed to the vacuum of space with its widely fluctuating temperatures and cosmic radiation. After 15 days, the lichens were brought back to earth and were found to be unchanged in their ability to photosynthesize.
Reproduction and dispersal
Many lichens reproduce asexually, either by a piece breaking off and growing on its own (vegetative reproduction) or through the dispersal of diaspores containing a few algal cells surrounded by fungal cells. Because of the relative lack of differentiation in the thallus, the line between diaspore formation and vegetative reproduction is often blurred. Fruticose lichens can easily fragment, and new lichens can grow from the fragment (vegetative reproduction). Many lichens break up into fragments when they dry, dispersing themselves by wind action, to resume growth when moisture returns. Soredia (singular: "soredium") are small groups of algal cells surrounded by fungal filaments that form in structures called soralia, from which the soredia can be dispersed by wind. Isidia (singular: "isidium") are branched, spiny, elongated, outgrowths from the thallus that break off for mechanical dispersal. Lichen propagules (diaspores) typically contain cells from both partners, although the fungal components of so-called "fringe species" rely instead on algal cells dispersed by the "core species".
Structures involved in reproduction often appear as discs, bumps, or squiggly lines on the surface of the thallus. Only the fungal partner in a lichen reproduces sexually. Many lichen fungi reproduce sexually like other fungi, producing spores formed by meiosis and fusion of gametes. Following dispersal, such fungal spores must meet with a compatible algal partner before a functional lichen can form.
Most lichen fungi belong to the Ascomycetes (ascolichens). Among the ascolichens, spores are produced in spore-producing structures called ascomata. The most common types of ascomata are the apothecium (plural: apothecia) and perithecium (plural: perithecia). Apothecia are usually cups or plate-like discs located on the top surface of the lichen thallus. When apothecia are shaped like squiggly line segments instead of like discs, they are called lirellae. Perithecia are shaped like flasks immersed in the lichen thallus tissue, each with a small hole through which the spores escape; they appear as black dots on the lichen surface.
The three most common spore body types are raised discs called apothecia (singular: apothecium), bottle-like cups with a small hole at the top called perithecia (singular: perithecium), and pycnidia (singular: pycnidium), shaped like perithecia but without asci (an ascus is the structure that contains and releases the sexual spores in fungi of the Ascomycota).
The apothecium has a layer of exposed spore-producing cells called asci (singular: ascus), and is usually a different color from the thallus tissue. When the apothecium has an outer margin, the margin is called the exciple. When the exciple has a color similar to that of the thallus tissue, the apothecium or lichen is called lecanorine, meaning similar to members of the genus Lecanora. When the exciple is blackened like carbon, it is called lecideine, meaning similar to members of the genus Lecidea. When the margin is pale or colorless, it is called biatorine.
A "podetium" (plural: podetia) is a lichenized stalk-like structure of the fruiting body rising from the thallus, associated with some fungi that produce a fungal apothecium. Since it is part of the reproductive tissue, podetia are not considered part of the main body (thallus), but may be visually prominent. The podetium may be branched, and sometimes cup-like. They usually bear the fungal pycnidia or apothecia or both. Many lichens have apothecia that are visible to the naked eye.
Most lichens produce abundant sexual structures. Many species appear to disperse only by sexual spores. For example, the crustose lichens Graphis scripta and Ochrolechia parella produce no symbiotic vegetative propagules. Instead, the lichen-forming fungi of these species reproduce sexually by self-fertilization (i.e. they are homothallic). This breeding system may enable successful reproduction in harsh environments.
Mazaedia (singular: mazaedium) are apothecia shaped like a dressmaker's pin in pin lichens, where the fruiting body is a brown or black mass of loose ascospores enclosed by a cup-shaped exciple, which sits on top of a tiny stalk.
Taxonomy and classification
Lichens are classified by the fungal component. Lichen species are given the same scientific name (binomial name) as the fungus species in the lichen. Lichens are being integrated into the classification schemes for fungi. The alga bears its own scientific name, which bears no relationship to that of the lichen or fungus. There are about 13,500–17,000 identified lichen species. Nearly 20% of known fungal species are associated with lichens.
"Lichenized fungus" may refer to the entire lichen, or to just the fungus. This may cause confusion without context. A particular fungus species may form lichens with different algae species, giving rise to what appear to be different lichen species, but which are still classified (as of 2014) as the same lichen species.
Formerly, some lichen taxonomists placed lichens in their own division, the Mycophycophyta, but this practice is no longer accepted because the components belong to separate lineages. Neither the ascolichens nor the basidiolichens form monophyletic lineages in their respective fungal phyla, but they do form several major solely or primarily lichen-forming groups within each phylum. Even more unusual than basidiolichens is the fungus Geosiphon pyriforme, a member of the Glomeromycota that is unique in that it encloses a cyanobacterial symbiont inside its cells. Geosiphon is not usually considered to be a lichen, and its peculiar symbiosis was not recognized for many years. The genus is more closely allied to endomycorrhizal genera. Fungi from the Verrucariales also form marine lichens with the brown alga Petroderma maculiforme, and have a symbiotic relationship with seaweeds such as rockweed and with Blidingia minima, where the algae are the dominant components. The fungus is thought to help the rockweeds resist desiccation when exposed to air. In addition, lichens can also use yellow-green algae (Heterococcus) as their symbiotic partner.
Lichens independently emerged from fungi associating with algae and cyanobacteria multiple times throughout history.
The fungal component of a lichen is called the mycobiont. The mycobiont may be an Ascomycete or Basidiomycete. The associated lichens are called either ascolichens or basidiolichens, respectively. Living as a symbiont in a lichen appears to be a successful way for a fungus to derive essential nutrients, since about 20% of all fungal species have acquired this mode of life.
Thalli produced by a given fungal symbiont with its differing partners may be similar, and the secondary metabolites identical, indicating that the fungus has the dominant role in determining the morphology of the lichen. But the same mycobiont with different photobionts may also produce very different growth forms. Lichens are known in which there is one fungus associated with two or even three algal species.
Although each lichen thallus generally appears homogeneous, some evidence seems to suggest that the fungal component may consist of more than one genetic individual of that species.
Two or more fungal species can interact to form the same lichen.
The photosynthetic partner in a lichen is called a photobiont. The photobionts in lichens come from a variety of simple prokaryotic and eukaryotic organisms. In the majority of lichens the photobiont is a green alga (Chlorophyta) or a cyanobacterium. In some lichens both types are present. Algal photobionts are called phycobionts, while cyanobacterial photobionts are called cyanobionts. According to one source, about 90% of all known lichens have phycobionts, and about 10% have cyanobionts, while another source states that two thirds of lichens have green algae as phycobiont, and about one third have a cyanobiont. Approximately 100 species of photosynthetic partners from 40 genera and five distinct classes (prokaryotic: Cyanophyceae; eukaryotic: Trebouxiophyceae, Phaeophyceae, Chlorophyceae) have been found to associate with the lichen-forming fungi.
Common algal photobionts are from the genera Trebouxia, Trentepohlia, Pseudotrebouxia, or Myrmecia. Trebouxia is the most common genus of green algae in lichens, occurring in about 40% of all lichens. "Trebouxioid" means either a photobiont that is in the genus Trebouxia, or resembles a member of that genus, and is therefore presumably a member of the class Trebouxiophyceae. The second most commonly represented green alga genus is Trentepohlia. Overall, about 100 species of eukaryotes are known to occur as photobionts in lichens. All the algae are probably able to exist independently in nature as well as in the lichen.
A "cyanolichen" is a lichen with a cyanobacterium as its main photosynthetic component (photobiont). Most cyanolichen are also ascolichens, but a few basidiolichen like Dictyonema and Acantholichen have cyanobacteria as their partner.
The most commonly occurring cyanobacterium genus is Nostoc. Other common cyanobacterium photobionts are from Scytonema. Many cyanolichens are small and black, and have limestone as the substrate. Another cyanolichen group, the jelly lichens of the genera Collema or Leptogium are gelatinous and live on moist soils. Another group of large and foliose species including Peltigera, Lobaria, and Degelia are grey-blue, especially when dampened or wet. Many of these characterize the Lobarion communities of higher rainfall areas in western Britain, e.g., in the Celtic rain forest. Strains of cyanobacteria found in various cyanolichens are often closely related to one another. They differ from the most closely related free-living strains.
The lichen association is a close symbiosis. It extends the ecological range of both partners but is not always obligatory for their growth and reproduction in natural environments, since many of the algal symbionts can live independently. A prominent example is the alga Trentepohlia, which forms orange-coloured populations on tree trunks and suitable rock faces.
The same cyanobiont species can occur in association with different fungal species as lichen partners. The same phycobiont species can occur in association with different fungal species as lichen partners. More than one phycobiont may be present in a single thallus.
A single lichen may contain several algal genotypes. These multiple genotypes may better enable response to adaptation to environmental changes, and enable the lichen to inhabit a wider range of environments.
Controversy over classification method and species names
There are about 20,000 known lichen species. But what is meant by "species" is different from what is meant by biological species in plants, animals, or fungi, where being the same species implies that there is a common ancestral lineage. Because lichens are combinations of members of two or even three different biological kingdoms, these components must have a different ancestral lineage from each other. By convention, lichens are still called "species" anyway, and are classified according to the species of their fungus, not the species of the algae or cyanobacteria. Lichens are given the same scientific name (binomial name) as the fungus in them, which may cause some confusion. The alga bears its own scientific name, which has no relationship to the name of the lichen or fungus.
Depending on context, "lichenized fungus" may refer to the entire lichen, or to the fungus when it is in the lichen, which can be grown in culture in isolation from the algae or cyanobacteria. Some algae and cyanobacteria are found naturally living outside of the lichen. The fungal, algal, or cyanobacterial component of a lichen can be grown by itself in culture. When growing by themselves, the fungus, algae, or cyanobacteria have very different properties than those of the lichen. Lichen properties such as growth form, physiology, and biochemistry, are very different from the combination of the properties of the fungus and the algae or cyanobacteria.
The same fungus growing in combination with different algae or cyanobacteria, can produce lichens that are very different in most properties, meeting non-DNA criteria for being different "species". Historically, these different combinations were classified as different species. When the fungus is identified as being the same using modern DNA methods, these apparently different species get reclassified as the same species under the current (2014) convention for classification by fungal component. This has led to debate about this classification convention. These apparently different "species" have their own independent evolutionary history.
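As a minimal illustration of the convention just described, the sketch below records a lichen under the binomial of its fungal partner, so that two morphologically different combinations sharing the same mycobiont collapse to one "species" name. The Lichen class, its fields, and the species names used in the example are hypothetical placeholders, not an established data model or real taxa.

```python
# Hypothetical sketch of the classification-by-mycobiont convention:
# a lichen "species" name is simply the binomial of its fungal partner,
# regardless of which photobiont it is paired with.
from dataclasses import dataclass

@dataclass(frozen=True)
class Lichen:
    mycobiont: str   # fungal species (binomial), e.g. "Fungus exampli" (made up)
    photobiont: str  # algal or cyanobacterial partner (made up names below)

    @property
    def species_name(self) -> str:
        # Current (2014) convention: classify by the fungal component only.
        return self.mycobiont

green_form = Lichen("Fungus exampli", "green alga (hypothetical)")
cyano_form = Lichen("Fungus exampli", "cyanobacterium (hypothetical)")

# Very different lichen bodies, yet one "species" under the convention.
assert green_form.species_name == cyano_form.species_name
```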
There is also debate as to the appropriateness of giving the same binomial name to the fungus, and to the lichen that combines that fungus with an alga or cyanobacterium (synecdoche). This is especially the case when combining the same fungus with different algae or cyanobacteria produces dramatically different lichen organisms, which would be considered different species by any measure other than the DNA of the fungal component. If the whole lichen produced by the same fungus growing in association with different algae or cyanobacteria, were to be classified as different "species", the number of "lichen species" would be greater.
The largest number of lichenized fungi occur in the Ascomycota, with about 40% of species forming such an association. Some of these lichenized fungi occur in orders with nonlichenized fungi that live as saprotrophs or plant parasites (for example, the Leotiales, Dothideales, and Pezizales). Other lichen fungi occur in only five orders in which all members are engaged in this habit (Orders Graphidales, Gyalectales, Peltigerales, Pertusariales, and Teloschistales). Overall, about 98% of lichens have an ascomycetous mycobiont. Next to the Ascomycota, the largest number of lichenized fungi occur in the unassigned fungi imperfecti, a catch-all category for fungi whose sexual form of reproduction has never been observed. Comparatively few Basidiomycetes are lichenized, but these include agarics, such as species of Lichenomphalia, clavarioid fungi, such as species of Multiclavula, and corticioid fungi, such as species of Dictyonema.
Lichen identification uses growth form and reactions to chemical tests.
The outcome of the "Pd test" is called "Pd", which is also used as an abbreviation for the chemical used in the test, para-phenylenediamine. If putting a drop on a lichen turns an area bright yellow to orange, this helps identify it as belonging to either the genus Cladonia or Lecanora.
Evolution and paleontology
The fossil record for lichens is poor. The extreme habitats that lichens dominate, such as tundra, mountains, and deserts, are not ordinarily conducive to producing fossils. There are fossilized lichens embedded in amber. The fossilized Anzia is found in pieces of amber in northern Europe and dates back approximately 40 million years. Lichen fragments are also found in fossil leaf beds, such as Lobaria from Trinity County in northern California, USA, dating back to the early to middle Miocene.
The oldest fossil lichen in which both symbiotic partners have been recovered is Winfrenatia, an early zygomycetous (glomeromycotan) lichen symbiosis that may have involved controlled parasitism; it is permineralized in the Rhynie chert of Scotland, dating from the Early Devonian, about 400 million years ago. The slightly older fossil Spongiophyton has also been interpreted as a lichen on morphological and isotopic grounds, although the isotopic basis is decidedly shaky. It has been demonstrated that the Silurian-Devonian fossils Nematothallus and Prototaxites were lichenized. Thus lichenized Ascomycota and Basidiomycota were a component of Early Silurian-Devonian terrestrial ecosystems. Newer research suggests that lichens evolved after the evolution of land plants.
The ancestral ecological state of both Ascomycota and Basidiomycota was probably saprobism, and independent lichenization events may have occurred multiple times. In 1995, Gargas and colleagues proposed that there were at least five independent origins of lichenization; three in the basidiomycetes and at least two in the Ascomycetes. However, Lutzoni et al. (2001) indicate that lichenization probably evolved earlier and was followed by multiple independent losses. Some non-lichen-forming fungi may have secondarily lost the ability to form a lichen association. As a result, lichenization has been viewed as a highly successful nutritional strategy.
Lichenized Glomeromycota may extend well back into the Precambrian. Lichen-like fossils consisting of coccoid cells (cyanobacteria?) and thin filaments (mucoromycotinan Glomeromycota?) are permineralized in marine phosphorite of the Doushantuo Formation in southern China. These fossils are thought to be 551 to 635 million years old or Ediacaran. Ediacaran acritarchs also have many similarities with Glomeromycotan vesicles and spores. It has also been claimed that Ediacaran fossils including Dickinsonia, were lichens, although this claim is controversial. Endosymbiotic Glomeromycota comparable with living Geosiphon may extend back into the Proterozoic in the form of 1500 million year old Horodyskia and 2200 million year old Diskagma. Discovery of these fossils suggest that fungi developed symbiotic partnerships with photoautotrophs long before the evolution of vascular plants.
Ecology and interactions with environment
Substrates and habitats
Lichens cover about 7% of the planet's surface and grow on and in a wide range of substrates and habitats, including some of the most extreme conditions on earth. They are abundant growing on bark, leaves, and hanging from branches "living on thin air" (epiphytes) in rain forests and in temperate woodland. They grow on bare rock, walls, gravestones, roofs, and exposed soil surfaces. They can survive in some of the most extreme environments on Earth: arctic tundra, hot dry deserts, rocky coasts, and toxic slag heaps. They can live inside solid rock, growing between the grains, and in the soil as part of a biological soil crust in arid habitats such as deserts. Some lichens do not grow on anything, living out their lives blowing about the environment.
When growing on mineral surfaces, some lichens slowly decompose their substrate by chemically degrading and physically disrupting the minerals, contributing to the process of weathering by which rocks are gradually turned into soil. While this contribution to weathering is usually benign, it can cause problems for artificial stone structures. For example, there is an ongoing lichen growth problem on Mount Rushmore National Memorial that requires the employment of mountain-climbing conservators to clean the monument.
Lichens are not parasites on the plants they grow on, but only use them as a substrate to grow on. The fungi of some lichen species may "take over" the algae of other lichen species. Lichens make their own food from their photosynthetic parts and by absorbing minerals from the environment. Lichens growing on leaves may have the appearance of being parasites on the leaves, but they are not. However, some lichens, notably those of the genus Diploschistes, are known to parasitise other lichens. Diploschistes muscorum starts its development in the tissue of a host Cladonia species.
In the arctic tundra, lichens, together with mosses and liverworts, make up the majority of the ground cover, which helps insulate the ground and may provide forage for grazing animals. An example is "Reindeer moss", which is a lichen, not a moss.
A crustose lichen that grows on rock is called a saxicolous lichen. Crustose lichens that grow on the rock surface are epilithic, and those that grow immersed inside rock, growing between the crystals with only their fruiting bodies exposed to the air, are called endolithic lichens. A crustose lichen that grows on bark is called a corticolous lichen. A lichen that grows on wood from which the bark has been stripped is called a lignicolous lichen. Lichens that grow immersed inside plant tissues are called endophloidic lichens or endophloidal lichens. Lichens that use leaves as substrates, whether the leaf is still on the tree or on the ground, are called epiphyllous or foliicolous. A terricolous lichen grows on the soil as a substrate. Many squamulose lichens are terricolous. Umbilicate lichens are foliose lichens that are attached to the substrate at only one point. A vagrant lichen is not attached to a substrate at all, and lives its life being blown around by the wind.
Lichens and soils
In addition to the distinct physical mechanisms by which lichens break down raw stone, recent studies indicate that lichens also attack stone chemically, introducing newly chelated minerals into the ecosystem.
The lichen exudates, which have powerful chelating capacity, the widespread occurrence of mineral neoformation, particularly metal oxalates, together with the characteristics of weathered substrates, all confirm the significance of lichens as chemical weathering agents.
Over time, this activity creates new fertile soil from lifeless stone.
Lichens may be important in contributing nitrogen to soils in some deserts through being eaten, along with their rock substrate, by snails, which then defecate, putting the nitrogen into the soils. Lichens help bind and stabilize soil sand in dunes. In deserts and semi-arid areas, lichens are part of extensive, living biological soil crusts, essential for maintaining the soil structure. Lichens have a long fossil record in soils dating back 2.2 billion years.
Lichens are pioneer species, among the first living things to grow on bare rock or areas denuded of life by a disaster. Lichens may have to compete with plants for access to sunlight, but because of their small size and slow growth, they thrive in places where higher plants have difficulty growing. Lichens are often the first to settle in places lacking soil, constituting the sole vegetation in some extreme environments such as those found at high mountain elevations and at high latitudes. Some survive in the tough conditions of deserts, and others on frozen soil of the Arctic regions.
A major ecophysiological advantage of lichens is that they are poikilohydric (poikilo- variable, hydric- relating to water), meaning that though they have little control over the status of their hydration, they can tolerate irregular and extended periods of severe desiccation. Like some mosses, liverworts, ferns, and a few "resurrection plants", upon desiccation, lichens enter a metabolic suspension or stasis (known as cryptobiosis) in which the cells of the lichen symbionts are dehydrated to a degree that halts most biochemical activity. In this cryptobiotic state, lichens can survive wider extremes of temperature, radiation and drought in the harsh environments they often inhabit.
Lichens do not have roots and do not need to tap continuous reservoirs of water like most higher plants, thus they can grow in locations impossible for most plants, such as bare rock, sterile soil or sand, and various artificial structures such as walls, roofs and monuments. Many lichens also grow as epiphytes (epi- on the surface, phyte- plant) on plants, particularly on the trunks and branches of trees. When growing on plants, lichens are not parasites; they do not consume any part of the plant nor poison it. Lichens produce allelopathic chemicals that inhibit the growth of mosses. Some ground-dwelling lichens, such as members of the subgenus Cladina (reindeer lichens), produce allelopathic chemicals that leach into the soil and inhibit the germination of seeds of spruce and other plants. Stability (that is, longevity) of their substrate is a major factor of lichen habitats. Most lichens grow on stable rock surfaces or the bark of old trees, but many others grow on soil and sand. In these latter cases, lichens are often an important part of soil stabilization; indeed, in some desert ecosystems, vascular (higher) plant seeds cannot become established except in places where lichen crusts stabilize the sand and help retain water.
Lichens may be eaten by some animals, such as reindeer, living in arctic regions. The larvae of a number of Lepidoptera species feed exclusively on lichens. These include Common Footman and Marbled Beauty. However, lichens are very low in protein and high in carbohydrates, making them unsuitable for some animals. Lichens are also used by the Northern Flying Squirrel for nesting, food, and a water source during winter.
Effects of air pollution
If lichens are exposed to air pollutants at all times, without any deciduous parts, they are unable to avoid the accumulation of pollutants. Also lacking stomata and a cuticle, lichens may absorb aerosols and gases over the entire thallus surface from which they may readily diffuse to the photobiont layer. Because lichens do not possess roots, their primary source of most elements is the air, and therefore elemental levels in lichens often reflect the accumulated composition of ambient air. The processes by which atmospheric deposition occurs include fog and dew, gaseous absorption, and dry deposition. Consequently, many environmental studies with lichens emphasize their feasibility as effective biomonitors of atmospheric quality.
Not all lichens are equally sensitive to air pollutants, so different lichen species show different levels of sensitivity to specific atmospheric pollutants. The sensitivity of a lichen to air pollution is directly related to the energy needs of the mycobiont, so that the stronger the dependency of the mycobiont on the photobiont, the more sensitive the lichen is to air pollution. Upon exposure to air pollution, the photobiont may use metabolic energy for repair of its cellular structures that would otherwise be used for maintenance of its photosynthetic activity, therefore leaving less metabolic energy available for the mycobiont. The alteration of the balance between the photobiont and mycobiont can lead to the breakdown of the symbiotic association. Therefore, lichen decline may result not only from the accumulation of toxic substances, but also from altered nutrient supplies that favor one symbiont over the other.
Lichens are eaten by many different cultures across the world. Although some lichens are only eaten in times of famine, others are a staple food or even a delicacy. Two obstacles are often encountered when eating lichens: lichen polysaccharides are generally indigestible to humans, and lichens usually contain mildly toxic secondary compounds that should be removed before eating. Very few lichens are poisonous, but those high in vulpinic acid or usnic acid are toxic. Most poisonous lichens are yellow.
In the past, Iceland moss (Cetraria islandica) was an important source of food for humans in northern Europe, and was cooked as a bread, porridge, pudding, soup, or salad. Wila (Bryoria fremontii) was an important food in parts of North America, where it was usually pitcooked. Northern peoples in North America and Siberia traditionally eat the partially digested reindeer lichen (Cladina spp.) after they remove it from the rumen of caribou or reindeer that have been killed. Rock tripe (Umbilicaria spp. and Lasalia spp.) is a lichen that has frequently been used as an emergency food in North America, and one species, Umbilicaria esculenta, is used in a variety of traditional Korean and Japanese foods.
Lichenometry is a technique used to determine the age of exposed rock surfaces based on the size of lichen thalli. Introduced by Beschel in the 1950s, the technique has found many applications; it is used in archaeology, palaeontology, and geomorphology. It uses the presumed regular but slow rate of lichen growth to determine the age of exposed rock. Measuring the diameter (or other size measurement) of the largest lichen of a species on a rock surface indicates the length of time since the rock surface was first exposed. Lichen can be preserved on old rock faces for up to 10,000 years, providing the maximum age limit of the technique, though it is most accurate (within 10% error) when applied to surfaces that have been exposed for less than 1,000 years. Lichenometry is especially useful for dating surfaces less than 500 years old, as radiocarbon dating techniques are less accurate over this period. The lichens most commonly used for lichenometry are those of the genera Rhizocarpon (e.g. the species Rhizocarpon geographicum) and Xanthoria.
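Under the simplifying assumption of a constant growth rate, the age estimate reduces to dividing the largest thallus diameter by a locally calibrated growth rate, as in the sketch below. The default rate, the colonization lag, and the function name estimate_surface_age are placeholders, not published calibration values; real lichen growth curves are non-linear, so this is only an illustration of the basic arithmetic.

```python
# Minimal lichenometry sketch under a constant-growth assumption:
# age since exposure ~ colonization lag + (largest diameter / growth rate).
# The numeric defaults are placeholders and must be calibrated locally,
# e.g. on gravestones or structures of known age.

def estimate_surface_age(largest_diameter_mm: float,
                         growth_rate_mm_per_year: float = 0.5,
                         colonization_lag_years: float = 0.0) -> float:
    """Estimate years since rock exposure from the largest lichen thallus."""
    if growth_rate_mm_per_year <= 0:
        raise ValueError("growth rate must be positive")
    return colonization_lag_years + largest_diameter_mm / growth_rate_mm_per_year

# Example: a 45 mm Rhizocarpon geographicum thallus with an assumed
# 0.5 mm/yr diametral growth rate and a 10-year colonization lag.
print(round(estimate_surface_age(45, 0.5, 10)))  # ~100 years
```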
Lichens have been shown to degrade polyester resins, as can be seen in archaeological sites in the Roman city of Baelo Claudia in Spain. Lichens can accumulate several environmental pollutants such as lead, copper, and radionuclides.
Many lichens produce secondary compounds, including pigments that reduce harmful amounts of sunlight and powerful toxins that reduce herbivory or kill bacteria. These compounds are very useful for lichen identification, and have had economic importance as dyes such as cudbear or primitive antibiotics.
In the Highlands of Scotland, traditional dyes for Harris tweed and other traditional cloths were made from lichens, including the orange Xanthoria parietina and the grey foliaceous Parmelia saxatilis common on rocks known as "crottle".
There are reports dating almost 2,000 years old of lichens being used to make purple and red dyes. Of great historical and commercial significance are lichens belonging to the family Roccellaceae, commonly called orchella weed or orchil. Orcein and other lichen dyes have largely been replaced by synthetic versions.
Traditional medicine and research
Historically in traditional medicine of Europe, Lobaria pulmonaria was collected in large quantities as "Lungwort", due to its lung-like appearance (the doctrine of signatures suggesting that herbs can treat body parts that they physically resemble). Similarly, Peltigera leucophlebia was used as a supposed cure for thrush, due to the resemblance of its cephalodia to the appearance of the disease.
Metabolites produced by lichens are under research for their potential therapeutic or diagnostic value. Some lichen metabolites are structurally and functionally similar to broad-spectrum antibiotics, while a few have antiseptic properties. Usnic acid is the most commonly studied metabolite produced by lichens; it is under research as a bactericidal agent against Escherichia coli and Staphylococcus aureus.
Colonies of lichens may be spectacular in appearance, dominating the surface of the visual landscape as part of the aesthetic appeal to visitors of Yosemite National Park and Sequoia National Park. Orange and yellow lichens add to the ambience of desert trees, rock faces, tundras, and rocky seashores. Intricate webs of lichens hanging from tree branches add a mysterious aspect to forests. Fruticose lichens are used in model railroading and other modeling hobbies as a material for making miniature trees and shrubs.
In early Midrashic literature, the Hebrew word "vayilafeth" in Ruth 3:8 is explained as referring to Ruth entwining herself around Boaz like lichen. The tenth century Arab physician, Al-Tamimi, mentions lichens dissolved in vinegar and rose water being used in his day for the treatment of skin diseases and rashes.
Although lichens had been recognized as organisms for quite some time, it was not until 1867, when the Swiss botanist Simon Schwendener proposed his dual theory of lichens as a combination of fungi with algae or cyanobacteria, that the true nature of the lichen association began to emerge. Schwendener's hypothesis, which at the time lacked experimental evidence, arose from his extensive analysis of the anatomy and development of lichens, algae, and fungi using a light microscope. Many of the leading lichenologists at the time, such as James Crombie and Nylander, rejected Schwendener's hypothesis because the common consensus was that all living organisms were autonomous.
Other prominent biologists, such as Heinrich Anton de Bary, Albert Bernhard Frank, Melchior Treub and Hermann Hellriegel were not so quick to reject Schwendener's ideas and the concept soon spread into other areas of study, such as microbial, plant, animal and human pathogens. When the complex relationships between pathogenic microorganisms and their hosts were finally identified, Schwendener's hypothesis began to gain popularity. Further experimental proof of the dual nature of lichens was obtained when Eugen Thomas published his results in 1939 on the first successful re-synthesis experiment.
In the 2010s, a new facet of the fungi-algae partnership was discovered. Toby Spribille and colleagues found that many types of lichen that were long thought to be ascomycete-algae pairs were actually ascomycete-basidiomycete-algae trios.
Lobaria pulmonaria, tree lungwort, lung lichen, lung moss; Upper Bavaria, Germany
Cladonia macilenta var. bacillaris 'Lipstick Cladonia'
Usnea australis, a fruticose form, growing on a tree branch
- Spribille, Toby; Tuovinen, Veera; Resl, Philipp; Vanderpool, Dan; Wolinski, Heimo; Aime, M. Catherine; Schneider, Kevin; Stabentheiner, Edith; Toome-Heller, Merje (21 July 2016). "Basidiomycete yeasts in the cortex of ascomycete macrolichens". Science. 353 (6298): 488–92. Bibcode:2016Sci...353..488S. doi:10.1126/science.aaf8287. ISSN 0036-8075. PMC 5793994. PMID 27445309.
- "What is a lichen?". Australian National Botanic Gardens. Archived from the original on 2 July 2014. Retrieved 10 October 2014.
- Introduction to Lichens – An Alliance between Kingdoms Archived 22 August 2014 at the Wayback Machine. University of California Museum of Paleontology.
- Brodo, Irwin M. and Duran Sharnoff, Sylvia (2001) Lichens of North America. ISBN 978-0300082494.
- Galloway, D.J. (13 May 1999). "Lichen Glossary". Australian National Botanic Gardens. Archived from the original on 6 December 2014.
- Margulis, Lynn; Barreno, EVA (2003). "Looking at Lichens". BioScience. 53 (8): 776. doi:10.1641/0006-3568(2003)053[0776:LAL]2.0.CO;2.
- Sharnoff, Stephen (2014) Field Guide to California Lichens, Yale University Press. ISBN 978-0-300-19500-2
- Speer, Brian R; Ben Waggoner (May 1997). "Lichens: Life History & Ecology". University of California Museum of Paleontology. Archived from the original on 2 May 2015. Retrieved 28 April 2015.
- Gadd, Geoffrey Michael (March 2010). "Metals, minerals and microbes: geomicrobiology and bioremediation". Microbiology. 156 (Pt 3): 609–643. doi:10.1099/mic.0.037143-0. PMID 20019082.
- McCune, B.; Grenon, J.; Martin, E.; Mutch, L.S.; Martin, E.P. (March 2007). "Lichens in relation to management issues in the Sierra Nevada national parks". North American Fungi. 2: 1–39. doi:10.2509/pnwf.2007.002.003.
- "Lichens: Systematics, University of California Museum of Paleontology". Archived from the original on 24 February 2015. Retrieved 10 October 2014.
- Lendemer, J. C. (2011). "A taxonomic revision of the North American species of Lepraria s.l. that produce divaricatic acid, with notes on the type species of the genus L. incana". Mycologia. 103 (6): 1216–1229. doi:10.3852/11-032. PMID 21642343.
- Casano, L. M.; Del Campo, E. M.; García-Breijo, F. J.; Reig-Armiñana, J; Gasulla, F; Del Hoyo, A; Guéra, A; Barreno, E (2011). "Two Trebouxia algae with different physiological performances are ever-present in lichen thalli of Ramalina farinacea. Coexistence versus competition?". Environmental Microbiology (Submitted manuscript). 13 (3): 806–818. doi:10.1111/j.1462-2920.2010.02386.x. hdl:10251/60269. PMID 21134099.
- Honegger, R. (1991) Fungal evolution: symbiosis and morphogenesis, Symbiosis as a Source of Evolutionary Innovation, Margulis, L., and Fester, R. (eds). Cambridge, MA, USA: The MIT Press, pp. 319–340.
- Grube, M; Cardinale, M; De Castro Jr, J. V.; Müller, H; Berg, G (2009). "Species-specific structural and functional diversity of bacterial communities in lichen symbioses". The ISME Journal. 3 (9): 1105–1115. doi:10.1038/ismej.2009.63. PMID 19554038.
- Barreno, E., Herrera-Campos, M., García-Breijo, F., Gasulla, F., and Reig-Armiñana, J. (2008) "Non photosynthetic bacteria associated to cortical structures on Ramalina and Usnea thalli from Mexico". Asilomar, Pacific Grove, CA, USA: Abstracts IAL 6 – ABLS Joint Meeting.
- Morris J, Purvis W (2007). Lichens (Life). London: The Natural History Museum. p. 19. ISBN 978-0-565-09153-8.
- "Lichen". spectator.co.uk. 17 November 2012. Archived from the original on 23 December 2014. Retrieved 2 November 2014.
- "Lichen". Oxford Living Dictionary. Oxford University Press. Archived from the original on 29 August 2014. Retrieved 10 January 2018.
- The Oxford English Dictionary cites only the "liken" pronunciation: "lichen". Oxford English Dictionary (3rd ed.). Oxford University Press. September 2005. Retrieved 10 January 2018. (Subscription or UK public library membership required.)
- Harper, Douglas. "lichen". Online Etymology Dictionary.
- lichen. Charlton T. Lewis and Charles Short. A Latin Dictionary on Perseus Project.
- λειχήν. Liddell, Henry George; Scott, Robert; A Greek–English Lexicon at the Perseus Project.
- λείχειν in Liddell and Scott.
- Beekes, Robert S. P. (2010). "s.v. λειχήν, λείχω". Etymological Dictionary of Greek. Leiden Indo-European Etymological Dictionary Series. 1. With the assistance of Lucien van Beek. Leiden, Boston: Brill. pp. 846–47. ISBN 9789004174184.
- "Lichens and Bryophytes, Michigan State University, 10-25-99". Archived from the original on 5 October 2011. Retrieved 10 October 2014.
- Lichen Vocabulary, Lichens of North America Information, Sylvia and Stephen Sharnoff, Archived 20 January 2015 at the Wayback Machine
- "Alan Silverside's Lichen Glossary (p-z), Alan Silverside". Archived from the original on 31 October 2014. Retrieved 10 October 2014.
- Dobson, F.S. (2011). Lichens, an illustrated guide to the British and Irish species. Slough, UK: Richmond Publishing Co. ISBN 9780855463151.
- "Foliose lichens, Lichen Thallus Types, Allan Silverside". Archived from the original on 19 October 2014. Retrieved 10 October 2014.
- Mosses Lichens & Ferns of Northwest North America, Dale H. Vitt, Janet E. Marsh, Robin B. Bovey, Lone Pine Publishing Company, ISBN 0-295-96666-1
- "Lichens, Saguaro-Juniper Corporation". Archived from the original on 10 May 2015. Retrieved 10 October 2014.
- Michigan Lichens, Julie Jones Medlin, B. Jain Publishers, 1996, ISBN 0877370397, 9780877370390, Archived 24 November 2016 at the Wayback Machine
- Lichens: More on Morphology, University of California Museum of Paleontology, Archived 28 February 2015 at the Wayback Machine
- Lichen Photobionts, University of Nebraska Omaha Archived 6 October 2014 at the Wayback Machine
- "Alan Silverside's Lichen Glossary (g-o), Alan Silverside". Archived from the original on 2 November 2014. Retrieved 10 October 2014.
- Büdel, B.; Scheidegger, C. (1996). Thallus morphology and anatomy. Lichen Biology. pp. 37–64. doi:10.1017/CBO9780511790478.005. ISBN 9780511790478.
- Heiðmarsson, Starri (1996). "Pruina as a Taxonomic Character in the Lichen Genus Dermatocarpon". The Bryologist. 99 (3): 315–320. doi:10.2307/3244302. JSTOR 3244302.
- Sharnoff, Sylvia and Sharnoff, Stephen. "Lichen Biology and the Environment" Archived 17 October 2015 at the Wayback Machine. sharnoffphotos.com
- Pogoda, C. S.; Keepers, K. G.; Lendemer, J. C.; Kane, N. C.; Tripp, E. A. (2018). "Reductions in complexity of mitochondrial genomes in lichen‐forming fungi shed light on genome architecture of obligate symbioses – Wiley Online Library". Molecular Ecology. 27 (5): 1155–1169. doi:10.1111/mec.14519. PMID 29417658.
- Basidiomycete yeasts in the cortex of ascomycete macrolichens – Science
- Skaloud, P; Peksa, O (2010). "Evolutionary inferences based on ITS rDNA and actin sequences reveal extensive diversity of the common lichen alga Asterochloris (Trebouxiophyceae, Chlorophyta)". Molecular Phylogenetics and Evolution. 54 (1): 36–46. doi:10.1016/j.ympev.2009.09.035. PMID 19853051.
- Spribille, Toby; Tuovinen, Veera; Resl, Philipp; Vanderpool, Dan; Wolinski, Heimo; Aime, M. Catherine; Schneider, Kevin; Stabentheiner, Edith; Toome-Heller, Merje; Thor, Göran; Mayrhofer, Helmut (29 July 2016). "Basidiomycete yeasts in the cortex of ascomycete macrolichens". Science. 353 (6298): 488–492. Bibcode:2016Sci...353..488S. doi:10.1126/science.aaf8287. ISSN 0036-8075. PMID 27445309.
- Ramel, Gordon. "What is a Lichen?". Earthlife Web. Archived from the original on 19 January 2015. Retrieved 20 January 2015.
- Ahmadjian V. (1993). The Lichen Symbiosis. New York: John Wiley & Sons. ISBN 978-0-471-57885-7.
- Honegger, R. (1988). "Mycobionts". In Nash III, T.H. (ed.). Lichen Biology. Cambridge: Cambridge University Press (published 1996). ISBN 978-0-521-45368-4.
- Ferry, B. W., Baddeley, M. S. & Hawksworth, D. L. (editors) (1973) Air Pollution and Lichens. Athlone Press, London.
- Rose C. I., Hawksworth D. L. (1981). "Lichen recolonization in London's cleaner air". Nature. 289 (5795): 289–292. Bibcode:1981Natur.289..289R. doi:10.1038/289289a0.
- Hawksworth, D.L. and Rose, F. (1976) Lichens as pollution monitors. Edward Arnold, Institute of Biology Series, No. 66. ISBN 0713125551
- "Oak Moss Absolute Oil, Evernia prunastri, Perfume Fixative". Archived from the original on 25 December 2014. Retrieved 19 September 2014.
- Skogland, Terje (1984). "Wild reindeer foraging-niche organization". Ecography. 7 (4): 345. doi:10.1111/j.1600-0587.1984.tb01138.x.
- Lawrey, James D.; Diederich, Paul (2003). "Lichenicolous Fungi: Interactions, Evolution, and Biodiversity" (PDF). The Bryologist. 106: 80. doi:10.1639/0007-2745(2003)106[0080:LFIEAB]2.0.CO;2. Archived (PDF) from the original on 3 January 2011. Retrieved 2 May 2011.
- Hagiwara K, Wright PR, et al. (March 2015). "Comparative analysis of the antioxidant properties of Icelandic and Hawaiian lichens". Environmental Microbiology. 18 (8): 2319–2325. doi:10.1111/1462-2920.12850. PMID 25808912.
- Odabasoglu F, Aslan A, Cakir A, et al. (March 2005). "Antioxidant activity, reducing power and total phenolic content of some lichen species". Fitoterapia. 76 (2): 216–219. doi:10.1016/j.fitote.2004.05.012. PMID 15752633.
- Hauck, Markus; Jürgens, Sascha-René; Leuschner, Christoph (2010). "Norstictic acid: Correlations between its physico-chemical characteristics and ecological preferences of lichens producing this depsidone". Environmental and Experimental Botany. 68 (3): 309. doi:10.1016/j.envexpbot.2010.01.003.
- "The Earth Life Web, Growth and Development in Lichens". earthlife.net. Archived from the original on 28 May 2015. Retrieved 12 October 2014.
- "Lichens". National Park Service, US Department of the Interior, Government of the United States. 22 May 2016. Archived from the original on 5 April 2018. Retrieved 4 April 2018.
- Nash III, Thomas H. (2008). "Introduction". In Nash III, T.H. (ed.). Lichen Biology (2nd ed.). Cambridge: Cambridge University Press. pp. 1–8. doi:10.1017/CBO9780511790478.002. ISBN 978-0-521-69216-8.
- Baldwin, Emily (26 April 2012). "Lichen survives harsh Mars environment". Skymania News. Archived from the original on 28 May 2012. Retrieved 27 April 2012.
- "ESA — Human Spaceflight and Exploration – Lichen survives in space". Archived from the original on 26 February 2010. Retrieved 16 February 2010.
- Sancho, L. G.; De La Torre, R.; Horneck, G.; Ascaso, C.; De Los Rios, A.; Pintado, A.; Wierzchos, J.; Schuster, M. (2007). "Lichens survive in space: results from the 2005 LICHENS experiment". Astrobiology. 7 (3): 443–454. Bibcode:2007AsBio...7..443S. doi:10.1089/ast.2006.0046. PMID 17630840.
- Eichorn, Susan E., Evert, Ray F., and Raven, Peter H. (2005). Biology of Plants. New York: W. H. Freeman and Company. p. 1. ISBN 0716710072.
- Cook, Rebecca; McFarland, Kenneth (1995). General Botany 111 Laboratory Manual. Knoxville, TN: University of Tennessee. p. 104.
- A. N. Rai; B. Bergman; Ulla Rasmussen (31 July 2002). Cyanobacteria in Symbiosis. Springer. p. 59. ISBN 978-1-4020-0777-4. Archived from the original on 31 December 2013. Retrieved 2 June 2013.
- Ramel, Gordon. "Lichen Reproductive Structures". Archived from the original on 28 February 2014. Retrieved 22 August 2014.
- Murtagh GJ, Dyer PS, Crittenden PD (April 2000). "Sex and the single lichen". Nature. 404 (6778): 564. Bibcode:2000Natur.404..564M. doi:10.1038/35007142. PMID 10766229.
- Kirk PM, Cannon PF, Minter DW, Stalpers JA (2008). Dictionary of the Fungi (10th ed.). Wallingford: CABI. pp. 378–381. ISBN 978-0-85199-826-8.
- "Form and structure – Sticta and Dendriscocaulon". Australian National Botanic Gardens. Archived from the original on 28 April 2014. Retrieved 18 September 2014.
- Lutzoni, F.; Kauff, F.; Cox, C. J.; McLaughlin, D.; Celio, G.; Dentinger, B.; Padamsee, M.; Hibbett, D.; et al. (2004). "Assembling the fungal tree of life: progress, classification, and evolution of subcellular traits". American Journal of Botany. 91 (10): 1446–1480. doi:10.3732/ajb.91.10.1446. PMID 21652303.
- Sanders, W. B.; Moe, R. L.; Ascaso, C. (2004). "The intertidal marine lichen formed by the pyrenomycete fungus Verrucaria tavaresiae (Ascomycotina) and the brown alga Petroderma maculiforme (Phaeophyceae): thallus organization and symbiont interaction – NCBI". American Journal of Botany. 91 (4): 511–22. doi:10.3732/ajb.91.4.511. PMID 21653406.
- "Mutualisms between fungi and algae – New Brunswick Museum". Archived from the original on 18 September 2018. Retrieved 4 October 2018.
- Miller, Kathy Ann; Pérez-Ortega, Sergio. "Challenging the lichen concept: Turgidosculum ulvae – Cambridge". The Lichenologist. 50 (3): 341–356. doi:10.1017/S0024282918000117. Archived from the original on 7 October 2018. Retrieved 7 October 2018.
- Rybalka, N.; Wolf, M.; Andersen, R. A.; Friedl, T. (2013). "Congruence of chloroplast – BMC Evolutionary Biology – BioMed Central". BMC Evolutionary Biology. 13: 39. doi:10.1186/1471-2148-13-39. PMC 3598724. PMID 23402662.
- Lutzoni, Francois; Pagel, Mark; Reeb, Valerie (21 June 2001). "Major fungal lineages are derived from lichen symbiotic ancestors". Nature. 411 (6840): 937–940. Bibcode:2001Natur.411..937L. doi:10.1038/35082053. PMID 11418855.
- Hawksworth, D.L. (1988). "The variety of fungal-algal symbioses, their evolutionary significance, and the nature of lichens". Botanical Journal of the Linnean Society. 96: 3–20. doi:10.1111/j.1095-8339.1988.tb00623.x.
- Rikkinen J. (1995). "What's behind the pretty colors? A study on the photobiology of lichens". Bryobrothera. 4 (3): 375–376. doi:10.2307/3244316. JSTOR 3244316.
- Friedl, T.; Büdel, B. (1996). "Photobionts". In Nash III, T.H. (ed.). Lichen Biology. Cambridge: Cambridge University Press. pp. 9–26. doi:10.1017/CBO9780511790478.003. ISBN 978-0-521-45368-4.
- "Alan Silverside's Lichen Glossary (a-f), Alan Silverside". Archived from the original on 31 October 2014. Retrieved 10 October 2014.
- Hallenbeck, Patrick C. (18 April 2017). Modern Topics in the Phototrophic Prokaryotes: Environmental and Applied Aspects. ISBN 9783319462615. Archived from the original on 4 October 2018. Retrieved 4 October 2018.
- Rikkinen, J. (2002). "Lichen Guilds Share Related Cyanobacterial Symbionts". Science. 297 (5580): 357. doi:10.1126/science.1072961. PMID 12130774.
- O'Brien, H.; Miadlikowska, J.; Lutzoni, F. (2005). "Assessing host specialization in symbiotic cyanobacteria associated with four closely related species of the lichen fungus Peltigera". European Journal of Phycology. 40 (4): 363–378. doi:10.1080/09670260500342647.
- Guzow-Krzeminska, B (2006). "Photobiont flexibility in the lichen Protoparmeliopsis muralis as revealed by ITS rDNA analyses". Lichenologist. 38 (5): 469–476. doi:10.1017/s0024282906005068.
- Ohmura, Y.; Kawachi, M.; Kasai, F.; Watanabe, M. (2006). "Genetic combinations of symbionts in a vegetatively reproducing lichen, Parmotrema tinctorum, based on ITS rDNA sequences" (2006)". Bryologist. 109: 43–59. doi:10.1639/0007-2745(2006)109[0043:gcosia]2.0.co;2.
- Piercey-Normore (2006). "The lichen-forming asco-mycete Evernia mesomorpha associates with multiplegenotypes of Trebouxia jamesii". New Phytologist. 169 (2): 331–344. doi:10.1111/j.1469-8137.2005.01576.x. PMID 16411936.
- Lutzoni, François; Pagel, Mark; Reeb, Valérie (2001). "Major fungal lineages are derived from lichen symbiotic ancestors". Nature. 411 (6840): 937–940. Bibcode:2001Natur.411..937L. doi:10.1038/35082053. PMID 11418855.
- "Lichens: Fossil Record" Archived 25 January 2010 at the Wayback Machine, University of California Museum of Paleontology.
- Speer BR, Waggoner B. "Fossil Record of Lichens". University of California Museum of Paleontology. Archived from the original on 25 January 2010. Retrieved 16 February 2010.
- Poinar Jr., GO. (1992). Life in Amber. Stanford University Press.
- Peterson EB. (2000). "An overlooked fossil lichen (Lobariaceae)". Lichenologist. 32 (3): 298–300. doi:10.1006/lich.1999.0257.
- Taylor, T. N.; Hass, H.; Remy, W.; Kerp, H. (1995). "The oldest fossil lichen". Nature. 378 (6554): 244. Bibcode:1995Natur.378..244T. doi:10.1038/378244a0. Archived from the original on 11 January 2007.
- Taylor WA, Free CB, Helgemo R, Ochoada J (2004). "SEM analysis of spongiophyton interpreted as a fossil lichen". International Journal of Plant Sciences. 165 (5): 875–881. doi:10.1086/422129.
- Jahren, A.H.; Porter, S.; Kuglitsch, J.J. (2003). "Lichen metabolism identified in Early Devonian terrestrial organisms". Geology. 31 (2): 99–102. Bibcode:2003Geo....31...99J. doi:10.1130/0091-7613(2003)031<0099:LMIIED>2.0.CO;2. ISSN 0091-7613.
- Fletcher, B. J.; Beerling, D. J.; Chaloner, W. G. (2004). "Stable carbon isotopes and the metabolism of the terrestrial Devonian organism Spongiophyton". Geobiology. 2 (2): 107–119. doi:10.1111/j.1472-4677.2004.00026.x.
- Edwards D; Axe L (2012). "Evidence for a fungal affinity for Nematasketum, a close ally of Prototaxites". Botanical Journal of the Linnean Society. 168: 1–18. doi:10.1111/j.1095-8339.2011.01195.x.
- Retallack G.J.; Landing, E. (2014). "Affinities and architecture of Devonian trunks of Prototaxites loganii". Mycologia. 106 (6): 1143–1156. doi:10.3852/13-390. PMID 24990121.
- Karatygin IV; Snigirevskaya NS; Vikulin SV. (2009). "The most ancient terrestrial lichen Winfrenatia reticulata : A new find and new interpretation". Paleontological Journal. 43 (1): 107–114. doi:10.1134/S0031030109010110.
- Karatygin IV, Snigirevskaya NS, Vikulin SV (2007). "Two types of symbiosis with participation of Fungi from Early Devonian Ecosystems". XV Congress of European Mycologists, Saint Petersburg, Russia, September 16–21, 2007. 1 (1): 226. Archived from the original on 24 April 2013. Retrieved 18 February 2011.
- "Lichens Are Way Younger Than Scientists Thought – Likely Evolved Millions of Years After Plants". 15 November 2019. Archived from the original on 18 December 2019. Retrieved 18 November 2019.
- Schoch CL; Sung GH; López-Giráldez F; Townsend JP; Miadlikowska J; Hofstetter V; Robbertse B; Matheny PB; et al. (2009). "The Ascomycota tree of life: a phylum-wide phylogeny clarifies the origin and evolution of fundamental reproductive and ecological traits". Syst. Biol. 58 (2): 224–239. doi:10.1093/sysbio/syp020. PMID 20525580.
- Gargas, A; Depriest, PT; Grube, M; Tehler, A (1995). "Multiple origins of lichen symbioses in fungi suggested by SSU rDNA phylogeny". Science. 268 (5216): 1492–1495. Bibcode:1995Sci...268.1492G. doi:10.1126/science.7770775. PMID 7770775.
- Honegger R. (1998). "The lichen symbiosis – what is so spectacular about it?" (PDF). Lichenologist. 30 (3): 193–212. doi:10.1017/s002428299200015x. Archived (PDF) from the original on 26 April 2019. Retrieved 30 January 2019.
- Wedin M, Döring H, Gilenstam G (2004). "Saprotrophy and lichenization as options for the same fungl species on different substrata: environmental plasticity and fungal lifestyles in the Strictis-Conotrema complex". New Phytologist. 16 (3): 459–465. doi:10.1111/j.1469-8137.2004.01198.x.
- Yuan X, Xiao S, Taylor TN (2005). "Lichen-like symbiosis 600 million years ago". Science. 308 (5724): 1017–1020. Bibcode:2005Sci...308.1017Y. doi:10.1126/science.1111347. PMID 15890881.
- Retallack G.J. (2015). "Acritarch evidence of a late Precambrian adaptive radiation of Fungi" (PDF). Botanica Pacifica. 4 (2): 19–33. doi:10.17581/bp.2015.04203. Archived (PDF) from the original on 22 December 2016. Retrieved 22 December 2016.
- Retallack GJ. (2007). "Growth, decay and burial compaction of Dickinsonia, an iconic Ediacaran fossil". Alcheringa: An Australasian Journal of Palaeontology. 31 (3): 215–240. doi:10.1080/03115510701484705.
- Retallack GJ. (1994). "Were the Ediacaran Fossils Lichens?". Paleobiology. 20 (4): 523–544. doi:10.1017/s0094837300012975. JSTOR 2401233.
- Switek B (2012). "Controversial claim puts life on land 65 million years early". Nature. doi:10.1038/nature.2012.12017. Archived from the original on 1 January 2013. Retrieved 2 January 2013.
- Retallack, G.J.; Dunn, K.L.; Saxby, J. (2015). "Problematic Mesoproterozoic fossil Horodyskia from Glacier National Park, Montana, USA". Precambrian Research. 226: 125–142. Bibcode:2013PreR..226..125R. doi:10.1016/j.precamres.2012.12.005.
- Retallack, G.J.; Krull, E.S.; Thackray, G.D.; Parkinson, D. (2013). "Problematic urn-shaped fossils from a Paleoproterozoic (2.2 Ga) paleosol in South Africa". Precambrian Research. 235: 71–87. Bibcode:2013PreR..235...71R. doi:10.1016/j.precamres.2013.05.015.
- In the Race to Live on Land, Lichens Didn't Beat Plants - The New York Times
- "Pollution, The Plant Underworld". Australian National Botanic Gardens. Archived from the original on 17 February 2014. Retrieved 10 October 2014.
- Chen, Jie; Blume, Hans-Peter; Beyer, Lothar (2000). "Weathering of rocks induced by lichen colonization — a review" (PDF). CATENA. 39 (2): 121. doi:10.1016/S0341-8162(99)00085-5. Archived (PDF) from the original on 2 April 2015. Retrieved 21 March 2015.
- Jones, Clive G.; Shachak, Moshe (1990). "Fertilization of the desert soil by rock-eating snails". Nature. 346 (6287): 839. Bibcode:1990Natur.346..839J. doi:10.1038/346839a0.
- Walker, T. R. (2007). "Lichens of the boreal forests of Labrador, Canada: A checklist". Evansia. 24 (3): 85–90. doi:10.1639/0747-9859-24.3.85.
- Oksanen, I. (2006). "Ecological and biotechnological aspects of lichens". Applied Microbiology and Biotechnology. 73 (4): 723–734. doi:10.1007/s00253-006-0611-3. PMID 17082931.
- Lawrey, James D. (1994). "Lichen Allelopathy: A Review". In Inderjit; K. M. M. Dakshini; Frank A. Einhellig (eds.). Allelopathy. Organisms, Processes, and Applications. ACS Symposium Series. 582. American Chemical Society. pp. 26–38. doi:10.1021/bk-1995-0582.ch002. ISBN 978-0-8412-3061-3.
- Nash III, Thomas H. (2008). "Lichen sensitivity to air pollution". In Nash III, T.H. (ed.). Lichen Biology (2nd ed.). Cambridge: Cambridge University Press. pp. 299–314. doi:10.1017/CBO9780511790478.016. ISBN 978-0-521-69216-8.
- Knops, J.M.H.; Nash, T. H. III; Boucher, V.L.; Schlesinger, W.H. (1991). "Mineral cycling and epiphytic lichens: Implications at the ecosystem level". Lichenologist. 23 (3): 309–321. doi:10.1017/S0024282991000452.
- Halonen P, Hyvarinen M, Kauppi M (1993). "Emission related and repeated monitoring of element concentrations in the epiphytic lichen Hypogymnia physodes in a coastal area, western Finland". Annales Botanici Fennici. 30: 251–261.
- Walker T. R.; Pystina T. N. (2006). "The use lichens to monitor terrestrial pollution and ecological impacts caused by oil and gas industries in the Pechora Basin, NW Russia". Herzogia. 19: 229–238.
- Walker T. R.; Crittenden P. D.; Young S. D.; Prystina T. (2006). "An assessment of pollution impacts due to the oil and gas industries in the Pechora basin, north-eastern European Russia". Ecological Indicators. 6 (2): 369–387. doi:10.1016/j.ecolind.2005.03.015.
- Walker T. R.; Crittenden P. D.; Young S. D. (2003). "Regional variation in the chemical composition of winter snowpack and terricolous lichens in relation to sources of acid emissions in the Usa River Basin, northeastern European Russia". Environmental Pollution. 125 (3): 401–412. doi:10.1016/s0269-7491(03)00080-0. PMID 12826418.
- Hogan, C. Michael (2010). "Abiotic factor". Encyclopedia of Earth. Washington, D.C.: National Council for Science and the Environment. Archived from the original on 8 June 2013. Retrieved 27 October 2013.
- Beltman IH, de Kok LJ, Kuiper PJC, van Hasselt PR (1980). "Fatty acid composition and chlorophyll content of epiphytic lichens and a possible relation to their sensitivity to air pollution". Oikos. 35 (3): 321–326. doi:10.2307/3544647. JSTOR 3544647.
- Emmerich R, Giez I, Lange OL, Proksch P (1993). "Toxicity and antifeedant activity of lichen compounds against the polyphagous herbivorous insect Spodoptera littoralis". Phytochemistry. 33 (6): 1389–1394. doi:10.1016/0031-9422(93)85097-B.
- Beschel RE (1950). "Flecten als altersmasstab Rezenter morainen". Zeitschrift für Gletscherkunde und Glazialgeologie. 1: 152–161.
- Curry, R. R. (1969) "Holocene climatic and glacial history of the central Sierra Nevada, California", pp. 1–47, Geological Society of America Special Paper, 123, S. A. Schumm and W. C. Bradley, eds.
- Sowers, J. M., Noller, J. S., and Lettis, W. R. (eds.) (1997) Dating and Earthquakes: Review of Quaternary Geochronology and its Application to Paleoseismology. U.S. Nuclear Regulatory Commission, NUREG/CR 5562.
- Innes, J. L. (1985). "Lichenometry". Progress in Physical Geography. 9 (2): 187. doi:10.1177/030913338500900202.
- Cappitelli, Francesca; Sorlini, Claudia (2008). "Microorganisms Attack Synthetic Polymers in Items Representing Our Cultural Heritage". Applied and Environmental Microbiology. 74 (3): 564–569. doi:10.1128/AEM.01768-07. PMC 2227722. PMID 18065627.
- Casselman, Karen Leigh; Dean, Jenny (1999). Wild color: [the complete guide to making and using natural dyes]. New York: Watson-Guptill Publications. ISBN 978-0-8230-5727-6.
- Muller, K (2001). "Pharmaceutically Relevant Metabolites from Lichens". Applied Microbiology and Biotechnology. 56 (1–2): 9–10. doi:10.1007/s002530100684. PMID 11499952.
- Morton, E.; Winters, J. and Smith, L. (2010). "An Analysis of Antiseptic and Antibiotic Properties of Variously Treated Mosses and Lichens" Archived 20 August 2017 at the Wayback Machine. University of Michigan Biological Station
- Bustinza, F. (1952). "Antibacterial Substances from Lichens". Economic Botany. 6 (4): 402–406. doi:10.1007/bf02984888.
- "Themodelrailroader.com". Archived from the original on 15 October 2014. Retrieved 10 October 2014.
- Thus explained by Rabbi Enoch Zundel ben Joseph, in his commentary Etz Yosef ("Tree of Joseph"), on Sefer Midrash Rabbah, vol. 2, New York 1987, s.v. Ruth Rabba 6:3
- Zohar Amar and Yaron Serri, The Land of Israel and Syria as Described by Al-Tamimi, Ramat-Gan 2004, pp. 56, 108–109 ISBN 965-226-252-8 (Hebrew)
- Honegger R. (2000). "Simon Schwender (1829–1919) and the dual hypothesis in lichens". Bryologist. 103 (2): 307–313. doi:10.1639/0007-2745(2000)103[0307:SSATDH]2.0.CO;2. ISSN 0007-2745. JSTOR 3244159.
- Treub, Melchior (1873) Onderzoekingen over de natuur der lichenen. Dissertation Leiden University.
- Yong, Ed (21 July 2016), "How a guy from a Montana trailer park overturned 150 years of biology", The Atlantic, archived from the original on 23 July 2017, retrieved 23 July 2017.
- Jorgensen, Per M., and Lücking, Robert (April 2018). "The 'Rustici Pauperrimi': A Linnaean Myth about Lichens Rectified". The Linnean 34(1), pp. 9–12.
Economics is primarily divided into two categories - microeconomics and macroeconomics. Microeconomics is the study of the economy at the individual level. In contrast, macroeconomics observes a nation’s economy as a whole, including its performance, structure, and future direction.
Micro and macroeconomics are interdependent to some extent. Several differences also exist between these two segments of economics.
Microeconomics focuses on the choices made by individual consumers as well as businesses concerning the fluctuating cost of goods and services in an economy. Microeconomics covers several aspects, such as –
Supply and demand for goods in different marketplaces.
Consumer behaviour, as an individual or as a group.
Demand for services and labour, including individual labour markets and determinants such as an employee’s wage.
One of the main features of microeconomics is that it focuses on cause-and-effect situations in which a marketplace experiences changes in existing conditions. It takes a bottom-up approach to analysing the economy.
Macroeconomics studies the economic progress of a nation and the steps it takes, including the policies and other factors that influence the economy as a whole. Macroeconomics follows a top-down approach and covers areas such as –
The overall economic growth of a country.
Reasons that are likely to influence unemployment and inflation.
Fiscal policies that are likely to influence factors like interest rates.
Effect of globalisation and international trade.
Reasons why economic growth varies among countries.
Another feature of macroeconomics is that it focuses on aggregate measures of growth and how they correlate across the economy.
There are a few differences between these two categories. Here are the primary dissimilarities –
Example of Microeconomics –
Price determination of a particular commodity.
Output generated by an individual organisation.
Individual income and savings.
Example of Macroeconomics –
National income and savings.
General price level.
Aggregated demand as well as supply.
Rate of unemployment.
The distinct characteristics of microeconomics and macroeconomics create a complementary, co-dependent relationship between the two schools of economics. Factors that directly affect microeconomic variables can also impact macroeconomics in the long run.
Similarly, State-level policies, a component of macroeconomics, can also affect individual consumers and businesses. For example, a tax hike (macroeconomics) can increase the retail price of certain products, affecting the rate of consumption (microeconomics).
Any changes in these categories have a direct impact on a country’s economy. Several factors affect it; let’s take a look –
Decision Making –
Uncontrollable external factors such as changes in interest rates, regulations, the number of competitors present in the market, cultural preferences, etc. play a key role in influencing an organisation’s strategies and performance. These can have a cumulative effect on a nation’s economy as well.
Economic Cycles –
Experts consider the macroeconomy to move in cycles. Higher demand, rising personal income, etc. can push up price levels, which in turn affect a nation’s economy. Conversely, when supply outweighs demand, the price of everyday goods falls. This pattern continues through successive cycles of supply and demand.
Price of Products and Services –
The primary goal of an organisation is to keep costs at a minimum and increase its profit margin. The cost of labour is one of the largest expenses in microeconomics, and it directly affects the overall cost of production and the retail price.
To understand the uses of microeconomics and macroeconomics as well as several other central components of an economy, visit Vedantu’s official website today.
1. What are Microeconomics and Macroeconomics?
Ans – Microeconomics studies the economy at an individual, cluster, or organisational level. Macroeconomics is the study of the economy at the national level.
2. What is the difference between Micro and Macroeconomics?
Ans – The primary difference between micro and macroeconomics is that microeconomics focuses on issues regarding individual income, output, the price of goods, etc., whereas macroeconomics deals with issues like the employment rate, national household income, etc.
3. Example of Microeconomics and Macroeconomics?
Ans – Individual income, individual savings, the price of a particular commodity, etc. fall under microeconomics. Aggregate demand, aggregate supply, poverty, the rate of unemployment, etc. fall under macroeconomics.
4. Limitations of Microeconomics and Macroeconomics?
Ans – Micro and macroeconomics are correlated with each other. Any drastic change in the critical components of one discipline is likely to have a significant effect on the other. These two fields of economics are complementary, which somewhat limits the flexibility of the system.
Apply and extend previous understandings of arithmetic to algebraic expressions
Write and evaluate numerical expressions involving whole-number exponents.
Write, read, and evaluate expressions in which letters stand for numbers.
Write expressions that record operations with numbers and with letters standing for numbers. For example, express the calculation “Subtract y from 5” as 5 – y.
Identify parts of an expression using mathematical terms (sum, term, product, factor, quotient, coefficient); view one or more parts of an expression as a single entity. For example, describe the expression 2 (8 + 7) as a product of two factors; view (8 + 7) as both a single entity and a sum of two terms.
Evaluate expressions at specific values of their variables. Include expressions that arise from formulas used in real-world problems. Perform arithmetic operations, including those involving whole-number exponents, in the conventional order when there are no parentheses to specify a particular order (Order of Operations).
Apply the properties of operations (including, but not limited to, commutative, associative, and distributive properties) to generate equivalent expressions. The distributive property is prominent here. For example, apply the distributive property to the expression 3 (2 + x) to produce the equivalent expression 6 + 3x; apply the distributive property to the expression 24x + 18y to produce the equivalent expression 6 (4x + 3y); apply properties of operations to y + y + y to produce the equivalent expression 3y.
Identify when expressions are equivalent (i.e., when the expressions name the same number regardless of which value is substituted into them). For example, the expression 5b + 3b is equivalent to (5 + 3)b, which is equivalent to 8b.
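As a small illustration (not part of the standards text), the following Python sketch evaluates a numerical expression with a whole-number exponent using the conventional order of operations, and checks that 3(2 + x) and 6 + 3x name the same number for every substituted value:

```python
# Illustrative only: checking expression equivalence by substitution.

def expr_a(x):
    return 3 * (2 + x)        # 3(2 + x)

def expr_b(x):
    return 6 + 3 * x          # 6 + 3x, produced by the distributive property

# Evaluate a numerical expression with a whole-number exponent,
# following the conventional order of operations.
value = 2 + 3 * 4 ** 2        # exponent first, then multiplication, then addition
print(value)                  # 50

# The two expressions are equivalent if they agree for every substituted value.
print(all(expr_a(x) == expr_b(x) for x in range(-10, 11)))  # True
```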
Reason about and solve one-variable equations and inequalities
Understand solving an equation or inequality as a process of answering a question: which values from a specified set, if any, make the equation or inequality true? Use substitution to determine whether a given number in a specified set makes an equation or inequality true.
Use variables to represent numbers and write expressions when solving a real-world or mathematical problem; understand that a variable can represent an unknown number, or, depending on the purpose at hand, any number in a specified set.
Solve real-world and mathematical problems by writing and solving equations of the form x + p = q and px = q for cases in which p, q and x are all nonnegative rational numbers.
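As a brief illustration (not part of the standards text), equations of the forms x + p = q and px = q can be solved directly and the solutions checked by substitution; the function names below are ours:

```python
# Illustrative only: solving x + p = q and px = q for nonnegative rationals.
from fractions import Fraction

def solve_addition(p, q):
    """Solve x + p = q."""
    return q - p

def solve_multiplication(p, q):
    """Solve px = q (p must be nonzero)."""
    return Fraction(q, p)

x1 = solve_addition(Fraction(3, 4), Fraction(5, 2))   # x + 3/4 = 5/2
x2 = solve_multiplication(4, 6)                       # 4x = 6
print(x1, x1 + Fraction(3, 4) == Fraction(5, 2))      # 7/4 True
print(x2, 4 * x2 == 6)                                # 3/2 True
```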
Write an inequality of the form x > c or x < c to represent a constraint or condition in a real-world or mathematical problem. Recognize that inequalities of the form x > c or x < c have infinitely many solutions; represent solutions of such inequalities on number line diagrams.
Represent and analyze quantitative relationships between dependent and independent variables
Use variables to represent two quantities in a real-world problem that change in relationship to one another. For example, Susan is putting money in her savings account by depositing a set amount each week (50). Represent her savings account balance with respect to the number of weekly deposits (s = 50w, illustrating the relationship between balance amount s and number of weeks w).
Write an equation to express one quantity, thought of as the dependent variable, in terms of the other quantity, thought of as the independent variable.
Analyze the relationship between the dependent and independent variables using graphs and tables, and relate these to the equation.
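As an illustration of the dependent/independent relationship (not part of the standards text), the following sketch tabulates the savings example s = 50w, where the number of weeks w is the independent variable and the balance s is the dependent variable:

```python
# Illustrative only: s = 50w, where w (weeks) is independent and s (savings) is dependent.

def savings(w, deposit=50):
    """Balance after w weekly deposits of a fixed amount."""
    return deposit * w

print("w (weeks) | s (balance)")
for w in range(0, 6):
    print(f"{w:9d} | {savings(w):11d}")
# Each row of the table is a point (w, s) on the graph of s = 50w.
```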
Economics as a branch of knowledge is concerned with studying human behaviour based on the allocation of scarce resources, such that producers can maximise their profits, consumers can maximise their satisfaction, and society can achieve the maximum amount of social welfare. In short, economics is about making choices in the face of scarcity.
The subject matter of this discipline is studied under two branches. Microeconomics is one of those two broad branches.
Definition of Microeconomics
The English term ‘micro’ traces its origin to the Greek word ‘mikros’, meaning small. In the context of microeconomics, the term refers to small individual units. To be more specific, microeconomics is the study of economic problems at the individual level.
It observes and investigates the economic activities of individual units of an economy—for example, a firm, a market, a household, individual industry and many more.
Characteristics of Microeconomics
In the context of its primary goal of balancing scarcity with choice, microeconomics exhibits the following characteristic features:
- It observes, investigates and predicts the behaviour of individual units of an economy.
- As the ambit of its study is limited to individual units, the degree of aggregation tends to be limited. For example, a collection of firms signifies an industry.
- It primarily deals with problems like distribution of resources and related policies and principles.
- The main tools or instruments it uses to study economic problems include supply and demand.
- The principal determinant for solving problems in microeconomics is price.
- The method of study that this branch of economics uses is partial equilibrium analysis. Under this method, one variable or market is analysed while all other variables are assumed to remain constant (ceteris paribus).
- As the determination of the output and price of an individual economic unit forms one of the primary concerns of microeconomics, the branch is also referred to as ‘Price Theory.’
Significant Theories in Microeconomics
The following are some of the significant theories employed in Microeconomics.
1. Theory of Production Input Value
This theory argues that the price of a product or service is determined by the cost of the inputs used in its production, for example land, labour, capital, taxation and so on.
2. Theory of Consumer Demand
It is concerned with the relationship between consumers’ preferences for products and services and their consumption expenditure.
3. Theory of Opportunity Cost
According to this theory, the value or cost of the next best existing alternative is the opportunity cost. Opportunity cost depends on the quality or value of the next best option and not on the number of choices.
4. Production Theory
It is concerned with the process of converting inputs into outputs. The goal is to select the right combination of inputs and production techniques to reduce cost while maximising profit.
Advantages of Microeconomics
The following are some principal advantages of Microeconomics:
- It helps in predicting the potential rise in prices based on the study of demand and supply.
- Observing and analysing the behaviour of small economic units and the demand-supply chain allows decision-makers to make an efficient allocation of resources.
- The concepts of microeconomics allow business associations to chalk out their future course of action.
- With its simple models for analysing economic problems, understanding overall economic phenomena becomes more straightforward.
- Microeconomics provides the basis for studying Macroeconomics.
Disadvantages of Microeconomics
Even though Microeconomics is vital for studying the individual units of an economy, it suffers from inherent limitations.
- Microeconomics assumes that, while one variable changes, all other variables remain constant. Such an assumption is often unrealistic.
- It presupposes a laissez-faire policy, or pure capitalism, which is not practicable.
- It depends on macroeconomics, specifically for the rate of interest and profit determination.
LED Lighting and Resistors
Light-Emitting Diodes (LEDs) are solid state light emitting devices known for their low power consumption (typically 20-60 milliwatts) and long lifespan (100,000 hours). They will burn out if too much current is allowed to flow through them.
An LED is a diode, therefore current can only flow through it in one direction.
A common application is replacement of lamps or additional lighting effects. While LEDs are available in a variety of sizes and colours, white LEDs do not look like incandescent lamps. The preferred colour is Sunny White. It is also possible to tint the lens of a white LED to get the correct effect.
- 1 Light Emitting Diodes and Series Resistors
- 2 An Example
- 3 Ohm's Law
- 4 Identifying the Anode and Cathode
- 5 Failure modes
- 6 LED Colours
- 7 Reading the Data Sheet
- 8 See Also
Light Emitting Diodes and Series Resistors
An LED is a current controlled device. In most applications related to Digital Command Control, a simple, low cost resistor is used to limit the current flowing through the LED. For more exacting applications a controlled current source would be used instead of a resistor.
The LED is a semiconductor device that will burn out quite quickly if its specifications are exceeded.
The following explains how to wire an LED for your model train or layout without letting the magic smoke out of the LED.
An Example
In this example, our LED specifies a forward current of 20mA (milliamps) at a forward voltage of 2 volts. We will be wiring the LED from a power source, using a series resistor to limit the current and drop the excess voltage. (Always read the spec sheet for your LED, as it will have the details you need to calculate the series resistor.)
Voltage from one end of a string of devices in series (like the LED and Resistor, above) to the other is divided among the items in the series, in proportion to their resistance. Assume that the voltage source is 9 Volts DC. Since the voltage across both devices is 9 Volts, and the LED is rated for 2 volts, the voltage drop across the series resistor must be 7 volts.
Since the current through two devices in series is the same through both devices, and we want the maximum current through the LED to be 20 mA, the resistor must drop 7 volts at a current of 20mA.
Ohm's Law
We need to calculate the Resistance of the resistor. Ohm's law tells us how Voltage, Current, and Resistance in a circuit are related. The 3 variables are:
- Voltage (measured in Volts, represented by the letter V, or it may be E.)
- Current (measured in Amperes, represented by the letter I ('eye'))
- (don't ask why it's not A)
- Resistance (measured in Ohms, represented by the letter R or the symbol Ω)
The law says that V = I × R. This can be rearranged to give I = V/R or R = V/I. The third form is the one we want:
- R = 7 Volts / 20 milliAmps
- R = 7 Volts / .020 Amps
- R = 7/.020
- R = 350 Ω (Ohms).
So, we need:
- LED, forward current: 20mA, forward voltage: 2V
- Power supply: 9VDC
- Resistor, 350 Ω
The next step is to choose a resistor with a value of at least 350 ohms. (You may see this value written as 350R.)
Resistors are only available with certain standard values. Standard values near 350 Ω include 330, 360, and 390 Ω. Due to manufacturing processes, resistors are manufactured with 5, 10, or 20% tolerance. The tolerance means the value may vary ±5%, 10% or 20% from the stated value. Many of the resistors you see today will be 10% or better tolerance, as the old carbon resistors are no longer manufactured.
With a manufacturing tolerance of 10%, a 390 Ω resistor could be anywhere from 351 to 429 Ω. If the value is precisely as marked, a 390 Ω resistor would limit the current to 7/390, or 18mA. This is below the 20mA maximum for the LED. At the lower extreme (assuming a worst case value of 351 Ω) the maximum current would be 20mA.
Don't forget-- the ratings for the LED are the MAXIMUM values! Limiting the current to less than 20 mA or reducing the voltage will extend the life of the LED, just as running a light bulb at less than rated voltage will extend its life! And, just as in a light bulb, reducing the current or voltage will reduce the brightness of the LED.
Also remember that if a power supply isn't supplying a full load, the rated output voltage is often exceeded! So you may need to calculate the resistance based on a higher than rated voltage, and a lower than rated current.
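The arithmetic above is easy to automate. Below is a minimal, illustrative Python sketch (not from the original article; the function names are ours) that computes the minimum series resistance for a given supply voltage, LED forward voltage and target current, and then checks the worst-case current for a standard-value resistor at 10% tolerance:

```python
# Illustrative sketch: series resistor for a single LED.

def series_resistance(v_supply, v_led, i_led):
    """Minimum resistance (ohms) so the LED current does not exceed i_led (amps)."""
    return (v_supply - v_led) / i_led

def led_current(v_supply, v_led, r):
    """Actual LED current (amps) for a given resistor value."""
    return (v_supply - v_led) / r

r_min = series_resistance(9.0, 2.0, 0.020)         # the 9V / 2V / 20mA example
print(f"Minimum resistance: {r_min:.0f} ohms")     # 350 ohms

# Check a 390-ohm, 10% tolerance part at its worst-case (lowest) value.
r_worst = 390 * 0.9
print(f"Worst-case current: {led_current(9.0, 2.0, r_worst) * 1000:.1f} mA")  # about 19.9 mA
```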
Wiring Multiple LEDs
When wiring multiple LEDs in a circuit, it is best to connect them in parallel, each with its own series resistor to limit the current. In this case, the power source must be able to supply enough current in total to insure each device gets enough current. Ten LEDs at 20mA each would need a total current of 200mA.
Large numbers of LEDs can be powered by a lower-current power source by multiplexing them. Each LED is rapidly switched on, then off, but to the human eye it appears to be constantly lit. The trade-off is the additional complexity of the driver circuits, so in many cases a larger power supply is the simpler option.
Why Can I Not Use a Single Resistor for Multiple LEDs?
You can, but you really should not.
Ten LEDs in parallel, 20mA IFWD. VFWD is 2V.
Power source is 12V.
There are two components to this circuit:
- Ten LEDs in parallel: VDrop = 2V, ITotal = 20mA × 10 = 200mA
- Series Resistance: VDrop = 10V, IResistor = 200mA.
Ohms Law says RSeries = V÷I. 10 ÷ 0.2 = 50Ω
Should one LED fail, the current flowing through RSeries will still be 200mA to maintain the 10V drop. ILED is now 200 ÷ 9, or 22.2mA.
Not all LEDs are alike: some pass more current than others. One might be passing 25mA until it fails, leaving 8 LEDs that now pass 25mA each on average. When another fails, the average rises to about 29mA...
LED Parameters IFWD and VFWD are nominal, not absolute values for a specific LED. These values vary by batch, and by individual LED.
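To see the cascade numerically, here is a short, illustrative Python sketch (ours, not from the article) using the same simplifying assumption as the text: the shared resistor continues to drop 10V, so the total current stays at 200mA however many LEDs remain:

```python
# Illustrative sketch: average current per LED as LEDs sharing one resistor fail.

TOTAL_CURRENT_MA = 200   # fixed by the shared 50-ohm resistor dropping 10V
leds = 10

while leds > 1:
    per_led = TOTAL_CURRENT_MA / leds
    print(f"{leds:2d} LEDs remaining: {per_led:5.1f} mA each")
    leds -= 1            # assume the most stressed LED fails next
# Each failure pushes the survivors further past their 20 mA rating.
```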
Identifying the Anode and Cathode
Identifying the Anode (A) and Cathode (K) can be easy with some LEDs. The typical LED with radial leads has a flat spot on the package and a shorter lead, both of which mark the cathode. The positive lead (anode) is the longer lead.
Another technique is to hold the device up to the light so you can see the internal structure. The cathode (or anvil) looks like a flag.
Surface Mount Devices
Surface Mount Devices are very small components. By eliminating most of the packaging, the component can often be made smaller, which means less cost for packaging it, and higher board density at manufacture. SMT LEDs can be used in tight spaces. They may or may not come with leads.
To identify the cathode (K), there may be a stripe (like a diode), a dot, or a chamfer on one corner of the package. If not, there may be a dot printed on the underside of the package. As always, consult the data sheet for the device if you are unsure.
Another type may look like a little button with three leads: two long ones and a short one with a hole. The two long leads are the anode and cathode. The short lead, or tab with the hole, is part of the lead frame. The lead opposite it is the cathode; the lead next to the tab is the anode. As always, check the data sheet.
Why is the Cathode marked with a K?
The terms Anode and Cathode go back to the vacuum tube diode. The Anode is the positive connection which is in turn connected to the B+ power supply connection. The Cathode was usually grounded, and marked as K on schematics. The K was used, as C was already claimed to indicate a capacitor on a schematic. The term C- referred to the control grid voltages used in an amplifier.
These terms also resulted in the A, B, and C designations for batteries. The A cell was a large 1.5V cell (about the diameter of a D cell but about four times longer) that supplied the filament voltage, the B battery was 90V, connected to the anode, supplying the B+ voltage. Prior to the invention of the indirectly heated cathode, all radios were battery powered, using an A cell and B battery.
Failure modes
The most common way for LEDs to fail is a gradual lowering of light output and loss of efficiency. However, sudden failures can occur as well. When operated at, or just below, their rated current (i.e. 20mA), an LED should last 100,000 or more hours before failure.
Other failure modes include:
- Extreme current - Too high of a current will let the magic smoke out of the LED, causing premature failure.
- Extreme heat - Caution should be used when soldering LEDs to a board or wires to the LED leads. It is recommended to use a soldering heat sink.
- Electrostatic discharge - ESD may cause immediate failure of the semiconductor junction. Be sure to ground yourself and your workstation when working on any electronics.
- Excessive Reverse Voltage - LEDs can be very intolerant of excessive reverse voltage. Diodes are designed to withstand a reverse voltage, and have a PRV (Peak Reverse Voltage) specified, but LEDs are not designed to be used as rectifiers.
Avoid the excessive-current failure mode by using a resistor for each LED. A single shared resistor is calculated to limit the combined current of all the LEDs connected to it, so if one LED fails the current through the survivors increases. Two LEDs connected in parallel with one series resistor will draw 40mA in total through a resistor of half the value, meaning the full 40mA will flow through one LED should the other fail. See the section on Ohm's Law above.
LED Colours
The colour of an LED is important.
The first LEDs were Red in colour. Later, Green and Yellow LEDs appeared. For a number of years, those were the only colours available, with Blue ones eventually appearing and, later still, White.
For the examples below, a 5mm commodity LED is the norm for values presented.
Red, Red-Orange, and Orange LEDs are typically 1.7 to 2.4V at 10mA.
Green and yellow-green LEDs are typically 1.8 to 2.4V. Some green LEDs can be 3 to 3.5V.
Blue LEDs are typically 2.8 to 3.5V. Early "blue LEDs" were, in fact, small incandescent lamps with a blue diffuser. Later ones were true semiconductors, and further development brought blue LEDs five times brighter than their predecessors.
True white-light-emitting LEDs do not exist. LEDs emit one wavelength.
White does not appear in the colour spectrum; instead, perceiving white requires a mixture of wavelengths. A trick is employed to make white LEDs: Blue-emitting semiconductor base material is covered with a converter material that emits yellow light when stimulated by the blue light, similar to a fluorescent light's construction. The result is a mixture of blue and yellow light that is perceived by the eye as white. The mixing of specific colour values is the principle behind colour television.
White LEDs exhibit a colour shift due to different concentrations of converter material, in addition to a change of wavelength with forward voltage in the blue-emitting InGaN material. So when the forward current changes, the colour balance of the LED will shift. This can be an issue if multiple LEDs are used. Dimming the LED will also have some effect.
Reading the Data Sheet
Manufacturers and distributors of LEDs will supply a data sheet outlining their product's parameters. There are a number of details you need to know. (These were taken from a Vishay white LED data sheet.)
Forward Voltage indicates the amount of voltage drop across the junction, which must not be exceeded.
- Forward Voltage (VF): 3.6 V maximum
- Reverse Voltage (VR): 5 V
- DC Forward Current (IF): 30 mA
- Soldering Temperature (TSD, t ≤ 5 s): 260 °C
Two details are important: VR and IF. The maximum current, IF must not exceed 30 mA, and the maximum reverse voltage VR applied to the LED is 5V.
What the Data Sheet Tells You
As seen in the table above, the maximum current is 30mA and the maximum forward voltage is 3.6V. For easy calculations, a nominal VF of 3.0V will be used.
Power supply, VS will be 14V.
The forward voltage is 3V. Therefore the series resistor must drop the difference, 14 − 3, or 11V.
Using the value of 11V, Ohm's Law says that R = V ÷ I, or 11 ÷ 0.03, which is about 367Ω. Standard resistor values in this range include 390, 430, 470, and 510 Ω. 470Ω is a reasonable choice, at a tolerance of 5%, or about ±24Ω. Since a lower value will cause more current to flow, the better choice is 510Ω, which results in a current of about 22mA. This also gives a margin of safety.
If this LED will be powered by a DCC track signal, it gets more interesting. It will be exposed to a reverse voltage of ≈14V (see Note 1) each time the phases switch, exceeding the 5V VR rating. To prevent that, another diode is placed in parallel with the LED and reverse biased relative to it, to protect it. The reverse current flows through this diode, whose forward drop of about 1.4V is below the reverse-voltage rating of the LED. (This section also applies to layouts using Analog Direct Current for power.)
Note 1: The DCC voltages on the track may be more than 14V. Measure it or estimate it to be greater than the amounts specified in the NMRA Standards.
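The same kind of check can be applied to the data-sheet example. This is an illustrative Python sketch (ours), using the values assumed in the text: VS = 14V, a nominal VF of 3V, and a 30mA maximum forward current:

```python
# Illustrative sketch: series resistor for the data-sheet example.

V_SUPPLY = 14.0    # volts (may be higher on a DCC layout; measure it)
V_FORWARD = 3.0    # volts, nominal forward drop assumed in the text
I_MAX = 0.030      # amps, absolute maximum forward current from the data sheet

r_min = (V_SUPPLY - V_FORWARD) / I_MAX
print(f"Minimum resistance: {r_min:.0f} ohms")   # about 367 ohms

for r_std in (390, 430, 470, 510):               # nearby standard values
    i_ma = (V_SUPPLY - V_FORWARD) / r_std * 1000
    print(f"{r_std} ohms -> {i_ma:.1f} mA")
# 510 ohms limits the current to about 22 mA, leaving a margin below the 30 mA maximum.
```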
See Also
Also see this website for more information: SM LEDs.